How to Avoid AI Project Failure
Transcript
More and more project professionals are managing artificial intelligence (AI) projects, but teams are struggling to make sure they deliver real value. What makes AI projects different—and what are some common reasons they fail? What does it take to lead successful AI projects—and how do you build those must-have skills? We discuss the big issues with two experts at PMI Cognilytica: Kathleen Walch, CPMAI, director, and Ron Schmelzer, CPMAI, director and general manager. Both are based in Columbia, Maryland, USA.
STEVE HENDERSHOT
As more companies launch artificial intelligence initiatives, project managers need to strengthen their AI acumen to ensure those projects deliver genuine value. Today, let’s look at how AI projects are evolving—and how project professionals can develop those must-have AI skills.
In today’s fast-paced and complex business landscape, project professionals lead the way, delivering value while tackling critical challenges and embracing innovative ways of working. On Projectified®, we bring you insights from the project management community to help you thrive in this evolving world of work through real-world stories and strategies, inspiring you to advance your career and make a positive impact.
This is Projectified. I’m Steve Hendershot.
AI ambitions are sky-high for organizations—yet many of their projects quickly fizzle. By the end of 2025, at least 30% of generative AI initiatives will be abandoned after proof of concept, according to Gartner. Those misfires are often caused by poor data quality, inadequate risk controls, escalating costs or unclear business value. It’s all part of the AI learning curve.
Today, we’re looking at ways to accelerate that maturation—so teams can achieve AI project success.
Before we dive in, we have some exciting news. PMI’s CEO, Pierre Le Manh, is launching a new podcast that takes you behind the scenes of real organizational transformation. He’s talking with top leaders across the globe, diving into their strategies, innovations and lessons learned. If you’re curious about what it takes to drive change and lead with impact, this is for you. Check it out at pmi.org/the-shift-code-podcast.
Okay. Today, we’re speaking with two AI experts who are now part of PMI after the acquisition of their firm, Cognilytica. PMI Cognilytica’s research, learning products and Cognitive Project Management for AI, or CPMAI, certification are aimed at equipping organizations and professionals to build AI expertise.
We’re joined by Kathleen Walch, a director at PMI Cognilytica, and Ron Schmelzer, a director and general manager at PMI Cognilytica. Both are based in Columbia, Maryland, in the United States. They’re also the hosts of the AI Today Podcast.
What makes AI projects different from others? And what does it take to lead them? We’ve got answers.
MUSICAL TRANSITION
Common reasons why AI projects fail
STEVE HENDERSHOT
Let’s set the scene with the state of AI projects. The appetite is maxed out. AI projects are getting green-lit left and right, and yet not all of them in this first batch are completely delighting stakeholders once they’re delivered. What’s happening? What’s going on? Is this a case of misaligned expectations or a growth curve in terms of learning how to deliver?
KATHLEEN WALCH
Yeah, that’s a great question. So, we’ve been looking at AI projects for seven or eight years now, since Cognilytica started back in 2017, and over 80 percent of AI projects are failing. And we said, “Why is this number so high?” We looked at different projects that were considered failures—and failure can happen for a number of different reasons. We’ve identified about 10 common ones. So, one, people are running their AI projects like they’re running their application development projects, and they quickly realize that they’re going to fail. Why? Because AI projects are data projects. So, you need to follow data methodologies and best practices around data, not necessarily software development practices. In that same light, data quality and data quantity are two big issues. If you don’t have enough data, or you have bad-quality data, then the old saying “garbage in, garbage out” applies, and it is absolutely true with AI projects.
We also see that return on investment isn’t quite there. So, people get so excited with AI projects, and they want to move forward. There’s a lot of hype in the media and the press, and they see other organizations doing it, and they go, “We just want to jump right in.” And they don’t actually think about, “Well, what is that return on investment?” And return is money, time and resources, because we always say that AI projects are not free. They are going to cost time and money, and you’re going to need to devote resources to it. So really look at that and say, “What is the return that I’m looking for? And what problem am I really trying to solve?” And then making sure that AI is the right solution to that problem. Sometimes straight automation is better. Or humans, themselves, may be better. And then we have a few other ones that I’ll let Ron talk about.
RON SCHMELZER
The biggest thing to realize is that with artificial intelligence, we’re asking machines to do things that we had previously asked people to do, or that machines just were not capable of doing, like recognizing language or understanding images or making predictions or doing things autonomously. And when we’re asking machines to do these things, and the way that we’re instructing them to do them is by learning through data, well, data isn’t perfect. People aren’t perfect. Machines, therefore, are going to produce responses that are, in some cases, highly unpredictable. They’re called probabilistic systems for a reason: even with the same inputs, we won’t necessarily get the same outputs. And so, when we’re thinking about that from a project perspective, it causes all of these challenges, in part because we’re not used to running projects that depend on a technological component with this high degree of variability and all these issues around data dependency.
And so, we really need a different way. It’s almost like we need to separate machines that behave like machines, which we can use traditional styles of IT and project management to handle these very reliable, dependable, repeatable, deterministic systems. And we need another category to handle machines that kind of act like somewhat unpredictable people, and we can’t use the same sort of approaches. So, this is why we have to look at alternate ways of running these sorts of projects.
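[Editor’s note: Ron’s distinction between deterministic machines and machines that behave like “somewhat unpredictable people” can be illustrated with a small sketch. This is a hypothetical toy, not anything from CPMAI or an actual model: the “probabilistic system” here just samples from a list of candidate responses, standing in for a generative model.]

```python
import random

def deterministic_system(x: int) -> int:
    # Traditional software: same input, same output, every time.
    return x * 2

def probabilistic_system(prompt: str, rng: random.Random) -> str:
    # Toy stand-in for a generative model: the response is sampled,
    # so identical prompts can yield different outputs.
    candidates = {
        "track my package": [
            "It's in transit.",
            "Out for delivery.",
            "Arriving tomorrow.",
        ],
    }
    return rng.choice(candidates.get(prompt, ["I don't know."]))

# Deterministic: repeated calls always agree.
assert all(deterministic_system(21) == 42 for _ in range(10))

# Probabilistic: repeated calls with the same input may disagree.
rng = random.Random()
outputs = {probabilistic_system("track my package", rng) for _ in range(50)}
print(outputs)  # usually more than one distinct response
```

Testing a function like `deterministic_system` is a pass/fail check; testing `probabilistic_system` means reasoning about distributions of outputs, which is the project-management shift Ron describes.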
STEVE HENDERSHOT
With the data hygiene piece, is this something that’s catching project leaders off guard? Or is fixing all the data sets that might, theoretically, inform the model so far out of scope that projects are doomed from the start?
RON SCHMELZER
If you’ve ever experienced working with these large language models (LLMs) that are so popular right now, one of the biggest problems that people have with LLMs is that they hallucinate. They come up with answers that the models sound very confident about but that are clearly wrong. And the answer comes down to, of course, scope issues. We need to focus the models on doing what they are trained to do and on the data they’ve been trained on well. And the more that we go beyond that scope, the more that we’re starting to run into this world of problems. And that’s, I think, one of the lessons that project managers and project professionals can learn, is how to manage the scope of projects so that we can get a handle on these data-related issues.
STEVE HENDERSHOT
Early on in the adoption curve of an exciting technology like this, I feel like there are a lot of projects with the not-so-great deliverable of, “Yeah, let’s just try it out. Let’s just do something cool with AI.” And then that quickly morphs into a more practical focus. Where are we on that curve? From people just wanting to make a starter investment to see what might be possible, to “Now, let’s make it practical.” Because as you do that, you start to see smarter scopes.
How to make sure your AI project delivers real ROI
KATHLEEN WALCH
So, we always say, “Make sure that you’re actually solving a real problem.” But it doesn’t need to be a massive problem, right? But you need to be able to show that return. So, first, we need to say, “What problem are we trying to solve?” Make sure that it’s an actual problem and one worth solving, right? That it’s not a little toy project or something really small. Then we say, “Okay, this is a problem we’re solving. Now, what parts of this need to be AI?” And that’s really important to understand, too, because maybe your problem can be solved with, like I said earlier, straight automation, or you can code your way to the problem’s solution.
So, if we know that it needs to be an AI solution, then we can say, “All right. Now, which pattern or patterns of AI are we doing?” And we came up with the seven patterns of AI because, a few years ago especially, people were saying, “Well, is this an AI project? Is this not an AI project? I’m not sure.” And it would trip them up. So we said, “Why don’t we go one level deeper and say, ‘Don’t talk about AI. Say what are we trying to solve? Are we trying to create more personalized offerings?’” So the hyperpersonalization pattern. “Are we trying to have machines and humans talk to each other in the language of humans?” That’s the conversational pattern. We think about chatbots in that pattern, or we think about large language models in that pattern. What are we trying to solve? And then we can look at the seven patterns, and if it falls into one or more of those seven patterns, then we know that it’s an AI project.
STEVE HENDERSHOT
So, now we’re getting to it: AI is challenging for project leaders partly because it’s new and evolving, but also because it’s just different, for all the reasons you began to outline with the CPMAI framework. So, how are AI projects different from other projects? This discipline is still squishy, and organizations don’t have tons of experience. How are they fundamentally different in ways that require some understanding of the seven patterns you outlined?
RON SCHMELZER
We need to understand the unique differences: what makes an AI project different from, say, building a website or implementing an ERP (enterprise resource planning) system, or any sort of technical project where we can have well-defined objectives and well-defined features, if you will—build to those features, test to those features, and then deploy them. And then, basically, say the website is done, it’s up, or the ERP system is live.
The problem is, you can’t do that with AI systems, unfortunately, because with AI systems, you can’t do what’s called “set it and forget it.” The AI systems are so dependent on data. And, also, they’re not functionality-driven. It’s not like we’re building a chatbot. What’s the functionality of a chatbot? Well, it’s to respond to questions and provide answers. But the quality of the chatbot is so highly dependent on how it’s been trained, and on the context of what these chatbots need to respond to, that even if you don’t change the functionality, changes to the data can completely change whether or not the chatbot works at all or provides acceptable levels of responses and capabilities. That’s why we need one more level of detail.
CPMAI, I think it’s good to understand it as a framework, as a methodology, a process, if you will—where we can iterate through these projects and ask ourselves a series of questions and do them in a particular order so that we can reduce the risk and increase our likelihood of success for the AI project. As you mentioned, it’s very tempting to start with, “Let’s start with the technology and then let’s start playing with the technology. And then soon, we’ll have a solution to our problem.” Of course, that’s backwards, right? You should start with the business problem, then you should think about, in the case of an AI system, the next step is to think about the data that’s required to solve that problem.
KATHLEEN WALCH
You need to understand how to run and manage AI projects and how they are different than maybe some of those software development projects that you’ve done in the past, or construction projects—whatever it is, whatever type of project professional that you are. It’s important to understand the data needs, and also making sure that you are following that step-by-step approach. It is iterative, and you can go back a phase or two, depending on what your needs are. The reason that the methodology was developed is because we saw people just jumping forward or not doing things in a correct order and then realizing that they need to go back or that it can delay their project for many months.
So, when it comes to data understanding, for example, what data do you need? Do you have access to that data? Is it internal data? Is it third-party data that you need? And if you don’t have access to all of that data, then how do you control that scope and say, “Okay, I’m going to need less data so that I can move forward, because this is the only data that I have access to.” And so, we’ve seen a lot of people move forward and say, “Okay, well, we’re going to move forward with this, and we’re just going to get this data from our data warehouse,” and the people who are controlling that go, “No, you’re not.” And then they go back and forth for months, and before you know it, five months later, you still don’t have that data that you need. It’s important to also get that terminology in place so that you can translate between those business and technology needs to make sure that everybody understands and is moving forward, and that you can talk the language of both.
RON SCHMELZER
In the past, people thought that the only ones who needed to worry about running and managing AI projects were technology-focused individuals: data scientists, data engineers, machine learning engineers. And that certainly was the case. Back when we started CPMAI and did all this sort of work in 2017, the only people who were really building AI projects and doing things with AI were those highly skilled people, generally within IT, but sometimes within the line of business, who had that data science background and could figure out how to twiddle the knobs of all these technologies and train systems from scratch with GPUs (graphics processing units) to make it work. But something happened, of course, in 2022, and that’s that anybody could make and use AI systems, and now, it is in the hands of everybody. Maybe it’s a salesperson using AI to craft the emails that are going out. Maybe it’s someone in accounts receivable or payable starting to use AI to process and handle inbounds. Maybe it’s someone within HR who’s using AI for any part of the recruiting process.
That’s the biggest change that we’ve seen: all of a sudden, AI is working its way into every project, into every process. And maybe, with the exception of entirely human-based processes or processes and projects that cannot use AI for one reason or another, pretty much wherever AI can be used, it looks like it will be used. Which means now any project professional is going to be tasked to manage this component of their project, where they’re dependent on these AI systems to provide a core aspect of the value of that project. And they need to manage this resource, otherwise the whole project will not be successful. And we’re focusing on project success.
STEVE HENDERSHOT
You’ve both said throughout this conversation that it’s inevitable that regardless of what you create with AI, you’ll be iterating continually, endlessly. Given that dynamic, there might be an instinct to fail fast—to take whatever data you can get right now and roll something out, even if you’re aware that this would be better with more data, better data, cleaner data and so forth. So how do you walk that line between releasing the beta version to start the iteration clock, versus taking the time to build something that is solid enough in its initial vision and construction and performance to win support and hold up over the long term?
KATHLEEN WALCH
So, we always say, “Think big, start small and iterate often.” So, think about the smallest thing that you could possibly do. Make sure you’re solving that big problem, but then really start small—and start as small as possible, because you want to make sure that you’re showing those quick, early wins. Data just never comes in that really nice, clean, 100% perfect, usable state. So, we know that we’re going to have to do something to that data to make it usable, right? We have to get access to that data, then we need to clean it, prep that data, maybe dedupe it, enhance it in different ways. So, what’s the smallest set that you can start with that you know you have access to? And then continue from there in iterations.
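[Editor’s note: the access, clean, prep, dedupe sequence Kathleen describes can be sketched as a minimal pipeline. The field names and rules here are hypothetical, purely to make the steps concrete.]

```python
def clean_records(raw_records):
    """Minimal data-prep pass: normalize, drop incomplete rows, dedupe."""
    seen = set()
    cleaned = []
    for rec in raw_records:
        # Clean: strip whitespace and normalize case on text fields.
        email = (rec.get("email") or "").strip().lower()
        name = (rec.get("name") or "").strip()
        # Prep: drop rows missing required fields ("garbage in, garbage out").
        if not email or not name:
            continue
        # Dedupe: keep the first record seen for each email.
        if email in seen:
            continue
        seen.add(email)
        cleaned.append({"name": name, "email": email})
    return cleaned

raw = [
    {"name": " Ada ", "email": "ADA@example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com "},  # duplicate email
    {"name": "", "email": "no-name@example.com"},           # missing name
]
print(clean_records(raw))  # only one usable record survives
```

Even a trivial pass like this shows why data work dominates AI project schedules: three raw records yield one usable one, and every cleaning rule is a project decision someone has to own.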
So, the first iteration isn’t going to be exactly what you want, maybe, but it’s going to solve something. And then you can add on as time goes on. So if we think about a chatbot, for example, a really nice use case that I always like to talk about is the U.S. Postal Service. What do you think the most commonly asked question is that they get? Track a package, right? Because that’s what people want to know. So when they thought about this, they said, “I don’t need to have my chatbot answer 10,000 questions in 50 different languages. I need it to just answer one question: track my package.” They were trying to reduce call center volume, so that was the return on investment they were measuring for the chatbot. And they were able to measure that, and they said, “Okay, this was a success. Now let’s bring in the second most frequently asked question, and then we’ll have the chatbot answer that.” Then they made it clear that their chatbot could only answer a few questions, but they were the questions driving a lot of volume in their call centers. And then, with time, you can continue to iterate and continue to add on different questions. And I think that’s how we need to think about this, especially when it comes to some of those data issues. What can we address with the resources that we have right now? And then how do we move forward with that?
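[Editor’s note: the “start small” chatbot Kathleen describes — answer only the highest-volume question, route everything else to a human — can be sketched as a simple intent matcher. The intents and wording below are illustrative assumptions, not the actual USPS implementation.]

```python
# Scoped chatbot: handle only the highest-volume intents; everything
# else falls back to a human agent instead of guessing at an answer.
HANDLED_INTENTS = {
    "track my package": "Enter your tracking number and I'll look it up.",
    # Later iterations add the next most-asked question here.
}

FALLBACK = "Let me connect you with an agent who can help."

def answer(user_message: str) -> str:
    # Normalize the message before matching against known intents.
    key = user_message.strip().lower().rstrip("?!.")
    return HANDLED_INTENTS.get(key, FALLBACK)

print(answer("Track my package!"))  # handled: in scope
print(answer("Change my address"))  # out of scope: human fallback
```

Each iteration of the project adds one more entry to `HANDLED_INTENTS`, only after the previous one has demonstrably reduced call center volume, which keeps scope, data needs and ROI measurement small at every step.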
Ways to keep your AI knowledge up to date
STEVE HENDERSHOT
So, AI projects are not “set it and forget it” scenarios. The same goes for AI upskilling—the AI acumen we have today isn’t going to hold indefinitely. So, what’s your advice for project leaders to keep their AI knowledge up to date going forward?
RON SCHMELZER
It’s interesting, the rate of change. Technology, especially in the world of AI, is changing so fast that it’s really difficult to plan out even a year or two and say, “Well, we’re going to invest in this technology now because we’re going to implement it over the next year or two.” Because something may come out literally the next day that invalidates half of what you’re thinking. Like, “Wow, this new thing came out. It’s actually going to change what we were planning on doing because now, we have all these great capabilities out of the box. We maybe don’t even have to invest at all, or we inherit a lot of that functionality.” So what changes fast and what changes slow? Technology changes fast, but process and methodology don’t change that fast. We were very careful with the methodology to focus on the people and process side and not on the technology side.
So, I think there are two sides. One, the methodology should stay stable. Learn the methodology, implement it and even adapt it. There’s no reason that you can’t build on top of it and modify it to suit your specific organizational requirements or enhance it. That’s all perfectly fine. This doesn’t need to be dogma. It just needs to be, “These are the steps that will provide the greatest degree of success, but add to them to make it successful for you.”
Everybody needs to pay attention to all the changes that are happening in the AI landscape. It’s very difficult, even for those who follow it all every day, to keep up with it. Keep up with at least the major trends and keep up with the technologies and what’s available to them, because one of the big things is that AI capabilities are going to become embedded. And all of a sudden, your little spreadsheet is now an intelligent AI-enabled spreadsheet. And so even the tools you’re using every day are going to become more and more powerful. So, maintaining that always-learning growth mindset will be very helpful for the project professional of the future.
KATHLEEN WALCH
Yeah, absolutely. We talk a lot about growth mindset, and I know that PMI also has a number of different resources that project professionals can take advantage of. In addition to the CPMAI certification, there’s a lot of different e-learning courses that I highly recommend around generative AI and different workflows and just learnings about that. And also, the PMI AI blog is a good resource as well.
STEVE HENDERSHOT
What advice do you have for project professionals when it comes to setting smart metrics for AI projects, and also ensuring that those projects align with broader organizational goals?
KATHLEEN WALCH
Well, I think it’s really important to think about the ROI, what that return is, as you’re setting those metrics and making sure that it’s a positive ROI—and that you’re actually solving a real business problem, as well.
RON SCHMELZER
Take a look at the iteration time. Keep things really short, like really look at the time between defining a business problem and objective with AI and then coming out with something in the real world that’s actually being tested—that pilot. Try to squeeze that time as short as possible, mainly because the AI landscape is changing fast.
KATHLEEN WALCH
Yeah, and to add to that, what Ron mentioned about pilots—we always talk about proof of concepts versus pilots. And proof of concepts, we say, really don’t prove anything. You can iterate, but make sure that it is out there in the real world with a pilot, that it’s using real-world data and real-world users. Because you can never predict how they’re going to use things until they actually do.
STEVE HENDERSHOT
Last one: What’s next? You guys are uniquely positioned to maybe see around corners. What should project professionals anticipate is the next wave or character shift within AI?
RON SCHMELZER
2025 is really the year of agentic AI. I’m sure you’re going to hear a lot about that. Instead of just prompting systems and getting a response back—that’s like the V1, if you will, of AI systems—we’re now in systems where you can just tell the AI system what you want to accomplish, and it’ll go off and do those things and chain a bunch of things together. That’s certainly a big technology trend.
I think sort of more of a meta trend, especially one for project professionals, is the involvement of the organization in terms of understanding the bigger role of AI success. Organizations are pushing now to say, “Okay, yeah, all this AI stuff is cool, and maybe people are even doing AI in their own daily lives, using it on their phones or their computers.” So, I think people’s experiences with AI are going to become much more normalized, I guess, is the best word. So there’s going to be much more of an emphasis on, “Okay, so what? What will this actually help me accomplish that’s going to be transformative?”
So, the big, general story is AI as a transformative agent, helping organizations really transform what they’re trying to do—deliver services better, provide things more efficiently, be more responsive, be more sustainable, have greater governance, reduce the risks. It also elevates the role of the project professional into being much more, I would say, part of that strategic aspect of the overall project success. Not just thinking about management of an individual project and its specific goals but thinking of the overall objectives and how that project fits into them.
STEVE HENDERSHOT
Excellent. Thanks very much. Thanks to both of you for joining us. Welcome to PMI. Look forward to doing it again.
RON SCHMELZER
Thank you for having us.
KATHLEEN WALCH
Yeah, so excited. This is a really wonderful discussion.
STEVE HENDERSHOT
And thank you for listening to Projectified. Like what you heard? Subscribe to the show on your favorite podcast platform and leave us a rating or review. Your feedback matters a lot—it helps us get the support we need to continue making this show. And be sure to visit us online at PMI.org/podcast, where you’ll find the full transcripts for episodes as well as links to related content, like useful tools and templates, the latest research reports and more. Catch you next time!