Why AI Ethics Matters in Project Management
Transcript
Artificial intelligence is fundamentally shifting how we work, but it’s imperative to use the emerging tech responsibly. That’s sparking important conversations among project professionals and their teams. We discuss this with:
Naveen Goud Bobburi, PMP, chief manager, ICICI Bank, Hyderabad: Bobburi discusses how organizations can create and implement AI ethics strategies—and how to incorporate them into day-to-day project workflows. Bobburi also walks us through an example of how he and one of his teams handled an ethics concern involving biased data.
Lea Li, PMP, technical program manager, Meta, Menlo Park, California, USA: Li shares some of the biggest AI ethics concerns project professionals are facing and the steps her teams take to use or create ethical AI. Plus, how strong AI ethics training should continuously adapt as AI rapidly evolves.
STEVE HENDERSHOT
For many companies, the artificial intelligence journey is just getting started. So they’re still getting a grasp on AI’s immense and constantly evolving power. But one thing is clear: There’s still so much work to be done for AI to become a reliable and responsible best-case version of itself.
So today, let’s talk about AI ethics—and what it means for project professionals who use or create AI tools.
In today’s fast-paced and complex business landscape, project professionals lead the way, delivering value while tackling critical challenges and embracing innovative ways of working. On Projectified®, we bring you insights from the project management community to help you thrive in this evolving world of work through real-world stories and strategies, inspiring you to advance your career and make a positive impact.
This is Projectified. I’m Steve Hendershot.
We’re watching an AI revolution before our very eyes: Seventy percent of business leaders believe generative AI will significantly change the way their companies create, deliver and capture value, according to the 2024 PwC Global CEO survey. But there’s also an AI reckoning when it comes to using it responsibly. Only 29% are very confident that AI and machine learning are being applied ethically today, according to a Workday global survey.
For project professionals, there are myriad ethical concerns. Is the AI tool you’re using to plan resource allocation or identify risks delivering accurate results? Is the AI system your team is building secure? And are the machine learning models team members are developing tested and vetted so they don’t generate biased insights or harmful results? The AI landscape is changing fast—and to help you keep pace, you can check out a suite of PMI resources. Go to PMI.org/podcast and click on the transcript for this episode.
Organizations and teams are adopting AI so quickly that it can be a challenge to ensure that people are taking time to think through the ethical concerns. They need to carefully establish practices that are adopted across departments. I spoke about that with Naveen Goud Bobburi, a chief manager at India’s ICICI Bank in Hyderabad.
MUSICAL TRANSITION
STEVE HENDERSHOT
Naveen, let’s dive into our AI ethics discussion. There are several ethical challenges project professionals need to be aware of. You refer to them as the ABCs of ethics: accuracy, bias, compliance, discrimination, explainability, fairness. That’s a pretty good way to remember them. What’s an example of one of these concerns in a project?
NAVEEN GOUD BOBBURI
When we integrate AI into project management, it introduces a complex interplay of technological advancement and ethical considerations. Project management professionals must navigate that landscape.
A relevant example would be bias: when selecting data for training AI models, there is obviously a risk of introducing biases. For instance, take the case of facial recognition systems trained primarily on lighter-skinned faces, which might then have difficulty recognizing people with darker skin tones. You have to identify and mitigate the biases in AI algorithms and data to prevent discriminatory outcomes toward anyone.
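To make that data-selection risk concrete, here is a minimal, hypothetical sketch in Python of checking group balance in a labeled dataset before training; the labels, threshold and function name are illustrative assumptions, not a production bias audit.

```python
from collections import Counter

def underrepresented_groups(labels: list[str], min_share: float = 0.10) -> list[str]:
    """Flag demographic groups that make up less than min_share of the training data."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < min_share]

# Hypothetical skin-tone annotations for a face dataset of 1,000 images
labels = ["lighter"] * 920 + ["darker"] * 80
print(underrepresented_groups(labels))  # ['darker'] -> rebalance or augment before training
```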
Strategies to create—and implement—AI ethics practices on teams
STEVE HENDERSHOT
That’s a good example. Now, you’re not only looking at these ethics concerns from the project management perspective but also in the context of the financial sector. How do you go about creating and implementing AI ethical practices in your work, given that extra layer of regulation? And how can other organizations do the same?
NAVEEN GOUD BOBBURI
For us, any AI technology or AI model that we use comes with clear instructions on how to use it. And if any of its outputs don't meet the standards, there are certain actions to be taken, which are clearly spelled out.
To manage AI ethics effectively, organizations should develop a comprehensive framework that includes AI ethics policies and guidelines. They should clearly define the organization's stance on AI ethics and then outline principles for the critical concerns of AI: data privacy, fairness, transparency, accountability and safety. Provide guidelines for AI development, deployment and monitoring.
Then comes AI ethics training. It's not enough to write policies and guidelines and leave them in the background, just for show. We need to follow through on them: educate employees, especially project managers, on AI ethics concepts and principles if they aren't already aware of them, and then provide training on identifying and mitigating ethical risks. And ensure we are fostering a culture of ethical awareness and responsibility.
It's not just about creating policies and guidelines and training people on them. You should also conduct regular assessments to evaluate the potential ethical implications of AI projects, then identify and address any biases or privacy risks arising from them. And lastly, organizations should create an AI ethics committee or task force to oversee these concerns at the organization level: establish a cross-functional team to oversee AI ethics initiatives, with stakeholders from all the relevant teams, and provide guidance and support for project managers. And monitor industry best practices and regulatory changes so that what you are adhering to is still relevant.
STEVE HENDERSHOT
If you pursue that task force model, what’s the best way to ensure that its recommendations find their way into [the] day-to-day workflow?
NAVEEN GOUD BOBBURI
It all comes down to leadership backing. Even if you have a task force, rules and plenty of documentation and regulations, without enough backing from leadership, the stakeholders and the company's board, it would all amount to nothing. We need strong leadership that fosters an ethical AI culture: leaders who champion ethical values, lead by example and demonstrate a commitment to ethics. They should be able to point out that AI ethics is a company principle they are bound to. Then you need to create a safe space to discuss ethical concerns, and there should be enough collaboration among stakeholders, building partnerships with both internal and external stakeholders to address those ethical challenges.
STEVE HENDERSHOT
Can you share an example of how you’ve championed AI ethics in the past? How have you navigated AI ethical concerns or encouraged and equipped people to do so themselves?
NAVEEN GOUD BOBBURI
For data-intensive, AI-related projects, and compliance projects as well, we frequently face ethical dilemmas around data. To give you an example, we were developing a system to identify suspicious transaction patterns. In the financial world, internal banking teams track every transaction that happens and make sure it isn't suspicious, and we are bound by regulations to report any that are. So while we were developing such a system using AI, we had a concern about bias in the data. The fear was that the system might disproportionately flag certain customer segments, which ideally and ethically should not be the case.
To address this, we implemented a multifaceted approach. First, we conducted a rigorous data audit of the system to remove any biases in the data set. In addition, we employed advanced statistical techniques to ensure the model treated the data fairly and equitably. We also engaged with the business team to ensure we were in compliance with the regulations relevant to that kind of model. And crucially, we involved a diverse group of stakeholders in the decision-making process, including the compliance team, the data team and the end users, by which I mean the business team. This collaborative approach helped us develop a comprehensive solution that mitigated the bias by adjusting the model's parameters and incorporating additional features and safety measures.
This underscored the importance of ethical considerations. From that situation, we integrated AI ethics training into our project management team and established a dedicated ethics review process for all AI initiatives. Moreover, we cultivated a culture of open dialogue in which team members feel comfortable raising ethical concerns. By fostering this approach, we now prioritize ethical considerations from a project's inception rather than only in its outcomes.
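A minimal sketch of the kind of per-segment flag-rate audit Bobburi describes, assuming pandas and hypothetical column names (segment, flagged); the "80% rule" threshold is one common fairness heuristic, not the bank's actual method.

```python
import pandas as pd

def flag_rate_audit(df: pd.DataFrame, group_col: str = "segment",
                    flag_col: str = "flagged", threshold: float = 0.8) -> pd.DataFrame:
    """Compare each segment's flag rate to the least-flagged segment.

    An impact ratio below `threshold` (the common "80% rule" heuristic)
    suggests the model may be disproportionately flagging that segment.
    """
    rates = df.groupby(group_col)[flag_col].mean().rename("flag_rate").to_frame()
    rates["impact_ratio"] = rates["flag_rate"].min() / rates["flag_rate"]
    rates["needs_review"] = rates["impact_ratio"] < threshold
    return rates

# Toy example: segment B is flagged far more often than segment A
df = pd.DataFrame({
    "segment": ["A"] * 100 + ["B"] * 100,
    "flagged": [1] * 5 + [0] * 95 + [1] * 12 + [0] * 88,
})
print(flag_rate_audit(df))  # segment B's impact ratio falls below 0.8 -> review
```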
STEVE HENDERSHOT
That’s great. So you’ve had experience creating environments where people can openly raise ethical concerns. Let’s take this scenario: Someone on a project team has an ethical concern or even just an ethics question while working on a project. Who should they take their concern or question to so it can be handled quickly and correctly?
NAVEEN GOUD BOBBURI
Once an AI ethics concern has been identified, the project professional must determine the appropriate avenue for raising it. This could mean discussing it immediately with their supervisor or the project manager; typically the first point of contact is the direct supervisor, who may have authority over the specific project. The organization might also have an ethics officer or a chief compliance officer, dedicated personnel who deal with such issues. Raise the concern with them and see whether it can be resolved within the scope of immediate management.
If the organization has a specialized group focused on AI ethics, that would be the ideal forum for addressing such issues. Throughout this process, they should document the concern, clearly outlining the ethical issue, including the context, potential impact and supporting evidence wherever needed. And use formal channels: leverage the organization's established mechanisms, such as ethics hotlines, internal reporting tools and any official forms designated for ethical concerns. Lastly, people often don't point out such issues, whether ethical or otherwise critical, because they fear retaliation. Most organizations have whistleblower policies under which they can raise such issues, which protect them from retribution. If internal channels fail or are inadequate, it may be necessary to escalate the issue to external bodies, such as industry regulators or professional associations.
Keeping your AI ethics policies flexible for the tech’s continued advancement
STEVE HENDERSHOT
We know AI is changing fast—there’s a new tool or application every day. How can project professionals make sure they are equipped to adapt as AI implementations or ethical issues evolve?
NAVEEN GOUD BOBBURI
To ensure ethics plans remain adaptable in the face of AI advancements, organizations should integrate ethical impact assessments into the entire project life cycle to identify and mitigate potential risks. Then there's continuous training: provide ongoing training to project managers and the teams that work on AI, covering AI ethics and responsible AI best practices. And collaborate with experts in the field, both AI ethics specialists and legal professionals, to ensure compliance and stay informed about evolving regulations and best practices.
AI has the potential to create significant positive impact, but as the Marvel movies remind us, with great power comes great responsibility. So how will you use it while avoiding AI's unintended negative consequences? By proactively incorporating ethical considerations into project management, organizations can reap the benefits of AI while strengthening enterprise risk management and building a strong ethical foundation for the future, with stable outcomes.
MUSICAL TRANSITION
STEVE HENDERSHOT
Really quick, could you do us a favor? If you’re enjoying this episode, please leave a rating or review on Apple Podcasts, Spotify or wherever you listen. Your feedback helps us keep making this show.
Okay, now, let’s go to our next conversation with Lea Li. She’s a technical program manager at one of the world’s largest AI companies, Meta. Her work focuses on privacy infrastructure. She’s in Menlo Park, California, in the United States. Lea spoke with Projectified’s Hannah LaBelle about the good practices her teams follow to use and create ethical AI.
MUSICAL TRANSITION
AI ethics concerns project professionals are facing today
HANNAH LABELLE
Lea, thank you for talking with me. Let’s start with what are some of the biggest AI ethics topics or concerns that project professionals are facing today?
LEA LI
Sure. Some of the biggest AI ethics topics and concerns today include consent and privacy, potential bias in data, and inaccurate results from AI. We need to ensure users and individuals are aware of how their data is being used and that it's handled in accordance with their consent. We need to ensure that the data used to train AI models is representative and unbiased, and that the models themselves don't amplify existing biases. And we need to ensure that AI models are accurate and reliable.
HANNAH LABELLE
When are project professionals most likely to kind of see these in their work? Is it, you know, either in project planning, during project execution? What might be kind of the scenarios where you see these big concerns popping up?
LEA LI
Project professionals may experience these concerns when working on projects that involve the use of AI, such as developing new products or services that rely on AI. It could be when they work with data that is used to train AI models, or when they work with stakeholders who are impacted by the use of AI. I break it into five areas: data collection and processing, model training and deployment, integration of AI models with existing systems, scaling AI solutions, and handling sensitive or personal data in AI.
HANNAH LABELLE
We’ve heard tech companies, including Meta, being mentioned in the news about data privacy and AI concerns. Given these concerns and AI’s increasingly fast adoption across industries, why is it so critical for organizations and their project teams to have a handle on AI ethics?
LEA LI
It's critical for organizations because of regulatory compliance, public trust and reputation, innovation and competitive advantage, risk management and, lastly, social responsibility. Regulatory bodies are focusing on AI ethics and privacy, leading to new regulations and guidelines that orgs must comply with, for example, the GDPR (General Data Protection Regulation), the California Consumer Privacy Act and Europe's recent Digital Markets Act, which aims to regulate the behavior of large online platforms that use AI, including Meta. On public trust and reputation: as AI becomes more prevalent in society, there's growing concern about its potential impact on communities and individuals. Organizations that prioritize ethics can build trust with their customers, stakeholders and the public, which is essential for long-term success.
On innovation and competitive advantage: by addressing ethical concerns proactively, orgs can create innovative AI solutions that not only meet regulatory requirements but also provide a competitive advantage. Ethical AI can lead to better decision-making, improved customer experiences and increased efficiency. On risk management: AI can introduce new risks such as bias, discrimination and privacy violations. By prioritizing AI ethics, orgs can mitigate these risks and avoid costly legal battles, reputational damage and financial losses. And as AI becomes more integrated into various aspects of life, organizations have a responsibility to ensure their AI applications align with societal values and contribute positively to the world.
HANNAH LABELLE
So there are several areas where AI ethics can become a concern on a project. What are some good practices your teams follow to use or create ethical AI on projects?
LEA LI
First, education and awareness. We carry out training sessions on a regular cadence, covering bias, privacy concerns and the social impacts of AI solutions, and we provide resources that teams can reference on a daily basis.
Second, clear guidelines and standards, which we refer to as our code of ethics and review processes. The code of ethics outlines the ethical standards and practices that team members are expected to follow. In review processes, we assess AI projects for ethical concerns, and only once we've completed those processes can we launch a new feature or product. While developing AI solutions, we also need to ensure we have a diverse team, diverse in background, race, gender and expertise, so that everyone can contribute perspectives and identify potential ethical issues that might not be apparent otherwise.
We have regular audits of AI projects to ensure they comply with ethical guidelines and standards. There are also feedback mechanisms, open to team members and stakeholders, for providing feedback on AI projects; that feedback is used to improve ethical practices. The last thing is transparency and accountability. That includes documenting AI models and algorithms: what data is used, the decision-making process and the rationale behind those decisions. Team members should be able to explain how and why AI systems make certain decisions, which is crucial for identifying and mitigating ethical risks.
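As one illustration of that documentation practice, here is a minimal "model card" sketch in Python; the fields and example values are hypothetical, not Meta's actual template.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Lightweight documentation record for an AI model."""
    name: str
    purpose: str                                  # what decisions the model informs
    training_data: str                            # what data is used and its provenance
    known_limitations: list[str] = field(default_factory=list)
    fairness_checks: list[str] = field(default_factory=list)
    ethics_review_signoff: str = ""               # who approved it, and when

card = ModelCard(
    name="content-ranker-v3",
    purpose="Order items in a feed by predicted relevance",
    training_data="90 days of engagement logs, audited for demographic balance",
    known_limitations=["Weaker performance on low-resource languages"],
    fairness_checks=["Per-group exposure audit", "Quarterly drift review"],
    ethics_review_signoff="AI ethics committee, 2024-06",
)
print(card)
```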
HANNAH LABELLE
Okay, so your teams are having AI ethics discussions throughout projects. Who all is involved in these conversations?
LEA LI
At Meta, we have engineers, data scientists and partner team members working on AI projects. They all should be part of the discussion to ensure they understand the ethical implications of their work. Senior leaders and decision-makers are involved to ensure ethical considerations are integrated into the org's strategy and decision-making process. Lastly, legal experts: for every regulation, we have legal experts who translate and interpret those policies and regulations into engineering requirements. By involving them in the discussions, we can ensure compliance with relevant laws and regulations, such as data privacy laws, anti-discrimination laws and intellectual property laws.
HANNAH LABELLE
Where do project professionals typically fit into these AI ethics discussions? Are you facilitating communication or checking that certain steps are followed before moving on to the next project phase?
LEA LI
At Meta, in privacy infra (infrastructure), technical program managers are the key partners in the org who stitch everything together. They align on the roadmap, prioritization and dependencies, and they make sure every stakeholder is aware of AI ethics practices and that those practices are integrated into a project's design and implementation. We conduct several events to make sure these ethical discussions take place regularly: project meetings, workflow creation, design reviews, retrospectives, training sessions and annual ethics reviews.
We make sure the processes are followed by our teams. There are steps to follow through a template: we review at each checkpoint, and at the end we verify that the feature complies with legal requirements. That process runs throughout the life cycle of a product or feature. That's how program managers ensure these checkpoints happen and are signed off before work moves on to the next stage and to the final stage of release.
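A tiny sketch of that checkpoint gating in Python; the checkpoint names are hypothetical stand-ins for the template Li describes.

```python
# Hypothetical checkpoints a feature must clear before release
CHECKPOINTS = ["design review", "privacy review", "ethics review", "legal sign-off"]

def can_release(signed_off: set[str]) -> bool:
    """A feature moves to release only when every checkpoint is signed off."""
    missing = [c for c in CHECKPOINTS if c not in signed_off]
    if missing:
        print("Blocked. Outstanding checkpoints:", ", ".join(missing))
    return not missing

print(can_release({"design review", "privacy review"}))  # False: two checkpoints remain
```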
What AI ethics training should include—and how often it should be updated
HANNAH LABELLE
Let’s talk about AI ethics training. What should formal AI ethics training include?
LEA LI
Training should start with a basic understanding of AI technologies and their potential impacts on society. Second, introduce the core ethical principles that should guide AI development and deployment, such as fairness, accountability, transparency and privacy. Third, provide case studies and real-world examples. These help people see how ethical principles apply in the real world and encourage team members to think critically about similar challenges they might face. Fourth, cover regulatory and legal considerations: the current legal landscape related to AI, including any relevant laws and regulations that must be followed. One current example is the DMA, the Digital Markets Act, which aims to ensure fair competition among social platforms and other big tech companies. Then lastly, hands-on workshops provide interactive sessions where participants can work through ethical problems and practice implementing solutions. That could involve role-playing exercises or the use of AI ethics simulation tools. We've all observed how fast generative AI is evolving, and regulations and AI ethics training need to be updated to reflect that.
HANNAH LABELLE
What would you say are some of your top lessons learned when it comes to AI ethics and these types of discussions that you’re having with teams? And how do you take that forward into future projects that you’re working on?
LEA LI
Collecting data that is representative and unbiased is super expensive. In program management, we need to realistically account for that effort in any timeline. As we always hear people say, "Rubbish in, rubbish out." So we need to ensure the data going into model training is of great quality, and we need to establish practices to measure the quality of the data and the quality of the AI models.
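A minimal sketch of the kind of data quality measurement Li mentions, assuming pandas; the signals and the column names are illustrative, and real pipelines would check far more.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, group_col: str) -> dict:
    """Basic quality signals: missingness, duplication and group representation."""
    group_shares = df[group_col].value_counts(normalize=True)
    return {
        "rows": len(df),
        "worst_missing_rate": float(df.isna().mean().max()),   # worst single column
        "duplicate_rate": float(df.duplicated().mean()),
        "smallest_group_share": float(group_shares.min()),     # flags underrepresentation
    }

# Toy example with a hypothetical 'segment' grouping column
df = pd.DataFrame({"segment": ["A", "A", "B"], "amount": [10.0, None, 25.0]})
print(data_quality_report(df, group_col="segment"))
```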
The second lesson is about continuous monitoring and evaluation. We need alerts and monitoring built in to ensure there is no regression, especially on the AI ethics side: fairness, transparency and effectiveness. Evaluation is always needed. And lastly, team members: we need to keep communication open and include diverse, representative team members throughout the life cycle of AI models. That should include both technical and nontechnical teams, so we can meet business needs while also ensuring ethical considerations.
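And a sketch of the fairness-regression alerting Li describes: compare a live per-group metric against a baseline and alert on drift. The metric, group names and tolerance are assumptions for illustration.

```python
def fairness_regression_alerts(baseline: dict[str, float],
                               current: dict[str, float],
                               tolerance: float = 0.05) -> list[str]:
    """Return an alert for any group whose metric drifted beyond tolerance."""
    alerts = []
    for group, base_value in baseline.items():
        drift = abs(current.get(group, 0.0) - base_value)
        if drift > tolerance:
            alerts.append(f"{group}: drift {drift:.3f} exceeds tolerance {tolerance}")
    return alerts

# Hypothetical per-segment approval rates from a weekly evaluation run
baseline = {"segment_A": 0.91, "segment_B": 0.89}
current = {"segment_A": 0.90, "segment_B": 0.81}  # segment_B has regressed
for alert in fairness_regression_alerts(baseline, current):
    print(alert)  # in production, route this to the team's alerting system
```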
STEVE HENDERSHOT
Thanks for listening to Projectified. Like what you heard? Subscribe to the show on your favorite podcast platform and leave us a rating or review. Your feedback matters a lot—it helps us get the support we need to continue making this show. And be sure to visit us online at PMI.org/podcast, where you’ll find the full transcripts for episodes as well as links to related content like useful tools and templates, the latest research reports, and more. Catch you next time!