Mainstream Artificial Intelligence
AI is now table stakes for many companies. What’s still desperately needed are hard conversations around the ethical implications of AI.
AI has surged well beyond the realm of automating routine tasks. It’s now making decisions for us based on our behaviors and using adaptive algorithms to help us navigate unfamiliar environments. The implications are enormous, fueling innovation in everything from fully autonomous transport to AI-powered knowledge and creative work.
“We are seeing every industry adopt AI, with the biggest-value projects occurring in large enterprise financial and insurance companies,” said Steven Astorino, VP of development, data and AI at IBM, Toronto, Ontario, Canada.
However, the usage and benefits of AI are unevenly spread (see figure 2). And innovation brings risk: the seamless integration of algorithms into our daily lives means encoded opinions and biases go unnoticed, let alone questioned. One emerging area is emotion AI, which enables machines to read and respond to our emotional states. This could help organizations gain a much better understanding of their customers and employees. But here, too, there are risks and murky ethical areas.
“Every conversation about technologies should consider, ‘Okay, what are the ethical implications? What are the unintended consequences?’” said Rana el Kaliouby, author and CEO of emotion AI pioneer Affectiva, based in Boston, Massachusetts, United States.13
The effects aren’t always what they appear to be on the surface. “My biggest concern is not that robots are going to take over—it’s that we’re accidentally building in bias in unintended ways,” she said. To combat that, project leaders must double down on building diverse teams, so this powerful technology is harnessed by people with different perspectives.
UK global creative agency AnalogFolk has gone a step further. It saw that word choice shapes how people are perceived, and that women often choose wording that makes them sound passive. So the agency developed a tool, called BigUp.AI, that uses natural language processing and machine learning to analyze blocks of text and offer users more powerful wording.14
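BigUp.AI’s internals are not public, so the following is only an illustrative sketch of the general idea behind such a tool: scan text for tentative, self-undermining phrases and suggest more assertive alternatives. The phrase list and the `suggest_stronger` function are hypothetical, not part of any actual product.

```python
import re

# Hypothetical phrase map: tentative wording -> more assertive replacement.
# An empty string simply deletes the hedge (e.g., "I just wanted" -> "I wanted").
WEAK_PHRASES = {
    r"\bjust\b": "",
    r"\bsort of\b": "",
    r"\bi'm no expert, but\b": "",
    r"\bmaybe we could\b": "we should",
}

def suggest_stronger(text: str) -> str:
    """Return text with tentative phrases replaced by more assertive ones."""
    result = text
    for pattern, replacement in WEAK_PHRASES.items():
        result = re.sub(pattern, replacement, result, flags=re.IGNORECASE)
    # Collapse extra spaces left behind by deleted phrases.
    return re.sub(r"\s{2,}", " ", result).strip()
```

A real system would go far beyond string substitution, using trained language models to account for context and tone, but the input-output shape is the same: prose in, stronger prose out (capitalization cleanup is omitted here for brevity).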
13. “Reinvent, Reimagine, Rewrite, Reemerge—and Rise Up,” Voices on Project Management, PMI, 2020.