

Creating a fair AI future

Elizabeth Adams has spent more than 25 years in the tech world, but only in the last few years has her name become synonymous with an urgent cause: AI ethics. An advocate for gender equality and inclusion, Adams now works as a speaker and consultant to promote initiatives that reduce or eliminate bias throughout the AI product development lifecycle. Her Leadership of Responsible AI program is designed to help business leaders turn these concepts into action.

Her work comes at a crucial time. Gartner predicts that 85% of AI projects created from 2018 through this year will deliver erroneous outcomes due to bias in data, algorithms, or the teams that manage them. Adams, who is based in Minneapolis, recently took time to talk to GO! about these issues and offer guidance to enterprises on the complexities of tackling algorithmic bias.

Bias in AI is often presented as a quirky human-interest story in the media, like when an AI chatbot goes off the rails and profanely insults users. How serious is the problem, really?
AI bias is real. It happens because there's a lack of diversity in the training data around gender, race, or ethnicity. It's a huge problem that's happening in hiring, mortgage lending, housing, and insurance. We can even talk about AI algorithms that have prioritized business executives getting COVID-19 vaccinations over frontline hospital workers, or about algorithms deciding which communities, usually privileged ones, get more vaccinations than others. It's happening everywhere. To address it, we need all hands on deck to unpack it across various disciplines and systems.

There seems to be a lot going on under the surface. How did you get involved?
I spent three years immersed in communities in the city of Minneapolis, both as a concerned citizen and as an appointee for the city's Racial Equity Community Advisory Committee. I later helped establish a coalition to address public oversight of surveillance technology. Our coalition called for the city to ban facial recognition technology, which the city council ultimately did. It was a grassroots movement. And it took volunteers' time away from their families – just so they could advocate for what should be a basic human right: safe technology. The people who are traditionally harmed by this bias are the ones who are doing all the work, and without a budget. There's an opportunity to focus on this problem much further upstream, in government and business leadership, which is what we should be doing.


So what can we do to fix all of this?
The United Nations has established AI ethics guidelines. The National Institute of Standards and Technology (NIST) has a Face Recognition Vendor Test that every facial recognition company should participate in. Nonprofits like the Montreal AI Ethics Institute have ethics playbooks that can help organizations define, measure, and mitigate AI racial bias. Businesses like Microsoft are building responsible AI teams. So, all these groups are looking to unpack the AI lifecycle and figure out how best to mitigate and/or eliminate algorithmic harms.


Are playbooks and guidelines going to be enough?
They're a start. When I advise companies on bias in AI, one thing I ask – once I learn what business problem they're trying to solve – is whether AI is even needed for that problem. Just because AI is out there doesn't mean you need to adopt it. I work with companies to develop AI ethics principles first, then move to a playbook when they've developed a responsible AI framework.
That requires shared leadership. Technical and non-technical leaders should be engaging in the AI development lifecycle. Once I understand the business problem, then I start looking for quick wins. Then we start talking about what should be happening next. This has to be for the long haul; it's not a short game. Ultimately it all has to lead to something that's sustainable and beneficial to the community.

You wrote a children's book about AI. What motivated you to write it?
I love short stories. Writing them helps me solve complex technology problems. When COVID hit, we were all at home, and I decided I didn't want to press pause on my dreams to write a book. "Little A.I. and Peety" started off as an eBook for parents to engage with kids on topics in emerging technology. After it was published, I got a call from a Minneapolis day care center. They said that if I put the book out in hardcover, they would buy copies. Close to 3,000 students here now have access to the books, and they're in 40 stores around the world. The goal was to teach caregivers and children about safe technology. There's even a song you can sing along with on YouTube. It has been a really fun experience. As a technologist, I had no idea people would be into books.