Biased AI systems can be largely attributed to the lack of diversity among those who design and build them, the report said. Photograph: Jens Schlüter/EPA

'Disastrous' lack of diversity in AI industry perpetuates bias, study finds


Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey of more than 150 studies and reports published by the AI Now Institute.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

“The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems,” said Kate Crawford, an author of the report. “The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.”

More than 80% of AI professors are men, and only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women, the report said. The makeup of the AI field is reflective of “a larger problem across computer science, Stem fields, and even more broadly, society as a whole”, said Danaë Metaxa, a PhD candidate and researcher at Stanford focused on issues of internet and democracy. Women comprised only 24% of the field of computer and information sciences in 2015, according to the National Science Board. Only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%, and little data exists on trans workers or other gender minorities in the AI field.

“The urgency behind this issue is increasing as AI becomes increasingly integrated into society,” Metaxa said. “Essentially, the lack of diversity in AI is concentrating an increasingly large amount of power and capital in the hands of a select subset of people.”

Venture capital funding for AI startups reached record levels in 2018, rising 72% from 2017 to $9.33bn. The number of active AI startups in the US increased 113% from 2015 to 2018. As more money and resources are invested into AI, companies have the opportunity to address the crisis as it unfolds, said Tess Posner, the chief executive officer of AI4ALL, a not-for-profit that works to increase diversity in the AI field. The lack of diversity must be addressed before AI reaches a “tipping point”, she said.

“Every day that goes by it gets more difficult to solve the problem,” she said. “Right now we are in an exciting moment where we can make a difference before we see how much more complicated it can get later.”

The report released on Tuesday cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem, or the makeup of who is hired, alone. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually. The AI institute suggested additional measures, including publishing compensation levels for workers publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of underrepresented groups at all levels.

Google disbanded an artificial intelligence ethics council meant to oversee such issues just one week after announcing it in March. The Advanced Technology External Advisory Council (ATEAC) attracted backlash inside and outside the company after it appointed the anti-LGBT advocate Kay Coles James.

Posner noted that additional efforts to increase transparency around how algorithms are built and how they work may be necessary to fix the diversity problems in AI. This month, the US senators Cory Booker and Ron Wyden introduced the Algorithmic Accountability Act, a bill that would require algorithms used by companies that make more than $50m per year or hold information on at least 1 million users to be evaluated for biases.

“The core of the problem is whether market forces are going to be sufficient for this to be fixed,” Posner said. “It’s going to take effort at all stages of AI and take change at cultural and procedural levels to solve this.”
