As algorithms play a growing role in criminal justice, education and more, tech advisory boards and academic programs mirror real-world inequality
When Stanford announced a new artificial intelligence institute, the university said the “designers of AI must be broadly representative of humanity” and unveiled 120 faculty and tech leaders partnering on the initiative.
Some were quick to notice that not a single member of this “representative” group appeared to be black. The backlash was swift, sparking discussion on the severe lack of diversity across the AI field. But the problems surrounding representation extend far beyond exclusion and prejudice in academia.
Major tech corporations have launched AI “ethics” boards that not only lack diversity, but sometimes include powerful people with interests that don’t align with the ethics mission. The result is what some see as a systemic failure to take AI ethics concerns seriously, despite widespread evidence that algorithms, facial recognition, machine learning and other automated systems replicate and amplify biases and discriminatory practices.
This week, Google also announced an “external advisory council” for AI ethics, including Dyan Gibbens, the CEO of a drone company, and Kay Coles James, the president of a rightwing thinktank who has a history of anti-immigrant and transphobic advocacy.
For people directly harmed by the fast-moving and largely unregulated deployment of AI in the criminal justice system, education, the financial sector, government surveillance, transportation and other realms of society, the consequences can be dire.
“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”