How Grassroots Groups Should Respond to AI
“From even a cursory glance, we can conclude that AI is pervasive, seemingly omnipresent, and damaging to some industries.”
By Jennifer R. Farmer
There is no question that if you run an organization that operates with a racial equity lens, you will need to be mindful of the risks and rewards of AI. You will need to understand the bias inherent in AI and how to mitigate machine learning bias. You must also be informed about algorithmic discrimination, the ways in which AI supports pervasive surveillance of demonstrators, facial recognition in policing, misidentification of persons, and data privacy. Racial justice causes can't simply adopt AI without considering how it's managed.
It is not that scientists and policymakers have been unaware of these challenges; algorithmic advocates have spoken up, and some have been punished for raising concerns about AI. As Tawana Petty, director of policy and advocacy at the Algorithmic Justice League, has said, “Dr. Timnit Gebru was terminated from Google when she warned about AI. In some cases, these companies have terminated their ethics departments and people who have warned of potential harms of AI.”
AI bill of rights urged
As AI increases in relevancy and adoption, the Algorithmic Justice League and others are highlighting the need for an AI bill of rights that determines how AI will be used and sets forth a risk management framework. This is critical because while the notion of AI companions that make our lives easier is appealing, we must remember that everything has a shadow. It is in our best interest to know what is in the shadow and how to stay safe.
As algorithmic advocates wrap their heads around AI, we should be informed about the ways in which many of the software programs we use every day incorporate AI. From Zoom to email marketing platforms such as Constant Contact, to media platforms such as Cision and Meltwater, to search and video platforms such as Google and YouTube, all have embedded AI across their systems. Constant Contact even lets users access an AI assistant that suggests headlines more likely to appeal to their readers. ChatGPT will generate ideas for content and, in some cases, write the content itself. Some question whether there is a correlation between newsroom cuts and AI. This says nothing of how some companies utilize AI to source resumes and candidates, and what that means in terms of bias in AI. From even a cursory glance, we can conclude that AI is pervasive, seemingly omnipresent, and damaging to some industries.
What’s next?
As we enter a new year with new resources and tools, I hope we continue to research the policy solutions that can help center the needs of marginalized communities where AI is concerned. I also hope we follow the lead of groups like Dr. Timnit Gebru's Distributed AI Research Institute (DAIR), the Algorithmic Justice League and the Ida B. Wells Just Data Lab.
While there is nothing cut and dried about AI, it is a tool we cannot ignore. All leaders should be asking themselves how they will implement AI frameworks that protect themselves and the communities they lead, and how they will leverage AI to work more effectively.