Data Activist Renée Cummings Says There’s An Important Question To Ask When Building Responsible AI: ‘Where Did This Data Come From?’

One of AI’s thorniest issues is ensuring the new technology doesn’t end up reflecting the same biases as its human creators. Generative AI, trained on massive amounts of data, is compounding the problem rather than mitigating it, as models deliver answers that align with pervasive prejudices about different races and genders.

Data activist Renée Cummings, at Fortune’s Brainstorm Design event in Macau on Thursday, argued that the core of the issue is data. “We know that AI is a revolutionary technology, we know that AI is a radical technology, but we have to understand that there is no AI without data,” she said. 

Cummings, a professor of practice in data science at the University of Virginia, examines ethical risks in artificial intelligence and studies how to build responsible, sustainable AI that can benefit everyone in society, regardless of background. 

For example, a recent United Nations study pointed to predictive policing as an example of how racial biases are reproduced through technology. The practice involves making judgments about who will commit future crimes and where those incidents may take place. The study warns that predictive policing could exacerbate the over-policing of communities based on race and ethnicity. 

“We know that data creates very unique challenges when it comes to bias, when it comes to discrimination, when it comes to systemic challenges,” Cummings said. 

“It’s so important when we’re designing to ask ourselves, where did this data come from?” she asked. “Sometimes we’re using data that is being duplicated in very weird ways. Data that’s actually missing critical pieces of the stories that need to be told.”

That means AI designers need to have a heightened sense of “ethical vigilance,” she explained. “We have a responsibility to ensure that we bring a framework to…audit the ways in which we are building models, the ways in which we are building algorithms,” she said. 

AI also brings its share of environmental concerns. The metals used in AI hardware, for instance, can lead to both pollution and soil erosion. “We speak about building a sustainable and resilient future using this technology, but we’ve got to think about the ways in which the environment is also being impacted,” she said.

Due diligence can help AI designers work around these issues. They could design new models while ensuring that concepts such as equity, access, and fairness are respected, yet in a manner that doesn’t undermine the accuracy or validity of the data, she said.

“How involved are communities in the process? It’s very critical the ways in which we engage communities to design, because we’re realizing that many of our data sets…are creating very critical risks,” she explained. 

AI does have the potential to do “extraordinary things,” Cummings noted. AI tools have led to increased efficiency, reduced costs, and greater convenience. But that only heightens the need to make sure the new technology works in a responsible way.

If AI designers don’t take data equity into account, then “what we are going to create are systems that not only undermine our own progress, but systems that could create situations that undermine the ways in which we move into the future together,” she warned. 

