Building Responsible AI: How To Combat Bias And Promote Equity

AI has the power to be hugely transformative, both in business and in the way we live our lives. But as we know, with great power comes great responsibility!

When it comes to AI, it’s the responsibility of those who build it and those who use it to ensure it’s done in a way that minimizes the risk of causing harm. And one of the most pressing challenges here is reducing the damage caused by bias in AI.

Here, I’ll outline what we mean when we talk about AI bias and discuss examples where it has caused real problems. Then I’ll look at some of the steps that businesses and individuals can take to ensure we’re using AI in a responsible and trustworthy way.

What Is AI Bias And Why Is It A Problem?
AI bias can stem from bias present in the data or from bias present in the algorithms that process that data.

Bias often occurs in data when it is collected in a non-representative way – for example, by failing to ensure an adequate balance between genders, age groups or ethnic groups in the sample. It can be introduced to algorithms when they are coded in a way that inadvertently favors certain outcomes or overlooks critical factors.

The danger with AI is that it is designed to work at scale, making huge numbers of predictions based on vast amounts of data. Because of this, the effect of just a small amount of bias present in either the data or the algorithms can quickly be magnified exponentially.

For example, suppose an image search algorithm is more likely to show pictures of a white male when asked to show people in high-paying professions (as such algorithms have been shown to do – see the next section for examples). If an image generator is then trained on the results of those searches, it, too, will be more likely to produce an image of a white male when asked for a picture of a CEO, doctor or lawyer.

When AI is used to make decisions or predictions that affect humans, the results of this can be severe and far-reaching.

For example, bias present in HR systems used to automate processes in hiring and recruitment can perpetuate existing unfair or discriminatory behaviors.

Bias present in systems used in financial services to automate lending decisions or risk assessment can unfairly affect people’s ability to access money.

Biases in systems used in healthcare to assist with diagnosing illnesses or creating personalized treatments can lead to misdiagnosis and worsen healthcare outcomes.

When this happens, trust in AI technologies is damaged, making it harder for companies and organizations to put this potentially world-changing technology to work in ways that could do a lot of good.

Examples Of AI Bias
There have been many occasions where bias in AI has created real-world problems. One was a recruitment tool developed by Amazon to help rate candidates for software engineering roles, which was found to discriminate against women. Because the system had been trained largely on applications from men, it learned to downgrade the ratings of female applicants. This led to the system being scrapped.

And online education provider iTutorGroup agreed to pay hundreds of thousands of dollars to settle a discrimination lawsuit after its hiring algorithms were found to discriminate against older applicants, automatically rejecting applications from women aged over 55 and men aged over 60.

Facial recognition software that uses AI algorithms to identify people from video and photographs has also been found to be more likely to misidentify people from ethnic minorities, leading to restrictions on its use in law enforcement in many jurisdictions, including across the European Union.

Additionally, COMPAS, a system developed by a private company and used by US courts to predict the likelihood of criminals reoffending, was also found to be racially biased. According to an investigation by ProPublica, it overestimated the likelihood of black defendants reoffending.

Google’s algorithms have been accused of bias, too. Searching for terms like "CEO" is disproportionately likely to return an image of a white male, and researchers at Carnegie Mellon University found that Google's system for displaying job ads was showing vacancies for high-paying jobs to men more frequently than to women.

And in healthcare, an algorithm used to predict the future healthcare needs of patients was found to underestimate the needs of black people compared to white people because spending on black healthcare had historically been lower, reflecting ongoing systemic inequality.

How Do We Fix This?
There are important steps that everyone involved in AI – whether building it or using it – should take to ensure they aren’t doing so in an irresponsible way.

Firstly, it’s important to ensure that all the proper checks and guardrails are in place when collecting data. It should be done in a representative way, balanced by age, gender, race and any other critical factor that could lead to bias.
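One simple sanity check along these lines is to compare the demographic make-up of a collected sample against the population it is meant to represent. The sketch below is purely illustrative – the group labels, reference shares and tolerance threshold are all assumptions for the example, not a standard:

```python
from collections import Counter

def group_shares(samples):
    """Return each group's share of the sample (0.0 to 1.0)."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def flag_underrepresented(samples, reference, tolerance=0.05):
    """Flag groups whose sample share falls more than `tolerance`
    below their share in the reference population."""
    shares = group_shares(samples)
    return [group for group, ref_share in reference.items()
            if shares.get(group, 0.0) < ref_share - tolerance]

# Hypothetical training sample, heavily skewed toward one gender.
sample = ["male"] * 80 + ["female"] * 20

# Assumed reference population: roughly a 50/50 split.
reference = {"male": 0.5, "female": 0.5}

print(flag_underrepresented(sample, reference))  # ['female']
```

In practice, the reference proportions would come from census or domain data, and checks like this would run on every relevant attribute – age, ethnicity and so on – before the data is used for training.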

Human oversight is critical, too, in order to pick up on erroneous decisions before action is taken based on them. Many of the examples highlighted above were only spotted later by third-party investigators, increasing the harm that was caused as well as the financial impact and reputational damage to the organizations involved.

Algorithms and models should be regularly audited and tested. Tools like IBM's AI Fairness 360 and Google’s What-If Tool can be used to examine and measure the behavior of machine learning algorithms.
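To give a flavor of what such audits measure, here is a minimal sketch of one common fairness metric, the disparate impact ratio – the kind of statistic these toolkits report. The hiring-model outputs and group labels below are made up for illustration:

```python
def selection_rate(predictions, groups, group):
    """Fraction of members of `group` that received a positive outcome."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(predictions, groups, privileged, unprivileged):
    """Ratio of the unprivileged group's selection rate to the
    privileged group's. Values below roughly 0.8 are commonly
    treated as a red flag (the 'four-fifths rule' used in US
    employment guidance)."""
    return (selection_rate(predictions, groups, unprivileged) /
            selection_rate(predictions, groups, privileged))

# Hypothetical hiring-model outputs: 1 = recommended, 0 = rejected.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(preds, groups, privileged="a", unprivileged="b")
print(round(ratio, 2))  # 0.25: group b is selected far less often
```

A single number like this is only a starting point – production audits look at many metrics (equalized odds, calibration and so on) across many attributes, and repeat them as models and data drift over time.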

And finally, it’s crucial to ensure that data and engineering teams are themselves diverse, as this makes a variety of perspectives and experiences available during design, development and testing.

With AI impacting our daily lives in more and more ways, everyone involved has a part to play in creating fairer and more equitable systems. Failing to do so sets us up for costly and embarrassing mistakes further down the line, damages trust in the potential for AI to do good, and has severe ramifications for marginalized and vulnerable fellow humans.

By Bernard Marr, Contributor
