Avoiding Bias in Artificial Intelligence: Paving the Path to Fairness

Eko Adetolani
4 min read · Jun 19, 2023


Artificial intelligence (AI) is all the rage at the moment. It has rapidly transformed numerous industries, promising efficiency, convenience, and innovation. However, beneath its shiny facade is something I am quite worried about: bias. In this article, we will explore the underlying mechanisms of bias in AI, its implications, and actionable steps to mitigate it in your AI projects.

The Nature of Bias in AI:

To grasp how bias seeps into AI, we must first understand how AI works. To put it loosely, AI algorithms analyze vast amounts of data, discern patterns, and determine the “best” answers based on the correlations they discover. Essentially, AI models rely heavily on the data they are trained on and the outcomes associated with that data.

Bias can enter AI through two main avenues: the training data and the training process itself. Both of these factors are influenced by the individuals or teams responsible for developing AI systems. Often, bias creeps in unconsciously, perpetuating societal prejudices and inequalities.

An example of bias in AI that I find interesting is one we explored in the AI for Business program at Wharton. The case study focused on a recruitment algorithm created by Amazon. The algorithm, designed to streamline the hiring process, was trained on a decade’s worth of applicant data, much of which came from male candidates. Over time, the system developed a clear preference for male applicants, essentially teaching itself that they were more desirable. Even more concerning, the algorithm began penalizing resumes containing the term “women.” Although attempts were made to rectify the bias, it became apparent that the system was unreliable and could not be trusted for fair decision-making.

A more personal example comes from building Vybe, an online dating platform for Africans. We tried to combat fake users and impersonation by using AI to compare live user photos with their profile pictures. Our first approach was to look for an off-the-shelf solution; however, we ran into a significant issue: the AI solutions we explored yielded roughly a 40% higher rate of false positives for photos of people of color, who made up 95% of our user base. In other words, they frequently flagged two people of color who looked nothing alike as the same person.
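If you want to catch this kind of gap in your own project, the check itself is simple: measure the model’s error rate separately for each demographic group instead of looking only at the overall number. The sketch below is a simplified, hypothetical version of that kind of evaluation; the group labels, data, and function names are illustrative, not the actual Vybe code.

```python
# Hypothetical evaluation sketch: how often does a face-matching model say two
# *different* people are the same person (a false positive), broken down by group?
from collections import defaultdict

def false_positive_rate_by_group(labelled_pairs):
    """labelled_pairs: iterable of (group, truly_same_person, model_said_match) tuples."""
    false_positives = defaultdict(int)   # different people, but model said "match"
    negatives = defaultdict(int)         # all pairs of genuinely different people
    for group, truly_same_person, model_said_match in labelled_pairs:
        if not truly_same_person:
            negatives[group] += 1
            if model_said_match:
                false_positives[group] += 1
    return {g: false_positives[g] / n for g, n in negatives.items() if n}

# Toy data: (group, ground truth "same person?", model prediction)
pairs = [
    ("darker_skin", False, True),    # a false positive
    ("darker_skin", False, False),
    ("lighter_skin", False, False),
    ("lighter_skin", False, False),
]

print(false_positive_rate_by_group(pairs))
# {'darker_skin': 0.5, 'lighter_skin': 0.0} -> a gap like this is the red flag
```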

Upon investigation, we discovered that most facial recognition AI models, including those used by prominent dating companies like Tinder and Bumble, were predominantly trained on photos of Caucasians. Consequently, these models were more prone to errors when processing images of people of color, perpetuating bias.

Why should you care about bias in AI?

The implications of bias in AI are far-reaching and pose significant challenges, particularly for marginalized groups. When AI systems exhibit bias, they amplify existing inequalities, further marginalizing those who are already underrepresented or disadvantaged. Whether it’s biased facial recognition technologies that disproportionately misidentify individuals with darker skin tones or sexist algorithms that favor male candidates in recruitment processes, these biases can have real-life consequences. They can lead to discriminatory practices, reinforce stereotypes, and hinder opportunities for certain groups. It is essential to address and mitigate bias in AI to ensure fairness, equal representation, and a more inclusive future where technology serves as a tool for empowerment rather than a source of discrimination.

Mitigating Bias in AI:

Fortunately, there are actionable steps we can take to remove unconscious bias from AI systems and ensure a fair and equitable future. Consider the following approaches:

  1. Use diverse and representative training data: Reducing bias in AI begins with ensuring that the data used to train the models is diverse and representative of the real-world population it serves. This means collecting data from a variety of sources and making sure it accurately reflects the diversity of that population.
  2. Implement fairness metrics: Establishing fairness metrics for AI systems is crucial to prevent discrimination against any particular group. These metrics can ensure that the model maintains equal accuracy across different racial groups, genders, or other relevant attributes. Examples of fairness metrics include statistical parity difference, equal opportunity difference, and disparate impact ratio (a sketch of the first appears after this list).
  3. Audit the model for bias: Regularly auditing AI models is essential to identify and correct any bias-related issues that may arise. External AI bias audit firms or AI model auditing software can be employed to analyze model predictions and compare them to ground truth data, unveiling patterns of bias that require rectification. Companies like Kosa.ai can help you to audit your AI algorithms and identify any bias.
  4. Apply algorithmic techniques: Algorithmic techniques, such as debiasing, can be employed to reduce bias in AI systems. These techniques involve removing features in the data that are correlated with protected attributes like race or gender to ensure fair treatment (see the second sketch after this list).
  5. Encourage diverse teams: Building a diverse team of developers and data scientists is instrumental in minimizing bias during AI development. By incorporating a wide range of perspectives and experiences, a diverse team can better identify and mitigate potential sources of bias, fostering a more inclusive technology landscape.
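To make point 2 concrete, here is a minimal sketch of statistical parity difference, assuming a simple binary “favourable outcome” (for example, shortlisted for interview). The data is made up purely for illustration; values close to 0 suggest the groups receive favourable outcomes at similar rates.

```python
# Minimal sketch of statistical parity difference: the gap between groups in the
# rate of favourable outcomes. The numbers below are toy data, not real results.

def selection_rate(outcomes):
    """Fraction of favourable (True) outcomes."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(protected_group, reference_group):
    """P(favourable | protected group) - P(favourable | reference group)."""
    return selection_rate(protected_group) - selection_rate(reference_group)

# Toy hiring decisions: True = shortlisted
women_shortlisted = [True, False, False, False]   # 25% selection rate
men_shortlisted = [True, True, False, False]      # 50% selection rate

spd = statistical_parity_difference(women_shortlisted, men_shortlisted)
print(f"Statistical parity difference: {spd:.2f}")  # -0.25: women shortlisted less often
```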
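And for point 4, here is a rough sketch of one basic debiasing step: dropping features that correlate strongly with a protected attribute before training, so the model cannot lean on obvious proxies. The column names and the 0.6 threshold are assumptions chosen for illustration; on its own, this step rarely removes all bias.

```python
# Illustrative sketch: drop numeric features that strongly correlate with a
# protected attribute (here "gender"), then drop the attribute itself.
import pandas as pd

def drop_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.6):
    corr = df.corr(numeric_only=True)[protected].abs()
    proxies = [c for c in corr.index if c != protected and corr[c] > threshold]
    return df.drop(columns=proxies + [protected]), proxies

# Toy applicant data: "years_in_mens_club" acts as a proxy for gender (1 = male)
applicants = pd.DataFrame({
    "gender":             [1, 1, 0, 0, 1, 0],
    "years_experience":   [5, 3, 6, 4, 7, 2],
    "years_in_mens_club": [4, 3, 0, 0, 2, 0],
})

cleaned, removed = drop_proxy_features(applicants, protected="gender")
print("Dropped proxy features:", removed)   # ['years_in_mens_club']
print(cleaned.columns.tolist())             # ['years_experience']
```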

As AI continues to disrupt various sectors, it is imperative to confront and address the issue of bias. By understanding how bias emerges in AI systems and implementing actionable strategies to mitigate it, we can create a technology landscape that is fair, inclusive, and respectful of the rights and dignity of all individuals. Together, let’s use AI to provide a level playing field for marginalized communities and work towards a future where technology serves as a catalyst for equality and progress.


Written by Eko Adetolani

Sometimes a writer but always a Product person building in Fintech, Online dating, Automation & A.I.
