Why bias mitigation should top the priorities of an AI engineer
Aug 23, 2021

Not quite a doomsday scenario, but a dire situation all the same, is it not? Artificial intelligence (AI) was hailed as the solution to data and automation woes, the savior for those seeking accurate, error-free output. The reality, it turns out, is not quite like that.

Data bias is not new.

As far back as 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. The program the school used to select applicants for interviews was determined to be biased against candidates who were female or had non-European names. The interesting part was that the program had been built to match human admission decisions, and did so with 90 to 95 percent accuracy, meaning it had faithfully learned the selectors' existing biases. In spite of this, the school still had a higher proportion of non-European students than most other medical schools in London.

Human biases do cast their shadow.

Intentions do not always match results, though, and more so than with other technologies, an AI engineer can easily, yet inadvertently, do harm instead of good by amplifying unfair biases. People are shaped by culture, experiences, and upbringing, and they internalize assumptions about the world they see. Human biases are not new: implicit association tests have revealed biases that people may not even be aware of holding, while field experiments have uncovered the effect of those biases on final outcomes.

What is AI bias?

Time has shown how these biases make their way into the operations and results of AI systems, causing real harm. The algorithms have become more complex, but the challenge remains the same: AI can identify and reduce the impact of human bias, but it can also worsen the problem by ingraining bias and deploying it at scale.

AI does not exist in a vacuum. The algorithms it comprises are devised and tweaked by artificial intelligence engineers who carry biases of their own, which is why the system thinks the way it has been taught. With AI systems being deployed at a growing scale across enterprises, looking out for these risks and working to mitigate them is imperative.

Types of AI bias

Here are the types of bias in AI:

  • Aggregation bias: Small choices made by artificial intelligence engineers across an AI project have a large collective impact on the integrity of results.

  • Algorithmic bias: Systematic errors introduced when algorithms are trained on biased data. Two common sources:

    • Cognitive bias: These are feelings toward a person or group based on their perceived group membership, and could seep into algorithms either via designers introducing them unknowingly or through biased training data sets.

    • Incomplete data: Data that is not representative of the broader population, and therefore carries its gaps and skews into the model.

  • Societal bias: Social assumptions and norms create blind spots and expectations in thinking, in turn significantly influencing algorithmic bias.

AI bias has been seen in several real-life situations.

History bears witness to many instances of AI bias:

  • In October 2019, an algorithm used on over 200 million people in US hospitals was found to favor white patients over black patients. Race itself was not an input; the algorithm used healthcare cost history as a proxy for medical need, but black patients incur lower healthcare costs than white patients with the same conditions, so the proxy carried the bias in.

  • US court systems used the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm to predict the likelihood that a defendant would become a recidivist. Bias was discovered when the model produced recidivism false positives for black offenders at nearly twice the rate (45 percent) it did for white offenders (23 percent); a sketch of how such group-wise rates are computed follows this list.

  • In 2015, Amazon realized that its hiring algorithm was biased against women. The cause: the model had been trained on resumes submitted over the previous decade, and since most of those came from men, it learned to favor male candidates.
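
Disparities like the COMPAS figures above are usually quantified by comparing error rates across groups. Below is a minimal sketch in plain NumPy of how a per-group false positive rate can be computed; the toy arrays are fabricated for illustration and have nothing to do with the real COMPAS data.

```python
import numpy as np

def false_positive_rate(y_true, y_pred, group, value):
    """P(predicted positive | actually negative, group == value)."""
    mask = (group == value) & (y_true == 0)   # actual negatives in this group
    return (y_pred[mask] == 1).mean()

# Fabricated toy data: 1 = recidivism predicted/observed, 0 = none.
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in ("a", "b"):
    print(g, false_positive_rate(y_true, y_pred, group, g))
```

A large gap between the printed rates, like the one found in COMPAS, is exactly the signal auditors look for.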

The impact of AI bias is a cause for concern.

Data-driven technologies such as AI were hailed as saviors on the path to an equitable, just world. Unfortunately, there is a real worry that the bias driving real-world outcomes will carry over into the digital domain too.

The problem is very real. Bias is injustice against a person or a group, and a large part of human bias can easily be transferred to machines, which are only as good or as bad as the people who develop them. The stakes are high for business too, as misguided results from AI could lead to damaged reputations and costly failures.

Bias is the responsibility of no one person; it is a collective failing. Those discriminated against are the prime sufferers, but everyone is hurt as well, because bias reduces people's ability to take part in the economy and society. Results get distorted and mistrust grows, which is why business and organizational leaders must drive progress on research and standards to ensure AI systems bring about actual improvements on human decision-making.

Minimizing bias is critical for AI to reach its potential and gain the trust of people.

Even if a data set is complete, bias can still exist due to the human prejudices embedded in it, and removing those should be the focus. That is easier said than done. Deleting protected classes, such as race and sex, by removing their labels might be considered a tactic to reduce bias; however, models can often reconstruct those attributes from correlated proxy features, and removing the labels can hurt the model's understanding of the data and the accuracy of its results.
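
To make the proxy problem concrete, here is a minimal sketch with fabricated data (every column name is hypothetical): deleting the protected attribute does not delete the information when another feature encodes it.

```python
import pandas as pd

# Fabricated toy applicant data; all column names are illustrative only.
df = pd.DataFrame({
    "sex":       ["F", "F", "M", "M", "F", "M"],
    "zip_code":  ["10001", "10001", "20002", "20002", "10001", "20002"],
    "years_exp": [5, 3, 5, 3, 4, 4],
    "hired":     [0, 0, 1, 1, 0, 1],
})

# Naive "fairness" fix: drop the protected attribute before training.
features = df.drop(columns=["sex", "hired"])

# The proxy problem: zip_code still correlates perfectly with sex here,
# so a model trained on `features` can reconstruct the deleted attribute.
print(pd.crosstab(df["sex"], df["zip_code"]))
```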

Top considerations to remove AI bias

Here are some key considerations for removing AI bias:

  • Contextual awareness: Understand the contexts in which AI can correct bias and those in which it can exacerbate it, by closely examining the algorithm and the data it relies on.

  • Test and mitigation processes: Establish practices for testing and mitigating AI bias. These could include deploying suitable tools to identify bias and inaccuracies, improving data collection through internal red teams and third-party auditors, and building a workplace with transparent practices and metrics.

  • Fact-based conversations: These help AI engineers and others uncover unnoticed biases and understand their causes; the right training, along with cultural and process redesigns, then improves results.

  • Human-machine collaboration: Evaluate which scenarios are best served by automated decision-making and which should keep a human in the loop.

  • Bias research: Getting rid of bias needs a multidisciplinary approach, with R&D from ethicists, social scientists, and domain experts, each of whom contributes a nuanced understanding of their application area.

  • Diversifying AI: A diverse AI team with due representation from minority communities helps to quickly pick up on instances of bias.

There are tools to reduce AI bias.

AI Fairness 360 is an open-source library released by IBM to detect and mitigate bias in machine learning models. With many contributors on GitHub, the library allows AI programmers to test for bias using a comprehensive set of metrics and to mitigate it with 12 packaged algorithms such as Disparate Impact Remover, Learning Fair Representations, and Reject Option Classification.
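
As a rough sketch of that workflow, using fabricated toy data rather than any official example, the library can quantify bias with a metric such as disparate impact and then mitigate it with a pre-processing algorithm like Reweighing:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Fabricated toy data; column names are illustrative only.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],   # protected attribute (1 = privileged)
    "score": [4, 7, 5, 8, 9, 6],
    "hired": [0, 1, 0, 1, 1, 1],   # binary label (1 = favorable)
})

dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0)

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Detect: disparate impact is the ratio of favorable-outcome rates between
# groups; values far below 1.0 indicate bias against the unprivileged group.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Mitigate: Reweighing adjusts instance weights to balance outcomes
# across groups before any model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
```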

IBM Watson OpenScale checks for bias and mitigates it in real time, while the AI is making its decisions.

The What-If Tool from Google allows testing of model performance in hypothetical situations, analysis of the importance of different data features, and visualization of model behavior across multiple models, subsets of input data, and different ML fairness metrics.
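
In a notebook, the What-If Tool is typically launched through the witwidget package. The sketch below wires up a fabricated two-example dataset and a stand-in predict function, so it illustrates only the setup, not a real model:

```python
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, score, label):
    """Build a tf.Example from a few hypothetical features."""
    return tf.train.Example(features=tf.train.Features(feature={
        "age":   tf.train.Feature(int64_list=tf.train.Int64List(value=[age])),
        "score": tf.train.Feature(float_list=tf.train.FloatList(value=[score])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

examples = [make_example(25, 0.4, 0), make_example(52, 0.9, 1)]

def predict_fn(examples_batch):
    # Stand-in model: returns [P(negative), P(positive)] for each example.
    return [[0.5, 0.5] for _ in examples_batch]

config_builder = (WitConfigBuilder(examples)
                  .set_custom_predict_fn(predict_fn)
                  .set_label_vocab(["denied", "approved"]))  # hypothetical classes
WitWidget(config_builder, height=600)  # renders the interactive tool inline
```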

AI certification is useful to learn more about the latest approaches to mitigate bias.

An AI engineer certification is a great way to pick up the latest skills and know-how in the field of AI. Certified candidates are recognized by potential employers as being aware of the newest technologies and approaches in their domain, including bias mitigation and other operational aspects of AI.

To sum it up…

An ideal future is, by definition, hard to reach. If AI is to live up to human ideals, the focus should be not on building what is possible but on building what is needed. That means understanding deep-rooted problems and engaging ethically with marginalized communities to get a more reliable view of both the data driving the algorithms and the problems the algorithms seek to solve. An equitable, inclusive, and socially beneficial approach, grounded in a deeper understanding of these many critical aspects, is the only way to mitigate AI bias while unlocking new possibilities for organizations.
