Why AI Bias is a Problem, and How to Overcome It

In 1988, a medical school in the UK was found guilty of racial discrimination. Surprisingly, its discriminatory practices stemmed from a computer program used to decide which applicants to invite for interviews. The program was found to be biased against ethnic minorities and women, landing the school in serious trouble with the UK Commission for Racial Equality.

Bias in computing systems is not new.

In 1970, Marvin Minsky, one of the pioneers of AI, predicted that "from three to eight years, we will have a machine with the general intelligence of an average human being." While the emulation of general human intelligence is still a work in progress, AI has advanced significantly and now helps solve a wide range of real-world challenges.

However, despite its many benefits, both researchers and practitioners still struggle with the problem of AI bias. AI bias refers to the underlying prejudice in AI data models and algorithms that may result in discrimination, stereotyping, and other harmful social consequences.

What Are the Causes of AI Bias?

Gartner expects that, until at least 2030, 85% of AI projects will deliver erroneous results due to inherent biases built into their training data, algorithms, or natural language processing (NLP) models.

Training data and machine learning models are essential for AI applications. But this data is supplied by AI engineers – who tend to be biased simply because they're human. Consequently, they may allow their own assumptions and perspectives to colour their AI datasets and models, which then results in biases being passed on to the application.

The data may also reflect historical or social inequities that introduce bias. Groups may be over- or underrepresented, resulting in flawed data sampling and skewed results. Even news articles can add bias to an AI system when NLP models are trained on them.

All in all, many small decisions can have a large collective impact on the integrity of results, creating aggregation bias. In that light, it wouldn't be an exaggeration to say that biases can end up hardcoded into AI algorithms.

Why is AI Bias Harmful?

Social and Public Harm

AI bias can be especially problematic if the applications are used in sensitive areas like immigration and law enforcement. In these domains, biases can – and do – perpetuate harmful stereotypes and prejudice, resulting in incorrect decisions and further entrenching social, racial, or gender-based inequities.

A few years ago, ProPublica analysed COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment model used by US court systems, and found that 45% of black defendants who did not go on to re-offend had been labelled high-risk, compared with just 23% of white defendants. Such skewed results prompted the investigative journalists to conclude that COMPAS "is no better than random, untrained people on the Internet."

Harm to Individuals

AI bias can also harm individuals. For one, when decision-makers rely on the results of an AI system, they may end up discriminating against innocent people and taking action that hurts their social, legal, or financial standing.

Such AI-rooted discrimination also undermines equal opportunity and treatment and "normalizes" oppression. Unfortunately, this is already quite common in areas like hiring, airport security, healthcare, and social security.

Harm to Businesses

Many companies use AI systems to perform tasks that would formerly have been performed by human staff. While AI can help improve efficiency and productivity, AI bias can cause serious problems: a biased system may generate incorrect conclusions, leading to erroneous decisions, costly failures, and serious reputational damage.

Even a prominent organization like Amazon could not avoid the fallout of AI bias. In 2015, its AI-based hiring algorithm was found to be biased against women candidates. Amazon stopped using the algorithm, but the episode underscored the need to address the problem of AI bias and improve the capabilities of AI tools and systems. To that end, here are some ways to guard against AI bias.

How to Detect and Remove AI Bias

It's certainly not straightforward to create 100% unbiased algorithms, since that would require both the training data and the AI engineers to be bias-free. Nonetheless, there are ways to detect and remove bias from AI applications:

Enforce Governance Policies

AI companies and researchers must implement systems and policies to regulate their modelling processes, define bias metrics, address blind spots that may lead to bias, and audit algorithms. It's also important to check that training data represents everyone equally before it is applied to an AI system.
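As a minimal illustration of that last point, the Python sketch below (using invented data and hypothetical column names such as gender and ethnicity) reports how each group is represented in a training set, so obvious sampling gaps can be caught before the data ever reaches a model.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_columns, min_share=0.10):
    """Print each group's share of the training data and flag groups
    that fall below a chosen minimum share."""
    for col in group_columns:
        shares = df[col].value_counts(normalize=True).sort_values()
        print(f"\nRepresentation by '{col}':")
        for group, share in shares.items():
            flag = "  <-- under-represented" if share < min_share else ""
            print(f"  {group}: {share:.1%}{flag}")

# Hypothetical training data, for illustration only.
train = pd.DataFrame({
    "gender":    ["F", "M", "M", "M", "M", "F", "M", "M"],
    "ethnicity": ["A", "B", "B", "B", "B", "B", "B", "A"],
    "label":     [1, 0, 1, 1, 0, 0, 1, 0],
})

representation_report(train, ["gender", "ethnicity"], min_share=0.30)
```

In practice, a check like this would be wired into the governance pipeline and run on every new training set, alongside the bias metrics and algorithm audits mentioned above.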

Evaluate Diverse Social Groups

To avoid unwanted results like those generated by COMPAS or by Amazon's hiring AI, it's crucial to evaluate AI models across different social groups based on gender, age, ethnicity, etc. Such evaluations help reduce false positives and improve the accuracy of results.
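One straightforward way to run such an evaluation, sketched below with invented labels, predictions, and group memberships, is to compare false positive rates across groups; a large gap between groups is exactly the kind of disparity reported in the COMPAS case.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / all actual negatives."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return ((y_pred == 1) & negatives).sum() / negatives.sum()

def fpr_by_group(y_true, y_pred, groups):
    """Compute the false positive rate separately for each social group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Hypothetical labels, model predictions, and group membership.
y_true = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

for group, fpr in fpr_by_group(y_true, y_pred, groups).items():
    print(f"group {group}: false positive rate = {fpr:.0%}")
```

The same pattern extends to other metrics (false negative rate, precision, favourable-outcome rate), and any metric that diverges sharply between groups deserves a closer look before the model is deployed.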

Leverage AI Frameworks and Toolkits

Standard AI frameworks and toolkits can help organizations minimize undue influences during application development and reduce bias during deployment. Frameworks like Deloitte's AI framework and the Aletheia framework from Rolls-Royce provide guidelines around ethical AI practices, anti-bias safeguards, conscious development, reproducibility, and more. Toolkits like IBM's AI Fairness 360 and Watson OpenScale can also help identify and eliminate bias in machine learning models and pipelines.
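As a rough sketch of what the toolkit route looks like, the snippet below uses IBM's open-source aif360 package to quantify bias in a small, invented dataset; the column names and group encodings are illustrative only, and exact API details may vary between aif360 versions.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Invented toy data: 'sex' (1 = privileged group, 0 = unprivileged group)
# and a binary outcome label (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
# Mean difference: gap in favorable-outcome rates (0.0 means parity).
print("Disparate impact:", metric.disparate_impact())
print("Mean difference:", metric.mean_difference())
```

A disparate impact well below 1.0, or a mean difference far from 0.0, signals that the unprivileged group receives favorable outcomes much less often; at that point the toolkit's mitigation algorithms, such as reweighing the training data, can be applied before retraining the model.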

Conclusion

Over the next few years, AI will become ubiquitous. But AI bias can set back that progress, so it's crucial to address the problem and eliminate it from AI systems.

Fortunately, it's not impossible to design AI systems that are unbiased, fair, and ethical. And this is one of the things that we do at Outside the PC. We help organizations unlock the power of their data with cutting-edge and futuristic products.

Our unique combination of conceptual and technical expertise enables us to create AI solutions that are both highly functional and comprehensive. Let us help you build a world characterized by data-driven opportunities, intelligent business decisions, and greater ROI. To learn more about our expertise, click here, or contact us.