Are Algorithms Biased?

Should we trust algorithms to perform decision-making tasks more objectively than people? As non-human entities they are, after all, free of the prejudices that drive human judgement.

But are they?

It’s easy to think that because these AI and statistical creations have not one human bone in their mathematical bodies, they cannot be anything but objective. But a growing body of evidence suggests otherwise: machine learning is rapidly empowering data and code to develop human-like qualities, including our many flaws.

Whenever we apply an algorithm to a human situation there will always be some level of bias, because a human supplied the data in the first place*. Any prejudices held by that human can be reflected in the system’s outputs, which raises concerns that AI could reinforce and even perpetuate existing biases and social inequalities if left unchecked.

Algorithms are already employed to process decisions that affect all areas of society, including law enforcement, finance, tax, insurance and recruitment. But can their results be trusted? Mounting evidence suggests we need to be cautious.

Joy Buolamwini’s TED talk on fighting bias in algorithms provides insight into data-driven bias through a real-world example: some facial-recognition software fails to detect the faces of people from certain ethnic groups. The issue appears to be a consequence of machine learning on datasets that do not reflect the multicultural society in which the software is used.
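
The underlying check is simple enough to sketch. Below is a minimal, hypothetical illustration (the function name, groups and results are invented for this post, not drawn from the talk) of how one might audit a face-detection model for this kind of disparity: run it over a labelled evaluation set and compare detection rates across demographic groups.

```python
from collections import defaultdict

def detection_rates_by_group(results):
    """Compute per-group detection rates from (group, detected) pairs,
    where detected is True if the model found the face in that image."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, detected in results:
        totals[group] += 1
        if detected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation results: (demographic group, was the face detected?)
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

for group, rate in sorted(detection_rates_by_group(results).items()):
    print(f"{group}: {rate:.0%} of faces detected")
```

A large gap between groups, as in the toy figures above (75% versus 25%), is exactly the kind of red flag that points back to unrepresentative training data.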

This type of scenario has far-reaching social implications. Ironically, it could leave many of us wondering whether AI, the new-found ‘intelligence’, might actually set us back in our quest for a more inclusive world by exacerbating prejudice against ethnic groups.

Growing concerns about bias have led to the formation of AI watchdogs, including the Algorithmic Justice League founded by Joy Buolamwini, which aims to raise awareness of algorithmic bias and provide a mechanism for reporting, evaluating and tackling these issues. Such organisations also provide education and training for building impartial, inclusive and socially just algorithmic systems.

At Datamine, we agree that organisations looking to implement AI in decision-making need a clear strategy for ensuring automated processes and systems are not ‘contaminated’ with human bias. CEOs and programmers must be aligned in their plans to build datasets that focus on inclusion and accurately represent society and end-users, and to establish processes that account for outliers and include manual, human-led quality control.
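
One simple, hedged sketch of what that strategy could look like in practice (the group labels, counts and tolerance below are invented for the example, not a Datamine process): before training, compare the demographic make-up of the dataset against the population it is meant to represent, and flag under-represented groups for targeted data collection and human review.

```python
def representation_gaps(dataset_counts, population_share, tolerance=0.05):
    """Flag groups whose share of the training data falls short of their
    share of the target population by more than tolerance."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, target in population_share.items():
        actual = dataset_counts.get(group, 0) / total
        if target - actual > tolerance:
            gaps[group] = (actual, target)
    return gaps

# Hypothetical counts of labelled examples per group, alongside each
# group's share of the population the system will actually serve.
dataset_counts = {"group_a": 7000, "group_b": 2200, "group_c": 800}
population_share = {"group_a": 0.55, "group_b": 0.25, "group_c": 0.20}

for group, (actual, target) in representation_gaps(dataset_counts, population_share).items():
    print(f"{group}: {actual:.0%} of data vs {target:.0%} of population -- collect more examples")
```

In the toy figures above, group_c supplies only 8% of the training data while making up 20% of the population, so it is flagged; outputs like this are where the manual, human-led quality control comes in.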

In this way AI can accomplish the goal it was designed for: simply, to perform human tasks consistently at a superior level. Creating transparent and neutral algorithms to help bring about social reform seems like a goal we should all be working towards.

*Paraphrased from Gary Marcus, cognitive psychologist, speaking at the 2016 World Science Festival.