AI demonstrates gender bias


By George Harding-Rolls / 11 Jan 2019

Each of us has hundreds of biases through which we view and interpret the world. From confirmation bias to courtesy bias, these biases affect our decision-making, interactions and choices in daily life. It stands to reason, therefore, that AI created by humans unintentionally takes on the biases of its creators. Research by leading institutions including IBM and King's College London has shown how biases in AI can lead to gender and racial discrimination, and an experiment last year demonstrated that AI was able to guess the sexual orientation of users from their Facebook profile pictures with 91% accuracy.

So what?

With social media increasingly mediated by artificial intelligence, it is concerning that we have baked many of our inherent biases into the algorithms and processes that control our experience of and access to the internet. In the future, policy, defence and business decisions will be made using data collected and interpreted by AI. The risk is that, unchecked, these biases could further entrench discrimination on the basis of gender, race or sexuality. IBM has come up with a rating system that helps human users understand the relative bias of the algorithms from which they receive data. But as more and more of our systems rely on AI, will it ever be possible to stem the flow of biased AI? Algorithms may not be flesh and bone, but are their ones and zeros, in this respect, simply too human?
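To make the idea of "rating" an algorithm's bias concrete, here is a minimal sketch of one widely used fairness metric, statistical parity difference: the gap in favourable-outcome rates between two demographic groups. This is an illustrative example only, not IBM's actual rating system; the group labels and data below are invented.

```python
def statistical_parity_difference(outcomes, groups, privileged, unprivileged):
    """Difference in favourable-outcome rates between two groups.

    A value near 0 means both groups receive favourable outcomes at
    roughly the same rate; values far from 0 suggest potential bias.
    """
    def rate(group):
        # Favourable-outcome rate (e.g. share of approvals) for one group
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return rate(unprivileged) - rate(privileged)


# Invented example: 1 = loan approved, 0 = loan denied
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

# Men approved at 75%, women at 25%, so the metric is -0.5,
# flagging a large disparity against the "f" group.
print(statistical_parity_difference(outcomes, groups, "m", "f"))
```

Metrics like this underpin open-source auditing toolkits (IBM publishes one called AI Fairness 360), which report a battery of such scores so users can judge how skewed a model's outputs are before relying on them.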


What might the implications of this be? What related signals of change have you seen?
