Just read an article at Ars Technica that highlights something we should all be paying more attention to: AI-driven hiring tools still have a long way to go in terms of fairness. Tests show these systems tend to favor white and male candidates, confirming that even with all our tech advances, biases persist in ways we can’t ignore. And this isn’t the only article discussing the issue; it’s merely the latest, which means this is a long-known problem that still hasn’t been rectified.
For all the hype around AI’s potential to revolutionize hiring, if it’s just reinforcing biases, what’s the point? How are these algorithms trained, and why are they showing such a strong bias toward white male candidates?
If you’re a recruiter or decision-maker, it might be time to rethink the role of AI in hiring. We all understand the basic tenet of data processing: garbage in, garbage out. Until there’s a proper process in the middle that strips out such biases, people shouldn’t be fully reliant on technology for these purposes, because it’ll only reinforce them.
These high-end filters make “decisions” based on their training data and will reflect whatever biases are already baked into it. I’m sure you’ve heard about facial recognition systems or hand sensors that don’t work properly, or have much higher error rates, when the user’s skin is darker.
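To make that mechanism concrete, here’s a minimal sketch, assuming nothing about any vendor’s actual system: a toy screening model trained on synthetic, deliberately biased “historical hiring” data. Every number and feature name here is made up purely to illustrate garbage in, garbage out.

```python
# Toy illustration of bias reproduction: train a screening model on
# synthetic hiring history where past recruiters favored one group,
# then show the model favors that group too. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidate features: a job-relevant "skill" score and a demographic group.
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)  # 0 or 1, hypothetical groups

# Historical labels: past decisions tracked skill, but group 1 got an
# unearned +0.8 boost -- the bias baked into the training data.
hired = (skill + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.5

# Train the "AI screening filter" on that biased history.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score two candidates with identical skill but different groups.
p0 = model.predict_proba([[0.5, 0]])[0, 1]
p1 = model.predict_proba([[0.5, 1]])[0, 1]
print(f"P(hire | group 0) = {p0:.2f}")
print(f"P(hire | group 1) = {p1:.2f}")
# The model reproduces the historical favoritism: equal skill,
# unequal odds.
```

Note that simply deleting the group column wouldn’t fix this in practice either: if other features correlate with group membership (names, zip codes, schools), the model can learn the same bias through those proxies. That’s why a “proper process in the middle” has to be more than dropping a column.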
I’m not saying human-led processes aren’t prone to bias; these tech “solutions” were, after all, built to minimize the impact of bias in human judgement. But when the outcome is no different, or maybe even worse, that’s no solution at all.