Another important session at RebelCon 2024 tackled a growing reality in tech: bias in AI systems isn’t just a theoretical problem — it’s affecting products and users today.
As AI adoption accelerates, it’s critical to build models that are not only powerful but also fair, transparent, and trustworthy.
The Speaker:
Divya Bansal, Director of Engineering at Workhuman
Key ideas from the session included:
🔹 Bias starts with data — but doesn’t end there:
Training datasets often reflect historical biases. But bias can also be introduced during model tuning, feature selection, and even in how outcomes are interpreted and acted upon.
🔹 Bias is multi-dimensional:
It’s not just about race or gender. Bias can emerge across geography, language, ability, socioeconomic status, and more. Truly addressing it requires broad thinking and multi-stakeholder input.
🔹 Mitigation is iterative:
There’s no silver bullet. Bias mitigation involves multiple steps: diversifying datasets, stress-testing models, introducing human-in-the-loop systems, and regularly auditing outcomes over time.
🔹 Business risk is real:
Bias isn’t just an ethical issue — it’s a legal, brand, and business risk. Companies that ignore it could face reputational damage, user distrust, or regulatory scrutiny.
The session made it clear: if we want AI to be a durable part of the products we build, bias management has to be treated as an engineering and leadership responsibility — not just an afterthought.