How can teams prevent structural inequalities and the unconscious biases they produce from affecting AI systems?
The AI Blindspot project aims to encourage discussion of, and solutions to, the challenges posed by such blindspots.
“Blindspots can occur at any point in the pipeline during the development of a model, from when the model is first conceptualized to when it is built and even after it is deployed. No human is immune to blind spots, and while we can roughly point to their location, they are normally hard to perceive. The same can be said of the way algorithmic technologies are developed — even with the best of intentions, the things we never anticipated can end up causing great harm.”