Predictive AI is too flawed — both technically and ethically — to prevent another El Paso or Dayton. BKC faculty associate Desmond Patton emphasizes that current AI tools tend to identify the language of African American and Latinx people as gang-involved or otherwise threatening, but consistently miss the posts of white mass murderers.
“Until we confront and adequately address bias in these systems, I don’t feel comfortable with them being used as a tool for prevention,” Patton says.