This March, Facebook announced a remarkable initiative that detects people who are most at risk of suicide and directs support to them from friends and professionals. As society entrusts our safety and well-being to AI systems like this one, how can we ensure that the outcomes are beneficial?
"Research on fairness checks that AIs treat people fairly. I want to be sure they’re actually saving lives."
Throughout these conversations, I often feel as though I'm asking entirely different questions. Only recently have I found the language for what's different. Consider Facebook's suicide prevention initiative: research on fairness checks whether AIs treat people fairly; I want to be sure they're actually saving lives. A system that benefits some people more than others is a problem, but we should also worry about AI systems that harm everyone equally.
(The ideas here were discussed with several people from the workshop, including Christo Wilson, Solon Barocas, Paul Resnick, and Alondra Nelson. All errors are my own. Since the event was held under the Chatham House Rule, I acknowledge people only as they are willing to be named.)