Fellow Sean McGregor's work on the AI Incident Database (AIID) is highlighted in the Bulletin of the Atomic Scientists. The AIID indexes the harms (and potential or near harms) that artificial intelligence systems bring about, capturing "emerging risks and especially significant issues in AI adoption." These include errors made by AI systems themselves as well as the use of such technologies for nefarious purposes. Humans should remain in the loop somewhere, McGregor says, rather than letting AI loose without oversight; better guardrails and more careful training datasets can also help.
