Meet the Fellows

Q&A with Sean McGregor

Incoming BKC Fellow Sean McGregor brings years of experience in artificial intelligence, with a focus on safety, ethics, and assessment. Learn more about his work below, and stay tuned for future features on other BKC community members here in the Buzz.

  1. Which community/communities do you hope to most impact with your work?
    Humanity! More specifically: frontier model auditing, frontier model evaluation, AI safety data (paging the librarians!), and insurers.
  2. What’s something about your work that you think would surprise people?
    Building more powerful AI systems is in direct conflict with the methods required to understand AI risk. Developers will always choose to use risk evaluations to improve the system rather than measure it — unless there is an outside force (customers, insurance, etc.) demanding truly independent information.
  3. Why is your work important right now?
    Risk from AI systems is scaling far faster than our capacity to understand and manage it.
  4. If an insurer asked you for the top 3 signals that predict lower AI risk, what would you pick?
    - Can you identify a responsible party bearing most or all of the risks?
    - Is the responsible party deeply knowledgeable about the task and the technology?
    - Does that knowledgeable, responsible party *not* want insurance? Then it is low risk. :)
  5. If you look five years ahead, what do you hope will have changed about how society measures and evaluates AI systems?
    There absolutely must be a robust, independent safety assessment ecosystem with technical and application expertise that generates reliable information about the risks of AI systems. With a mature ecosystem of insurers, auditors, rating agencies, and other forms of evaluation, society will be on a much better path.

