Artificial Intelligence is permeating areas of our everyday lives, ranging from established systems such as healthcare and the judicial system to our social media platforms. As both the private and public sectors grapple with the opportunities and drawbacks of AI, they are increasingly adding educational training programs for their organizations, and even adding "Chief Ethics Officers" and other similar positions to their teams. But what educational backgrounds, skills, and core competencies do leaders need to be successful, and what role can academia play in filling these gaps?
To crowdsource key learnings and best practices for preparing tomorrow’s leaders to innovate ethically, Amar Ashar, assistant research director at BKC, and Hannah Hilligoss, a senior project coordinator, led a session on “Ethical AI Career Pathways” at RightsCon in Tunis. Ashar and Hilligoss were joined by representatives from industry, government, civil society, and higher education to discuss their work and what they look for in hiring for ethical AI careers. Together, the leaders facilitated breakout groups which crowdsourced perspectives for these new roles with an emphasis on ethics and values.
The starting point of the session was work at the Berkman Klein Center, which explores ways we convene and educate leaders from the public and private sectors to approach the challenges presented by AI. At Harvard, for instance, the Cyberlaw Clinic educates Harvard Law students to enhance their preparation for high-tech practice and earn course credit by working on real-world litigation, client counseling, advocacy, and transactional/licensing projects and cases. New this year, the Center also launched the Techtopia program, a multidisciplinary research and teaching initiative that brings together Harvard students and faculty around the biggest issues in tech today. We also engage with other stakeholders outside of academia through various initiatives; our AG Tech Forum brings together attorneys general and their deputies to learn about the intersection of emerging technologies and the law, and through the Challenges Forum, we collaborate with industry representatives. We also engage with mid-career professionals through Assembly, a joint program with the MIT Media Lab.
“Our work bringing together different disciplines and sectors through the ethics and governance of AI initiative illustrates the need for better design of concrete interfaces to facilitate interaction and knowledge transfer between academia, companies, government, and public interest organizations,” Ashar said.
Below are three takeaways from the RightsCon session, gleaned from interdisciplinary and international conversations among the breakout groups.
Three Takeaways for Preparing Tomorrow’s Leaders to Innovate Ethically
1. We need to start with a common language before tackling what it means to have ethical AI
Ernani Cerasaro, a Policy Assistant to Supervisor Buttarelli, the European Data Protection Supervisor, led a breakout group around the question: What does the word “ethics” mean to you?
The group embraced the fact that “ethics” is difficult to define: the meaning of “ethics” ranged widely among group members, from fairness, to aligning one’s values with others’ values, to deciding whether something is right or wrong based on one’s own knowledge and experience. Further, there were disparities in the terminology participants used to describe digital technologies, which in turn shapes how they think about ethics.
The challenge here is a definitional one: it’s difficult to define and scope what we mean by “ethics,” so before conceptualizing the ethics of AI, the group consensus was that we need better common terminology and shared understanding. Tomorrow’s leaders can learn from this diverse set of definitions of ethics and work toward more precisely defining and characterizing the types of challenges they face.
2. There are ways to overcome barriers to entry and close skill gaps
Jessica Dheere, a 2018-2019 Berkman Klein fellow and the founder and executive director of SMEX, hosted a discussion about the barriers to entry around digital rights issues, and how some of those barriers might be overcome.
The group found that barriers to entry for working on digital rights issues may include a lack of experience or skills in the technology sector or the absence of legal education or training. But there are ways to overcome these barriers: plenty of listservs, newsletters, and online courses can help fill these gaps. Another skill worth honing before entering this field is the ability to translate complex information for different audiences.
3. Creative skills are as important as technical skills
Talents that are not traditionally associated with computer science are important for entering the AI field, or the tech policy space more specifically. Two groups — one led by Jessica Fjeld, the assistant director of BKC’s Cyberlaw Clinic, and the other led by Hibah Kamal-Grayson, a public policy manager at Google — tackled these respective issues, collectively surfacing that specific skills, including curiosity, adaptability, willingness to learn from others, empathy, mindfulness of different cultures and contexts, critical thinking, and creativity, are valuable for those contributing to the development of AI or furthering our understanding of it.
Fjeld’s group focused on the skills young lawyers need to enter the tech policy space, and also proposed developing a set of materials that presents basic knowledge around, for instance, AI, machine learning, and big data. This material might, in turn, inspire lawyers to adapt it for the particular audiences with whom they work.