Research with the Ethics and Governance of AI Effort

The Berkman Klein Center for Internet & Society seeks part-time research assistants for the fall 2017 semester to help advance our research on artificial intelligence, autonomous systems, machine learning, and other algorithmic technologies. Research assistants will join a team that includes lawyers from the Cyberlaw Clinic and Berkman Klein Center faculty (including Berkman Klein Center Executive Director Urs Gasser and Faculty Co-Director Chris Bavitz), staff, and fellows to develop interdisciplinary, cutting-edge research across a range of areas.

In particular, we are looking for research assistants to help with a wide range of activities that support the Center’s work on:

  • the impact of these types of technologies on social and criminal justice (including the use of risk assessment algorithms and related technical tools to facilitate judicial determinations concerning bail, parole, sentencing, and the like); and
  • global governance of artificial intelligence systems (including the ways in which transboundary applications of AI may challenge existing national and international governance institutions).

To learn more about the initiative and specific areas of focus for the Center’s work, please visit the “AI Use Cases” page on the Berkman Klein Center’s website.

Research Assistant Responsibilities:

Research assistants will work primarily within one of the focus areas described above, among other projects. This work may include:

  • research, data collection, and writing related to the above-referenced focus areas;
  • tracking and capturing major developments and discussions in news media, the blogosphere, and social media around the identified topics;
  • performing legal and/or policy analysis of technological developments in the field;
  • assisting with the writing and coordination of research reports;
  • contributing to other ongoing AI research efforts; and
  • drafting blog posts and editing contributions to the blog from affiliates and other contributors.

Required Experience and Skills:

RA candidates should be able to demonstrate:

  • interest in and enthusiasm for artificial intelligence or technology-related legal issues;
  • excellent research, writing, and data analysis skills;
  • ability to quickly draft and contextualize written materials within the suite of project outputs;
  • excellent critical reading abilities, with the ability to absorb material quickly;
  • familiarity with and knowledge of online research platforms (Westlaw, Lexis, and the like) as well as resources accessible through libraries; and
  • the initiative and energy to see projects to completion in a dynamic environment.

Time Commitment:

The position requires a commitment of 8-12 hours per week. We are looking for RAs to begin in early September. Researchers can generally work remotely but will attend regular meetings on site at the Berkman Klein Center or elsewhere at Harvard Law School.

Compensation:

The position will be compensated at standard Berkman Klein Center research assistant hourly rates for a maximum of 15 hours per week.

General Academic Year Research Assistant Information and Eligibility:

  • we are unable to hire RAs who live outside of the Commonwealth of Massachusetts;
  • we do not have the ability to provide authorization to work in the United States;
  • RAs do not have to be students; and
  • RAs do not have to be affiliated with Harvard University.

About BKC’s Ethics and Governance of AI initiative:

The development, application, and capabilities of AI-based systems are evolving rapidly, leaving largely unanswered a broad range of important short- and long-term questions related to the social impact, governance, and ethical implications of these technologies and practices. The speed of technological development and the uncertainty that accompanies its uses provoke complex questions related to fundamental values and concepts such as autonomy, agency, and accountability. Simultaneously, the knowledge gap between the small group of AI experts and the large population affected by these “black box” technologies is widening and creating misconceptions regarding AI. Taken together, these developments underscore the need (and opportunity) for the Berkman Klein Center and the MIT Media Lab to conduct evidence-based research to provide guidance to decision-makers in the private and public sectors, and to engage in impact-oriented pilot projects to bolster the use of AI for the public good, while also building an institutional knowledge base on the ethics and governance of AI, fostering human capacity, and strengthening interfaces with industry and policy-makers.

For the academic year, core use cases for our work include autonomous vehicles, criminal and social justice, and media and information quality. They are being examined through the lenses of cross-cutting themes, including global governance, diversity and inclusion, and transparency and explanation. For more information on our core use cases and cross-cutting themes, please visit the dedicated page.

Enabled by support from the Ethics and Governance of Artificial Intelligence Fund, the Berkman Klein Center and the Media Lab are collaborating on a range of activities that include a diverse array of voices and seek to discern and preserve the human element so often lost in highly technical AI conversations.

Applications and Questions:

To apply, please email a brief paragraph summarizing your interest and experience, noting any particular aspects of the Center’s work on these topics that appeal to you. Please attach a current CV or resume. Applications should be sent to: airesearchassistants@cyber.harvard.edu

Last updated: September 11, 2017