Law and Regulation of Emerging Robotics and Automation Technologies: Study Group

The Berkman Center for Internet & Society is now taking applications for a new spring 2016 Study Group exploring the Law and Regulation of Emerging Robotics and Automation Technologies, convened by Harvard Law School visiting professor Kenneth Anderson.  The Study Group will meet for four evening sessions on the Harvard Law School campus, on Wednesdays March 23, March 30, April 6, and April 13, from 7:30 to 9:00 p.m.

Participation in the Study Group is open to all members of the greater Boston community interested in topics related to robotics and automated technologies.  Participants in the seminar-size study group will be selected from among the applicants who submit an application via the web form below by Sunday, March 13, 2016.

About the Study Group: Robotic and automation technologies appear poised in the near-to-medium term to enter at least some important ordinary human social settings. Self-driving cars joining human-driven cars on the roadways are one example already in the daily news. These emerging technologies can be loosely characterized as robotic machines possessing (greater or lesser) artificial intelligence to enable automated-to-autonomous decision-making by the machine; "embodiment" in the human social world, particularly some capability of physical movement and/or mobility; and sensor technologies to provide input to the machine that allows it to situate itself in the social world.

The robotic machines addressed in this study group are ones that, in the near-to-medium term, are intended to be introduced into the world of ordinary human interactions and environments. These diverse social environments include streets and highways; airspace; office settings of many kinds; eldercare facilities such as nursing homes or ordinary homes for those "aging-in-place"; military or combat settings; industrial or agricultural settings of many kinds. Looking beyond today's industrial or assembly-line robots operating in fixed locations doing automated repetitious tasks, these emerging robots raise the possibility of machines working alongside or in coordinated teams with human beings.  That is, the machines discussed in this study group are "social robots" - existing and operating in the ordinary social world, or at least some specific part of it.  

The diffusion of robotic machines with AI capabilities enabling highly automated, even "autonomous," behaviors inevitably raises important normative questions of law, regulation, and ethics concerning such machines in their interactions with people and the human social world.  Some of these questions are already on the table today - self-driving cars, for example - such as liability for accidents, product liability with respect to design issues, insurance, etc.  Others will arise with respect to robotic machines that (as their capacities to simulate human interaction in verbal and other behaviors gradually increase) might invite human trust and reliance with respect to capabilities that the machine might, but might not, actually possess.  Still other normative issues might arise, even in the relatively near-to-medium term (meaning, not in some science fiction scenario), with respect to human affective and emotional responses to these machines.

The purpose of the study group is to address a few of the issues of law, regulation, and ethics that these emerging technologies are likely to pose - not science fiction hypotheticals, but technologies likely to emerge in the short-to-medium term, under reasonably foreseeable paths of existing technology.

The topics of the four sessions are as follows:  

  • 1) March 23: What is a robot or robotic machine, for purposes of law, regulation, and ethics?
  • 2) March 30: Design issues in relation to human cognitive response.  (a) How should "social robot" systems be designed and engineered (Human-Robot Interaction), and how should they be conceived in various social settings, for purposes of law, regulation, and ethics? What social roles are normatively appropriate for distinct social settings - for example, should robots be conceived and designed, and measured in their performance for regulatory purposes, as substitutes for humans, partners in a human-robot team, assistants to the physically disabled, or minders or monitors for the mentally incapacitated (such as dementia patients) or, in some circumstances, children? (b) How should the problems of "incomplete" automation be addressed? (Such problems include, for example, the "attention-deficit" problems of commercial airline pilots utilizing an autopilot system that flies the plane entirely on its own - until suddenly it doesn't, requiring that an attentionally disengaged human pilot re-take control, potentially within a very short time frame and without prior warning.)
  • 3) April 6: Design issues in relation to human affective response.  (a) Trust and reliance issues potentially created by robot design - is it possible that different types of machine design might result in a robot that under-elicits or over-elicits human trust and reliance with respect to some particular function?  How should law, regulation, and ethics address the tendency of human beings to "cognitive/affective dissonance" (you know it's just a machine, but you can't help but respond to it emotionally) or the "reflexive ascription of intentionality" with respect to certain features of robotic machines? (b) Are there legal, regulatory, ethical, or perhaps "therapeutic" issues raised by social robots that, through the increasing quality of their interactions with humans, elicit in some people a tendency to withdraw from ordinary social interactions with people and to favor instead the substitution of emotional and social life focused on a robot that simulates the reciprocal social relations between people? Is this a proper concern for ethics and psychiatry/psychology, law or regulation, or is it normatively a non-problem and just an individual choice?
  • 4) April 13: Accountability, liability, responsibility, tort actions, and insurance - the legal, regulatory, and ethical issues of human accountability for what machines do (or fail to do) when they possess the special features set out in the first session to define robots: significant and advancing AI capabilities, increasingly powerful sensors, and movement and mobility in the human world.  How is accountability to be maintained through law and regulation - but without tossing out the benefits of new technologies that will, like all technologies, result in some level of accidents and failures?

Study Group Convener:  Kenneth Anderson is a visiting professor of law at Harvard Law School during the spring 2016 term, and a professor of law at Washington College of Law, American University, Washington DC.  He comes to the robotics law issues originally from his work in national security law and law of war on automated/autonomous weapons, but his focus has broadened since then to law, regulation, and ethics of robotics generally. A selection of his publications can be found here: 

What are Berkman Study Groups?:  The Berkman Center is supporting an agile and responsive format for exploring the important questions facing Internet and society through in-depth discussion and development.  Falling somewhere between an hour-long panel discussion and a semester-long seminar, some Study Groups might choose to meet once a week for two hours over a few weeks, while others may run a more intensive two-day workshop.  The Study Group format is designed to encourage public participation by anyone in the greater Boston community, including students from Harvard and other Boston-area institutions, industry experts, and others. The goal is to foster diversity of participation across disciplines and experience levels to tackle interesting questions in novel ways and with fresh perspectives.

To Apply:  Please fill out the form below by Sunday, March 13, 2016.

Last updated: March 3, 2016