Berkman Klein at IGF 2017
This week (December 17-21) marks the 12th annual meeting of the Internet Governance Forum (IGF), a multistakeholder forum for policy dialogue on issues of Internet governance, held this year in Geneva, Switzerland. As in years past, the Berkman Klein Center is pleased to be an active participant in key discussions about some of the most pressing issues of our increasingly networked world, including the ethics and governance of artificial intelligence, harmful speech online, and youth in the digital economy.
A few sessions we are particularly excited about are highlighted below:
Social Responsibility and Ethics in Artificial Intelligence
Breakthroughs in AI will rapidly transform digital society and greatly improve labor productivity, but they will also raise a host of new and difficult issues concerning employment, ethics, the digital divide, privacy, law, and regulation. Consequently, there is a growing recognition that all stakeholders will need to engage in a new and difficult dialogue to ensure that AI is implemented in a way that balances legitimate competing objectives and leaves society better off.
While engineers may share technical ideas within transnational expert networks, broader public discussions about the social consequences and potential governance of artificial intelligence have tended to be concentrated within linguistic communities and civilizations. However, many of the issues that AI raises are truly global in character, and this will become increasingly evident as AI is incorporated into the functioning of the global Internet. There is therefore a pressing need to establish a distinctively global discourse that is duly informed by the differences between Eastern and Western cultural values, business environments, economic development levels, and political, legal and regulatory systems.
Artificial Intelligence and Inclusion
The policy debates about Artificial Intelligence (AI) have been dominated by organizations and actors in the Global North, and there is a growing need for more diverse perspectives on the policy issues and consequences of AI. The developing world will be directly affected by the deployment of AI technologies and services, yet it currently lacks informed perspectives from which to participate in the policy debates. This roundtable is a follow-up to the international event “Artificial Intelligence and Inclusion” held in Rio de Janeiro earlier this year. The discussion will focus on the development of Artificial Intelligence and its impact on inclusion in areas such as health and wellbeing, education, low-resource communities, public safety and security, employment and the workplace, and entertainment, media, and journalism, among others. The goal of this roundtable is to bring the debates of that international event to the IGF community, enlarging the conversation and deepening the understanding of AI and inclusion.
Selective Persecution and the Mob: Hate and Religion Online
As hate speech online spreads at an alarming rate, states, companies, civil society, and other stakeholders grapple with the question of how to mitigate the situation. States have relied on command-and-control regulation, including hate speech laws, as the primary solution. However, these laws are often used to censor and punish political dissent and other expression protected under the ICCPR and most countries’ constitutions. These laws also appear to do very little for the journalists being murdered, attacked, and threatened for their online speech, or for people subjected to onslaughts of threats, doxxing, abuse, and other forms of aggression online.
Artificial Intelligence in Asia: What’s Similar? What’s Different? Findings from our AI Workshops
Featuring Malavika Jayaram
Ideas about the future, and about what progress means, are heavily contested and context-specific. Digital Asia Hub set out to investigate whether the future of artificial intelligence - heralded as a game-changing technology - was constructed and implemented differently in Asia, and to explore whether the problems that AI was deployed in service of signaled different socioeconomic aspirations and fears.
We were also pleased to share recent research about youth practices online in the lightning talk Blurring the Lines Between Work and Play: Emerging Youth Practices and the Digital Economy, given by Sandra Cortesi, and to participate in a global roundtable on AI and governance hosted by the Digital Asia Hub. That roundtable featured evidence-based approaches to testing the social impact of AI-based governance, methods for holding AI governance accountable, and a conversation on the future of evidence-based policy and consumer protection online.
Learn more about some of the Berkman Klein Center’s related work on the Ethics and Governance of Artificial Intelligence Initiative page on our website. The Initiative, which is guided by the Berkman Klein Center and the MIT Media Lab, aims to foster global conversations among scholars, experts, advocates, and leaders from a range of industries. By developing a shared framework to address urgent questions surrounding AI, the Initiative aims to help public and private decision-makers understand and plan for the effective use of AI systems for the public good.