Digital Economy, AI, and Harmful Speech at IGF 2018
The Berkman Klein Community joins the Internet Governance Forum 2018
November 12-14 marks the 13th annual meeting of the Internet Governance Forum (IGF), a multistakeholder forum for policy dialogue on issues of Internet governance, held this year in Paris, France. As in years past, the Berkman Klein Center is pleased to be an active participant in key discussions about some of the most pressing issues of our increasingly networked world, including the ethics and governance of artificial intelligence, harmful speech online, and youth in the digital economy.
By discussing online youth practices in the context of the blurred lines between work and play, we will provide a critical examination of the changing nature of labor, the kinds of skills needed to participate in the digital economy, and how digital inequalities continue to evolve in complex ways. Driven by rapid technological transformation, the digital economy creates both challenges and opportunities for youth. Large social, commercial, and entertainment platforms offer networked spaces where youth are able to learn, create, play, and develop a range of skills that are key to our rapidly changing economic landscape. However, as many online platforms operate under profit-driven business models, the potential commercialization of youth’s data and generated content presents risks in terms of privacy and free labor. In order to gain a deeper understanding of the challenges and opportunities in this digital context, it is crucial to (1) discuss how youth think about their online play and work activities, and how these conceptualizations have evolved over time; (2) analyze how the skills, attitudes, and identities they are developing in digital spaces prepare them for the future workforce; and (3) convene diverse groups of stakeholders, with backgrounds in fields ranging from law (including child rights and human rights) and regulation to media analysis, to debate policy and design interventions that can foster a more equitable and inclusive future.
The need to address the challenges of fundamental rights protection in the digital ecosystem is a constant theme of the policy arena that has emerged around Internet governance, and it has been addressed from multiple perspectives within the IGF since its foundation. In the last five editions of the Forum, a number of workshops have been held around the trajectories discussed in this panel, i.e. decentralizing technologies and their impact on fundamental rights protection, and digital constitutionalism initiatives originating in an increasingly heterogeneous political setting. Although this workshop is not a direct continuation of previous IGF workshops, it aims to build on those efforts and discussions in order to develop a more encompassing understanding of future opportunities and challenges for digital rights.
This workshop will fulfill the IGF sub-theme “Internet for Development & Sustainable Development Goals” as it will provide an important discussion on how stakeholders can work together to support bottom-up digital social innovation, while respecting ethical and commercially fair practices. This issue is timely given that the 2030 UN Agenda for Sustainable Development reflects an understanding that an open Internet and the spread of global interconnectedness can enable economic development and cross-border commercial activities that can bridge the digital divide and expand societal inclusion. By discussing digital social innovation in the context of emerging economic opportunities, we will provide a critical examination of the changing nature of digital participation, open innovation, and digital inequalities.
Information and communications technologies (ICTs), including the Internet, and emerging technologies have the potential to act as catalysts for the UN 2030 Agenda and help advance all the 17 Sustainable Development Goals (SDGs) and constituent targets. At the same time, rapid technological change poses new challenges and can have unintended consequences.
In order to fully harness and benefit from these technologies and address their challenges, sustainable, flexible, globally minded, inclusive, and sensible policies are essential. Inappropriate and restrictive regulation, however well intentioned, can stifle the very innovation on which the growth of the digital economy depends.
Convening leading experts from diverse and relevant stakeholder groups and communities, this main session will explore policy considerations and approaches needed to leverage the Internet and ICTs to facilitate common development goals.
The need for capacity development in Internet governance and digital policy is voiced substantively and regularly in official speeches and documents. Experienced facilitators and consultants are active in this area. However, supply and demand do not always match.
What does capacity development need to look like? What lessons from past capacity development activities could be useful to newcomers? Are there particular opportunities, risks, and benefits associated with capacity development in the coming years?
Quality capacity development requires resources, and very often those most qualified in education cannot devote the time and effort required for fundraising. Who should pay for capacity development activities? Those most in need, such as those from least developed countries (LDCs), find it difficult to pay. Several pioneering countries (e.g. Switzerland, with the Geneva Internet Platform project) have gone beyond their own capacity development needs and expanded their provision to the global community. Going further, should the responsibility for funding lie with developed country governments, intergovernmental organisations, the private sector, or the numerous foundations and NGOs? What, if any, is the responsibility of participants to self-fund?
After more than two decades of relatively little direct legislation of the Internet, in developed and developing countries alike, national laws meant to govern the Internet are rapidly proliferating, in response to concerns about data privacy and protection (such as the GDPR) and also as a means of managing age-old, pre-digital challenges like organized crime, hate speech, propaganda, gender violence, and violent extremism as they migrate online and take advantage of the particular affordances of digital technologies and networks. As a result, these laws are becoming a key factor in the evolution of Internet governance globally.
The development and spread of the Internet worldwide have reinforced traditional discussions about jurisdiction, as cross-border data flows increase in scale and complexity. The adoption of data protection laws in more than 120 countries around the world has also raised challenges in terms of legal harmonization and judicial cooperation: mitigating the conflicts of laws that have proliferated in recent years and enforcing judicial decisions transnationally, as revealed by the Internet & Jurisdiction Observatory database. Beyond those general aspects, the issue is extremely relevant from a global south point of view, given the concentration of Internet platforms in developed countries and the fact that law enforcement standards and data protection frameworks are generally built around the experience of the developed north. As more countries from the developing south become integrated into the Internet ecosystem, traditional global political and economic imbalances tend to be aggravated by the diffusion of formal and informal norms and practices related to access to data for criminal prosecution by domestic and foreign authorities. This session aims to foster debate among different stakeholder groups and the IGF community as a whole around the following policy questions: a) What are the implications of recent institutional solutions adopted in countries in the global north to reconcile the protection of privacy with access to data to address crime, and how will they affect the Internet ecosystem in general? What are the implications of those developments for countries in the global south? b) Bearing in mind the position of developing countries in the global Internet economy, how can the protection of users’ fundamental rights be reconciled with lawful access to data in the context of criminal prosecution by domestic and foreign authorities?
What are the challenges and opportunities for creating legal interoperability between developed and developing countries in a mutually agreeable and negotiated way (considering both the synergies and the incompatibilities of intergovernmentalism and multistakeholderism)? How can a global scenario of balanced and coexisting jurisdictions be built?
The speakers will introduce the problems that automated or algorithmic decision-making poses from a human rights perspective. Algorithmic justice, algorithmic bias, and algorithmic transparency will be introduced as concepts, and the technical, legal, and human rights issues at stake will also be discussed.
The development and deployment of algorithmic decision-making, machine learning systems, artificial intelligence (AI), and other related emerging technologies to address societal ills online is the subject of many timely and pressing policy debates. Both private and public sector initiatives are mostly based on the belief that, through AI, society will be able to better address a broad spectrum of issues, ranging from hate speech and extremist content, to copyright violations, to the spread of misinformation online. Unfortunately, there is still a great deal of uncertainty in policy debates about which of these issues AI is most likely to be useful in addressing. The session uses an open debate format, led by experts representing different stakeholder communities, to develop more concrete predictions about the extent to which the application of AI could help solve pressing challenges online while also taking into account its human rights implications.
How can businesses, governments, civil society, and others (tech designers, start-ups, academia) employ the child rights framework to develop effective policies, guidelines, and best practices that will steer the development of AI technologies to capitalize on opportunities to improve children's lives and mitigate risks? While there are many uncertainties around artificial intelligence, we know that it will impact almost every part of our lives, and that in many cases the impacts will be greatest for children: from how they are conceived and born, to the services they can access, to how they learn, to the jobs they will train for. This reality brings with it a tremendous amount of opportunity and risk. Without specific attention to children, the evolution of this technology will proceed without considering children’s specific needs and rights. The healthy development of children is crucial to the future well-being of any society, and the cost to society of failing our children is enormous. At a high level, we seek to start a conversation that will inform the global agenda on AI and children, specifying the priority opportunities and challenges for a global context and identifying who needs to be involved in furthering the agenda. We seek to convene a diverse audience to identify the best pathways forward for government and private sector adoption of child-friendly AI policies. This Open Forum will be led by UNICEF in collaboration with its partners to kick-start the consultation process and build a coalition of agencies and individuals willing to work together on building a broader AI and child rights agenda.
The Internet’s potential for virality, and the platform it provides to those aiming to spread extremist trends, including divisive narratives and hate speech across the ideological spectrum, force modern societies to rethink their policy frameworks and redefine their practices. Reporting tools are often seen as the main levers for action, but they are not sufficient to sustainably tackle the whole phenomenon. The priority should be to equip and empower citizens with methods and tools enabling them to take impactful action against hate online.
Nowadays, we witness more and more cases of mobile Internet empowering people to connect with society, express themselves, and, more importantly, find alternative means of income. For example, a whole new range of jobs has been created thanks to mobile Internet penetration. Globally, people share snippets of their daily lives and instantly earn a living simply by posting them online. Thanks to mobile Internet access, others are able to work and do business online and remotely. This new era has seen a thriving digital society and, equally, an exponentially growing digital economy with mobile Internet at the center. However, along with these unprecedented opportunities, mobile Internet brings a new set of dilemmas. This panel will discuss how innovations can leverage mobile Internet to drive an inclusive digital economy and society.
The session will explore the potential of 5G, IoT, and AI to address two complementary and interlinked goals: digital inclusion and accessibility. It will discuss policies that can promote the deployment of 5G, an advanced and efficient technology, in order to contribute to the Sustainable Development Goals, in view of the commitment made in General Assembly resolution 70/125 to close the digital divides between and within countries, including the gender digital divide, through efforts to improve connectivity, affordability, access to online services, education, information and knowledge, multilingual content, digital skills, and digital literacy.
The Internet is a truly amazing medium that has completely revolutionised the information landscape across the world. However, the same technology has also been used to facilitate the spread of fake news and misinformation. These stories are no longer limited to random hoax messages or even phishing scams but have now been organised and mobilised in such ways that they have had direct impacts on political decision-making. Not only does the Internet lend itself to the dissemination of misinformation at next to no cost, but the availability of big data and the lack of transparency around corporate policies mean that this data can be used to target the most vulnerable audiences. However, policy interventions are challenging. How does one crack down on ‘trolls’ without taking away the right to anonymity? If the right to freedom of expression is recognised and celebrated, can pieces of fake news that are not defamatory, contemptuous, or inciting violence really be targeted by regulators without compromising principles of proportionality? How do you legislate against a phenomenon where one piece of content might mean nothing on its own but can do harm as part of a larger whole? In this panel we will explore the impact of misinformation on journalism and political decision-making, and discuss how to legislate and create corporate policies around this issue without compromising the basic right of freedom of expression.
This session will explore the potential for blockchain technologies to be used to advance new solutions to many of the difficult problems facing our global society. Projects are in production that use blockchain to provide identity documents to refugees and asylees, advance financial inclusion, and support efforts to respond to climate change, among others. Many such projects have received significant attention and funding. Recently, the press and former government officials have questioned how much of the excitement around such projects is hype and how much reflects real potential to effect change. This session will explore the landscape of blockchain-for-social-good projects and attempt to map out initial best practices for separating the hype from concrete projects making a difference for the most vulnerable globally. In particular, we will consider how decentralized blockchain governance can contribute to enabling these use cases or, on the other hand, detract from them.
The private sector has been exposed to an exponentially increasing number and variety of attacks in the digital environment. Businesses should protect themselves, but they depend on their respective governments if they wish counter-offensive action to be taken legally against attackers. With practices known as “hacking back” remaining within governments' prerogative only, how far should businesses be allowed to go in taking proactive defensive measures (also referred to as "active cyber defence")? Should public policy evolve to clarify the conditions, limits, and safeguards for the private sector to resort to such techniques?