Contesting Algorithms

Featuring Niva Elkin-Koren, Professor of Law at the University of Haifa and Faculty Associate at the Berkman Klein Center at Harvard University.

Artificial Intelligence (AI) is transforming the way we govern human behavior, undermining the checks and balances intended to safeguard fundamental freedoms in liberal democracies. Governance by AI is a dynamic, data-driven framework in which decisions about particular instances are shaped by data analytics. Systems are designed ex ante to optimize certain functions, in non-transparent ways that challenge oversight and may bypass the social contract. This could be game-changing for democracy, as it facilitates the rise of unchecked power.

The case of content moderation by online platforms offers an interesting example. Platforms such as Google, Facebook, and Twitter are responsible for mediating much of the public discourse and governing access to speech and speakers around the world. Social media platforms use AI to match users and content, to adjudicate conflicting claims regarding the legitimate use of content on their systems, and to detect and expeditiously remove illegal content. AI in content moderation is applied out of the practical need to operate in a dynamic, ever-growing digital landscape; as an innovative competitive advantage; or simply to ensure legal compliance or avoid a public outcry.

The use of AI to filter unwarranted content cannot be sufficiently addressed by traditional legal rights and procedures, since these tools are ill-equipped to handle the robust, non-transparent, and dynamic nature of governance by AI. Consequently, in a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content. Moreover, we lack a space for negotiating meaning and for deliberating the legitimacy of particular speech.

Therefore, the use of AI in content moderation calls for a fresh approach towards restraining the power of platforms and securing fundamental freedoms in this environment.

In this presentation, Professor Elkin-Koren proposes to address AI-based content moderation by introducing an adversarial procedure. Algorithmic content moderation often seeks to optimize a single goal, such as removing copyright-infringing materials as defined by rightsholders, or blocking hate speech. Meanwhile, other public-interest values, such as fair use or free speech, are often neglected. Contesting Algorithms introduces an adversarial design that reflects conflicting interests and thereby offers a check on dominant removal systems.
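To make the adversarial idea concrete, one might picture a moderation pipeline in which a second, "contesting" model, optimized for public-interest values such as fair use, can challenge a removal model's decision before content is taken down. The sketch below is purely illustrative and not Professor Elkin-Koren's implementation: the function names, stand-in scoring heuristics, and thresholds are all invented for this example.

```python
# Hypothetical sketch of an adversarial ("contesting") moderation check.
# Both scoring functions are toy stand-ins for trained classifiers.

def removal_score(content: str) -> float:
    """Stand-in for a removal classifier trained to rightsholder-defined
    goals (0 = keep, 1 = remove)."""
    return 0.9 if "infringing" in content else 0.1

def public_interest_score(content: str) -> float:
    """Stand-in for a contesting classifier trained on public-interest
    considerations such as fair use or free speech."""
    return 0.8 if "parody" in content else 0.2

def moderate(content: str,
             remove_threshold: float = 0.7,
             contest_threshold: float = 0.5) -> str:
    """Return 'keep', 'remove', or 'human_review'."""
    if removal_score(content) < remove_threshold:
        return "keep"
    # The removal system wants the content taken down; the contesting
    # algorithm now gets a chance to object on public-interest grounds.
    if public_interest_score(content) >= contest_threshold:
        return "human_review"  # conflicting interests -> escalate
    return "remove"

print(moderate("original video"))                 # keep
print(moderate("infringing clip"))                # remove
print(moderate("infringing clip, but a parody"))  # human_review
```

The design choice worth noting is that the contesting model never simply overrides the removal model; a disagreement between the two surfaces the conflict for deliberation rather than letting a single optimization goal decide silently.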

The presentation will introduce the strategy of Contesting Algorithms, discuss its promises and limitations, and demonstrate how regulatory measures could promote the development and implementation of this strategy in online content moderation.

Professor Elkin-Koren is the coauthor of The Limits of Analysis: Law and Economics of Intellectual Property in the Digital Age (2012) and Law, Economics and Cyberspace: The Effects of Cyberspace on the Economic Analysis of Law (2004). She is the coeditor of Law and Information Technology (2011) and The Commodification of Information (2002).


Date
Oct 22, 2019
Time
5:00 PM - 6:00 PM
Location
Harvard Law School, Wasserstein Hall Room 1010, First Floor
Cambridge, MA 02138 US
