Your Guide to BKC @ ACM FAT* 2020

Talks and tutorials led by the Berkman Klein community

Engage with members of the Berkman Klein community at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*) 2020! Check out the full list of opportunities below, or visit the ACM FAT* 2020 program schedule to learn more.


MONDAY, JANUARY 27

Towards Positionality-aware Machine Learning [Tutorial]

Christine Kaeser-Chen, Elizabeth Dubois, Friederike Schuur (Assembly 2019 Alumni) | 13:00-14:30

Positionality is the social and political context that influences, and potentially biases, a person's unique but partial understanding of the world. Machine learning (ML) systems have positionality too, which is embedded through choices we make when developing classification systems and datasets. In this tutorial, we uncover positionality in ML systems with a focus on the design of classification systems, study the impact of embedded positionality, and discuss potential intervention mechanisms. 

The Meaning and Measurement of Bias: Lessons from NLP [Tutorial]

Solon Barocas and colleagues | 16:45-18:15

The recent interest in identifying and mitigating bias in computational systems has introduced a wide range of different, and occasionally incomparable, proposals for what constitutes bias in such systems. This tutorial aims to introduce the language of measurement modeling from the quantitative social sciences as a framework for understanding fairness in computational systems by examining how social, organizational, and political values enter these systems. We show that this framework helps to clarify the way unobservable theoretical constructs, such as "creditworthiness," "risk to society," or "tweet toxicity," are implicitly operationalized by measurement models in computational systems. We also show how systematically assessing the construct validity and reliability of these measurements can be used to detect and characterize fairness-related harms, which often arise from mismatches between constructs and their operationalizations. Through a series of case studies of previous approaches to examining "bias" in NLP models, ranging from work on embedding spaces to machine translation and hate speech detection, we demonstrate how we apply this framework to identify these approaches' implicit constructs and to critique the measurement models operationalizing them. This process illustrates the limits of current so-called "debiasing" techniques, which have obscured the specific harms whose measurements they implicitly aim to reduce. By introducing the language of measurement modeling, we provide the FAT* community with a process for making explicit and testing assumptions about unobservable theoretical constructs, thereby making it easier to identify, characterize, and even mitigate fairness-related harms.


TUESDAY, JANUARY 28

Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought [Paper Session]

Ben Green, Salome Viljoen | 9:00-10:50 | Read the paper here

Auditing Radicalization Pathways on YouTube [Paper Session] 

Virgilio Almeida and colleagues | 11:20-12:15 

From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy [Paper Session]

Elettra Bietti | 14:15-16:05

Studying Up: Reorienting the study of algorithmic fairness around issues of power [Paper Session]

Chelsea Barabas, Karthik Dinakar, and colleagues | 14:15-16:05

Roles for Computing in Social Change [Paper Session]

Rediet Abebe, Solon Barocas, and colleagues | 14:15-16:05


WEDNESDAY, JANUARY 29

Productivity and Power: The Role of Technology in Political Economy [Keynote]

Yochai Benkler | 12:15-13:15

Market democracies struggle with economic insecurity and growing inequality that present new threats to democracy. The revival of “political economy” offers a frame for understanding the relationship between productivity and justice in market societies. It reintegrates power and the social and material context—institutions, ideology, and technology—into our analysis of social relations of production, or how we make and distribute what we need and want to have. Organizations and individuals, alone and in networks, struggle over how much of a society’s production happens in a market sphere, how much happens in nonmarket relations, and how embedded those aspects that do occur in markets are in social relations of mutual obligation and solidarism. These struggles involve efforts to shape institutions, ideology, and technology in ways that trade off productivity and power, both in the short and long term. The outcome of this struggle shapes the highly divergent paths that diverse market societies take, from oligarchic to egalitarian, and their stability as pluralistic democracies.


THURSDAY, JANUARY 30

The False Promise of Risk Assessments: Epistemic Reform and the Limits of Fairness [Paper Session]

Ben Green | 9:00-10:50 | Read the paper here

Making Accountability Real: Strategic Litigation [Keynote]

Nani Jansen Reventlow | 12:15-13:15

How can we make fairness, accountability, and transparency a reality? Litigation is an effective tool for pushing for these principles in the design and deployment of automated decision-making technologies. The courts can be strong guarantors of our rights in a variety of different contexts and have already shown that they are willing to play this role in the digital rights setting. As automated decisions increasingly impact every aspect of our lives, we need to engage the courts on these complex issues and enable them to protect our human rights in the digital sphere. We are already seeing cases being brought to challenge facial recognition technology, predictive policing systems, and systems that conduct needs assessments in the provision of public services. However, we still have much work to do in this space. What opportunities do the different frameworks in this area offer, especially European regulations such as the GDPR, and how can we maximise their potential?
