What We Created During Assembly 2019

From March to June 2019, the Berkman Klein Center and the MIT Media Lab co-hosted the third iteration of the Assembly program, which brings together a small cohort of technologists, managers, policymakers, and other professionals to confront some of tech’s biggest problems.

This year’s iteration focused on the ethics and governance of AI, bringing together seventeen people from across the AI ethics field, including Washington Governor Jay Inslee’s tech policy advisor, a program lead at the Partnership on AI, an ethics researcher at DeepMind, and the director of machine learning at Cityblock Health.

Over three months, the cohort learned together and from BKC and MIT Media Lab’s community of technologists, researchers, activists, and changemakers. By the end of the program, they’d built four projects that offer ways to move forward on the thorny problems at the intersection of AI and ethics.

As the program’s co-lead Jonathan Zittrain said, “If there’s one line through the projects, it’s thinking about the duties — moral and otherwise — of those who are helping with the rapid deployment of technologies loosely grouped around artificial intelligence… Thinking about what those duties are, who should be mindful of them, and who is responsible for them.”

AI Blindspot offers a process for preventing, detecting, and mitigating bias in AI systems. Kaleidoscope: Positionality-Aware Machine Learning interrogates the creation of classification systems. Surveillance State of the Union highlights the risks of pursuing surveillance-related work in AI. Finally, Watch Your Words examines the expansion of Natural Language Processing / Natural Language Understanding systems.

On Thursday, June 13th, the cohort presented their work at the American Academy of Arts and Sciences in Cambridge, MA. Read and watch more below about the four projects developed during Assembly 2019.

Assembly 2019 Projects

AI Blindspot

Organizations lack a framework for preventing, detecting, and mitigating bias in AI systems. Audit tools often focus on specific parts of a system rather than the entire AI pipeline, which can lead to unintended consequences.

AI Blindspot offers a process for preventing, detecting, and mitigating bias in AI systems. The team produced a set of printed and digital prompt cards to help teams identify and address potential blindspots. The cards help AI developers and teams evaluate and audit how their systems are conceptualized, built, and deployed.

Watch their presentation below to learn more; explore their website; download and test their cards; and give them feedback.


Kaleidoscope: Positionality-Aware Machine Learning

“Unbiased” data and machine learning (ML) systems are a risky fiction, as there is no view from nowhere. The Kaleidoscope: Positionality-Aware Machine Learning project explores the development of positionality-aware ML/AI systems.

A person’s position in the world and set of experiences shape their view of it, defining the bounds of their perspective and which assumptions may appear to them as “universal truths.”

Positionality is the social and political context that influences, and potentially biases, a person’s unique but partial understanding and outlook on the world.

The scale of ML and AI systems makes it easier than ever to embed a particular positionality across society, with potentially harmful effects. If a system’s embedded positionality is not aligned with the context where it will be used, the system can harm whole sectors of society through the sheer breadth of its application.

It is impossible to create “unbiased” AI systems at large scale that fit all people. Given that positionality is therefore inevitable in AI systems, the team explored how to design AI systems that best fit their context of application.

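To make the idea concrete, here is a toy sketch of a positionality mismatch (our illustration, not the team’s method) using scikit-learn and synthetic data: a model fit in one context quietly fails in another context whose feature-label relationship differs.

```python
# Toy illustration (not the Assembly team's method): a model fit in one
# synthetic "context" degrades in another whose feature-label relationship
# differs, a rough stand-in for positionality mismatch at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_context(n, flip):
    """Synthetic context: identical features, but the label rule differs."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y if flip else y)

X_a, y_a = make_context(1000, flip=False)  # context the builders know
X_b, y_b = make_context(1000, flip=True)   # context of deployment

model = LogisticRegression().fit(X_a, y_a)
print("accuracy in builders' context:  ", model.score(X_a, y_a))
print("accuracy in deployment context: ", model.score(X_b, y_b))
# The second score collapses even though nothing in the pipeline "broke".
```
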
Watch their presentation below to learn more; explore their website; read the team’s white paper; or learn about their most recent workshop.

Surveillance State of the Union

Surveillance State of the Union is a data visualization and a set of forthcoming short illustrative cases that seek to raise awareness among tech workers, academics, military decision-makers, and journalists about the risks of pursuing surveillance-related work in AI.

Work that may, to a researcher, seem purely theoretical has very real consequences for people subjected to state surveillance, as evidenced by the suppression of the Uyghur minority in China’s Xinjiang province and of other marginalized communities around the world.

The project leveraged a variety of data sources such as government contracts, co-authored papers, and public releases to begin to map the surveillance research network. The work shows, for example, overlap between universities collaborating on US state-funded surveillance research and similar research by Chinese companies implicated in Xinjiang.

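To give a rough sense of how such a map can be assembled (an illustrative sketch with placeholder records, not the project’s data or code), a collaboration network can be built from co-authorship records and inspected for institutions that sit at the overlap:

```python
# Minimal sketch of mapping a research network from co-authorship records.
# The institutions and papers below are placeholders, not project data.
import networkx as nx

records = [
    {"paper": "Paper A", "institutions": ["University X", "Company Y"]},
    {"paper": "Paper B", "institutions": ["University X", "Agency Z"]},
]

G = nx.Graph()
for rec in records:
    insts = rec["institutions"]
    # Link every pair of institutions that co-authored the same paper.
    for i in range(len(insts)):
        for j in range(i + 1, len(insts)):
            if G.has_edge(insts[i], insts[j]):
                G[insts[i]][insts[j]]["papers"].append(rec["paper"])
            else:
                G.add_edge(insts[i], insts[j], papers=[rec["paper"]])

# Institutions with many distinct collaborators are candidates for closer review.
print(sorted(G.degree, key=lambda pair: pair[1], reverse=True))
```
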
Watch their presentation below to learn more; explore their data visualization (a work in progress); and browse their codebase.

Watch Your Words

Watch Your Words examines the expansion of Natural Language Processing (NLP) / Natural Language Understanding (NLU) systems. With increasing frequency, people are being asked to interact with NLP systems in order to access education, job markets, customer service, medical care, and government services. Without active attention, biases encoded in written language will be reinforced, extended, and perpetuated in NLP systems, potentially resulting in multiple types of harm to vulnerable populations.

The team believes that the discussion of bias needs to move beyond the machine-learning community to include developers who build applications on top of “off-the-shelf” models. Watch Your Words presents evidence of these biases, explores approaches to raising awareness of bias, defines harms visited on vulnerable groups, and suggests approaches for bias mitigation.

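As a small illustration of the kind of bias that can surface in an off-the-shelf model (a sketch assuming the gensim library and its downloadable GloVe vectors, not the team’s code), pretrained word embeddings can be probed for gendered associations around occupation words:

```python
# Sketch: probing a pretrained, "off-the-shelf" word embedding for gendered
# associations around occupation words. Assumes gensim is installed; this is
# illustrative and not code from the Watch Your Words project.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

for occupation in ["nurse", "engineer", "homemaker", "programmer"]:
    sim_she = float(vectors.similarity(occupation, "she"))
    sim_he = float(vectors.similarity(occupation, "he"))
    print(f"{occupation:12s} she: {sim_she:.3f}  he: {sim_he:.3f}")
# Systematic gaps between the two columns are bias inherited from the training
# text, which downstream applications can silently reinforce.
```
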
Watch their presentation below to learn more.

Learn More About Assembly

Watch the entire Assembly 2019 event here.

Learn more about Assembly 2017, which focused on cybersecurity and privacy, and Assembly 2018, which also focused on the ethics and governance of AI.

You can also learn more about the Data Nutrition Project, which launched out of Assembly 2018, and has since been leading workshops and speaking at a number of conferences, including SXSW, CPDP, and the Open Government Partnership.

Assembly 2020: Disinformation

We’re thrilled to share that in 2020, Assembly will focus on disinformation from a cybersecurity perspective. The program will draw on our long history of work on disinformation, media policy, intermediaries and platforms, cybersecurity, and other relevant areas, as well as our expertise in developing new ideas across sectors and embodying them in institutions and protocols.

Applications for Assembly 2020 will launch later this summer. We’ll be recruiting a cohort with experience tackling disinformation and related problems across sectors.


Read this full post on Medium
