5 Contrails to Follow from BKC’s 2025 Fall Launch

In September, the Berkman Klein Center for Internet & Society (BKC) hosted its 2025 Launch, inviting students as well as current and new community members to learn about and engage in the Center’s work. Launch was held over two days and was packed with panel discussions, demos, and networking opportunities to propel BKC’s work over the next academic year. It also celebrated BKC’s Institute for Rebooting Social Media (RSM), which has come to an official end after roughly three years.

The conversations and connections made across these events generated insights into the current state of the Internet and digital technology—particularly around artificial intelligence—and have helped BKC solidify new avenues for inquiry and action. Below are just a handful of the lessons learned from Launch, and questions BKC seeks to tackle in the new school year.

The “open Internet” is over

The Internet and its social spaces have undergone significant changes over the last decade. From the decline of Twitter/X as a public square, to the splintering of discourse into private rooms, what once seemed like a haven for free expression has become rife with polarization and siloing. And with the mainstream adoption of AI by platforms and users, authenticity and transparency on the Internet have been degraded.

“We’re losing the organic part of the Internet.” — Faculty Associate David Nemer

During a panel called “Reboot, Rebuild, Reimagine: Social Media Research Then and Now,” RSM alums reflected on some of the opportunities and challenges this evolution of the Internet presents. Faculty Associate Kate Klonick raised the tension between advocating for social media decentralization and still recognizing the civic value of large, centralized platforms—and the need to hold these platforms accountable via antitrust. Meanwhile, Faculty Associate David Nemer added that the decline of the open Internet is tied directly to democracy, citing his own work on monitoring how encrypted messaging apps have shaped elections in Brazil. In a somewhat controversial take, former Fellow Joe Bak-Coleman suggested that one way to address some of these issues around transparency and accountability is to treat social media as critical infrastructure—much like public roads and other systems.

Privacy cannot be a design afterthought. It must be engineered in from the get-go

In a world where accessing critical personal data, such as medical and financial records, and maintaining an online presence are all but required, we must ensure that a person’s identity can be protected and secured across the board.

To address increasing concerns around privacy and trust online, the Applied Social Media Lab (ASML) demoed a few of its ongoing projects aimed at protecting user privacy and credentialing, including the ASML Wallet. The Wallet aims to improve trust across platforms by allowing a single secure verification to carry across the Fediverse, rather than requiring users to repeatedly hand sensitive information to platforms they cannot fully trust to safeguard it. As ASML Principal Engineer Brendan Miller put it, “this enables creators and publishers to link identities across platforms for trust without starting from scratch or oversharing data.”
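To make that idea concrete, here is a minimal sketch of the “verify once, carry everywhere” pattern the Wallet points toward: a wallet signs a compact attestation linking a user’s identities, and any server checks the signature against the wallet’s public key rather than collecting personal data itself. This is an illustrative toy, not ASML’s actual design; the Ed25519 keys, the attestation fields, the example handles, and the platform_accepts helper are all assumptions made for the sketch.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The wallet holds a long-lived keypair; platforms only ever learn the public key.
wallet_key = Ed25519PrivateKey.generate()
wallet_pub = wallet_key.public_key()

# One compact attestation links the user's identities across servers.
# The handles and fields here are invented for illustration.
attestation = json.dumps(
    {
        "subject": "@alice@social.example",
        "also_known_as": ["@alice@photos.example"],
        "issued_at": "2025-09-15T00:00:00Z",
    },
    sort_keys=True,
).encode()
signature = wallet_key.sign(attestation)


def platform_accepts(pub_key, attestation: bytes, signature: bytes) -> bool:
    """Any platform re-checks the same attestation; no password or PII changes hands."""
    try:
        pub_key.verify(signature, attestation)
        return True
    except InvalidSignature:
        return False


print(platform_accepts(wallet_pub, attestation, signature))  # -> True
```

The portability is the point: the attestation is created once and can be re-verified anywhere, so no individual platform needs to hold, or risk leaking, the underlying personal information.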

ASML aims to create systems that proactively protect users’ interests over those of online platforms and to tackle thorny practical challenges in digital spaces, such as how to balance necessary sharing of personal information with the protection of user privacy.

AI agents aren’t just tools—they’re counterparties we’ll need to govern

There has been no clear consensus on how to define agentic AI since the conversation began in the late 1990s. This makes the governance of AI tools uniquely difficult to establish, according to panelists including BKC’s Chief AI Scientist Josh Joseph, Affiliate Jordi Weinstock, and Professor Aileen Nielsen. Complicating matters, Joseph noted that large language models (LLMs) now enable multi-step planning, creating systems that feel “closer to an individual” than to a tool.

This blurring of the line between “tool” and “individual” makes classifying AI as a product, a person, or a corporation for legal purposes all the more complex. To illustrate the challenges this poses, Weinstock emphasized the law’s lack of mechanisms for handling AI entities that act far from direct human oversight. Meanwhile, Nielsen argued that thinking of AI as a “counterparty”—with interests that may align or conflict with ours—may be more productive than debates about personhood.

These tensions—between hype and skepticism, agency and accountability—are shaping BKC’s new AI research, which will investigate questions of benchmarking agency and consciousness, the policy implications of interpretability, governance of agentic systems, and how AI intersects with human flourishing. 

Students are cautiously optimistic about AI

What does AI mean for the future of work—particularly for those new to the workforce, or re-entering it with new skills? This question was very much on the minds of students who attended BKC’s student-focused networking event. While many of the students seemed to accept that LLMs are here to stay and are more or less embracing AI tools, there was a sense of anxiety about what this embrace means for the future of day-to-day life. And yet, some students expressed interest in seeing AI agents take on certain roles in the legal and medical fields, such as public defender and therapist, as a way to make these services more accessible.

AI’s future will be defined by (sometimes) divergent geographical and ideological worldviews

During the panel “Cutting Through the Hype Cloud: The Deeper Questions We Need to Ask About AI,” Faculty Director Jonathan Zittrain framed the AI landscape as a “triad” of camps offering starkly different visions of what’s at stake: accelerationists, who encourage the rapid development of AI because of its potential benefits to humanity; safetyists, who fear increasingly advanced AI could doom humanity and want significant guardrails in place; and skeptics, who highlight the daily harms AI already causes in people’s lives or believe the technology’s capabilities and impact are overhyped.

“On one hand, AI is seen as revolutionary; on the other, as an existential threat. And some argue it’s neither, just smoke and mirrors. How can we hold these divergent perspectives and put them in conversation?” — Faculty Director Jonathan Zittrain

Later in the panel, Professor James Mickens observed that where a model is built and deployed can be significant; for instance, a model built in the Global North behaves differently from one built in the Global South. Technologists, regulators, civil society, and the public should be more cognizant of these differences, which arise from the diverse ecosystems and cultural values in which AI is built.

Launch energized BKC’s growing community of staff, faculty, affiliates, fellows, and students to address the questions and challenges that current and emerging digital technologies like AI present for people and society. As BKC Executive Director Alex Pascal said in his closing remarks: “There’s never been a more exciting time to study the laws and politics that govern artificial intelligence, nor is there a more exciting place to do so than at the Berkman Klein Center.”

Stay in touch! Please be sure to sign up for BKC’s newsletters and follow us on social media!
