
Global AI Dialogue Series: Hong Kong SAR, China

Takeaways from the Asia Pacific-US AI Workshop: Measuring Impact, Building Inclusive AI, and Bolstering Trust in the Digital Ecosystem

Berkman Klein Center for Internet & Society at Harvard University and the Digital Asia Hub, in collaboration with the United Nations University Institute on Computing and Society and the China Institute for Science and Technology Policy at Tsinghua University

INTRODUCTION
The Asia-Pacific region and the US are pioneers and early adopters of many of the innovations driving the ongoing revolution in AI and automated systems. Yet even as both regions target high growth and invest heavily in AI, new governance challenges are emerging, ranging from privacy and security to ensuring unbiased decision-making. The promise of AI-based technologies is enormous: private firms and public institutions stand to benefit significantly from AI-enabled efficiency gains and other improvements across sectors. But the barriers to realizing these gains, and the potential externalities, are equally significant.

To establish a cross-cultural dialogue and learning network on specific AI issues, and on potential methods for addressing them within and across the US and the APAC region, the Berkman Klein Center for Internet & Society at Harvard University and the Digital Asia Hub convened the Asia Pacific-US AI Workshop, in collaboration with the UN University Institute on Computing and Society and the China Institute for Science and Technology Policy at Tsinghua University. The meeting brought together 35 subject matter experts from academia, industry, government, and civil society, hailing primarily from several APAC countries as well as the US. Its objective was to provide a platform for stakeholders with policy, business, or technology responsibilities to develop and share insights on AI from an ethics and governance perspective.

The meeting encompassed three thematic tracks: 1) AI Indices: APAC Data, the China AI Index, and the AI Index; 2) Measuring AI's Social Impact: State of Play and Looking Forward; and 3) (Re-)Establishing Trust in the Digital Ecosystem. All three tracks were explored through a mix of full-group discussions, breakout working sessions, and report-backs, which included input statements and case study presentations from leading contributors.

This write-up seeks to share observations from the workshop, highlight overarching themes that emerged, and extract insights on next steps for sustaining the cross-cultural dialogue and building out from it. The distilled outputs are centered around five key takeaways that emerged throughout the workshop sessions:

1. In order to paint a robust picture of AI’s societal influence, the lens must shift from market to impact metrics

2. There is an emerging awareness that despite some applicable lessons from previous technologies, AI is fundamentally different

3. Responsibility for mitigating AI risks lies in a triangulation among users, industry, and government

4. Discussion pertaining to trust and ethics in AI should not be divorced from similar discussions in other technical areas

5. Despite the lack of consensus on a best way forward, all players and stakeholders should continue to actively iterate on interventions that seek to bolster trust and ethics in the digital ecosystem

These key takeaways are intended to serve as a heuristic tool for mapping the most salient outcomes of the workshop, the context in which they emerged, and how they might be implemented. It is worth noting, however, that there is significant overlap across and within these categories, and that this write-up represents a snapshot of the APAC-US AI Workshop, situated within a broader conversation on AI and its societal impact.
