The central objective of the session is to convene groups working on social media collection and monitoring to describe new efforts to integrate social media datasets, develop and define workflows and standards, demonstrate tools for content collection and verification, discuss strategies for collaboration and integration, provide feedback, ask questions, help develop tools, and incubate a new group of potential users for content collection and sharing. While initiatives like the Credibility Coalition, the Journalism Trust Initiative, Trust Project and TrustMetrics, NewsGuard, and the W3C Credible Web Community Group are working to develop standards, and numerous initiatives are working on datasets, limited resources are being devoted to communities in the Global South. Stakeholders such as the international human rights community and organizations working to protect free expression should also be more closely involved. In this session, our aim is to bring more of the RightsCon community into the conversation and determine practical steps for moving forward.
Misinformation, disinformation, propaganda, rumour -- you've all heard the terms used to describe viral deception, but do you know the difference between them? How are online harassment, trolling, and information operations used to promote these tactics? Join Brittan Heller as she explains how these terms are distinct, how they overlap, and what different behaviours and actors are implicated in each one.
The approximate structure of the session will be the following: - Opening remarks on the most relevant legislative anti-terrorist proposals (at the EU level and beyond) and their contrast with international standards and most important regional liability exemption regimes. - Reflection on the effectiveness of legislative measures introducing responsibility provisions vis-a-vis platforms regarding terrorist online content. This approach will be provided by anti-terrorist expert(s), using the experience of international security organizations and national law enforcement bodies. - Presentation of cases of national legislation and its effects in different regions of the world (mainly South East Asia, Europe and Latin America). After these presentations, participants in the audience will be asked to share and discuss specific cases, experiences and approaches. The panel aims at fostering a debate that shall combine a human rights and international standards approach together with a proper consideration of the adequate tools to effectively deal with terrorist online content, with the aim of defining best possible models. The debate will also identify global and regional tendencies aiming at transforming the general liability system applicable to online platforms, and possible actions and efforts to properly tackle these tendencies and adequately understand the impact on freedom of expression.
Dragana Kaurin, Juan Ortiz Freuler | Wednesday, June 12th | 12:00 - 13:00 | Location: Adean
The Internet is a powerful catalyst for social change, bringing new opportunities for sharing information and connecting social movements across continents. Access to these resources is a key determinant of socioeconomic opportunity, but many communities are met with constant language barriers as they navigate the Internet -- limiting the scope of the information available to them. As policymakers continue to address the growing digital divide, special attention must be paid to underrepresented language users who are continually relegated to the peripheries of the Internet. What obstacles do underrepresented language communities face as a result of their online exclusion? And what role can policymakers, developers, and activists play to bring linguistic diversity to the forefront? Our panel discussion brings together key stakeholders to shed light on the challenges they face, both as Internet users and as individuals working to promote their languages online. The session will discuss issues from digital security risks and economic obstacles to technical challenges in incorporating languages with no script and creating keyboards with new font types. We will also discuss the ways that communities are successfully leveraging technology to preserve and promote language.
This session will explore how human rights frameworks can be integrated into the governance of artificial intelligence and machine learning technologies. Building on international human rights law and the UN Guiding Principles on Business and Human Rights, it will discuss the value the existing human rights framework brings to the conversation about “ethical AI.” With a diverse panel drawn from international organisations, civil society, academia, and the private sector, the discussion will explore the varying impact of these technologies in different countries and highlight both recent accomplishments and where further research is needed to inform policy development. The rapid development and deployment of artificial intelligence has many stakeholders concerned, and in the past several years, we have witnessed a proliferation of principles documents that attempt to guide the ethical and rights-respecting development and deployment of these technologies. These declarations may each have individual value, but do they represent an emerging consensus about the proper use of AI/ML tools? If a consensus is emerging, does it reflect the reality of how AI tools are being deployed globally and cross-culturally, in non-democratic states as well as democratic ones? Do current principles reflect international human rights norms, and how do they anticipate and address governance challenges?
This session is aimed at sparking a frank dialogue among recipients of foreign aid programming and leading strategists in the foreign aid space, to examine current standards around "data-driven" development initiatives and strategies and the ethics guiding their implementation. Public and private donors and INGOs increasingly highlight the importance of "data-driven" programming, and the launch of GDPR in 2018 deepened internal discussions throughout Europe and North America on data management and acquisition standards. But where are the voices of the "beneficiaries" of foreign assistance in these strategies, and how are their perspectives and rights informing the decisions donors and INGOs make with regard to data collection, aggregation, and use within foreign aid? Given the legal and procurement structures, and the political drivers, inherent in foreign aid, can our current assistance structures truly "empower beneficiaries" in new or powerful ways by shifting to data-driven or digital strategies, or are these moves primarily designed to diminish costs and increase efficiency among large Western institutions?
Promoting technologies like blockchain as a seemingly simple solution to complex humanitarian and development challenges oversimplifies these issues and celebrates tech as a "silver bullet" that will solve everything from poverty to the refugee crisis. This has led to many projects seeking technical solutions first, without analyzing the challenges and opportunities, understanding the limits, potential, and threats that come with the technology in question, or including beneficiaries in decision-making. When we make decisions in humanitarian and development projects based on the solutions we want to use instead of user needs, we risk exposing beneficiaries to greater harm by leaving them out of the decision-making process. Our panel of experts will discuss the following points: *What is technosolutionism, and how can it be avoided in projects? *How can organizations and individuals deal with technosolutionist demands from funders and partners? *So, what is blockchain, and when *should* it be used in projects? *In praise of low-tech or no-tech solutions: how to shift media focus from shiny new toys in tech to the mundane, consentful, and trusted solutions -- and the complexity of the development work, participation, and research that leads to successful and sustainable projects.
The new “fake news” rhetoric is providing a dangerous framework for repression. CPJ has seen “fake news” used globally to justify attacks, harassment, and arrests of journalists. To frame the scope of the discussion, the session will share relevant research into where fake news legislation is in place, has been proposed, or, in rare cases, has been revoked, and its impact on journalists and civil society. After a brief overview of existing research and its findings, participants will break into three groups, one for each of these categories, to discuss potential actions or approaches that could help repeal or prevent fake news laws, and to try to understand what led to their revocation in the rare cases where that occurred. How does “fake news” legislation manifest similarly or differently around the globe, and what can be learned? A rapporteur for each group will take notes and report back to the entire group at the end of the session to share findings. CPJ will collect the findings and put together a lessons-learned and advocacy strategy based upon the discussion to share with participants. The takeaways from this discussion will inform CPJ’s and other organizations’ advocacy with both tech platforms and newsrooms.
Whether it is championing gun reforms, ending violence against women, advocating for gender and sexual diversities, leading initiatives to tackle climate change, promoting healthy relationships and individual wellbeing, or playing a key role in peace processes, youth leaders and the work they are doing are being recognized by media and key decision makers. Youth are able to leverage technological innovation to spread awareness, mobilize people to action, and organize and implement their initiatives. However, while technology can serve as a useful tool for youth to exercise their agency, it is insufficient on its own to realize social innovation and change. Emerging technologies not only have the potential to suppress and surveil young people but are already being used to do so, posing a threat to their security and rights. The lack of reliable access to these technologies also presents a challenge to young people, as technological literacy becomes increasingly important in a globalized world. Young people are not meaningfully engaged in the governance of technologies. Participants will engage in a design session to identify and co-create solutions for the challenges identified during the workshop. These solutions will be translated into concrete actions that each participant will be responsible for implementing over the next year.
The increasing risk faced by women in positions of social and political vulnerability, starkly illustrated by the public execution of Marielle Franco, exposed the urgent need for more supportive networks of women trainers in digital and integral security. In this intricate scenario, in which women's individual freedom and human rights are threatened in a very global context, it is essential to build affective networks and infrastructures - both physical and digital - toward a reimagined digital security culture based on transfeminist perspectives of collective care. Joining members from several experienced organizations and feminist infrastructures, the Feminist Digisec Network will share their experiences building approaches to digital security training focused on sensitive, nurturing, and easy-to-understand methodologies. These engage non-technical women and LGBTQIA+ human rights defenders and frontline activists in employing the current technological toolset for digital self-defense, along with methodologies for context analysis, holistic security, threat and risk model assessment, and self-care techniques, drawing on cases and experiences from our activities. Women digital security trainers from around the world are invited to join a critical discussion on constructing narratives based on decolonial and feminist internet principles for a new culture of digital security and online self-preservation.
Since the anti-sex-trafficking law FOSTA was passed, Internet platforms have been forced to implement increasingly draconian restrictions on sexual speech, in an effort both to comply with the law and to appear tough against sexual abusers. But this can actually make child sexual abuse prevention harder, by restricting the flow of information to children and caregivers alike, and by conflating child sexual exploitation material with legitimate speech. This event follows on from an event held in San Francisco in May, aimed at providing platforms with strategies to remove more material that is harmful to children and has no protected expressive value, and less material such as lawful, accurate information on child sexual abuse prevention. This session builds on the earlier event by proposing concrete recommendations on reconciling child protection with freedom of sexual speech. Participants should expect to contribute to a facilitated roundtable discussion between experts and stakeholders who are normally excluded from the development of child protection policies. Our objective is to enable industry participants to ensure that their child protection policies and practices are scientifically sound, and that they fulfill their obligations under the United Nations Guiding Principles on Business and Human Rights.
Side Meeting on Emerging Tech/Human Rights Government Briefing
Gretchen Greene | Thursday, June 13 | 9:00 - 10:00
The Dangerous Speech Project | Thursday, June 13 | 9:00 - 10:15 | Location: Biscay
We will offer knowledge from specific efforts to respond to hatred online, and then facilitate a brainstorm among all attendees. First, Dr. Cathy Buerger will describe a handful of key anti-hatred efforts from around the world, based on more than a year of research and interviews with their leaders. Then Mina Dennert, founder of #Jagärhär, will describe her work. Mina, an Iranian-born Swedish journalist, was deeply dispirited by the surge of hatred online in Sweden that followed the arrival of many refugees to Europe in 2015. She founded a group to respond to hatred collectively, wrote a set of rules strictly governing the tone of the group’s posts, and called it Jagärhär, or ”I am here.” The idea went viral, attracting 75,000 members, and Jagärhär has since been replicated in at least 8 other countries. After these brief and provocative talks, the moderator will invite the entire group to offer specific ideas from their own responses to hatred online - or those they have witnessed. Finally, we will invite ideas for future projects and collaborations.
Join a discussion about how speech gets weaponized. In this session, Brittan Heller will interview speakers about what disinformation is, how it functions and permeates through online spaces, and what role bots, trolls, and the media play in this process. What is the role of anonymous speech in the promotion, spread, and functioning of disinformation? What is the difference between misinformation, disinformation, propaganda, and rumor? How do forms of online speech like trolling, harassment, hate speech, and media manipulation intersect with these phenomena? The highlight of this talk will be a discussion about how anonymity functions within disinformation ecosystems -- and what this means for proponents of freedom of expression. Oftentimes, anonymous speech is a polarizing issue for online communities. It is either considered to be essential for protecting activists, dissidents, and journalists -- or a primary driver in the targeting of minority voices. This talk will approach the topic of anonymous speech, as it relates to misinformation, by grounding it in the speakers' research on bots, harassment, and hate speech.
The Network of Internet & Society Centers Card-Based Meetup
The Network of Internet & Society Centers is a network of almost 100 Internet and society centers around the world. As many NoC representatives attend RightsCon, we hope that by hosting this meetup NoC representatives can (re)connect and new people can join the conversation. The meetup is envisioned as a highly engaging, participatory, and somewhat gamified encounter. The idea for the meetup is inspired by a cards-based activity designed at the Berkman Klein Center (bit.ly/iqcards1). In short time intervals, meetup participants are encouraged to find someone with a stack of cards and have them introduce their project using the card as a basis for the conversation. Those receiving cards will be able to keep them and invited to reconnect with those they spoke to during the conference, opening opportunities for channels of collaboration.
Provocations and Inspirations: Meet the Women of the NoC
Thursday, June 13 | 13:00 - 14:30 | Location: TBD
Nicole West Bassoff, Levin Kim, Sandra Cortesi, Newman, María Paz Canales, Mariel Garcia-Montes, Paola Ricaurte Quijano; Contributors (non-BKC): Ksenia Ermoshina, Malavika Jayaram, Azza El Masri, Chenai Chair, Alison Gillwald
Question your assumptions, imagine different futures! At this provocative and informal bring-your-own-lunch activity, nine women from the NoC will share with others more about their work and challenge us to consider new ways of thinking about human rights in the digital world. Inspired by their brilliance, we’ll collaborate on an assemblage of our reactions to their provocations. In the process, you’ll have the opportunity to meet, learn from, and create with a multifaceted group of scholars thinking at the edge of their fields. All are welcome!
Informed consent for survey and broader population data is critical to ethical humanitarian intervention, from the use of household surveys to national population data. However, how much of the data we collect is collected with consent, and what does "informed" mean in the age of AI? While artificial intelligence may allow humanitarians to achieve truly meaningful results -- to predict and prescribe interventions before the famine or before the outbreak -- is the data behind those future successes ethical? When data collected for one study is repurposed to train algorithms, is that what the original subjects agreed to? If the platforms we store this data on today are upgraded to incorporate deep learning functions, is that original data fair game? This session aims to explore how data on individuals represents virtual bodies, subject to an interpretation of international law that forbids experimentation, and how even the examination of this data in aggregate may not fulfill our fundamental "do no harm" policies. We ask: what is missing to bring truly informed consent to AI?
Jonnie Penn | Thursday, June 13 | 14:15 - 15:30 | Location: Carthage 2
The global internet continues to fragment. Governments, in particular, are using their influence to shape the ways that digital companies, markets, and rights connect us online. This new form of realpolitik, which we call “digitalpolitik,” is an emerging tactical playbook for how governments use their political, regulatory, military, and commercial powers to project influence in global, digital markets. Building on an initial effort, this session will begin a conversation on identifying, categorizing, and tracking the strategies and tactics of the world's most influential digital actors, and how those strategies diverge.
This session aims to address the challenge that many tech SMEs face in implementing the UN Guiding Principles due to a lack of capacity and resources comparable to larger companies. This challenge is particularly acute in the global south, where the absence of National Action Plans on Business and Human Rights, and of up-to-date legal and regulatory frameworks, has meant these issues have gone unconsidered. By bringing together representatives of large tech companies, tech SMEs, human rights defenders, and others, we will debate, discuss, and share good practices, and find ways of translating the broad principles in the UN Guiding Principles into practical steps that tech SMEs can take to respect their users’ human rights.
To respond to the challenges around disinformation, data misuse, and the development and deployment of artificial intelligence (AI), tech companies are hiring Chief Ethics Officers and launching ethics and society teams; governments are creating departments dedicated to technology and innovation; and scores of civil society and research organizations focused on AI are popping up. In parallel, higher education institutions are developing new curricula and multidisciplinary pedagogical approaches to prepare students to solve these complex problems. This session will explore novel programs that create better interfaces between the academic, private, and public sectors. Representatives from industry, government, civil society, and higher education will briefly share their work and what they are looking for in new hires, or how they are preparing students to enter the field. All participants in the meet-up will use post-it notes to map out the educational backgrounds, skills, and core competencies required for these new roles, with an emphasis on ethics and values. The session format will enable productive conversation about designing the right jobs, departments, and teams to solve the most pressing tech challenges. It will also illuminate where higher education institutions should dedicate resources to equip students with the tools required to thrive in the field.
Refugees are the target of surveillance in host regions like the EU and US, and often the organizations that are designed to protect them share information about them without their consent. Refugees must provide biometric data, as well as very sensitive information about why they left their country and why they are afraid to go back. This information might include details about war crimes, crimes against humanity, or sexual violence -- information that is extremely sensitive and must be protected. In this panel session, we will discuss how this affects the asylum process. After going over some standard processes for refugee registration and the information that is shared with refugees, we will analyze: -If refugees are not informed about who has access to this information, how do they decide what information to share, knowing this will make or break their asylum application? -How do refugees feel about the information that is taken from them, especially the biometric data? How do they navigate the system knowing there is a risk of surveillance? -How can stakeholder groups apply pressure to ensure better practices during the registration process, and offer a dignified system for people who are in this very vulnerable state?
This workshop convenes a critical session on the development, use, and experience of emerging ICTs for ‘integration’ amongst refugees, asylum seekers, and vulnerable migrant populations at destinations. With smartphones to hand, many refugees today document their traversals and communicate with family scattered across the globe, creating and leaving data trails. Once they reach an intermediary or final destination, biometric data is recorded in order to process visas or asylum documents, even to grant access to benefits schemes. As they enter new cultural and geographical contexts, refugees are subject to more data processing - as job seekers and housing applicants, or as users of civic tech for integration. The workshop invites participants at RightsCon, especially those potentially affected by these technologies, to take part in producing a set of easily accessible guidelines to ensure that the development of refugee-centred technologies takes a rights-based approach. It brings together speakers (and breakout group facilitators) from the Centre of Development Studies at the University of Cambridge, the Centre for Socio-Legal Studies at the University of Oxford, the Localization Lab, UNHCR, and the International Rescue Committee, who are engaged with either the development or critical evaluation of technologies used in migration contexts.
FRIDAY, JUNE 14 (DAY 3)
NoC / BKC Community Village Table
Friday, June 14th | 9:00 - 13:00 | Location: Community Village
Please visit us in the Community Village. If you are an attendee from a NoC, we would love to exhibit your work at this table. The Community Village Table is an opportunity to showcase a project, initiative, featured research, or a new campaign. The NoC table area will allow for items to be handed out, but not sold. Table may not be powered and items cannot be hung. Members of our community will be present for the duration, please let us know how we can help promote your NoC efforts! For more information, please contact Carey Andersen at email@example.com.
Newman, Jie Qi, Mindy Seu | Friday, June 14th | 9:00 - 10:15 | Location: Elyssa
This workshop, led by artists and designers from Harvard University and the University of Tokyo, will bring together participants to think through difficult questions about human relationships to technology, and create a visual work that each participant will get to take home. The session is inspired by the Value Alignment Problem: the challenge of assuring that the goals embedded in intelligent systems (or the secondary goals they subsequently form) are aligned with the values of the society they serve. The session will include: discussion of morality across cultures, creative exercises geared toward generating diverse questions, and compiling the participant-generated questions into a "moral labyrinth." The labyrinth will include as many voices as there are workshop participants, in as many languages as possible. The session will conclude with participant-generated labyrinths, and each participant will get to take one home. After RightsCon, the collective Moral Labyrinth will be posted online to share with others, and visitors to the site will also be able to submit their own moral questions for reflection. The workshop encourages collaborative reflection on value alignment in the 21st century -- emphasizing the necessity of asking questions as we co-create and steer toward our shared technological future.
This session will be a workshop for experts in both new media and online targeting, designed to brainstorm about the next frontier in cyber harassment. We will discuss the characteristics of trolling, harassment, and targeting -- and what can be learned from how these currently operate that would be applicable toward curtailing harassment in behavioral-based virtual worlds. The workshop is designed for experts in online harassment and emergent media, who will share and generate new ideas about how online harassment will function in non-text based environments, like gaming, VR/AR/XR, and image-based media.
As the pace of technological development accelerates, consumers are interacting with smart devices, artificial intelligence, big data analytics, and online platforms on an increasingly frequent basis. This has led to a parallel, but delayed, upswing in regulatory efforts from governments keen to protect citizens from potential negative impacts. Business leaders and policy makers face major questions about how to foster innovation, protect human rights, and ensure fair markets. We believe companies play an important role in informing and influencing regulation for new technologies, and that they should do so in a manner consistent with commitments they make to respect human rights and be responsible corporate citizens. Companies can also encourage policy frameworks that are able to adapt to rapid changes in the development and use of technology. This session will discuss how businesses and governments can best collaborate to develop responsible regulation. We will use two key cases to illustrate these opportunities: -The advent of facial recognition, and how companies are calling for regulation to protect consumers against biased decision-making and privacy violations. -Regulation intended to restrict misinformation, hate speech, and harmful content online, and how companies proactively advocate for policy that balances security, privacy, and freedom of expression.
Data is not a neutral representation of reality, and technology is not a neutral tool. They are reflections of power: the power of those who collected the data and those who designed the technology. This power informs the substance of the news we consume, and the manner in which it is parcelled out to the public. It affects what information is made available, and what is hidden. It shapes the norms that have been set around data collection, usage, and privacy, along with the terms of public discourse, the basic health of democracies, and the structure of competition and markets. A new paradigm for understanding data and its uses and consequences is needed. This might draw on environmental analogies—thinking of data as akin to greenhouse gases, where small bits of pollution, individually innocuous, have calamitous collective consequences. Just as large amounts of greenhouse gases cause fundamental damage to the environment, a massive shift in the nature of privacy causes fundamental damage to the social fabric. We will consider how data rights policies can help protect both individuals and groups. We will learn about the infrastructure needed to safeguard data rights, including data rights boards, data trusts, specialized data-rights litigators, and more.
Our goal is to prompt new thinking on the topic of remedy for AI, with a focus on those harmed by algorithmic discrimination. The UN Guiding Principles on Business and Human Rights provide a useful framework for businesses as they consider how to develop ethical AI. However, the issue that often receives the least attention is what remedy should be provided to those who are harmed. Generally, the business and human rights field struggles with this question, and AI raises particularly thorny issues due to challenges of causality and scale. This session focuses on the complexities of providing remedy under the UN Guiding Principles on Business and Human Rights (UN Guiding Principles), focusing on two scenarios involving algorithms used for content takedowns that have discriminatory effects. We will be crafting these scenarios based on real-life situations, but will anonymize them for RightsCon. The UN Guiding Principles call for remedy to be provided by governments but also by companies that cause or contribute to an adverse human rights impact. Remedy is often called “the forgotten pillar” by business and human rights experts. We seek to “remedy” this omission.
Dragana Kaurin, An Xiao Mina | Friday, June 14th | 10:30 - 11:45 | Location: Jelsa
This session will review different types of human rights manuals, digital security training manuals, and immigration legal manuals that are available online, and what user needs they meet, before discussing localizability and adoption challenges. We will work in small groups to identify challenges and opportunities in adapting the framework in the guide for a particular audience. We will budget about 45 minutes for small groups, which will give plenty of time to do some research and answer the following questions: 1. Are there other guides/materials that are similar to this one? Do they cater to a different audience, or to different situations? (For example, claiming asylum in the EU vs. the US.) What parts are similar, and where do they differ in language or content? 2. Which aspects of this guide/material are applicable in other situations? What makes them universal? 3. Which aspects of this guide/material are not universal, and would need to be changed for other audiences or jurisdictions? 4. How would you redesign this guide/material, being mindful of localization principles, and knowing others will adapt this framework for their communities?
Chinmayi Arun, Rob Faris | Friday, June 14th | 10:30 - 11:45 | Location: Oya 3
Propaganda and misinformation are on the radar of activists, academics, the private sector, and government agents. In different countries, electoral and political processes have unveiled a sensitive relationship between social media and democracy, bringing attention to the dynamics of polarization, media mistrust, manipulation, and hateful speech. In these different realities, economic, political, cultural, and even information-diet patterns shape context, presenting particular aspects of these phenomena. This fishbowl will focus on sharing experiences and perspectives about how to uncover and tackle these dynamics through different techniques and perspectives. The main questions to be addressed are: what is general and what is contextual? How should global platforms deal with these differences? Which research findings can and cannot establish common ground in policy discussions about the subject? The session will be structured as a frank conversation among five observers from different standpoints, promoting exchange between Global North and South approaches (such as Harvard’s study on US “Network Propaganda” on one side and research on political propaganda on WhatsApp in the 2018 Brazilian election on the other). It is a space for the construction of shared diagnoses among participants on how to balance diverse conceptions regarding the protection of human rights and democracy.
Matthew Battles | Friday, June 14th | 12:00 - 13:00 | Location: Beehive
The session seeks to inspire participants to reflect on the demands technology makes on the natural world, and to foster dialogue on emergent conceptions of rights for nonhuman entities. Our concept of sustainable development—indeed our very flourishing as a species—is connected ineluctably to the welfare of nature. How should networks, communication, and media be designed in light of this inescapable fact? How might we articulate actionable commitments to the digital empowerment of living things on the planet? Is the digital doing enough to address climate change, the loss of habitat, and diminishing biodiversity—not only as present threats to human rights and human thriving, but as essentially detrimental to rights themselves? Contending with such questions, the session will evoke the vision of a charter with the force, charisma, and rhetorical drive to foster a broad public conversation about the intrinsic worth of the natural world and its right to digital empowerment and expression.
AI has been recognized as an enabler of the SDGs, supporting States in the implementation of the 2030 Agenda. However, the deployment of AI in the Global South faces challenges and may bring about particular risks to the enjoyment of human rights, including the right to privacy. One key problem is that most of the debates around the opportunities and challenges afforded by AI are taking place in fora reflecting the views, experiences, and interests of stakeholders from the Global North. Moreover, many countries lack institutional and legal frameworks for dealing with AI-related privacy threats. These and other factors can be major roadblocks for deployments of AI that respect and promote the right to privacy. This roundtable will map challenges to the right to privacy brought about by AI in the Global South, addressing questions such as: what legal and institutional safeguards need to be in place to rein in privacy risks? How can data-driven approaches augment social and economic development while protecting the privacy of all individuals? How can local, community-driven solutions be supported? It will also connect key actors and identify the next steps to be taken.
NoC Skills Session
Jenna Sherman, Sonia Kim, Sandra Cortesi, Urs Gasser | Friday, June 14th | 13:00 - 14:30
Members of the NoC warmly invite you to an informal BYOL (bring your own lunch) conversation to swap skills related to your work with fellow attendees and leave RightsCon Tunis with more tools in your toolkit. You’ll have the opportunity to share your experiences and expertise in areas such as community building, fundraising, research, and organizing, and to work with others to brainstorm creative ways to address common challenges in these spaces -- all while getting a sense of who your fellow attendees are. We hope this activity will allow you to make connections with those you haven't crossed paths with yet and to take lessons from this skill exchange back home with you in tangible ways. In the spirit of fostering diverse connections, open dialogue, and bi-directional learning, all are welcome at this lunch. We hope you'll BYOL and join us!
The session will be divided into four parts. The first part will be dedicated to an examination of case studies where one country's censorship has been applied abroad. These case studies will be matched with potential solutions proposed by the panel members and audience. The second part will proceed similarly and explore the kinds of surveillance countries have most commonly instituted. How can we prevent equipment built by China from being exported globally and used for human rights abuses? Should countries impose sanctions on companies? Thirdly, the panel will explore how to approach information influence operations abroad. Many countries have experienced Russian and Iranian propaganda. However, it has been difficult to counter this subtle kind of information control. While the audience is encouraged to interject remarks or ideas at any point in the session, the last part is particularly designed to stimulate their interaction. The moderator will encourage the audience to voice their opinions and give examples that the panel may have missed or that deserve attention. The audience will also be invited to introduce the platforms, channels, or organisations that they work for, if any of them can contribute to tailoring counterstrategies and making a lasting impact.
Data might not be the new oil, and the Middle East isn't China, but Gulf Arab states are seeking a new global role in technology policy and investment. Their efforts are being embraced by international organizations encouraging digital development without confronting current digital dilemmas, as well as by global companies, who are both investing in and seeking investment from these states. Despite repressive cybercrime laws and a dismal record of human rights protection online in the Arab region, few in power seem to be asking what the implications of the Saudi government's $3.5 billion investment in Uber might be. Or how Bahrain's new data protection law will affect data stored in Amazon's new data center there. Or how the UAE's prioritization of applied artificial intelligence to advance citizens’ “happiness and wellbeing" could possibly go wrong. At this strategic roundtable, we will pose these and other questions to illuminate underexplored relationships between global technology companies and governments in the digitally developed GCC, and interrogate their impact on human rights. We hope to highlight the dilemmas raised when tech companies do business in these countries and/or receive investment from them and will discuss strategies to keep human rights top of mind.
We conceive of technology as an assemblage of materialities, norms, flows, actors, practices, territories, bodies, and subjectivities: something inextricably related to what we are, think, and feel. In many indigenous and non-urban or alternative communities, respect for nature, attachment to the land, the preservation of memory and traditions, shared goals, and a strong organization are crucial to survival. However, these imaginaries are not compatible with the dominant, corporate technological rationality, which takes advantage of people’s lives and environments and produces narratives that separate the land, the people, and their affections. For these communities, digital colonialism, the datafication of the self, and the capture of life produce poverty, exclusion, the loss of natural resources, and, in some cases, death. In this workshop, we would like to analyze the implications of digital colonialism in our quotidian lives and in relation to our traditional cultures.
Alexa Hasse, Sandra Cortesi, Sonia Kim, Mariel García-Montes | Friday, June 14 | 14:15 - 15:30 | Location: Elydhafa
Extending youth’s privacy, participation, and education rights to the digital world, Berkman Klein’s Youth and Media (YaM) team has co-produced a series of learning resources for and with young people (ages 12-18) — some translated into over 35 languages and implemented in countries such as India and Pakistan and in regions across Africa. By co-designing these educational resources with youth, we have been able to develop tools that incorporate youth voices and perspectives, connecting various knowledge areas related to the digital world with young people’s interests and experiences. Currently, over 100 such learning resources are available through our team’s open access Digital Citizenship+ (Plus) Resource Platform (DCRP).
Against this backdrop, our team aims in this session to showcase the DCRP and, together with educators who have employed and adapted YaM’s learning resources, to empower attendees to utilize the DCRP by demonstrating simple ways to adapt the platform’s learning resources to their own contexts.