Get To Know 22-23 BKC Fellow: Elizabeth Dubois

Dr. Elizabeth Dubois is an Associate Professor in the Department of Communication and University Research Chair in Politics, Communication and Technology at the University of Ottawa, where she also serves as the Director of the PolCommTech Lab. Her work examines political uses of digital media, including media manipulation, citizen engagement, and artificial intelligence. She is a 2022-2023 Berkman Klein Center Fellow. 

What upcoming work in your field are you excited about? Are there emerging policies or phenomena you are seeing, or perhaps something about your project?

The project that I'm really excited about right now is looking at social media influencers in election campaigns and in politics. Influencer marketing has been used in a bunch of industries for a while now, but influencers are only starting to be formally incorporated into electoral marketing strategies and election campaigns, and a lot of election agencies in Canada, the US, and the UK, the three countries that I've focused on so far, are not really ready for social media influencers to be a core part of election strategies. We don't have a good sense of what their role is right now. They're being thought of as advertisers, but they're not exclusively advertisers; they are social media accounts that happen to have a large following.

In theory, [influencers] should still be allowed to talk about politics and support whatever candidate they want. But if they have this large following, which is how they make their career, that free political speech starts to look a lot like advertising, which would need to be reported formally in order to comply with election laws. So, I am really excited to dig into who these social media influencers [are] and what their roles are in campaigns: when we need to expect and demand transparency about any money changing hands, any training they are receiving, and any communication with campaigns, and when we need to recognize that they are free to go do and say what they want with their platforms—because there will be instances where each of those responses is appropriate, but we don't know yet what those instances are.

I'm working with an RA [research assistant] who is based at the Harvard Kennedy School of Government. Priya and I have been creating a database of all of the laws, regulatory guidance, and regulations that we can find in the US and Canada related to politics, social media, influencers, and marketing strategies. We are also setting up a project to start interviewing campaigns, to try and understand how social media influencers are being engaged by them. Our goal is to really map out the scene here, to understand influencers' roles and how they're interacting in wider campaign information ecosystems.

Beyond that, [several colleagues and I] are working on a larger project where we're trying to develop theories, methods, and toolkits that we can share with folks to try and understand these phenomena on a more international scale. I think that's particularly important because we know that outside of Western democracies, uses of social media influencers in campaigns have been going on for a while. We know that social media influencers are being integrated into disinformation campaigns and hate campaigns in Brazil, in Chile, in the Philippines, in India, and a variety of other places. But I don't have the geographic or cultural expertise to do studies in those areas. So, we're working on creating a network of folks and a set of tools that we can share out to do this research in a way that allows for cross-national comparison.

I picked this one from your PolCommTech Lab website; it was a question there, but I was curious myself: are there policy impacts of explanatory journalism? If possible, could you cite a recent instance from your findings?

This project on explanatory journalism is done in partnership with The Conversation, which is a news outlet dedicated to getting academic knowledge out into the public sphere. Our team is trying to understand what happens when academics write these explanatory pieces, whether it's for The Conversation or, in Canada, for an outlet called Policy Options. We also see academics writing op-eds in various outlets—that kind of work where academics are trying to explain to policy audiences or public audiences what they've found in their work through journalistic practices. We are still in the early stages, so I don't have concrete evidence yet; I can't tell you that X percent of a policy change was impacted by this or that academic writing explanatory pieces. What I can say is that in our research, we found that these examples of explanatory journalism are one part of a larger constellation of different kinds of efforts, and their policy impacts really reside in the network of different activities that academics and policymakers participate in.

What I mean here is: yes, having an op-ed explaining a particular issue and its policy relevance is helpful, but it might also be the spark that gets an academic invited in to give a briefing to a government department. Then they're going to spend time engaging with the ideas in more depth through those briefings, and down the road, some of those ideas might show up in particular policy language. It's really hard to trace. But definitely in the qualitative parts of this study and the conversations we've been having, we've seen that these explanatory pieces are touchpoints, and they are important, but they're part of a much wider effort for policy impact by academics.

You have also worked extensively on disinformation and misinformation cycles. Has any strategy worked where social media companies or digital platforms have been able to address echo chambers and the mis- and disinformation cycle?

The first thing I think we need to do is pick apart, a little bit, what problem we're trying to solve. I've done a bunch of work on echo chambers, really trying to understand how people shape their information environment through the choices they make: what media they're going to consume, which social media to use, and how they're going to use them. What we've found is that, actually, echo chambers are not quite as big a problem as we sometimes fear. People are making choices that give them access to other ideas and expose them to other information; whether they take that information on board is a different story. Whether or not they believe it, whether or not they find it credible—those are different aspects to it. But when we're thinking about [whether] people are being exposed to ideas that run counter to what they already believe, the majority of people are actually already equipping themselves.

So, to get back to your question: what are platforms doing? What can they do? It's not actually a place where I think platforms necessarily need to step in. That said, platforms that are actively trying to give people tools to find new information and explore new ideas are the ones that help people most when they're trying to check information or find new ideas. We know that the algorithms that underpin our social media are designed to make us click more, share more, and spend more time on those platforms. So, sometimes that does lead a person who likes cat videos to see just tons and tons of cat videos, right? The times where we see algorithmic design being optimized for variety rather than repetition of the same things—those are the instances where we would see fewer chances of echo chamber effects being reinforced, inadvertently or intentionally, by the filtering process that these companies create when they're designing their algorithms.

You also mentioned mis- and disinformation, and I think that's a very different kind of question. 

Certainly, they overlap because, among the people who don't make use of their media environment in ways that expose them to other ideas, you can see a small portion of the population ending up stuck in what some have described as an “alternate reality”: believing facts that are completely different from what another group of people believes are the base facts. That's where the disinformation part comes in, where we see conspiracy theories take off like wildfire, and where we see potential harms, both digitally and out in the physical world, really rear their heads. In this instance, what platforms have been able to do, and what I think they need to continue doing, is recognizing when their platform is designed in a way that reinforces that behavior rather than incentivizing and promoting behaviors that help people get a broader variety of information, and trying to create spaces where hostility is not incentivized and empathy is. These are hard things to do, and at the end of the day, people are still going to use these tools the way they want to use them. So, there isn't a silver bullet or any sort of perfect way to go about it.

But thinking about those core principles of what kind of information environment the platform wants to create, those are the kinds of things that are going to help us develop environments that are pro-democratic, if that's the goal. Or just lacking in hate speech and harassment, if that's the goal. But sometimes that's not what the goals are for these companies. 

Interviewer 

Salome Bhatta is pursuing her Master’s in Learning Design, Innovation, and Technology at the Harvard Graduate School of Education. During the summer of 2023, she interned with the Berkman Klein Center’s events, communications, and educational programs teams.