
Get To Know 22-23 RSM Visiting Scholar: Elissa Redmiles

Dr. Elissa M. Redmiles is an Assistant Professor of Computer Science at Georgetown University, where she uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. She was a 2022-2023 Visiting Scholar with the Institute for Rebooting Social Media.

Could you share some of your ongoing projects? Which one are you most excited about?

Absolutely. The first one is [about] image-based sexual abuse. [My colleagues and I] have been doing a large-scale interview study with folks who share intimate content, as well as those folks who have been survivors of image-based sexual abuse, specifically non-consensual distribution of their intimate content, and with organizations that support victims of image-based sexual abuse. This is an issue that affects both folks who share intimate images recreationally—research shows that over 80% of US adults do this—as well as folks who produce intimate imagery for commercial purposes like OnlyFans. There has been a lot of work on the mental health harms and the criminology side of image-based sexual abuse, but very little work on the technological side in terms of: What technologies do people use to store and share this content? What technologies are they using to try to protect their content from being re-shared? And what are the safer futures that they imagine that technologists might be able to build?

The second thing that we have been working on is trying to explain to people various technical privacy protections…These are things that technologists get really excited about because they feel like they offer guarantees that will help protect people's privacy, but it can be very difficult to effectively explain to people what guarantees they're actually getting from these technologies in a way that they understand. It's also important to make sure that we're prioritizing developing technology for the biggest concerns that people have. And I think sometimes we end up developing technology that's fun to build, but isn't necessarily matched with end user concerns. And so we try to translate these technologies into simple statements that people can understand so that they can make informed decisions when they go to share their data, and also so that we can elicit their preferences about what kinds of privacy guarantees are most important to them.

And the third project was on Facebook and advertisements?

We have a paper coming out on this soon. This paper is based on a longitudinal study we did in which people installed a browser plugin that collected their Facebook advertisements, and we qualitatively coded a large portion of those advertisements into different categories. We were trying to identify ads that didn't necessarily violate Facebook's rules, but that could be problematic–for example, ads with weight loss, body image, or eating disorder-related content, [or] deceptive advertisements that might offer someone a too-good-to-be-true deal. Another example is ads for free solar panels; what the advertisers are really trying to do is collect personal data with clickbait like this.

We did our own coding of what we saw, but we also asked participants to give us feedback on their own ads. We surveyed them every month and showed them a small portion of the ads that they had received in the past month, and asked them for their perception of whether they found the ad relevant, whether they liked it or disliked it, and why. Based on this, we were able to develop a taxonomy of problematic advertising, characterizing what it means for an ad to be problematic, and to identify biases. We found that some socio-demographic groups were getting more problematic ads than others. For example, we see older adults and Black users receiving more of certain types of problematic ads. 

We used our data to figure out whether they're seeing those ads because advertisers are targeting them or because Facebook's ad optimization algorithm is just sending the ads to them, and we saw that it's a combination of both. Advertisers are targeting certain vulnerable groups with these ads, but we also see that even when the advertiser does no targeting at all, Facebook still, for example, sends more clickbait to older adults.

It looks like this is a trade-off that people are currently paying in exchange for getting to use social media and similar services for free. Is there an alternative way social media companies could handle this, or things that we could do better as users?

I think one of the things we're quite interested in, and there's some research from other groups on this as well, is people's ability to have autonomy and control over the experience…Giving people the option to not see ads around a particular topic would be really helpful, and there really is not a way for people to do that right now. There is a way to say, “I don't want to be advertised to,” based on a particular interest the platform has inferred for you, but we saw that relatively few advertisers were even using those categories. So it's likely that platforms need to be doing a bit more machine learning and analysis on the ads that are getting submitted to classify them into topics and allow people to opt out of advertisements on particular topics.

For the platforms, there's also the option of spam detection. You often have these kinds of crawlers that will check: what is this link that's being put in this post? Where is the link taking us? What's happening there? So there's more analysis that could be done, say, on these solar panel ads: okay, it's taking users to a form where they have to fill in their personal information. Does that form ever result in them getting an offer? That's something the platform would be able to look into and verify, either manually or in an automated way. It's more rigorous, but it keeps low-quality ads off their platform. There is also this whole area of brand safety, where brands usually don't want to appear alongside low-quality or spammy ads. So I think it's also an area where advertisers might be able to put pressure on social media platforms.

Coming back around to something we touched on earlier, users’ privacy, security, and online safety-related decision-making processes: what are some privacy protection measures that you personally take and recommend others adopt as well, on a micro level?

My usual answer to this question is that it is incredibly context-specific. For example, you'll end up hearing things like, “don't write down your passwords.” Well, if you're worried about an attacker being in your house, then yes, you shouldn't write down your passwords—like if you're in a domestic violence situation or maybe a really high-traffic physical space. If your home is a safe space, then writing your passwords down is probably a much better practice than having the same password for every account. The best practice would be to use a password manager; there have been breaches of those, but I'd say by and large, a password manager is the best practice.

Different people have different priorities, capacities, and financial [considerations], so what I usually recommend people do is think through what you're worried about–what are the things that would cause you the most harm? For example, in the use case of people sending intimate content, one of the things that folks in the sex industry do a lot is think very carefully about not taking pictures in a setting that would be easy to identify. If you're sharing pictures of yourself, try to think carefully about identifying items, tattoos, birthmarks, and so on.

Two-factor authentication is often recommended as a best practice; a lot of times you're forced to do it, probably more than you would like. What I usually tell people is that you only have so much compliance budget to do all of these things, so pick the couple of accounts that you care about most. Think about how they're linked to your other accounts. You might care about your bank account most; if the password reset for your bank account goes to your email, you also need to care about your email. So think about those couple of things you care about, what information is contained in them, and focus your efforts there. And don't get too discouraged. You're never going to be able to protect everything; you can only do your best.

Interviewer 

Salome Bhatta is pursuing her Master’s in Learning Design Innovation Technology at the Harvard Graduate School of Education. During the summer of 2023, she interned with the Berkman Klein Center’s events, communications, and educational programs teams.