This interview is part of a collaborative effort between the summer 2018 BKC interns and the Communications team to showcase the tremendous work and backgrounds of our 2018-'19 BKC fellows.
Sabelo Mhlambi's research goal is to improve societies, particularly those with a history of human rights abuses, through accessible and secure methods of communication built on novel uses of cheap hardware, like that in our mobile devices. Alongside this, he works to advance the rights to privacy and freedom of expression in the context of expanding data collection and processing for AI. Technical topics he is interested in include offline networks, federated learning, homomorphic encryption, and blockchain.
I know that the project you're working on as a Fellow focuses on privacy & freedom of expression in countries lacking certain digital infrastructure through various technical methods. Can you tell me more about the specifics of it? What are your goals?
My ongoing project centers on creating peer-to-peer networks that don't rely on the internet. I do so using Bluetooth and WiFi technology, and I'm hoping to use digital signal processing more formally by encoding and transferring information through ultrasonic sound. Currently my work targets Android devices, which make up the majority of smartphones on the African continent. The technology I'm trying to improve is designed to be "low-tech," in the sense that it should take the least amount of effort, money, and technological tooling for people to share and access information, and to do so privately. Technology that can improve basic rights should be accessible to the so-called "developing world" or the Global South. To summarize my goals, I'd like to use the cheap hardware on our devices, especially mobile ones, in new ways to facilitate accessible and secure communication as a means to improve society, especially societies where there is a history of human rights abuse.
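The interview doesn't specify how information would be encoded in ultrasonic sound; one common approach is frequency-shift keying (FSK), where each bit maps to a distinct near-ultrasonic tone and the receiver measures which tone dominates each time slice. A minimal sketch of that idea, assuming arbitrary tone frequencies and timings chosen purely for illustration:

```python
import math

SAMPLE_RATE = 44100   # samples per second (a typical phone audio rate)
BIT_DURATION = 0.01   # seconds of audio per bit
FREQ_0 = 18000.0      # tone representing a 0 bit (near-ultrasonic, Hz)
FREQ_1 = 19000.0      # tone representing a 1 bit

def encode(bits):
    """Turn a bit string into raw audio samples via FSK."""
    n = int(SAMPLE_RATE * BIT_DURATION)
    samples = []
    for bit in bits:
        freq = FREQ_1 if bit == "1" else FREQ_0
        samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

def goertzel_power(frame, freq):
    """Signal power of `frame` at `freq`, via the Goertzel algorithm."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev, s_prev2 = 0.0, 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def decode(samples):
    """Recover the bit string by comparing tone power in each frame."""
    n = int(SAMPLE_RATE * BIT_DURATION)
    bits = ""
    for start in range(0, len(samples), n):
        frame = samples[start:start + n]
        bits += "1" if goertzel_power(frame, FREQ_1) > goertzel_power(frame, FREQ_0) else "0"
    return bits
```

A real acoustic channel would also need synchronization, error correction, and tolerance for speaker/microphone frequency response, none of which this toy handles.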
Technology that can improve basic rights should be accessible to the so-called "developing world" or the Global South.
Regarding AI accessibility: at my last job (https://mapbox.com) I built a system that moved our map-image rendering from GPU servers onto regular AWS EC2 servers, and our costs decreased 7x while we got similar performance. Since a lot of machine learning runs on GPUs for speed, I'm trying to create a basic ML library that reuses that same infrastructure, to see whether it can perform comparably on a CPU cluster at a fraction of the cost. I'm assuming this could work, but honestly I don't know; it's just a guess. If I succeed in making AI more accessible by lowering costs, I'd also like to explore homomorphic encryption as a way to train an ML model on encrypted data, or to encrypt the updated weights that are sent to a model, since a determined adversary might be able to recover identities from the weights a device sends back. This is quite similar to https://www.openmined.org. The idea is that if more people, especially in poorer regions, can get into AI, it might change how we look at bias in AI. Perhaps bias in AI is partly due to the barriers that keep practitioners in the Global South from participating. At the world's first Black in AI symposium, a mini-symposium at NIPS that drew more than 200 African and Black researchers in AI, the biggest complaint about participating in AI was the inability to access computing resources.
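Full homomorphic encryption is one way to keep individual weight updates private; a lighter-weight idea from the federated learning literature, secure aggregation, targets the same leak he describes: the server learns only the sum of all clients' updates, never any one client's. The toy sketch below (plain Python, a single shared RNG standing in for pairwise shared secrets, not cryptographically secure) shows the core trick of pairwise cancelling masks:

```python
import random

def mask_updates(updates, seed=0):
    """Add pairwise cancelling random masks to each client's weight vector.

    `updates` is a list of weight vectors, one per client. For every pair
    of clients (i, j), client i adds a shared random mask and client j
    subtracts it, so no single masked vector reveals the true update.
    """
    rng = random.Random(seed)  # stand-in for pairwise shared secrets
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(len(updates[0])):
                m = rng.uniform(-1.0, 1.0)
                masked[i][k] += m   # client i adds the shared mask
                masked[j][k] -= m   # client j subtracts the same mask
    return masked

def aggregate(masked):
    """Server-side sum: the masks cancel, leaving the true total update."""
    return [sum(vals) for vals in zip(*masked)]
```

The server sees only masked vectors, yet the aggregate matches the plain sum; a production scheme (e.g. the Bonawitz et al. protocol) also has to survive client dropouts and derive the masks from key agreement rather than a shared seed.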
That's all really cool! In terms of structure, a peer-to-peer network definitely lends itself to censorship resistance, which makes sense for the purpose of spreading information. I'm wondering what challenges you've encountered with your design so far—P2P networks can certainly be made anonymous, but that also means one cannot necessarily check the integrity of sources in terms of their authors. Are there particular tradeoffs you've made, and what ideas have motivated these tradeoffs? Also, what are your thoughts on security in your system so far, and have the different types of hardware you intend to use made security more difficult?
The main challenge so far with the P2P networking is that it's incredibly difficult to create reliable networks across Android devices, as they differ in hardware capabilities. The user experience isn't always great: connecting to a network can be hindered for several reasons, and at times there may be a hard limit on the number of devices in a network; from my knowledge it's about 8 devices or so when using WiFi Direct. The limitation of devices in a network means you have small networks, which I call "pods," and a member of one pod may have to connect to a member of another pod for the network to extend. This requires creating new mesh networking protocols, or more research into existing ones. One way to address this is to use Bluetooth, perhaps ultrasound, and WiFi for different purposes in the chain of events that occur in P2P networks. I believe an Android device can't use Bluetooth to pair with multiple devices at once, whereas it appears WiFi Direct can. Then there's legacy WiFi, which is a way of creating an access point as a means of forming a local network. A lot of this knowledge is esoteric, and the library I've been developing (with funding from and a partnership with The Guardian Project) seems to be an attempt at the first open source Android library for making P2P connections. I'm directing the project, and I hope to remove the complexity, if possible. This is part of the research I hope to conduct at the Berkman Klein Center, as well as making the leap from simple P2P networks to stable mesh networks.
The limitation of devices in a network means you have small networks, which I call "pods," and a member of a pod may have to connect to a member of another pod for the network to extend. This requires creating new mesh networking protocols or more research into existing protocols.
Regarding the security of the library: connecting to a device requires a user to accept the connection, so users are directly involved in approving requests. Securing messages sent between devices would require the messages themselves to be encrypted. The design decision I made is that encryption of messages should be handled by users of the library, which keeps the library from becoming too complex; and frankly, I haven't had enough funding or time to add encryption into the library. I suppose I'll get deeper into encryption and security when I delve into federated learning, where a cluster of devices can compute weights for a model and send the updated weights to a central location. I plan on using blockchain technology to verify the integrity of a "central" source and the integrity of weights (much like how a crypto wallet works). At the Center I'll be exploring various topics in AI, mostly privacy in AI (protecting our privacy as AI gets "smarter"), and making AI more accessible to developing regions.
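He doesn't detail the blockchain design, but one minimal reading of "verifying the integrity of weights" is a hash chain over successive weight updates: each link commits to the previous one, so tampering with any update breaks every later hash, which is the same integrity property a crypto wallet leans on. A sketch under that assumption:

```python
import hashlib
import json

def link(prev_hash, weights):
    """Hash a weight update together with the previous link's hash."""
    blob = json.dumps({"prev": prev_hash, "weights": weights}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def build_chain(updates):
    """Chain successive weight updates; returns one hash per update."""
    hashes, prev = [], "genesis"
    for w in updates:
        prev = link(prev, w)
        hashes.append(prev)
    return hashes

def verify(updates, hashes):
    """Recompute the chain; any altered update changes the hashes."""
    return build_chain(updates) == hashes
```

A deployed system would add signatures so devices can also authenticate who produced each update, not just that it was unmodified.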
What is your interpretation of the meaning and importance of privacy in the context of your work? I tend to think of it mostly as an issue of protection from those in power who impose rules about what they find acceptable, which seems similar to your proposal, with less emphasis on the "right to be let alone" phrasing.
The way I see privacy is quite similar to yours, Emma: it derives from a standpoint of empowering the powerless and creating channels for the public good to be achieved through citizens. I have to say that in the past few months my view of privacy has developed to include the right to anonymity, which I assume is what you describe as the "right to be let alone." For the past few months I was working for a large mapping company that uses open source mapping, and during my time there I couldn't help but notice that location and mapping technology may, to an extent, disrupt the ability to be private. If a company or some power knows where I am and the places I frequent, this could lead to privacy violations and perhaps persecution. I'm now also strongly interested in that right to be left alone, the right to be anonymous, to not be mapped, as a way to prevent members of marginalized groups from being further oppressed. I suppose the two views of privacy we're discussing are more related than meets the eye.
I have to say that my view of privacy, in the past few months, has developed more to include the right to anonymity—which I assume is what you describe as "the right to be let alone."
That’s a really good way of putting it. What else has your attention right now?
I just arrived in Zimbabwe yesterday, where I will be for about two weeks. Zimbabwe is preparing to have its first election without the former president of 38 years, Robert Mugabe. My time living on the African continent and my frequent travels back have inspired many of my projects. I began to notice and sense years ago that the inability to acquire and share information hindered the development of African nations. The development of society through technology, government, and human rights in some ways depends on the public having information that they can act on.
I began to notice and sense years ago that the inability to acquire and share information hindered the development of African nations. The development of society through technology, government, and human rights in some ways depends on the public having information that they can act on.
I've been thinking about election integrity recently, especially with regards to different technological approaches to curb corruption that seem great in theory but assume fairly advanced digital infrastructure that many places simply don't have. This seems like an example of something you are working against, which is a tendency of many people in tech to push solutions onto groups of people that don't account for that group's lived experiences or needs. I think it's especially exciting that your project specifically takes into account not just a meaningful and realistic use case, but also alternative hardware that works better with the current infrastructure of the environment.
Yes, election integrity is something worth considering. One of my summer projects last year was implementing Bitcoin from scratch as a way to truly learn and understand the blockchain and better understand cryptography, and I got as far as building a bitcoin wallet. The blockchain might be useful for elections, especially as a low-tech method.
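Implementing Bitcoin from scratch centers on one core loop: hash a block header with an incrementing nonce until the hash meets a difficulty target, so the record becomes expensive to rewrite. A stripped-down sketch of that proof-of-work step (illustrative only; real Bitcoin hashes a binary header with double SHA-256 and a much harder target):

```python
import hashlib

def mine(prev_hash, transactions, difficulty=3):
    """Find a nonce whose block hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        header = f"{prev_hash}|{transactions}|{nonce}".encode()
        block_hash = hashlib.sha256(header).hexdigest()
        if block_hash.startswith(target):   # proof of work found
            return nonce, block_hash
        nonce += 1

nonce, block_hash = mine("genesis", "alice->bob:5")
# Anyone can re-hash the header with this nonce to verify the work.
```

Because verification is a single hash while mining takes many, honest nodes can cheaply check a chain that was costly to produce, which is the property that makes tampering evident.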
How did you first become interested in this area of study and work? And how did you hear about the Berkman Klein Center?
The factors that led to this type of work have less to do with my formal education than with my cultural upbringing. I was raised under "Ubuntu," a traditional philosophy which holds that an individual has a debt to society: those who are in power have a responsibility to use that power to build and improve society. Being a computer scientist, in a way, puts me in a position of power—and the greatest way to use that power is to empower others.
Being a computer scientist, in a way, puts me in a position of power—and the greatest way to use that power is to empower others.
One of the finest ways of doing so, for me, has been to create tools for sharing information and encouraging progress through the debate of ideas. At my house we don't have internet, as my family here can't afford it, and during the day we don't have electricity (we're told it will be set up at the end of the month; it's a new house). Needless to say, the lack of connectivity and the cost of accessing the internet are some of the reasons I began looking into offline networks.
My interest in the Berkman Klein Center initially formed out of a chance encounter with a Fellow, Nathan Freitas, at a conference on creating offline networks for privacy and sharing of data. Another previous Fellow suggested my interests and projects were similar to the work that the Berkman Klein Center is interested in. Through subsequent conversations over the next few months and more research I did on my own about the work done at the Center by previous fellows, I knew this was a place where my ideas could flourish and I could grow tremendously.
Emma Weil recently graduated from Harvard, where they studied computer science and studio art. Emma focuses on low-level systems and security. They are interested in how security relies on certain assumptions about populations and users, and how politics are embedded in specific technological design choices. During the summer of 2018, Emma worked as a BKC intern with the Berklett Cybersecurity Project.