
Get To Know 23-24 BKC Fellow: Maitreya Shah

Maitreya Shah is one of the Berkman Klein Center’s 2023-2024 Fellows. He is a blind lawyer and technology policy researcher interested in the intersection of technology law and disability rights. His areas of interest include data privacy, algorithmic accountability, and the impact of AI and emerging technologies on persons with disabilities. 

Can you explain the intersection of technology policy and disability rights? 

To start with, given that people with disabilities constitute close to 15% of the world's population, this intersection should ideally have been apparent all along. It concerns how people with disabilities use and experience technology, and how technology affects them. One of its key aspects is the accessibility of digital spaces, which the international community has recognized through standards, policies, and legislation. Digital accessibility is where my work began and what I continue to advocate for.

However, other issues have come to light with the advent of AI and emerging technologies. People with disabilities are at risk of further marginalization when these technologies don't take into account their identities, needs, and challenges. Novel tools need to be assessed for the harms and biases they create when they discriminate against people with disabilities or exclude them from the digital sphere. Additionally, AI tools that entrench ableism amplify and reproduce negative attitudes towards disability and disabled people. Through my work, I therefore intend to investigate the impact of AI and emerging technologies on people with disabilities.

What are your thoughts on the impact of AI on persons with disabilities? How do you envision overcoming the risks while maximizing opportunity? 

The first thing that comes to mind is the lack of representation of people with disabilities in the development of these technologies. We have seen these conversations in the context of gender and race before. It is usually a non-disabled, white, cis male designing these technologies, so many biases, both explicit and implicit, get embedded into them. Secondly, the data used to train algorithms carry historic societal biases against people with disabilities. These biases get imprinted into the resulting systems, producing automated decisions that discriminate against people with disabilities. For instance, in the United States, many tools used in the recruitment and screening of [job] candidates have been found to discriminate against people with disabilities.

The Equal Employment Opportunity Commission has recently started initiating actions against such biased tools. AI tools deployed by many states have also been found to unlawfully curb Social Security benefits that people with disabilities are entitled to. There are numerous such examples. To contain the risks posed by these technologies and to make the digital space more inclusive, we need more representation of people with disabilities in the development and deployment of emerging technologies. Frameworks for mitigating AI bias should consider disability in the design of both technical protocols and regulatory documents, and both developers and deployers of AI tools should take into account the unique challenges faced by people with disabilities.

At BKC, how will you work to promote accessibility and inclusivity for persons with disabilities? Do you have any other goals during your time with BKC? 

My research project at BKC evaluates AI ethics instruments through a disability justice lens. I also look forward to working in tandem with different centers and institutions, such as the Harvard Law School Project on Disability, and I plan on participating in workshops, seminars, and discussions around technology and disability. I am eager to learn from the other Fellows in my cohort and from the wider BKC community, and I would love to undertake collaborative projects and exchange ideas with Fellows and center members.

Interviewer 

Dhriti Vadlakonda is an undergraduate at Harvard College. She is planning to concentrate in Neuroscience under the Mind Brain Behavior track. During the summer of 2023, Dhriti interned with metaLAB, where her role involved keeping the community informed about rapid advancements in radio frequency technologies via the Waves project.