Technology & Human Rights

Examining how technological advancements affect the future of human rights.

While recognizing the enormous progress societies have made since the adoption of the Universal Declaration of Human Rights in 1948, technological advancements inevitably have profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward: satellite data can monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather evidence of rights abuses; and forensic technology can reconstruct crime scenes and hold perpetrators accountable. Yet for every area in which emerging technologies advance the human rights agenda, they have equal capacity to undermine it. From authoritarian states monitoring political dissidents with surveillance technologies to “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovation must be taken into account.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements and the rise of artificial intelligence and machine learning, in particular, necessitate interaction, collaboration, and coordination with leaders from business and technology in addition to government.

News and Announcements


Upcoming Events

2020 Nov 17

Towards Life 3.0: A Conversation with Shoshana Zuboff

5:30pm to 6:30pm


Virtual Event (Registration Required)

Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Mathias Risse, Director of the Carr Center for Human Rights Policy and Lucius N. Littauer Professor of Philosophy and Public Administration. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human...



Select Publications

From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance



What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes.

Read the full paper.

Sabelo Mhlambi | July 8, 2020
Tech Fellow Sabelo Mhlambi explores how the Sub-Saharan African philosophy of ubuntu reconciles ethical limitations of artificial intelligence.

You Purged Racists From Your Website? Great, Now Get to Work



Joan Donovan explains that the COVID-19 infodemic has taught social media giants an important lesson: they must take action to control the content on their sites.

For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.

Read the full article.

Joan Donovan | July 1, 2020

Reimagining Reality: Human Rights and Immersive Technology


Brittan Heller. 6/12/2020. “Reimagining Reality: Human Rights and Immersive Technology.” Carr Center Discussion Paper Series, 2020-008. See full text.


This paper explores the human rights implications of emergent technology, focusing on virtual reality (VR), augmented reality (AR), and other immersive technologies. Because of the psychological and physiological aspects of immersive technologies, and the potential for a new, invasive class of privacy-related harms, Heller argues that content creators, hardware producers, and lawmakers should take increased caution to protect users. This will help protect the nascent industry in a changing legal landscape and help ensure that the beneficial uses of this powerful technology outweigh the potential misuses.

In the paper, Heller first reviews the technology and terminology around immersive technologies to explain how they work, how a user’s body and mind are impacted by the hardware, and what social role these technologies can play for communities. Next, she describes some of the unique challenges for immersive media, from user safety to misalignment with current biometrics laws. She introduces a new concept, biometric psychography, to explain how the potential for privacy-related harms is different in immersive technologies, due to the ability to connect a user’s identity to their innermost thoughts, wants, and desires. Finally, she describes foreseeable developments in the immersive industry, with an eye toward identifying and mitigating future human rights challenges. The paper concludes with five recommendations for actions that the industry and lawmakers can take now, while the industry is still emerging, to build human rights into its DNA.

Brittan Heller | June 12, 2020
Exploring the human rights implications of virtual reality, augmented reality, and immersive technologies.

“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman