Technology & Human Rights

Examining how technological advancements affect the future of human rights.

While societies have made enormous progress since the adoption of the Universal Declaration of Human Rights in 1948, technological advancements inevitably carry profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward. For instance, satellite data can monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather data on rights abuses; and forensic technology can reconstruct crime scenes and hold perpetrators accountable. Yet in each of the many areas where emerging technologies advance the human rights agenda, they have equal capacity to undermine it. From authoritarian states monitoring political dissidents with surveillance technologies to “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovations must be considered as those innovations are developed.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements and the rise of artificial intelligence and machine learning, in particular, necessitate interaction, collaboration, and coordination with leaders from business and technology in addition to government.


Upcoming Events

2020 Apr 27

CANCELLED - The New Cybersecurity of the Mind

5:30pm to 6:45pm

Location: Rubenstein 414-AB

Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Mathias Risse, Director of the Carr Center for Human Rights Policy and Lucius N. Littauer Professor of Philosophy and Public Administration. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series brings together a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.


Select Publications

The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification


Abstract:

Artificial Intelligence (AI) technologies have the capacity to do a great deal of good in the world, but whether they do so depends not only on how we use those technologies but also on how we build them in the first place.

The unfortunate truth is that personal data has become the bricks and mortar used to build many AI technologies, and more must be done to protect the people whose personal data is being used. Through a case study on AI-powered remote biometric identity verification, this paper explores the technical requirements of building AI technologies with high volumes of personal data and the implications for our understanding of existing data protection frameworks. Ultimately, it proposes a path forward for ethically using personal data to build AI technologies.


Neal Cohen | Apr 4, 2020
Tech Fellow Neal Cohen explores the biases and ethics of building AI technologies with personal data.

Why the AI We Rely on Can’t Get Privacy Right (Yet)


Abstract:

While artificial intelligence (AI)-powered technologies now commonly appear in many of the digital services we interact with daily, an often neglected truth is that few companies are actually building the underlying AI technology.

 

Neal Cohen | Mar 7, 2020
Neal Cohen analyzes why AI technologies fall short on privacy.
Last updated on 03/11/2020

Can Facebook’s Oversight Board Win People’s Trust?

Citation:

Mark Latonero. “Can Facebook’s Oversight Board Win People’s Trust?” Harvard Business Review, January 29, 2020.

Abstract:

Technology & Human Rights Fellow Mark Latonero breaks down the larger implications of Facebook's global Oversight Board for content moderation.

Facebook is a step away from creating its global Oversight Board for content moderation. The bylaws for the board, released on Jan. 28, lay out the blueprint for an unprecedented experiment in corporate self-governance for the tech sector. While there’s good reason to be skeptical of whether Facebook itself can fix problems like hate speech and disinformation on the platform, we should pay closer attention to how the board proposes to make decisions.

Mark Latonero | Jan 29, 2020
Technology & Human Rights Fellow Mark Latonero breaks down the larger implications of Facebook's global Oversight Board for content moderation.
Last updated on 02/25/2020

“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman