Technology & Human Rights

Examining how technological advancements affect the future of human rights.

Even as societies have made enormous progress since the establishment of the Universal Declaration of Human Rights in 1948, technological advancements inevitably have profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward. For instance, satellite data can be used to monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather data on rights abuses; and forensic technology can reconstruct crime scenes and help hold perpetrators accountable. Yet in as many areas as emerging technologies advance the human rights agenda, they have equal capacity to undermine it. From authoritarian states monitoring political dissidents with surveillance technologies to “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovations must be considered alongside their development.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, the rise of artificial intelligence and machine learning in particular necessitates interaction, collaboration, and coordination with leaders from business and technology in addition to government.


Upcoming Events

2021 Apr 12

Intelligence and Artifice: Ethical Traps in the Imagination of AI

3:00pm to 4:00pm

Location: Virtual Event (Registration Required)

Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Mathias Risse, Director of the Carr Center for Human Rights Policy and Berthold Beitz Professor in Human Rights, Global Affairs and Philosophy. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.


2021 Apr 26

From Citizens United to Bots United: Reinterpreting "Robot Rights" as a Corporate Power Grab

3:00pm to 4:00pm

Location: Virtual Event (Registration Required)

Part of the Towards Life 3.0: Ethics and Technology in the 21st Century talk series described above.


Select Publications

Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar

Citation:

Mark Latonero and Aaina Agarwal. 3/19/2021. “Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar.” Carr Center Discussion Paper Series. See full text.

Abstract:

Human rights impact assessments (HRIAs) have recently emerged as a way for technology companies to identify, mitigate, and remedy the potential risks and harms of artificial intelligence (AI) and algorithmic systems. The purpose of this paper is to assess whether HRIAs are a tool fit for purpose for AI. Will HRIAs become an effective tool of AI governance that reduces risks and harms? Or, will they become a form of AI “ethics washing” that permits companies to hide behind a veneer of human rights due diligence and accountability? This paper finds that HRIAs of AI are only in their infancy. Simply conducting such assessments with the usual methods will miss the mark for AI and algorithmic systems, as demonstrated by the failures of the HRIA of Facebook in Myanmar. Facebook commissioned an HRIA after UN investigators found that genocide was committed in the country. However, the HRIA did not adequately assess the most salient human rights impacts of Facebook’s presence and product in Myanmar. HRIAs should be updated if they are to be used on AI and algorithmic systems. HRIAs for AI should be seen as an analysis of a sociotechnical system wherein social and technical factors are inherently intertwined and interrelated. Interdisciplinary expertise is needed to determine the appropriate methods and criteria for specific contexts where AI systems are deployed. In addition, HRIAs should be conducted at appropriate times relative to critical stages in an AI development lifecycle and function on an ongoing basis as part of a comprehensive human rights due diligence process. Challenges remain, such as developing methods to identify algorithmic discrimination as one of the most salient human rights concerns when it comes to assessing AI harms. In addition, a mix of voluntary actions and mandatory measures may be needed to incentivize organizations to incorporate HRIAs for AI and algorithmic systems in a more effective, transparent, and accountable way. The paper concludes with considerations for the technology sector, government, and civil society.

AI Principle Proliferation as a Crisis of Legitimacy

Citation:

Mark Latonero. 9/30/2020. “AI Principle Proliferation as a Crisis of Legitimacy.” Carr Center Discussion Paper Series, 2020-011. See full text.

Abstract:

While artificial intelligence is a burgeoning field, there is growing concern about the rapid proliferation of proposed principles on how AI should be governed.

In this Carr Center discussion paper, fellow Mark Latonero posits that human rights could serve to stabilize AI governance, particularly if framed as an anchor guiding the use of AI in ways that avert both everyday and catastrophic social harms.

Dangerous Science: Might Population Genetics or Artificial Intelligence Undermine Philosophical Ideas about Equality?

Abstract:

This paper was prepared for an interdisciplinary conference on Gefährliche Forschung? (Dangerous Science?) held at the University of Cologne in February 2020 and is scheduled to appear in a volume of contributions from that event edited by Wilfried Hinsch and Susanne Brandstätter, the organizers, and to be published by de Gruyter. Rather than locating the context of this debate or previewing the volume's contents, the paper delves directly into the question proposed to me: might population genetics or artificial intelligence undermine philosophical ideas about equality? The first section discusses the ideal of equality, the next two address genetics in the context of responses to racism, and the remaining two consider possible changes that might come from the development of general artificial intelligence.


“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman