Technology & Human Rights

Examining how technological advancements affect the future of human rights.

While recognizing the enormous progress that societies have made since the adoption of the Universal Declaration of Human Rights in 1948, we must also acknowledge that technological advancements inevitably have profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward. For instance, satellite data can be used to monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather evidence of rights abuses; and forensic technology can reconstruct crime scenes and help hold perpetrators accountable. Yet in the many areas where emerging technologies advance the human rights agenda, they have an equal capacity to undermine those efforts. From authoritarian states monitoring political dissidents with surveillance technologies to “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovations must be taken into consideration as they are developed.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements, and the rise of artificial intelligence and machine learning in particular, necessitate interaction, collaboration, and coordination with leaders in business and technology, in addition to government.

Select Publications

The Promise and Pitfalls of the Facebook Oversight Board

Citation:

Flynn Coleman, Brandie Nonnecke, and Elizabeth M. Renieris. 5/6/2021. “The Promise and Pitfalls of the Facebook Oversight Board.” Carr Center Discussion Paper Series.

Abstract:

The Facebook Oversight Board recently issued its first decisions on content removals by Facebook. See what some of the Carr Center Technology and Human Rights Fellows had to say about the benefits, challenges, and risks of external oversight boards for platform governance and accountability.

Read the discussion.

Data as Collectively Generated Patterns: Making Sense of Data Ownership

Citation:

Mathias Risse. 4/26/2021. “Data as Collectively Generated Patterns: Making Sense of Data Ownership.” Carr Center Discussion Paper Series.

Abstract:

Data ownership is power. Who should hold that power? How should data be owned? The importance of data ownership explains why it has been analogized to other domains where ownership is better understood, and several “data-as” proposals are on the table: data as oil, as intellectual property, as personhood, as salvage, and as labor. Author Mathias Risse proposes another way of thinking about data. His view characterizes data in ways that make them accessible to ownership considerations, and it can be expressed as a data-as view of its own: data as collectively generated patterns. Unlike the alternatives, this view does not create an equivalence with another domain where ownership is already well understood. It reveals how ownership considerations enter, but we must explore afresh how they do. Accordingly, Risse proposes a way for ownership considerations to bear on data once we understand them as collectively generated patterns. And if we did understand them that way, the internet would presumably need to be designed very differently from what we have now.

Read the full paper.

Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar

Citation:

Mark Latonero and Aaina Agarwal. 3/19/2021. “Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar.” Carr Center Discussion Paper Series.

Abstract:

Human rights impact assessments (HRIAs) have recently emerged as a way for technology companies to identify, mitigate, and remedy the potential risks and harms of artificial intelligence (AI) and algorithmic systems. The purpose of this paper is to assess whether HRIAs are fit for purpose when applied to AI. Will HRIAs become an effective tool of AI governance that reduces risks and harms? Or will they become a form of AI “ethics washing” that permits companies to hide behind a veneer of human rights due diligence and accountability? This paper finds that HRIAs of AI are only in their infancy. Simply conducting such assessments with the usual methods will miss the mark for AI and algorithmic systems, as demonstrated by the failures of the HRIA Facebook commissioned in Myanmar after UN investigators found that genocide had been committed in the country. That HRIA did not adequately assess the most salient human rights impacts of Facebook’s presence and product in Myanmar. HRIAs should be updated if they are to be used for AI and algorithmic systems. An HRIA for AI should be seen as an analysis of a sociotechnical system in which social and technical factors are inherently intertwined and interrelated. Interdisciplinary expertise is needed to determine the appropriate methods and criteria for the specific contexts where AI systems are deployed. In addition, HRIAs should be conducted at appropriate times relative to critical stages in the AI development lifecycle and should function on an ongoing basis as part of a comprehensive human rights due diligence process. Challenges remain, such as developing methods to identify algorithmic discrimination, one of the most salient human rights concerns in assessing AI harms. In addition, a mix of voluntary actions and mandatory measures may be needed to incentivize organizations to incorporate HRIAs for AI and algorithmic systems in a more effective, transparent, and accountable way. The paper concludes with considerations for the technology sector, government, and civil society.

Read the paper. 

See all publications

“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman