Technology & Human Rights

Examining how technological advancements affect the future of human rights.

Societies have made enormous progress since the adoption of the Universal Declaration of Human Rights in 1948, yet technological advancements inevitably carry profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward. For instance, satellite data can monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather data on rights abuses; and forensic technology can reconstruct crime scenes and help hold perpetrators accountable. Yet in the many areas where emerging technologies advance the human rights agenda, they have equal capacity to undermine it. From authoritarian states monitoring political dissidents through surveillance technologies to "deepfakes" destabilizing the democratic public sphere, the ethical and policy implications of new technologies must be weighed as those technologies are developed.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements and the rise of artificial intelligence and machine learning, in particular, necessitate interaction, collaboration, and coordination with leaders from business and technology in addition to government.

Select Publications

Human Rights Implications of Algorithmic Impact Assessments: Priority Considerations to Guide Effective Development and Use

Abstract:

The public and private sectors are increasingly turning to algorithmic or artificial intelligence impact assessments (AIAs) as a means to identify and mitigate harms from AI. While promising, a lack of clarity on the proper scope, methodology, and best practices for AIAs could inadvertently perpetuate the harms they seek to mitigate, especially to human rights. We explore the emerging integration of the human rights legal framework into AI governance strategies, including the implementation of human rights impact assessments (HRIAs) to assess AI. We consider the benefits and drawbacks of recent public- and private-sector implementations of AIAs and HRIAs in the context of an emerging trend toward standards, certifications, and regulatory technologies for responsible AI governance. We conclude with priority considerations to better ensure that AIAs and their corresponding responsible AI governance strategies live up to their promise.

Read the paper.

Nonnecke & Dawson | Oct 21 2021

The Power of Choosing Not to Build: Justice, Non-Deployment, and the Purpose of AI Optimization

Abstract:

Are there any types of AI that should never be built in the first place? The “Non-Deployment Argument”—the claim that some forms of AI should never be deployed, or even built—has been subject to significant controversy recently: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society. However, there are good reasons to subject the view that we should always try to build, deploy, and gradually optimize new AI tools to critical scrutiny: in the context of AI, making things better is not always good enough. In specific cases, there are overriding ethical and political reasons—such as the ongoing presence of entrenched structures of social injustice—why we ought not to continue to build, deploy, and optimize particular AI tools for particular tasks. Instead of defaulting to optimization, we have a moral and political duty to critically interrogate and contest the value and purpose of using AI in a given domain in the first place.

Read the paper.

Annette Zimmermann | Oct 5 2021

Human Rights and the Pandemic: The Other Half of the Story

Citation:

Elizabeth M. Renieris. 10/2/2021. “Human Rights and the Pandemic: The Other Half of the Story.” Carr Center Discussion Paper Series.

Abstract:

Human rights are a broad array of civil, political, economic, social, and cultural rights and freedoms that are universal and inalienable, inherent to the dignity of every human being. The application of human rights to digital technologies has generally focused on individual civil and political rights, such as the freedom of expression and privacy. However, as digital technologies evolve beyond traditional information and communications technologies to increasingly mediate access to everything from healthcare to employment, education, and participation in social and cultural life, an increasingly broad array of human rights are implicated. With humanity more reliant on digital tools and technologies than ever before, the stakes have never been more apparent than during the Covid-19 pandemic. Gripped by the magical potential of digital tools and technologies and the allure of simple solutions to complex governance challenges, governments and key stakeholders have adopted an exceedingly limited view of human rights in relation to these technologies, focusing almost exclusively on a narrow set of civil and political rights while virtually ignoring threats to economic, social, and cultural rights. For those already at the margins, this has exacerbated their digital exclusion. This paper calls for a more expansive view of human rights in relation to technology governance. After contextualizing the role of economic, social, and cultural rights in relation to digital technologies, this paper examines how such rights have been largely absent from the discourse around technologies deployed in the pandemic (“pandemic tech”), as well as the consequences of that omission. The paper then explores how a recalibration of human rights in relation to digital technologies, specifically pandemic tech, could help prevent geopolitical fracturing, reorient the conversation around people rather than technology, and provide a critical backstop against the runaway commercialization that threatens the exercise and enjoyment of fundamental rights by individuals and communities.

Read the paper.

Elizabeth M. Renieris | Oct 2 2021

“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman