Technology & Human Rights

Since its founding in 1999, the Carr Center has developed a unique focus of expertise on the most dangerous and intractable human rights challenges of the new century, including genocide, mass atrocity, state failure, and the ethics and politics of military intervention.

Examining how technological advancements affect the future of human rights.

While societies have made enormous progress since the adoption of the Universal Declaration of Human Rights in 1948, technological advancements inevitably have profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward. For instance, satellite data can monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather data on rights abuses; and forensic technology can reconstruct crime scenes and hold perpetrators accountable. Yet for the multitude of areas in which emerging technologies advance the human rights agenda, technological developments have equal capacity to undermine those efforts. From authoritarian states monitoring political dissidents by way of surveillance technologies, to the phenomenon of “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovations must be considered alongside their development.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements and the rise of artificial intelligence and machine learning, in particular, necessitate interaction, collaboration, and coordination with leaders from business and technology in addition to government.


Select Publications

Automation Anxiety and a Right to Freedom from Automated Systems and AI


Abstract:

Rapid advances in AI have created a global sense of urgency around the ways that automated systems are changing human lives. Not all of these changes are necessarily for the better. On what basis, therefore, might we be able to assert a right to be free from automated systems and AI? The idea seems absurd, given how embedded these technologies already are and the improvements they have generated in contemporary life compared with prior periods in human history. And yet, there are good grounds for recognizing a general entitlement to protect at least three important human abilities: i) to work; ii) to know and understand the source of the content we consume; and iii) to make our own decisions. Understood comprehensively, a right to freedom from automated systems and AI could mean that individuals and communities are presented with alternative options and/or leverage to keep them from losing these abilities long cherished in the history of human development. Such a right does not call for dismantling the technological age, but rather designates what we ought to contest and protect in a world with a precarious dependence on technology.

Read the paper.

Ziyaad Bhorat | October 2, 2023
Last updated on October 3, 2023

Can We Move Fast Without Breaking Things? Software Engineering Methods Matter to Human Rights Outcomes


Abstract:

As the products of the IT industry have become ever more prevalent in our everyday lives, evidence of undesirable consequences of their use has become increasingly difficult to ignore. Consequently, several responses, ranging from attempts to foster individual ethics and collective standards in the industry to legal and regulatory frameworks, have been developed and are being widely discussed in the literature. This paper argues instead that currently popular software engineering methods are themselves implicated, as they hinder the work that would be necessary to avoid negative outcomes. I argue that software engineering has regressed and that introducing rights as a core concept into the industry's ways of working is essential for making software engineering more rights-respecting.

Read the paper.
Alexander Voss | October 24, 2022
Last updated on October 27, 2022

Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems


Abstract:

Amid the hype around A.I., we are observing a world in which states are increasingly adopting algorithmic decision-making systems, together with narratives that portray them as a magic wand to “solve” social, economic, environmental, and political problems. But in practice, instead of delivering on that promise, the so-called Digital Welfare States are likely to be deploying oppressive algorithms that expand practices of surveillance of the poor and vulnerable; automate inequalities; are racist and patriarchal by design; further practices of digital colonialism, in which data and mineral extractivism feed Big Tech businesses from the Global North; and reinforce neoliberal practices that progressively erode social security. While much has been discussed about “ethical,” “fair,” or “human-centered” A.I., particularly focused on transparency, accountability, and data protection, these approaches fail to address the overall picture.

To deepen critical thinking and question such trends, this article summarizes findings of the notmy.ai project, drawing on case-based analysis of A.I. projects from Latin America that are likely to harm gender equality and its intersectionalities of race, class, sexuality, territoriality, etc. It seeks to contribute to the development of feminist frameworks for questioning algorithmic decision-making systems being deployed by the public sector. The universalistic approach of human rights frameworks provides important goals for humanity to seek, but when we look at the present, we cannot ignore existing power relations that maintain historical relations of oppression and domination. Rights are not universally accessed.

Feminist theories and practices are important tools to acknowledge the existence of the political structures behind the deployment of technologies and, therefore, are an important framework to question them. For this reason, they can serve as a powerful instrument to imagine other tech and other worlds based on collective and more democratic responses to core societal challenges, focused on equity and social-environmental justice.

Read the paper.
Joana Varon & Paz Peña | October 17, 2022
Last updated on October 20, 2022
See all publications

“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman