Technology & Human Rights

Examining how technological advancements affect the future of human rights.

Even as societies have made enormous progress since the adoption of the Universal Declaration of Human Rights in 1948, technological advancements inevitably have profound implications for the human rights framework.

From a practical perspective, technology can help move the human rights agenda forward. Satellite data can be used to monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather data on rights abuses; and forensic technology can reconstruct crime scenes and help hold perpetrators accountable. Yet in the many areas where emerging technologies advance the human rights agenda, they have equal capacity to undermine those efforts. From authoritarian states monitoring political dissidents with surveillance technologies to “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovation must be taken into consideration as these tools are developed.

Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements and the rise of artificial intelligence and machine learning, in particular, necessitate interaction, collaboration, and coordination with leaders from business and technology in addition to government.

Select Publications

Dangerous Science: Might Population Genetics or Artificial Intelligence Undermine Philosophical Ideas about Equality?

Abstract:

This paper was prepared for an interdisciplinary conference on Gefährliche Forschung? (Dangerous Science?) held at the University of Cologne in February 2020 and is scheduled to appear in a volume of contributions from that event edited by Wilfried Hinsch and Susanne Brandstätter, the organizers, and to be published by de Gruyter. The paper delves directly into the question proposed to me—might population genetics or artificial intelligence undermine philosophical ideas about equality?—rather than first locating the context of this debate or previewing its contents. The first section discusses the ideal of equality, the next two address genetics in the context of responses to racism, and the remaining two consider possible changes that might come from the development of general artificial intelligence.

Read full text here

Mathias Risse | Aug 17 2020
Risse delves into the extent to which population genetics or artificial intelligence might undermine philosophical ideas about equality—both generally speaking and in the context of responses to racism.

Mass Incarceration and The Future: An Urgent Need to Address the Human Rights Implications of Criminal Background Checks and the Future of Artificial Intelligence

Abstract:

Between 70 and 100 million Americans—one in three—currently live with a criminal record. This number is expected to rise above 100 million by the year 2030.

The criminal justice system in the U.S. has over-incarcerated its citizen base; we have 5% of the world's population but 25% of the world's prison population. America became known as the “incarceration nation” as our prison and jail population exploded from fewer than 200,000 in 1972 to 2.2 million today, a social phenomenon known as mass incarceration. Along the way came a corresponding boom in querying databases for data on citizens with criminal records.

Once a person comes in contact with the U.S. criminal justice system, they begin to develop an arrest and/or conviction record. This record includes data aggregated from various databases mostly, if not exclusively, administered by affiliated government agencies. As the prison population grew, the number of background check companies rose as well. The industry has grown and continues to do so with very little motivation to wrestle with morality, data integrity standards, or the role of individual rights.

This paper addresses the urgent need to look toward a future where background screening decisions and artificial intelligence collide.

Read full paper here.

Teresa Y. Hodge & Laurin Leonard | Jul 17 2020
Examining a future where background screening decisions and artificial intelligence collide.

From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance

Abstract:

What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes.

Read the full paper.

Sabelo Mhlambi | Jul 8 2020
Tech Fellow Sabelo Mhlambi explores how the Sub-Saharan African philosophy of ubuntu addresses the ethical limitations of artificial intelligence.

“Global civil society and transnational advocacy networks have played an important role in social movements and struggles for social change. Looking ahead, these movements need to coalesce around the impact of technology on society, in particular harnessing the promise, challenging the perils, and looking at maintaining public and private spheres that respect creativity, autonomy, diversity, and freedom of thought and expression.”

- Sushma Raman