Practice

The Evolution of the Global Human Rights Movement: A Three-Decade Perspective
Carr Center for Human Rights Policy. 2/8/2023. “The Evolution of the Global Human Rights Movement: A Three-Decade Perspective.”

Kenneth Roth gave a lecture at the JFK Jr. Forum, discussing the evolution of human rights work, the strategic challenges and opportunities facing Human Rights Watch over the decades, and the future of human rights. 

Roth's talk, co-sponsored by the Carr Center and the Institute of Politics, featured an introduction by Mathias Risse (Berthold Beitz Professor in Human Rights, Global Affairs and Philosophy, and Director of the Carr Center for Human Rights Policy) and additional comments from Kathryn Sikkink (Ryan Family Professor of Human Rights Policy), and was moderated by Sushma Raman, former Executive Director of the Carr Center for Human Rights Policy. This transcript has been edited for length and clarity. A complete recording of the event is also available online.

Read the transcript of his lecture here.

Can We Move Fast Without Breaking Things? Software Engineering Methods Matter to Human Rights Outcomes
Alexander Voss. 10/24/2022. “Can We Move Fast Without Breaking Things? Software Engineering Methods Matter to Human Rights Outcomes.” Carr Center Discussion Paper Series.
As the products of the IT industry have become ever more prevalent in everyday life, evidence of the undesirable consequences of their use has become ever harder to ignore. In response, a range of measures, from attempts to foster individual ethics and collective standards in the industry to legal and regulatory frameworks, have been developed and are widely discussed in the literature. This paper argues instead that currently popular software engineering methods are themselves implicated, because they hinder the work that would be necessary to avoid negative outcomes. I argue that software engineering has regressed and that introducing rights as a core concept into the industry's ways of working is essential to making software engineering more rights-respecting.
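To make the idea of rights as a core engineering concept more concrete, the following minimal Python sketch (entirely hypothetical, not drawn from the paper) shows one way a team could make a rights-impact review a required, recorded artifact that gates a release; all names, fields, and rights categories here are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: one way a team might treat rights as a
# first-class artifact in its release process, in the spirit of the paper.

RIGHTS_OF_CONCERN = ["privacy", "non-discrimination", "freedom of expression"]

@dataclass
class RightsReview:
    """A recorded assessment of one change's impact on one right."""
    right: str                  # e.g., "privacy"
    affected_groups: list[str]  # who could be harmed
    mitigation: str             # what was done about the risk
    reviewer: str               # who signed off

@dataclass
class ChangeRequest:
    description: str
    reviews: list[RightsReview] = field(default_factory=list)

def release_gate(change: ChangeRequest) -> bool:
    """Block release until every right of concern has a signed review."""
    reviewed = {r.right for r in change.reviews if r.reviewer and r.mitigation}
    missing = [r for r in RIGHTS_OF_CONCERN if r not in reviewed]
    if missing:
        print(f"Blocked: no rights review recorded for {missing}")
        return False
    return True

# Example: a change with only a privacy review is held back until the
# remaining rights of concern have been assessed.
change = ChangeRequest(description="Enable location-based ad targeting")
change.reviews.append(RightsReview(
    right="privacy",
    affected_groups=["all users"],
    mitigation="Location coarsened to city level; opt-in only",
    reviewer="j.doe",
))
assert release_gate(change) is False
```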

Read the paper.
Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems
Joana Varon and Paz Peña. 10/17/2022. “Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems.” Carr Center Discussion Paper Series.
Amid the hype around A.I., we are observing a world in which states increasingly adopt algorithmic decision-making systems, together with narratives that portray them as a magic wand to “solve” social, economic, environmental, and political problems. In practice, instead of delivering on that promise, the so-called Digital Welfare States are likely deploying oppressive algorithms that expand surveillance of the poor and vulnerable; automate inequalities; are racist and patriarchal by design; further practices of digital colonialism, in which data and mineral extractivism feed Big Tech businesses from the Global North; and reinforce neoliberal practices that progressively erode social security. While much has been said about “ethical,” “fair,” or “human-centered” A.I., particularly with a focus on transparency, accountability, and data protection, these approaches fail to address the overall picture.

To deepen critical thinking and question such trends, this article summarizes findings of the notmy.ai project, drawing on case-based analysis of A.I. projects from Latin America that are likely to harm gender equality and its intersectionalities of race, class, sexuality, territoriality, and more. It seeks to contribute to the development of feminist frameworks for questioning algorithmic decision-making systems deployed by the public sector. The universalistic approach of human rights frameworks provides important goals for humanity to pursue, but when we look at the present, we cannot ignore existing power relations that maintain historical patterns of oppression and domination. Rights are not universally accessed.

Feminist theories and practices are important tools to acknowledge the existence of the political structures behind the deployment of technologies and, therefore, are an important framework to question them. For this reason, they can serve as a powerful instrument to imagine other tech and other worlds based on collective and more democratic responses to core societal challenges, focused on equity and social-environmental justice.

Read the paper.
Building Human Rights into Intelligent-Community Design: Beyond Procurement
Phil Dawson, Faun Rice, and Maya Watson. 2/25/2022. “Building Human Rights into Intelligent-Community Design: Beyond Procurement.” Carr Center Discussion Paper Series.

Cities have emerged as test beds for digital innovation. Data-collecting devices, such as sensors and cameras, have enabled fine-grained monitoring of public services including urban transit, energy distribution, and waste management, yielding tremendous potential for improvements in efficiency and sustainability. At the same time, there is rising public awareness that, without clear guidelines or sufficient safeguards, data collection and use in both public and private spaces can negatively affect a broad spectrum of human rights and freedoms. To move forward productively with intelligent-community projects and design them to meet their full potential in serving the public interest, a consideration of rights and risks is essential.

Read the paper.

Humanitarian Digital Ethics: A Foresight and Decolonial Governance Approach
Aarathi Krishnan. 1/20/2022. “Humanitarian Digital Ethics: A Foresight and Decolonial Governance Approach.” Carr Center Discussion Paper Series.
Just as rights are not static, neither is harm. The humanitarian system has long been critiqued as arguably colonial and patriarchal. As these systems increasingly intersect with Western, capitalist technology systems in the race toward “for good” technology, how do governance systems ethically anticipate harm, not just now but into the future? Can humanitarian governance systems design mitigation or subversion mechanisms that do not lock people into future harm, future inequity, or future indebtedness because of technology design and intervention? Instead of looking at digital governance in terms of control, weaving in foresight and decolonial approaches might liberate our digital futures so that they become spaces of safety and humanity for all, and, through this, birth new forms of digital humanism.

Read the paper.
Companies as Courts? Google's Role Deciding Digital Human Rights Outcomes in the Right to be Forgotten
Rachel Ann Hulvey. 1/11/2022. “Companies as Courts? Google's Role Deciding Digital Human Rights Outcomes in the Right to be Forgotten.” Carr Center Discussion Paper Series.

One of the unwritten rules of the internet is that it was designed never to forget, a feature associated with emerging privacy harms from the availability of personal information captured online. Before the advent of search engines, discovering personal histories would have required hours of sifting through library records. Search engines make it possible to find immense amounts of personal detail within seconds, through a few simple keystrokes. When individuals experience privacy harms, they have limited recourse to demand changes from firms, as platform companies are in the business of making information more accessible.

Read the paper.

Human Rights Implications of Algorithmic Impact Assessments: Priority Considerations to Guide Effective Development and Use
Brandie Nonnecke and Philip Dawson. 10/21/2021. “Human Rights Implications of Algorithmic Impact Assessments: Priority Considerations to Guide Effective Development and Use.” Carr Center Discussion Paper Series.

The public and private sectors are increasingly turning to algorithmic or artificial intelligence impact assessments (AIAs) as a means to identify and mitigate harms from AI. While promising, a lack of clarity on the proper scope, methodology, and best practices for AIAs could inadvertently perpetuate the harms they seek to mitigate, especially to human rights. We explore the emerging integration of the human rights legal framework into AI governance strategies, including the implementation of human rights impact assessments (HRIAs) to assess AI. We examine the benefits and drawbacks of recent implementations of AIAs and HRIAs in the public and private sectors, in the context of an emerging trend toward standards, certifications, and regulatory technologies for responsible AI governance. We conclude with priority considerations to better ensure that AIAs and their corresponding responsible AI governance strategies live up to their promise.
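As a concrete (and entirely hypothetical) illustration of what a clearly scoped AIA could look like in practice, the Python sketch below encodes a small set of assessment questions as structured data so that answers and risk ratings can be recorded and audited consistently; the question wording, field names, and risk tiers are invented for illustration and are not drawn from the paper or any standard.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: encoding an algorithmic impact assessment (AIA)
# as structured data so its scope and methodology are explicit and auditable.

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIAQuestion:
    question_id: str
    text: str
    rights_implicated: list[str]   # which human rights the answer bears on
    answer: str | None = None      # filled in during the assessment
    risk: Risk | None = None       # assessor's rating for this question

# Illustrative scope: decision, data, and disparate-impact questions.
ASSESSMENT = [
    AIAQuestion("scope-1", "What decision does the system make or inform?",
                ["due process"]),
    AIAQuestion("data-1", "What personal data does the system use, and why?",
                ["privacy"]),
    AIAQuestion("impact-1", "Which groups could be disparately affected?",
                ["non-discrimination"]),
]

def overall_risk(assessment: list[AIAQuestion]) -> Risk:
    """Refuse to summarize an incomplete AIA; otherwise report the
    highest risk rating found across all questions."""
    if any(q.answer is None or q.risk is None for q in assessment):
        raise ValueError("AIA incomplete: unanswered questions remain")
    return max((q.risk for q in assessment), key=lambda r: r.value)
```

The design choice worth noting is that the summary function fails loudly on unanswered questions, mirroring the paper's concern that vaguely scoped or partially completed assessments can create a false sense of assurance.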

Read the paper.

The Power of Choosing Not to Build: Justice, Non-Deployment, and the Purpose of AI Optimization
Annette Zimmermann. 10/5/2021. “The Power of Choosing Not to Build: Justice, Non-Deployment, and the Purpose of AI Optimization.”

Are there any types of AI that should never be built in the first place? The “Non-Deployment Argument”—the claim that some forms of AI should never be deployed, or even built—has been subject to significant controversy recently: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society. However, there are good reasons to subject the view that we should always try to build, deploy, and gradually optimize new AI tools to critical scrutiny: in the context of AI, making things better is not always good enough. In specific cases, there are overriding ethical and political reasons—such as the ongoing presence of entrenched structures of social injustice—why we ought not to continue to build, deploy, and optimize particular AI tools for particular tasks. Instead of defaulting to optimization, we have a moral and political duty to critically interrogate and contest the value and purpose of using AI in a given domain in the first place.

Read the paper.

Human Rights and the Pandemic: The Other Half of the Story
Elizabeth M. Renieris. 10/2/2021. “Human Rights and the Pandemic: The Other Half of the Story.” Carr Center Discussion Paper Series.

Human rights are a broad array of civil, political, economic, social, and cultural rights and freedoms that are universal and inalienable, inherent to the dignity of every human being. The application of human rights to digital technologies has generally focused on individual civil and political rights, such as the freedom of expression and privacy. However, as digital technologies evolve beyond traditional information and communications technologies to increasingly mediate access to everything from healthcare to employment, education, and participation in social and cultural life, an increasingly broad array of human rights are implicated. With humanity more reliant on digital tools and technologies than ever before, the stakes have never been more apparent than during the Covid-19 pandemic.

Gripped by the magical potential of digital tools and technologies and the allure of simple solutions to complex governance challenges, governments and key stakeholders have adopted an exceedingly limited view of human rights in relation to these technologies, focusing almost exclusively on a narrow set of civil and political rights while virtually ignoring threats to economic, social, and cultural rights. For those already at the margins, this has exacerbated their digital exclusion.

This paper calls for a more expansive view of human rights in relation to technology governance. After contextualizing the role of economic, social, and cultural rights in relation to digital technologies, this paper examines how such rights have been largely absent from the discourse around technologies deployed in the pandemic (“pandemic tech”), as well as the consequences of that omission. The paper then explores how a recalibration of human rights in relation to digital technologies, specifically pandemic tech, could help prevent geopolitical fracturing, reorient the conversation around people rather than technology, and provide a critical backstop against the runaway commercialization that threatens the exercise and enjoyment of fundamental rights by individuals and communities.

Read the paper.

Public Health, Technology, and Human Rights: Lessons Learned from Digital Contact Tracing
Maria Carnovale and Khahlil Louisy. 9/27/2021. “Public Health, Technology, and Human Rights: Lessons Learned from Digital Contact Tracing.” Carr Center Discussion Paper Series.

To mitigate inefficiencies in manual contact tracing, digital contact tracing and exposure notification systems were developed as public-interest technologies during the SARS-CoV-2 (COVID-19) pandemic. Effective implementation of these tools requires alignment across several factors, including local regulations and policies and trust in government and public health officials. Careful consideration should also be given to minimizing potential conflicts with existing public health processes, which have demonstrated effectiveness. Four cases detailed in this paper, from Ireland, Guayaquil (Ecuador), Haiti, and the Philippines, highlight the importance of upholding the principles of Scientific Validity, Necessity, Time-Boundedness, and Proportionality.
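For readers unfamiliar with how decentralized exposure notification embodies principles like time-boundedness and proportionality, the Python sketch below shows the rotating-identifier idea in simplified form: devices broadcast short-lived pseudonyms derived from a random daily key, and only the daily keys of users who test positive (and consent) are ever published. This is a simplified sketch of the general GAEN/DP-3T-style approach, not the protocol used in any of the four deployments discussed in the paper; constants and function names are illustrative.

```python
import hmac
import hashlib
import os

# Simplified sketch of decentralized exposure notification; not the exact
# protocol of any real deployment.

ROLLING_PERIOD = 144   # identifier rotations per day (roughly every 10 minutes)
RETENTION_DAYS = 14    # time-boundedness: keys older than this are deleted

def new_daily_key() -> bytes:
    """Each device generates a fresh random key per day; it never leaves
    the device unless the user tests positive and consents to upload."""
    return os.urandom(16)

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast pseudonym. Observers cannot link
    pseudonyms across intervals without the daily key (proportionality:
    no identity or location is ever broadcast)."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

def check_exposure(heard_ids: set[bytes], diagnosed_keys: list[bytes]) -> bool:
    """Matching happens on-device: re-derive the pseudonyms implied by
    diagnosed users' published keys and intersect them with what this
    device overheard over Bluetooth."""
    for key in diagnosed_keys:
        for interval in range(ROLLING_PERIOD):
            if rolling_id(key, interval) in heard_ids:
                return True
    return False

# Example: my device overheard interval 42 of a later-diagnosed user's day.
their_key = new_daily_key()
heard = {rolling_id(their_key, 42)}
assert check_exposure(heard, diagnosed_keys=[their_key])
```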

Read the paper.
