Artificial Intelligence and New Technologies

A More Equal Future? Political Equality, Discrimination, and Machine Learning
Joshua Simons and Eli Frankel. 5/3/2022. “A More Equal Future? Political Equality, Discrimination, and Machine Learning.” Technology and Democracy Discussion Paper Series. Publisher's Version.

Machine learning is everywhere. AI evangelists promise that data-driven decision-making will not only boost organizational efficiency but also make organizations fairer and advance social justice. Yet the effects of machine learning on social justice, human rights, and democracy will depend not on the technology itself, but on human choices about how to design and deploy it. Among the most important is whether and how to ensure systems do not reproduce and entrench pervasive patterns of inequality.

The authors argue that we need radical civil rights reforms to regulate AI in the digital age, and that we must return to the roots of civil rights. This paper is adapted from Josh Simons's forthcoming book, Algorithms for the People: Democracy in the Age of AI, published by Princeton University Press this fall.

Read the paper.

How AI Fails Us
Divya Siddarth, Daron Acemoglu, Danielle Allen, Kate Crawford, James Evans, Michael Jordan, and E. Glen Weyl. 4/14/2022. “How AI Fails Us.” Technology and Democracy Discussion Paper Series. Publisher's Version.

The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This “actually existing AI” vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous, optimizing for artificial metrics of human replication rather than for systemic augmentation, and tending to concentrate power, resources, and decision-making in an engineering elite. Alternative visions based on participating in and augmenting human creativity and cooperation have a long history and underlie many celebrated digital technologies, such as personal computers and the internet. Researchers and funders should redirect focus from centralized autonomous general intelligence to a plurality of established and emerging approaches that extend cooperative and augmentative traditions, as seen in successes such as Taiwan’s digital democracy project and collective intelligence platforms like Wikipedia.

Read the paper.

Building Human Rights into Intelligent-Community Design: Beyond Procurement
Phil Dawson, Faun Rice, and Maya Watson. 2/25/2022. “Building Human Rights into Intelligent-Community Design: Beyond Procurement.” Carr Center Discussion Paper Series. See full text.

Cities have emerged as test beds for digital innovation. Data-collecting devices, such as sensors and cameras, have enabled fine-grained monitoring of public services including urban transit, energy distribution, and waste management, yielding tremendous potential for improvements in efficiency and sustainability. At the same time, there is rising public awareness that without clear guidelines or sufficient safeguards, data collection and use in both public and private spaces can lead to negative impacts on a broad spectrum of human rights and freedoms. In order to productively move forward with intelligent-community projects and design them to meet their full potential in serving the public interest, a consideration of rights and risks is essential.

Read the paper.

Humanitarian Digital Ethics: A Foresight and Decolonial Governance Approach
Aarathi Krishnan. 1/20/2022. “Humanitarian Digital Ethics: A Foresight and Decolonial Governance Approach.” Carr Center Discussion Paper Series. Publisher's Version.
Just as rights are not static, neither is harm. The humanitarian system has long been critiqued as colonial and patriarchal. As these systems increasingly intersect with Western, capitalist technology systems in the race toward “for good” technology, how do governance systems ethically anticipate harm, not just now but into the future? Can humanitarian governance systems design mitigation or subversion mechanisms that do not lock people into future harm, future inequity, or future indebtedness because of technology design and intervention? Instead of looking at digital governance in terms of control, weaving in foresight and decolonial approaches might liberate our digital futures so that they are spaces of safety and humanity for all, and through this, birth new forms of digital humanism.

Read the paper.
Companies as Courts? Google's Role Deciding Digital Human Rights Outcomes in the Right to be Forgotten
Rachel Ann Hulvey. 1/11/2022. “Companies as Courts? Google's Role Deciding Digital Human Rights Outcomes in the Right to be Forgotten.” Carr Center Discussion Paper Series. Publisher's Version.

One of the unwritten rules of the internet is that it was designed to never forget, a feature associated with emerging privacy harms from the availability of personal information captured online. Before the advent of search engines, discovering personal histories would have required hours of sifting through library records. Search engines present the opportunity to find immense amounts of personal details within seconds through a few simple keystrokes. When individuals experience privacy harms, they have limited recourse to demand changes from firms, as platform companies are in the business of making information more accessible.

Read the paper.

Human Rights Implications of Algorithmic Impact Assessments: Priority Considerations to Guide Effective Development and Use
Brandie Nonnecke and Philip Dawson. 10/21/2021. “Human Rights Implications of Algorithmic Impact Assessments: Priority Considerations to Guide Effective Development and Use.” Carr Center Discussion Paper Series. See full text.

The public and private sectors are increasingly turning to algorithmic or artificial intelligence impact assessments (AIAs) as a means to identify and mitigate harms from AI. While promising, a lack of clarity on the proper scope, methodology, and best practices for AIAs could inadvertently perpetuate the harms they seek to mitigate, especially to human rights. We explore the emerging integration of the human rights legal framework into AI governance strategies, including the implementation of human rights impact assessments (HRIAs) to assess AI. The benefits and drawbacks of recent implementations of AIAs and HRIAs adopted by the public and private sectors are explored and considered in the context of an emerging trend toward the development of standards, certifications, and regulatory technologies for responsible AI governance practices. We conclude with priority considerations to better ensure that AIAs and their corresponding responsible AI governance strategies live up to their promise.

Read the paper.

The Power of Choosing Not to Build: Justice, Non-Deployment, and the Purpose of AI Optimization
Annette Zimmermann. 10/5/2021. “The Power of Choosing Not to Build: Justice, Non-Deployment, and the Purpose of AI Optimization.” See full text.

Are there any types of AI that should never be built in the first place? The “Non-Deployment Argument”—the claim that some forms of AI should never be deployed, or even built—has been subject to significant controversy recently: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society. However, there are good reasons to subject the view that we should always try to build, deploy, and gradually optimize new AI tools to critical scrutiny: in the context of AI, making things better is not always good enough. In specific cases, there are overriding ethical and political reasons—such as the ongoing presence of entrenched structures of social injustice—why we ought not to continue to build, deploy, and optimize particular AI tools for particular tasks. Instead of defaulting to optimization, we have a moral and political duty to critically interrogate and contest the value and purpose of using AI in a given domain in the first place.

Read the paper.

Human Rights and the Pandemic: The Other Half of the Story
Elizabeth M. Renieris. 10/2/2021. “Human Rights and the Pandemic: The Other Half of the Story.” Carr Center Discussion Paper Series. See full text.

Human rights are a broad array of civil, political, economic, social, and cultural rights and freedoms that are universal and inalienable, inherent to the dignity of every human being. The application of human rights to digital technologies has generally focused on individual civil and political rights, such as the freedom of expression and privacy. However, as digital technologies evolve beyond traditional information and communications technologies to increasingly mediate access to everything from healthcare to employment, education, and participation in social and cultural life, an increasingly broad array of human rights are implicated. With humanity more reliant on digital tools and technologies than ever before, the stakes have never been more apparent than during the Covid-19 pandemic. Gripped by the magical potential of digital tools and technologies and the allure of simple solutions to complex governance challenges, governments and key stakeholders have adopted an exceedingly limited view of human rights in relation to these technologies, focusing almost exclusively on a narrow set of civil and political rights while virtually ignoring threats to economic, social, and cultural rights. For those already at the margins, this has exacerbated their digital exclusion. This paper calls for a more expansive view of human rights in relation to technology governance. After contextualizing the role of economic, social, and cultural rights in relation to digital technologies, this paper examines how such rights have been largely absent from the discourse around technologies deployed in the pandemic (“pandemic tech”), as well as the consequences of that omission. 
The paper then explores how a recalibration of human rights in relation to digital technologies, specifically pandemic tech, could help prevent geopolitical fracturing, reorient the conversation around people rather than technology, and provide a critical backstop against the runaway commercialization that threatens the exercise and enjoyment of fundamental rights by individuals and communities.

Read the paper.

Public Health, Technology, and Human Rights: Lessons Learned from Digital Contact Tracing
Maria Carnovale and Khahlil Louisy. 9/27/2021. “Public Health, Technology, and Human Rights: Lessons Learned from Digital Contact Tracing.” Carr Center Discussion Paper Series. See full text.

To mitigate inefficiencies in manual contact tracing processes, digital contact tracing and exposure notification systems were developed for use as public-interest technologies during the SARS-CoV-2 (COVID-19) global pandemic. Effective implementation of these tools requires alignment across several factors, including local regulations and policies and trust in government and public health officials. Careful consideration should also be given to minimizing potential conflicts with existing public health processes, which have demonstrated effectiveness. Four unique cases detailed in this paper, from Ireland, Guayaquil (Ecuador), Haiti, and the Philippines, highlight the importance of upholding the principles of Scientific Validity, Necessity, Time-Boundedness, and Proportionality.

Read the paper.