Technology & Human Rights

We Can't Future-Proof Technology. But Here are 5 Ways to Forward Plan.
Alexa Koenig and Sherif Elsayed-Ali. 1/5/2019. “We Can't Future-Proof Technology. But Here are 5 Ways to Forward Plan.” World Economic Forum.
New article co-authored by Carr Center Technology and Human Rights Fellow Sherif Elsayed-Ali.

"We know that the technologies of the Fourth Industrial Revolution are drastically changing our world. This change is happening at a faster rate and greater scale than at any point in human history – and with that change come significant challenges to the ability of our public institutions and governments to adequately respond.

From the plough to vaccines to computers, technological innovations have generally made human societies more productive. Over time, people have figured out how to mitigate their negative aspects. For example, electrical applications are much safer to use now than in the early days of electrification. Though we came close to disaster, since the Second World War the international political system has managed to contain the threat of nuclear weapons of mass destruction.

However, the accelerating pace of change and the power of new technologies mean that negative unintended consequences will only become more frequent and more dangerous. What can we do today to help ensure that new technologies make life better, not worse?"

https://www.weforum.org/agenda/2019/01/how-to-plan-for-technology-future-koenig-elsayed-ali/

Critical Skill for Nonprofits in the Digital Age: Technical Intuition
Alix Dunn. 5/7/2019. “Critical Skill for Nonprofits in the Digital Age: Technical Intuition.” Stanford Social Innovation Review.

Not everyone needs to become a tech expert, but all activists and nonprofit leaders must develop skills to inquire about, decide on, and demand technological change. Tech Fellow Alix Dunn talks to Stanford's Social Innovation Podcast. 

In a world where the pace of organizational learning is often slower than the pace of technological change, activists and nonprofit leaders must develop their “technical intuition.” Not everyone needs to become a tech expert, explains Alix Dunn, of the consulting firm Computer Says Maybe, but this ongoing process of imagining, inquiring about, deciding on, and demanding technological change is critical.

In this recording from the Stanford Social Innovation Review's 2019 Data on Purpose conference, Dunn walks through her guidelines to help anyone develop these skills.

The Future Impact of Artificial Intelligence on Humans and Human Rights
Steven Livingston and Mathias Risse. 6/7/2019. “The Future Impact of Artificial Intelligence on Humans and Human Rights.” Ethics and International Affairs, 33(2), pp. 141-158.
What are the implications of artificial intelligence (AI) on human rights in the next three decades?

Precise answers to this question are made difficult by the rapid rate of innovation in AI research and by the effects of human practices on the adoption of new technologies. Precise answers are also challenged by imprecise usages of the term “AI.” There are several types of research that all fall under this general term. We begin by clarifying what we mean by AI. Most of our attention is then focused on the implications of artificial general intelligence (AGI), which entails that an algorithm or group of algorithms will achieve something like superintelligence. While acknowledging that the feasibility of superintelligence is contested, we consider the moral and ethical implications of such a potential development. What do machines owe humans, and what do humans owe superintelligent machines?


Digital Identity in the Migration & Refugee Context: Italy Case Study
Mark Latonero, Keith Hiatt, Antonella Napolitano, Giulia Clericetti, and Melanie Penagos. 4/2019. Digital Identity in the Migration & Refugee Context: Italy Case Study. Data & Society.
New Report by Carr Center Technology and Human Rights Fellow Mark Latonero.

"Increasingly, governments, corporations, international organizations, and nongovernmental organizations (NGOs) are seeking to use digital technologies to track the identities of migrants and refugees. This surging interest in digital identity technologies would seem to meet a pressing need: the United Nations Refugee Agency (UNHCR) states that in today’s modern world, lacking proof of identity can limit a person’s access to services and socio-economic participation, including employment opportunities, housing, a mobile phone, and a bank account. But this report argues that the technologies and processes involved in digital identity will not provide easy solutions in the migration and refugee context. Technologies that rely on identity data introduce a new sociotechnical layer that may exacerbate existing biases, discrimination, or power imbalances. How can we weigh the added value of digital identification systems against the potential risks and harms to migrant safety and fundamental human rights? This report provides international organizations, policymakers, civil society, technologists, and funders with a deeper background on what we currently know about digital identity and how migrant identity data is situated in the Italian context."
Big Tech Firms are Racing to Track Climate Refugees
Mark Latonero. 5/17/2019. “Big Tech Firms are Racing to Track Climate Refugees.” MIT Technology Review.
The MIT Technology Review features new report by Carr Center Technology and Human Rights Fellow Mark Latonero.

"Simply layering technology on top of existing humanitarian problems tends to exacerbate the issues it was intended to resolve. In a new report on the role of digital identity in refugee and migrant contexts, a team of researchers at the Data & Society Research Institute, led by Mark Latonero, detail the various ways these initiatives can reproduce and worsen existing bureaucratic biases."

https://www.technologyreview.com/s/613531/big-tech-firms-are-racing-to-track-climate-refugees/

Deepfakes are Solvable—but Don’t Forget That ‘Shallowfakes’ are Already Pervasive
Mark Latonero. 3/25/2019. “Deepfakes are Solvable—but Don’t Forget That ‘Shallowfakes’ are Already Pervasive.” MIT Technology Review.
New article features Carr Center Technology and Human Rights Fellow Mark Latonero.

"Mark Latonero, human rights lead at Data & Society, a nonprofit institute dedicated to the applications of data, agreed that technology companies should be doing more to tackle such issues. While Microsoft, Google, Twitter, and others have employees focused on human rights, he said, there was so much more they should be doing before they deploy technologies—not after."
Can Technology Deliver Freedoms for India’s Poor?
Salil Shetty. 12/16/2018. “Can Technology Deliver Freedoms for India’s Poor?”
Talk given by Carr Center's Senior Fellow Salil Shetty at TechFest IIT Bombay.

"My talk today is addressed to concerned citizens who are not experts on the subject. Many of the issues I am touching on require a much more complex and nuanced treatment but this talk is deliberately taking a simpler narrative."

Read Salil Shetty's complete presentation here: https://carrcenter.hks.harvard.edu/files/cchr/files/can_tech_salil_shetty_01.pdf

Human Rights, Artificial Intelligence and Heideggerian Technoskepticism: The Long (Worrisome?) View
Mathias Risse. 2/12/2019. Human Rights, Artificial Intelligence and Heideggerian Technoskepticism: The Long (Worrisome?) View. Carr Center Discussion Paper Series, 2019-002. Cambridge: Carr Center for Human Rights Policy.

Mathias Risse explores the impact of artificial intelligence on human rights in his latest discussion paper.

My concern is with the impact of Artificial Intelligence on human rights. I first identify two presumptions about ethics-and-AI we should make only with appropriate qualifications. These presumptions are that (a) for the time being, investigating the impact of AI, especially in the human-rights domain, is a matter of investigating the impact of certain tools, and that (b) the crucial danger is that some such tools – the artificially intelligent ones – might eventually become like their creators and conceivably turn against them. We turn to Heidegger’s influential philosophy of technology to argue that these presumptions require qualifications of a sort that should inform our discussion of AI. Next I argue that one major challenge is how human rights will prevail in an era that quite possibly is shaped by an enormous increase in economic inequality. Currently the human-rights movement is rather unprepared to deal with the resulting challenges. What is needed is a greater focus on social justice/distributive justice, both domestically and globally, to make sure societies do not fall apart. I also argue that, in the long run, we must be prepared to deal with more types of moral status than we currently do, and that quite plausibly some machines will have some type of moral status, which may or may not fall short of the moral status of human beings (a point also emerging from the Heidegger discussion). Machines may have to be integrated into human social and political lives.

Conference Report: Human Rights, Ethics, and Artificial Intelligence
Carr Center for Human Rights Policy. 1/1/2019. “Conference Report: Human Rights, Ethics, and Artificial Intelligence.”
Human Rights, Ethics, and Artificial Intelligence: Challenges for the Next 70 Years of the Universal Declaration of Human Rights 

In early December 2018, the Carr Center for Human Rights Policy, the Edmond J. Safra Center for Ethics, and the Berkman Klein Center for Internet and Society hosted an inaugural conference that aimed to respond to the Universal Declaration of Human Rights’ 70th anniversary by reflecting on the past, present, and future of human rights. The conference was organized by Carr Center Faculty Director Mathias Risse.
