Human rights impact assessments (HRIAs) have recently emerged as a way for technology companies to identify, mitigate, and remedy the potential risks and harms of artificial intelligence (AI) and algorithmic systems. The purpose of this paper is to assess whether HRIAs are a tool fit for the purpose of governing AI. Will HRIAs become an effective tool of AI governance that reduces risks and harms? Or will they become a form of AI “ethics washing” that permits companies to hide behind a veneer of human rights due diligence and accountability?

This paper finds that HRIAs of AI are only in their infancy. Simply conducting such assessments with the usual methods will miss the mark for AI and algorithmic systems, as demonstrated by the failures of the HRIA of Facebook in Myanmar. Facebook commissioned an HRIA after UN investigators found that genocide had been committed in the country. However, the HRIA did not adequately assess the most salient human rights impacts of Facebook’s presence and product in Myanmar.

HRIAs should be updated if they are to be used on AI and algorithmic systems. An HRIA for AI should be seen as an analysis of a sociotechnical system in which social and technical factors are inherently intertwined and interrelated. Interdisciplinary expertise is needed to determine the appropriate methods and criteria for the specific contexts where AI systems are deployed. In addition, HRIAs should be conducted at appropriate times relative to critical stages in the AI development lifecycle and should function on an ongoing basis as part of a comprehensive human rights due diligence process. Challenges remain, such as developing methods to identify algorithmic discrimination, one of the most salient human rights concerns in assessing AI harms. Finally, a mix of voluntary actions and mandatory measures may be needed to incentivize organizations to incorporate HRIAs for AI and algorithmic systems in a more effective, transparent, and accountable way.
The paper concludes with considerations for the technology sector, government, and civil society.
This paper was prepared for an interdisciplinary conference on Gefährliche Forschung? (Dangerous Science?) held at the University of Cologne in February 2020 and is scheduled to appear in a volume of contributions from that event, edited by the organizers, Wilfried Hinsch and Susanne Brandstätter, and published by de Gruyter. The paper delves straight into the question proposed to me (whether population genetics or artificial intelligence might undermine philosophical ideas about equality) without first locating the context of this debate or offering a preview of its contents. The first section discusses the ideal of equality, the next two discuss genetics in the context of responses to racism, and the remaining two address possible changes that might come from the development of general artificial intelligence.
For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.
The recent conviction of the journalist Maria Ressa in the Philippines for “cyber libel” has brought into sharp relief the global deterioration of press freedom. Across the world, fundamental freedoms of association, expression, and assembly are under threat. A recent report from Civicus found that twice as many people live under repression today as a year ago. Although much of that is due to diminishing freedoms in countries whose governments have long been known for their heavy hands, an increasing number of attacks on the media have come in places where press freedom was once enshrined.
This paper explores the human rights implications of emergent technology, focusing on virtual reality (VR), augmented reality (AR), and immersive technologies. Because of the psychological and physiological aspects of immersive technologies, and the potential for a new, invasive class of privacy-related harms, Heller argues that content creators, hardware producers, and lawmakers should take increased caution to protect users. This will help protect the nascent industry in a changing legal landscape and help ensure that the beneficial uses of this powerful technology outweigh the potential misuses.
In the paper, Heller first reviews the technology and terminology around immersive technologies to explain how they work, how a user’s body and mind are impacted by the hardware, and what social role these technologies can play for communities. Next, she describes some of the unique challenges for immersive media, from user safety to misalignment with current biometrics laws. She introduces a new concept, biometric psychography, to explain how the potential for privacy-related harms is different in immersive technologies, due to the ability to connect a user’s identity to their innermost thoughts, wants, and desires. Finally, she describes foreseeable developments in the immersive industry, with an eye toward identifying and mitigating future human rights challenges. The paper concludes with five recommendations for actions that the industry and lawmakers can take now, while the industry is still emerging, to build human rights into its DNA.
Across the country, people are protesting the killings of George Floyd, Breonna Taylor, and Ahmaud Arbery and demanding action against police violence and systemic racism. National media focus on the big demonstrations and protest policing in major cities, but they have not picked up on a different phenomenon that may have major long-term consequences for politics. Protests over racism and #BlackLivesMatter are spreading across the country — including in small towns with deeply conservative politics.
Carr Center faculty and fellows examine the human rights implications and legal ramifications of introducing widespread immunity passports. In this latest issue, hear from Mark Latonero, Technology and Human Rights Fellow at the Carr Center and Research Lead at Data & Society, Elizabeth Renieris, a Technology and Human Rights Fellow at the Carr Center and founder of hackylawyER, and Mathias Risse, Faculty Director at the Carr Center.
The horrific death, captured on video, of George Floyd, a 46-year-old Black man who died after a white Minneapolis police officer knelt on his neck, spotlights the longstanding crisis of racism in policing.
To understand the protests that have erupted across the United States, one needs to understand the deeply troubled history of policing and race. Police brutality, racial discrimination, and violence against minorities are intertwined and rooted throughout US history. Technology has made it possible for the level and extent of the problem finally to be publicly documented. The anger expressed in the wake of Floyd’s killing reflects the searing reality that Black people in the United States continue to be dehumanized and treated unjustly.