Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar

Citation:

Mark Latonero and Aaina Agarwal. March 19, 2021. “Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar.” Carr Center Discussion Paper Series.

Abstract:

Human rights impact assessments (HRIAs) have recently emerged as a way for technology companies to identify, mitigate, and remedy the potential risks and harms of artificial intelligence (AI) and algorithmic systems. The purpose of this paper is to assess whether HRIAs are fit for purpose as a tool for AI. Will HRIAs become an effective tool of AI governance that reduces risks and harms? Or will they become a form of AI “ethics washing” that permits companies to hide behind a veneer of human rights due diligence and accountability?

This paper finds that HRIAs of AI are only in their infancy. Simply conducting such assessments with the usual methods will miss the mark for AI and algorithmic systems, as demonstrated by the failures of the HRIA of Facebook in Myanmar. Facebook commissioned an HRIA after UN investigators found that genocide had been committed in the country. However, the HRIA did not adequately assess the most salient human rights impacts of Facebook’s presence and product in Myanmar.

HRIAs should be updated if they are to be used on AI and algorithmic systems. An HRIA for AI should be seen as an analysis of a sociotechnical system in which social and technical factors are inherently intertwined and interrelated. Interdisciplinary expertise is needed to determine the appropriate methods and criteria for the specific contexts where AI systems are deployed. In addition, HRIAs should be conducted at appropriate times relative to critical stages in the AI development lifecycle and should function on an ongoing basis as part of a comprehensive human rights due diligence process. Challenges remain, such as developing methods to identify algorithmic discrimination, one of the most salient human rights concerns in assessing AI harms. Finally, a mix of voluntary actions and mandatory measures may be needed to incentivize organizations to incorporate HRIAs for AI and algorithmic systems in a more effective, transparent, and accountable way. The paper concludes with considerations for the technology sector, government, and civil society.
