The public and private sectors are increasingly turning to algorithmic or artificial intelligence impact assessments (AIAs) as a means of identifying and mitigating harms from AI. While promising, a lack of clarity on the proper scope, methodology, and best practices for AIAs could inadvertently perpetuate the harms they seek to mitigate, especially to human rights. We explore the emerging integration of the human rights legal framework into AI governance strategies, including the implementation of human rights impact assessments (HRIAs) to assess AI. The benefits and drawbacks of recent public- and private-sector implementations of AIAs and HRIAs are explored and considered in the context of an emerging trend toward standards, certifications, and regulatory technologies for responsible AI governance practices. We conclude with priority considerations to better ensure that AIAs and their corresponding responsible AI governance strategies live up to their promise.
Are there any types of AI that should never be built in the first place? The “Non-Deployment Argument”—the claim that some forms of AI should never be deployed, or even built—has been subject to significant controversy recently: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society. However, there are good reasons to subject the view that we should always try to build, deploy, and gradually optimize new AI tools to critical scrutiny: in the context of AI, making things better is not always good enough. In specific cases, there are overriding ethical and political reasons—such as the ongoing presence of entrenched structures of social injustice—why we ought not to continue to build, deploy, and optimize particular AI tools for particular tasks. Instead of defaulting to optimization, we have a moral and political duty to critically interrogate and contest the value and purpose of using AI in a given domain in the first place.
Human rights are a broad array of civil, political, economic, social, and cultural rights and freedoms that are universal and inalienable, inherent to the dignity of every human being. The application of human rights to digital technologies has generally focused on individual civil and political rights, such as freedom of expression and privacy. However, as digital technologies evolve beyond traditional information and communications technologies to increasingly mediate access to everything from healthcare to employment, education, and participation in social and cultural life, an increasingly broad array of human rights is implicated. With humanity more reliant on digital tools and technologies than ever before, the stakes have never been more apparent than during the Covid-19 pandemic. Gripped by the magical potential of digital tools and technologies and the allure of simple solutions to complex governance challenges, governments and key stakeholders have adopted an exceedingly limited view of human rights in relation to these technologies, focusing almost exclusively on a narrow set of civil and political rights while virtually ignoring threats to economic, social, and cultural rights. For those already at the margins, this has exacerbated digital exclusion. This paper calls for a more expansive view of human rights in relation to technology governance. After contextualizing the role of economic, social, and cultural rights in relation to digital technologies, this paper examines how such rights have been largely absent from the discourse around technologies deployed in the pandemic ("pandemic tech"), as well as the consequences of that omission.
The paper then explores how a recalibration of human rights in relation to digital technologies, specifically pandemic tech, could help prevent geopolitical fracturing, reorient the conversation around people rather than technology, and provide a critical backstop against the runaway commercialization that threatens the exercise and enjoyment of fundamental rights by individuals and communities.
To mitigate inefficiencies in manual contact tracing, digital contact tracing and exposure notification systems were developed as public-interest technologies during the SARS-CoV-2 (COVID-19) global pandemic. Effective implementation of these tools requires alignment across several factors, including local regulations and policies and trust in government and public health officials. Careful consideration should also be given to minimizing potential conflicts with existing public health processes, which have demonstrated effectiveness. Four unique cases detailed in this paper, from Ireland, Guayaquil (Ecuador), Haiti, and the Philippines, highlight the importance of upholding the principles of Scientific Validity, Necessity, Time-Boundedness, and Proportionality.