The Carr Center for Human Rights Policy serves as the hub of the Harvard Kennedy School’s research, teaching, and training in the human rights domain. The center embraces a dual mission: to educate students and the next generation of leaders from around the world in human rights policy and practice; and to convene and provide policy-relevant knowledge to international organizations, governments, policymakers, and businesses.
Right-wing populist movements, which combine right-wing political theory with populist modes of politics, have become increasingly prominent and mainstream over the past decade, in both the Global North and the Global South. European far-right populism shares many commonalities with other regional populist movements, but it also has its own distinct set of methods, risks, and uncertainties.
This discussion paper by Michelle Poulin, Carr Center Topol Fellow, outlines the unique characteristics of European far-right populist movements, the ways in which countries' 20th-century histories have shaped present-day populist politics, and the online and offline organizational strategies that have helped right-wing movements drive the electoral success of right-wing political parties in recent years. It also examines the rise of Germany's far-right populist movement and the social factors that may have contributed to it.
The public and private sectors are increasingly turning to algorithmic or artificial intelligence impact assessments (AIAs) as a means to identify and mitigate harms from AI. While promising, a lack of clarity on the proper scope, methodology, and best practices for AIAs could inadvertently perpetuate the harms they seek to mitigate, especially to human rights. We explore the emerging integration of the human rights legal framework into AI governance strategies, including the implementation of human rights impact assessments (HRIAs) to assess AI. We examine the benefits and drawbacks of recent public- and private-sector implementations of AIAs and HRIAs, and consider them in the context of an emerging trend toward standards, certifications, and regulatory technologies for responsible AI governance. We conclude with priority considerations to better ensure that AIAs and their corresponding responsible AI governance strategies live up to their promise.
Are there any types of AI that should never be built in the first place? The “Non-Deployment Argument”—the claim that some forms of AI should never be deployed, or even built—has recently been the subject of significant controversy: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society. However, there are good reasons to subject to critical scrutiny the view that we should always try to build, deploy, and gradually optimize new AI tools: in the context of AI, making things better is not always good enough. In specific cases, there are overriding ethical and political reasons—such as the ongoing presence of entrenched structures of social injustice—why we ought not to continue to build, deploy, and optimize particular AI tools for particular tasks. Instead of defaulting to optimization, we have a moral and political duty to critically interrogate and contest the value and purpose of using AI in a given domain in the first place.
“The Carr Center is building a bridge between ideas on human rights and the practice on the ground. Right now we are at a critical juncture. The pace of technological change and the rise of authoritarian governments are both examples of serious challenges to the flourishing of individual rights. It’s crucial that Harvard and the Kennedy School continue to be a major influence in keeping human rights ideals alive. The Carr Center is a focal point for this important task.”
— Mathias Risse