The Carr Center for Human Rights Policy serves as the hub of the Harvard Kennedy School’s research, teaching, and training in the human rights domain. The center embraces a dual mission: to educate students and the next generation of leaders from around the world in human rights policy and practice; and to convene and provide policy-relevant knowledge to international organizations, governments, policymakers, and businesses.
In April 2019, the Carr Center for Human Rights Policy at the Harvard Kennedy School hosted a faculty consultation on the integrated system for truth, justice, reparation, and non-repetition, created as a result of the 2016 peace accord between the Colombian government and the FARC guerrillas. President Juan Manuel Santos and Carr Center faculty called upon leading voices in the field of transitional justice to share perspectives on the Colombian peace process and to formulate recommendations. The discussion was organized into four sessions, each focusing on one of the main components of the peace process: reparations, justice, truth, and non-repetition.
What is the measure of personhood, and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy, and it lies at the core of the questions surrounding, and the quest for, an artificial or mechanical personhood.
The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, this traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in Enlightenment-era rationalized social and gender exclusions from full person status and from economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns in its input data, and minimizes the role systemic inequalities play in harmful artificial intelligence outcomes.
For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.
“The Carr Center is building a bridge between ideas on human rights and practice on the ground. Right now we are at a critical juncture. The pace of technological change and the rise of authoritarian governments are both serious challenges to the flourishing of individual rights. It’s crucial that Harvard and the Kennedy School continue to be a major influence in keeping human rights ideals alive. The Carr Center is a focal point for this important task.”
- Mathias Risse