Please join us for a study group on technology and human rights at the Harvard Kennedy School!
The Carr Center for Human Rights Policy invites you to join a study group on technology, human rights and artificial intelligence. The study group, which will meet three times this semester, is convened and moderated by Steven Livingston, Senior Fellow at the Carr Center for Human Rights Policy.
This is an open study group. No registration is required.
The study group will meet from 12:00 - 1:15 pm on three occasions this semester:
Wednesday, February 13 in room Taubman-102
- Topic: Technology and Open-Source Investigations
- Guest Speaker: Scot Edwards, Senior Advisor, Amnesty International
Wednesday, March 6 in room Wexner-102
- Topic: Disinformation
Wednesday, April 17 in room Wexner-102
- Topic: Super Intelligent AI and Rights
Session 3 - Super Intelligent AI and Rights
Our final session considers the implications of artificial intelligence, especially deep learning and the possibility of artificial general intelligence (AGI). AGI involves algorithms that learn even without human-generated training data; DeepMind’s AlphaGo Zero and AlphaZero are examples. AI pessimists such as Elon Musk believe such an AI could reach a knowledge cascade, rapidly accumulating as much knowledge as, and more than, humanity has amassed in all of human history; it might even become sentient. These pessimists claim the only hope for human survival in the face of inevitable AI superintelligence lies in merging with it. Assuming that not all 9.7 billion people expected to inhabit the planet in 2050 will have access to superintelligence, what will be the rights of the unenhanced humans left behind? What ethical obligations would enhanced humans have to those who remain unenhanced? And what obligations would a superintelligent AI agent have to humans of any type?