
    The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification
    Neal Cohen. 4/4/2020. “The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification.” Carr Center Discussion Paper Series, 2020-004. See full text. Abstract:
    Artificial Intelligence (AI) technologies have the capacity to do a great deal of good in the world, but whether they do so depends not only on how we use those technologies but also on how we build them in the first place.

    The unfortunate truth is that personal data has become the bricks and mortar used to build many AI technologies, and more must be done to protect and safeguard the humans whose personal data is being used. Through a case study on AI-powered remote biometric identity verification, this paper explores the technical requirements of building AI technologies with high volumes of personal data and the implications for our understanding of existing data protection frameworks. Ultimately, a path forward is proposed for ethically using personal data to build AI technologies.


    The Ethics of Surveillance Technology during a Global Pandemic
    Vivek Krishnamurthy, Bruce Schneier, and Mathias Risse. 4/2/2020. “The Ethics of Surveillance Technology during a Global Pandemic.” Carr Center Covid-19 Discussion Paper Series, 2. See full text. Abstract:
    Three experts on cyberlaw, security, and AI discuss how governments and businesses might ethically employ surveillance and AI technologies to address Covid-19.

    We interviewed Bruce Schneier, Security Technologist and Adjunct Lecturer in Public Policy, Carr Center Fellow Vivek Krishnamurthy, and Carr Center Faculty Director Mathias Risse on the ethics and responsibilities of using AI and surveillance technology amidst a global pandemic. 

    The Future Impact of Artificial Intelligence on Humans and Human Rights
    Steven Livingston and Mathias Risse. 6/7/2019. “The Future Impact of Artificial Intelligence on Humans and Human Rights.” Ethics and International Affairs, 33, 2, Pp. 141-158. See full text. Abstract:
    What are the implications of artificial intelligence (AI) on human rights in the next three decades?

    Precise answers to this question are made difficult by the rapid rate of innovation in AI research and by the effects of human practices on the adoption of new technologies. Precise answers are also challenged by imprecise usage of the term “AI.” There are several types of research that all fall under this general term. We begin by clarifying what we mean by AI. Most of our attention is then focused on the implications of artificial general intelligence (AGI), which entails that an algorithm or group of algorithms will achieve something like superintelligence. While acknowledging that the feasibility of superintelligence is contested, we consider the moral and ethical implications of such a potential development. What do machines owe humans, and what do humans owe superintelligent machines?


    The Quest For Inclusive & Ethical Technology
    Sabelo Mhlambi. 6/10/2019. “The Quest For Inclusive & Ethical Technology.” WUWM Milwaukee NPR. Bonnie North. See full text. Abstract:
    New interview with Technology and Human Rights Fellow Sabelo Mhlambi.

    "Most of us think of technology as a neutral force. Objects or processes are designed and implemented to solve problems and there are no biases, implied or overt, at work. But Sabelo Mhlambi says, not so fast. The computer scientist and researcher says technology cannot be neutral. What gets made, who makes it and uses it, and why is dependent upon our societies — and all societies are biased.

    "Technology will only replicate who we are," he explains. "Our social interactions will still occur online anyway. So, there’s nothing magical about technology where it somehow brings neutrality or brings equality or equity."

    https://www.wuwm.com/post/quest-inclusive-ethical-technology

    Trump wants to “detect mass shooters before they strike.” It won’t work.
    Sigal Samuel. 8/7/2019. “Trump wants to ‘detect mass shooters before they strike.’ It won’t work.” Vox. Abstract:

    New article on Vox highlights the work of Desmond Patton, Technology and Human Rights Fellow.

    Patton emphasized that current AI tools tend to identify the language of African American and Latinx people as gang-involved or otherwise threatening, but consistently miss the posts of white mass murderers.

    "I think technology is a tool, not the tool," said Patton. "Often we use it as an escape so as to not address critical solutions that need to come through policy. We have to pair tech with gun reform. Any effort that suggests we need to do them separately, I don’t think that would be a successful effort at all."


    2020 Jul 16

    Viral Justice: Pandemics, Policing, and Portals with Ruha Benjamin

    12:00pm to 1:00pm (Registration Closed)

    Location: Virtual Event (Registration Required)

    Join us for a conversation with Ruha Benjamin, Associate Professor of African American Studies at Princeton University.

    Panelists: 

    • Ruha Benjamin | Associate Professor of African American Studies, Princeton University
    • Sushma Raman (Moderator) | Executive Director, Carr Center

