Search results

    Trump wants to “detect mass shooters before they strike.” It won’t work.
    Sigal Samuel. 8/7/2019. “Trump wants to ‘detect mass shooters before they strike.’ It won’t work.” Vox. Abstract

    New article on Vox highlights the work of Desmond Patton, Technology and Human Rights Fellow.

    Patton emphasized that current AI tools tend to flag the language of African American and Latinx people as gang-involved or otherwise threatening, while consistently missing the posts of white mass murderers.

    "I think technology is a tool, not the tool," said Patton. "Often we use it as an escape so as to not address critical solutions that need to come through policy. We have to pair tech with gun reform. Any effort that suggests we need to do them separately, I don’t think that would be a successful effort at all.”

    Read full article here. 

    We Can't Future-Proof Technology. But Here are 5 Ways to Forward Plan.
    Alexa Koenig and Sherif Elsayed-Ali. 1/5/2019. “We Can't Future-Proof Technology. But Here are 5 Ways to Forward Plan.” World Economic Forum. See full text. Abstract
    New article co-authored by Carr Center Technology and Human Rights Fellow Sherif Elsayed-Ali.

    "We know that the technologies of the Fourth Industrial Revolution are drastically changing our world. This change is happening at a faster rate and greater scale than at any point in human history – and with that change come significant challenges to the ability of our public institutions and governments to adequately respond.

    From the plough to vaccines to computers, technological innovations have generally made human societies more productive. Over time, people have figured out how to mitigate their negative aspects. For example, electrical applications are much safer to use now than in the early days of electrification. Though we came close to disaster, since the Second World War the international political system has managed to contain the threat of nuclear weapons of mass destruction.

    However, the accelerating pace of change and the power of new technologies mean that negative unintended consequences will only become more frequent and more dangerous. What can we do today to help ensure that new technologies make life better, not worse?"

    https://www.weforum.org/agenda/2019/01/how-to-plan-for-technology-future-koenig-elsayed-ali/

    2020 Feb 10, 5:30pm to 6:45pm

    We Didn't Cross the Border, the Border Crossed Us

    Location: Rubenstein 414-AB

    Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Mathias Risse, Director of the Carr Center for Human Rights Policy and Lucius N. Littauer Professor of Philosophy and Public Administration. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of...

    Read more about We Didn't Cross the Border, the Border Crossed Us
    Why the AI We Rely on Can’t Get Privacy Right (Yet)
    Neal Cohen. 3/7/2020. “Why the AI We Rely on Can’t Get Privacy Right (Yet).” VentureBeat. See full text. Abstract

    Neal Cohen analyzes why AI technologies fall short on privacy. While AI-powered technologies now commonly appear in many of the digital services we interact with daily, an often neglected truth is that few companies are actually building the underlying AI technology themselves.


    You Purged Racists From Your Website? Great, Now Get to Work
    Joan Donovan. 7/1/2020. “You Purged Racists From Your Website? Great, Now Get to Work.” Wired. See full text. Abstract
    Joan Donovan explains that the Covid-19 infodemic has taught social media giants an important lesson: they must take action to control the content on their sites.

    For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators have used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it has worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.

    Read the full article.
