    You Purged Racists From Your Website? Great, Now Get to Work
    Joan Donovan. 7/1/2020. “You Purged Racists From Your Website? Great, Now Get to Work.” Wired. See full text.
    Abstract
    Joan Donovan explains that the Covid-19 infodemic has taught social media giants an important lesson: they must take action to control the content on their sites.

    For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.

    Read the full article.

    From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance
    Sabelo Mhlambi. 7/8/2020. “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance.” Carr Center Discussion Paper Series, 2020-009. See full text.
    Abstract

    What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

    The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in Enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes.

    Read the full paper.

    Mass Incarceration and The Future: An Urgent Need to Address the Human Rights Implications of Criminal Background Checks and the Future of Artificial Intelligence
    Teresa Y. Hodge and Laurin Leonard. 7/17/2020. “Mass Incarceration and The Future: An Urgent Need to Address the Human Rights Implications of Criminal Background Checks and the Future of Artificial Intelligence.” Carr Center Discussion Paper Series, 2020-009. See full text.
    Abstract
    Between 70 and 100 million Americans, one in three, currently live with a criminal record. This number is expected to rise above 100 million by the year 2030.

    The criminal justice system in the U.S. has over-incarcerated its citizens: we have 5% of the world's population but 25% of the world's prison population. America became known as the “incarceration nation” as its prison and jail population exploded from fewer than 200,000 in 1972 to 2.2 million today, a social phenomenon known as mass incarceration. Along the way came a corresponding boom in querying databases for data on citizens with criminal records.

    Once a person comes in contact with the U.S. criminal justice system, they begin to develop an arrest and/or conviction record. This record includes data aggregated from various databases mostly, if not exclusively, administered by affiliated government agencies. As the prison population grew, the number of background check companies rose as well. The industry has grown and continues to do so with very little motivation to wrestle with morality, data integrity standards, or the role of individual rights.

    This paper addresses the urgent need to look toward a future where background screening decisions and artificial intelligence collide.

    Read the full paper here.

    Dangerous Science: Might Population Genetics or Artificial Intelligence Undermine Philosophical Ideas about Equality?
    Mathias Risse. 8/17/2020. “Dangerous Science: Might Population Genetics or Artificial Intelligence Undermine Philosophical Ideas about Equality?” Carr Center Discussion Paper Series, 2020-010. See full text.
    Abstract

    This paper was prepared for an interdisciplinary conference on Gefährliche Forschung? (Dangerous Science?) held at the University of Cologne in February 2020 and is scheduled to appear in a volume of contributions from that event edited by Wilfried Hinsch and Susanne Brandstätter, the organizers, and to be published by de Gruyter. The paper delves directly into the question proposed to me (might population genetics or artificial intelligence undermine philosophical ideas about equality?) without first locating the context of this debate or offering a preview of its contents. The first section discusses the ideal of equality, the next two address genetics in the context of responses to racism, and the remaining two consider possible changes that might come from the development of general artificial intelligence.

    Read the full text here.

    AI Principle Proliferation as a Crisis of Legitimacy
    Mark Latonero. 9/30/2020. “AI Principle Proliferation as a Crisis of Legitimacy.” Carr Center Discussion Paper Series, 2020-011. See full text.
    Abstract

    While artificial intelligence is a burgeoning field, there is growing concern about the proliferation of proposed principles for how AI should be governed.

    In his latest Carr Center discussion paper, Fellow Mark Latonero posits that human rights could serve to stabilize AI governance, particularly if framed as an anchor to guide AI usage in ways that avert both everyday and catastrophic social harms.

    Read the full document here. 
