
    Human Rights Documentation in Limited Access Areas: The Use of Technology in War Crimes and Human Rights Abuse Investigations
    Sushma Raman and Steven Livingston. 5/15/2018. “Human Rights Documentation in Limited Access Areas: The Use of Technology in War Crimes and Human Rights Abuse Investigations.” Carr Center Discussion Paper Series, 2018-003. Cambridge, MA: Carr Center for Human Rights Policy.

    Abstract:

    We offer a theoretical framework for understanding the role of technological capabilities (affordances) in documenting war crimes and human rights abuses in limited access areas. We focus on three digital affordances: geospatial, digital network, and digital forensic science. The paper argues that by leveraging digital affordances, human rights groups gain access to otherwise inaccessible areas, or to information that has been degraded in an effort to obfuscate culpability. We also argue that the use of digital technology invites a reassessment of what we mean when we speak of a human rights organization. Organizational morphology in digital space is hybrid in nature, with traditional organizations also taking on or joining more virtual or solely digital forms.

    Human Rights and Artificial Intelligence: An Urgently Needed Agenda?
    Mathias Risse. 4/15/2018. “Human Rights and Artificial Intelligence: An Urgently Needed Agenda?” Carr Center Discussion Paper Series, 2018-002. Cambridge, MA: Carr Center for Human Rights Policy.

    Abstract:

    Artificial intelligence generates challenges for human rights. Inviolability of human life is the central idea behind human rights, an underlying implicit assumption being the hierarchical superiority of humankind to other forms of life meriting less protection. These basic assumptions are questioned through the anticipated arrival of entities that are not alive in familiar ways but nonetheless are sentient and intellectually, and perhaps eventually morally, superior to humans. To be sure, this scenario may never come to pass, and in any event it lies in a part of the future beyond current grasp. But it is urgent to get this matter on the agenda. Threats posed by technology to other areas of human rights are already with us. My goal here is to survey these challenges in a way that distinguishes short-, medium-, and long-term perspectives.

    2021 Mar 29

    Hacking//Hustling and sex worker advocacy online

    3:00pm to 4:00pm

    Location: Virtual Event (Registration Required)

    Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Mathias Risse, Director of the Carr Center for Human Rights Policy and Lucius N. Littauer Professor of Philosophy and Public Administration. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.



    From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance
    Sabelo Mhlambi. 7/8/2020. “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance.” Carr Center Discussion Paper Series, 2020-009.

    Abstract:

    What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

    The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes.


    2021 Apr 26

    From Citizens United to Bots United: Reinterpreting "Robot Rights" as a Corporate Power Grab

    3:00pm to 4:00pm

    Location: Virtual Event (Registration Required)




    Embedding Ethics in Computer Science Curriculum
    Kate Vredenburgh. 1/25/2019. “Embedding Ethics in Computer Science Curriculum.” The Harvard Gazette.
    A new article in The Harvard Gazette features work by Carr Center Technology and Human Rights Fellow Kate Vredenburgh.

    "A module that Kate Vredenburgh, a philosophy Ph.D. student, created for a course taught by Professor Finale Doshi-Velez asks students to grapple with questions of how machine-learning models can be discriminatory, and how that discrimination can be reduced."
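    One common way such questions are made concrete in a classroom setting is through group-fairness metrics. The sketch below (not from the course module itself; the function names and data are hypothetical) computes the demographic parity difference: the gap in positive-prediction rates between two groups, where 0.0 means the model's approval rate is identical across groups.

    ```python
    # Hypothetical illustration of one fairness metric used to detect
    # discriminatory behavior in a classifier's outputs. All names and
    # data below are invented for this example.

    def positive_rate(predictions):
        """Fraction of predictions that are positive (label 1)."""
        return sum(predictions) / len(predictions)

    def demographic_parity_difference(preds_group_a, preds_group_b):
        """Absolute gap in positive-prediction rates between two groups.

        0.0 means both groups receive positive predictions at the same
        rate; larger values indicate greater disparity on this metric.
        """
        return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

    # Hypothetical model decisions (1 = approve, 0 = deny) for applicants
    # drawn from two demographic groups.
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # approval rate 5/8 = 0.625
    group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # approval rate 2/8 = 0.25

    print(demographic_parity_difference(group_a, group_b))  # 0.375
    ```

    Reducing this gap (for instance, by reweighting training data or adjusting decision thresholds per group) is one of the mitigation strategies such modules typically have students evaluate, alongside the trade-offs each strategy introduces.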
