    Women’s Rights Are a National Security Issue
    Dara Kay Cohen and Valerie M. Hudson. 12/16/2016. “Women’s Rights Are a National Security Issue.” The New York Times.
    Dara Kay Cohen's Op-Ed published in The New York Times.

    "The Trump transition team asked the State Department last week to submit details of programs and jobs that focus on promoting gender equality. Maybe it’s for benign purposes — or better, a signal that the administration wants to make women’s empowerment a cornerstone of its foreign policy. But this seems unlikely, to put it mildly, given that such a commitment was absent from Donald J. Trump’s campaign, and alongside Mr. Trump’s vow to defund Planned Parenthood.

    Whatever the reason for their request, Mr. Trump and Rex W. Tillerson, his pick for secretary of state, should remember that women’s rights are tied directly to national security. The State Department’s gender equality programs are not just politically correct fluff — they deal with matters of life and death, like rape during war, genital cutting, forced marriage and access to education. The State Department provides essential funding to combat these problems."

    Read the full Op-Ed in The New York Times.

    "I Feel Like We Are People Who Have Never Known Each Other Before": The Experiences of Survivors of Human Trafficking and Sexual Exploitation Transitioning From Shelters to Life in the Community
    Laura Cordisco Tsai, Vanntheary Lim, and Channtha Nhanh. 1/2020. “"I Feel Like We Are People Who Have Never Known Each Other Before": The Experiences of Survivors of Human Trafficking and Sexual Exploitation Transitioning From Shelters to Life in the Community.” Forum: Qualitative Social Research, 21, 1.
    Journal article by Carr Fellow Laura Cordisco Tsai analyzes how survivors of sexual exploitation transition back to life in their communities.

    In this article, we explore the experiences of survivors of human trafficking and sexual exploitation in Cambodia as they transition from living in trafficking-specific shelter facilities to living in the community. We analyzed data from Chab Dai's Butterfly Longitudinal Research (BLR) project, a 10-year longitudinal study with survivors of human trafficking and sexual exploitation in Cambodia utilizing a prospective panel design. We present findings from our analysis of 236 interviews and narrative summaries of interviews conducted with survivors between the years 2011 and 2016 (n=79). An interpretive phenomenological approach was used to understand survivors' experiences during this transition. Themes included: conflicted feelings about life in the community; difficulties completing school and securing employment; violence in the community; limited follow-up; unfulfilled expectations; feeling loved like a family member in the shelter, but abandoned in the community; vulnerability in the community due to dramatic differences between shelters and the community; and varied experiences with case closure. We underscore the importance of understanding and listening to the voices of survivors about their experiences in the anti-human trafficking sector and discuss implications for the design and implementation of services for survivors of human trafficking and sexual exploitation in Southeast Asia.

    The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification
    Neal Cohen. 4/4/2020. “The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification.” Carr Center Discussion Paper Series, 2020-004.
    Artificial Intelligence (AI) technologies have the capacity to do a great deal of good in the world, but whether they do so is not only dependent upon how we use those AI technologies but also how we build those AI technologies in the first place.

    The unfortunate truth is that personal data has become the bricks and mortar used to build many AI technologies and more must be done to protect and safeguard the humans whose personal data is being used. Through a case study on AI-powered remote biometric identity verification, this paper seeks to explore the technical requirements of building AI technologies with high volumes of personal data and the implications of such on our understanding of existing data protection frameworks. Ultimately, a path forward is proposed for ethically using personal data to build AI technologies.

    Read the paper here. 

    Upholding Non-Discrimination Principles in the Covid-19 Outbreak
    Jacqueline Bhabha, Laura Cordisco Tsai, Teresa Hodge, and Laurin Leonard. 4/10/2020. “Upholding Non-Discrimination Principles in the Covid-19 Outbreak.” Carr Center Covid-19 Discussion Paper Series, 03.
    Carr Center faculty and fellows discuss how we can employ principles of non-discrimination to address the pandemic’s disproportionate impact on our most vulnerable communities.

    In our third Covid-19 Discussion Paper, Professor of the Practice of Health and Human Rights Jacqueline Bhabha; Technology and Human Rights Fellows Laurin Leonard and Teresa Hodge; and Carr Center Fellow Laura Cordisco Tsai outline how Covid-19 disproportionately impacts the world's most vulnerable communities. From prison populations to survivors of human trafficking, "Vulnerable communities often are not positioned to ensure their human rights are preserved in times of a crisis—they are often a historical afterthought."

    Read the full text here. 

    The Floyd Protests Are the Broadest in U.S. History — and Are Spreading to White, Small-Town America
    Lara Putnam, Erica Chenoweth, and Jeremy Pressman. 6/6/2020. “The Floyd Protests Are the Broadest in U.S. History — and Are Spreading to White, Small-Town America.” Washington Post.
    Erica Chenoweth discusses the Floyd protests and their impact on law, social policy, and the 2020 elections.

    Across the country, people are protesting the killings of George Floyd, Breonna Taylor and Ahmaud Arbery and demanding action against police violence and systemic racism. National media focuses on the big demonstrations and protest policing in major cities, but they have not picked up on a different phenomenon that may have major long-term consequences for politics. Protests over racism and #BlackLivesMatter are spreading across the country — including in small towns with deeply conservative politics.

    You Purged Racists From Your Website? Great, Now Get to Work
    Joan Donovan. 7/1/2020. “You Purged Racists From Your Website? Great, Now Get to Work.” Wired.
    Joan Donovan explains that the Covid-19 infodemic has taught social media giants an important lesson: they must take action to control the content on their sites.

    For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it has worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.

    Read the full article.

    From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance
    Sabelo Mhlambi. 7/8/2020. “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance.” Carr Center Discussion Paper Series, 2020-009.

    What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

    The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes.

    Read the full paper.

    Dangerous Science: Might Population Genetics or Artificial Intelligence Undermine Philosophical Ideas about Equality?
    Mathias Risse. 8/17/2020. “Dangerous Science: Might Population Genetics or Artificial Intelligence Undermine Philosophical Ideas about Equality?” Carr Center Discussion Paper Series, 2020-010.

    This paper was prepared for an interdisciplinary conference on Gefährliche Forschung? (Dangerous Science?) held at the University of Cologne in February 2020 and is scheduled to appear in a volume of contributions from that event edited by Wilfried Hinsch and Susanne Brandstätter, the organizers, and to be published by de Gruyter. The paper delves into the question proposed to me—might population genetics or artificial intelligence undermine philosophical ideas about equality—without locating the context of this debate or offering a preview of its contents. The first section discusses the ideal of equality, the next two talk about genetics in the context of responses to racism, and the remaining two speak about possible changes that might come from the development of general Artificial Intelligence.

    Read the full text here.