    Disinformation Campaigns Target Tech-Enabled Citizen Journalists
    Steven Livingston. 3/2/2017. “Disinformation Campaigns Target Tech-Enabled Citizen Journalists.” Brookings.
    A new blog post by Carr Center Senior Fellow Steven Livingston, published by Brookings. 

    "Governments hoping to evade responsibility for war crimes and rights abuses are having a much tougher time of it these days. Denying entry to nettlesome investigators is still standard while many places are simply too dangerous to investigate. But even where investigators cannot go, digital technologies can sometimes overcome barriers to investigation. A recent Harvard Kennedy School report published by the Carr Center for Human Rights Policy underscores how various digital technologies undermine attempts to hide abuses and war crimes. Commercial high-resolution remote sensing satellites, some capable of distinguishing objects on the ground as small as 30-cm across, allow human rights groups to document military forces deployments, mass graves, forced population displacements, and damage to physical infrastructure."


    Read the full blog at Brookings.

    Climate Change Induced Displacement: Leveraging Transnational Advocacy Networks to Address Operational Gaps
    Steven Livingston and Joseph Guay. 2/21/2017. “Climate Change Induced Displacement: Leveraging Transnational Advocacy Networks to Address Operational Gaps.” UNHCR.
    An article on climate change-induced displacement, by Carr Center Senior Fellow Steven Livingston and Joseph Guay. 

    According to the latest Intergovernmental Panel on Climate Change (IPCC) report, “Few aspects of the human endeavor…are isolated from possible impacts in a changing climate. The interconnectedness of the Earth system makes it impossible to draw a confined boundary around climate change impact, adaptations, and vulnerability.”1 This includes human population displacements, which amounted to a staggering 51.2 million refugees, asylum-seekers, and internally displaced people (IDPs) in 2013.2

    Unfortunately, as the frequency, duration, and intensity of extreme events affecting populations are on the rise, the humanitarian aid community is stretched thin in the face of multiple complex emergencies and protracted challenges around the world.

    Read the full post.

    Can Facebook’s Oversight Board Win People’s Trust?
    Mark Latonero. 1/29/2020. “Can Facebook’s Oversight Board Win People’s Trust?” Harvard Business Review. See full text.

    Technology & Human Rights Fellow Mark Latonero breaks down the larger implications of Facebook's global Oversight Board for content moderation. 

    Facebook is a step away from creating its global Oversight Board for content moderation. The bylaws for the board, released on Jan. 28, lay out the blueprint for an unprecedented experiment in corporate self-governance for the tech sector. While there’s good reason to be skeptical of whether Facebook itself can fix problems like hate speech and disinformation on the platform, we should pay closer attention to how the board proposes to make decisions.

    Why the AI We Rely on Can’t Get Privacy Right (Yet)
    Neal Cohen. 3/7/2020. “Why the AI We Rely on Can’t Get Privacy Right (Yet).” VentureBeat. See full text.

    Neal Cohen analyzes why AI technologies fall short on privacy. While artificial intelligence (AI) powered technologies are now commonly appearing in many digital services we interact with on a daily basis, an often neglected truth is that few companies are actually building the underlying AI technology.


    The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification
    Neal Cohen. 4/4/2020. “The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification.” Carr Center Discussion Paper Series, 2020-004. See full text.
    Artificial Intelligence (AI) technologies have the capacity to do a great deal of good in the world, but whether they do so depends not only on how we use those AI technologies but also on how we build them in the first place.

    The unfortunate truth is that personal data has become the bricks and mortar used to build many AI technologies and more must be done to protect and safeguard the humans whose personal data is being used. Through a case study on AI-powered remote biometric identity verification, this paper seeks to explore the technical requirements of building AI technologies with high volumes of personal data and the implications of such on our understanding of existing data protection frameworks. Ultimately, a path forward is proposed for ethically using personal data to build AI technologies.

    Read the paper here. 

    Smart City Visions and Human Rights: Do They Go Together?
    Tina Kempin Reuter. 4/24/2020. “Smart City Visions and Human Rights: Do They Go Together?” Carr Center Discussion Paper Series, 2020-006. See full text.
    Over half of the world’s population lives in cities today. According to the latest predictions, more than two thirds of all people will inhabit an urban environment by 2050. The number and size of cities have increased over the last decades, with the highest projections for future growth in the Global South. As cities continue to expand, so does their impact on policy generation, as political players, as drivers of states’ economies, and as hubs for social innovation and cultural exchange. Cities are important actors on the national and international stage, with mayors’ conferences, city grassroots organizations, and urban citizens driving the search for solutions to today’s most pressing problems, including climate change, inequity, migration, and human rights concerns. Many have expressed hope that “cities [will] deliver where nation states have failed.” Organizing this ever-growing, dynamic human space, enabling people from diverse backgrounds to live together, addressing the spatial and social challenges of urban life, and delivering services to inhabitants are challenges that cities have struggled with and that continue to dominate the urban policy agenda.

    Read full text here. 

    Reimagining Reality: Human Rights and Immersive Technology
    Brittan Heller. 6/12/2020. “Reimagining Reality: Human Rights and Immersive Technology.” Carr Center Discussion Paper Series, 2020-008. See full text.

    This paper explores the human rights implications of emergent technology, focusing on virtual reality (VR), augmented reality (AR), and immersive technologies. Because of the psychological and physiological aspects of immersive technologies, and the potential for a new invasive class of privacy-related harms, Heller argues that content creators, hardware producers, and lawmakers should take increased caution to protect users. This will help protect the nascent industry in a changing legal landscape and help ensure that the beneficial uses of this powerful technology outweigh the potential misuses.

    In the paper, Heller first reviews the technology and terminology around immersive technologies to explain how they work, how a user’s body and mind are impacted by the hardware, and what social role these technologies can play for communities. Next she describes some of the unique challenges for immersive media, from user safety to misalignment with current biometrics laws. She introduces a new concept, biometric psychography, to explain how the potential for privacy-related harms is different in immersive technologies, due to the ability to connect a user’s identity to their innermost thoughts, wants, and desires. Finally, she describes foreseeable developments in the immersive industry, with an eye toward identifying and mitigating future human rights challenges. The paper concludes with five recommendations for actions that the industry and lawmakers can take now, as the industry is still emerging, to build human rights into its DNA.

    You Purged Racists From Your Website? Great, Now Get to Work
    Joan Donovan. 7/1/2020. “You Purged Racists From Your Website? Great, Now Get to Work.” Wired. See full text.
    Joan Donovan explains that the COVID-19 infodemic has taught social media giants an important lesson: they must take action to control the content on their sites. 

    For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.

    Read the full article.

    From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance
    Sabelo Mhlambi. 7/8/2020. “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance.” Carr Center Discussion Paper Series, 2020-009. See full text.

    What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood. 

    The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes.

    Read the full paper.

    Mass Incarceration and The Future: An Urgent Need to Address the Human Rights Implications of Criminal Background Checks and the Future of Artificial Intelligence
    Teresa Y. Hodge and Laurin Leonard. 7/17/2020. “Mass Incarceration and The Future: An Urgent Need to Address the Human Rights Implications of Criminal Background Checks and the Future of Artificial Intelligence.” Carr Center Discussion Paper Series, 2020-009. See full text.
    Between 70 and 100 million Americans—one in three—currently live with a criminal record. This number is expected to rise above 100 million by the year 2030.

    The criminal justice system in the U.S. has over-incarcerated its citizens: the country has 5% of the world's population but 25% of the world's prison population. America became known as the “incarceration nation” because its prison and jail population exploded from fewer than 200,000 in 1972 to 2.2 million today, a social phenomenon known as mass incarceration. Along the way, there was a corresponding boom in querying databases for data on citizens with criminal records.

    Once a person comes in contact with the U.S. criminal justice system, they begin to develop an arrest and/or conviction record. This record includes data aggregated from various databases mostly, if not exclusively, administered by affiliated government agencies. As the prison population grew, the number of background check companies rose as well. The industry has grown and continues to do so with very little motivation to wrestle with morality, data integrity standards, or the role of individual rights.

    This paper addresses the urgent need to look toward a future where background screening decisions and artificial intelligence collide.

    Read full paper here. 
