
    Deepfakes are Solvable—but Don’t Forget That “shallowfakes” are Already Pervasive
    Mark Latonero. 3/25/2019. “Deepfakes are Solvable—but Don’t Forget That ‘Shallowfakes’ are Already Pervasive.” MIT Technology Review. See full text. Abstract
    New article features Carr Center Technology and Human Rights Fellow Mark Latonero.

    "Mark Latonero, human rights lead at Data & Society, a nonprofit institute dedicated to the applications of data, agreed that technology companies should be doing more to tackle such issues. While Microsoft, Google, Twitter, and others have employees focused on human rights, he said, there is much more they should be doing before they deploy technologies—not after."
    Big Tech Firms are Racing to Track Climate Refugees
    Mark Latonero. 5/17/2019. “Big Tech Firms are Racing to Track Climate Refugees.” MIT Technology Review. See full text. Abstract
    The MIT Technology Review features a new report by Carr Center Technology and Human Rights Fellow Mark Latonero.

    "Simply layering technology on top of existing humanitarian problems tends to exacerbate the issues it is intended to resolve. In a new report on the role of digital identity in refugee and migrant contexts, a team of researchers at the Data & Society Research Institute, led by Mark Latonero, details the various ways these initiatives can reproduce and worsen existing bureaucratic biases."

    https://www.technologyreview.com/s/613531/big-tech-firms-are-racing-to-track-climate-refugees/

    Digital Identity in the Migration & Refugee Context: Italy Case Study
    Mark Latonero, Keith Hiatt, Antonella Napolitano, Giulia Clericetti, and Melanie Penagos. 4/2019. Digital Identity in the Migration & Refugee Context: Italy Case Study. Data & Society. See full text. Abstract
    New report by Carr Center Technology and Human Rights Fellow Mark Latonero.

    "Increasingly, governments, corporations, international organizations, and nongovernmental organizations (NGOs) are seeking to use digital technologies to track the identities of migrants and refugees. This surging interest in digital identity technologies would seem to meet a pressing need: the United Nations Refugee Agency (UNHCR) states that in today’s modern world, lacking proof of identity can limit a person’s access to services and socio-economic participation, including employment opportunities, housing, a mobile phone, and a bank account. But this report argues that the technologies and processes involved in digital identity will not provide easy solutions in the migration and refugee context. Technologies that rely on identity data introduce a new sociotechnical layer that may exacerbate existing biases, discrimination, or power imbalances. How can we weigh the added value of digital identification systems against the potential risks and harms to migrant safety and fundamental human rights? This report provides international organizations, policymakers, civil society, technologists, and funders with a deeper background on what we currently know about digital identity and how migrant identity data is situated in the Italian context."
    The Future Impact of Artificial Intelligence on Humans and Human Rights
    Steven Livingston and Mathias Risse. 6/7/2019. “The Future Impact of Artificial Intelligence on Humans and Human Rights.” Ethics and International Affairs, 33, 2, Pp. 141-158. See full text. Abstract
    What are the implications of artificial intelligence (AI) on human rights in the next three decades?

    Precise answers to this question are made difficult by the rapid rate of innovation in AI research and by the effects of human practices on the adoption of new technologies. Precise answers are also challenged by imprecise usages of the term “AI.” Several types of research all fall under this general term. We begin by clarifying what we mean by AI. Most of our attention is then focused on the implications of artificial general intelligence (AGI), which entails that an algorithm or group of algorithms will achieve something like superintelligence. While acknowledging that the feasibility of superintelligence is contested, we consider the moral and ethical implications of such a potential development. What do machines owe humans, and what do humans owe superintelligent machines?

    Read the full article here

    Critical Skill for Nonprofits in the Digital Age: Technical Intuition
    Alix Dunn. 5/7/2019. “Critical Skill for Nonprofits in the Digital Age: Technical Intuition.” Stanford Social Innovation Review. Listen to the Interview. Abstract

    Not everyone needs to become a tech expert, but all activists and nonprofit leaders must develop the skills to inquire about, decide on, and demand technological change. Tech Fellow Alix Dunn talks to Stanford's Social Innovation Podcast.

    In a world where the pace of organizational learning is often slower than the pace of technological change, activists and nonprofit leaders must develop their “technical intuition.” Not everyone needs to become a tech expert, explains Alix Dunn, of the consulting firm Computer Says Maybe, but this ongoing process of imagining, inquiring about, deciding on, and demanding technological change is critical.

    In this recording from the Stanford Social Innovation Review's 2019 Data on Purpose conference, Dunn walks through her guidelines to help anyone develop these skills.

    We Can't Future-Proof Technology. But Here are 5 Ways to Forward Plan.
    Alexa Koenig and Sherif Elsayed-Ali. 1/5/2019. “We Can't Future-Proof Technology. But Here are 5 Ways to Forward Plan.” World Economic Forum. See full text. Abstract
    New article co-authored by Carr Center Technology and Human Rights Fellow Sherif Elsayed-Ali.

    "We know that the technologies of the Fourth Industrial Revolution are drastically changing our world. This change is happening at a faster rate and greater scale than at any point in human history – and with that change come significant challenges to the ability of our public institutions and governments to adequately respond.

    "From the plough to vaccines to computers, technological innovations have generally made human societies more productive. Over time, people have figured out how to mitigate their negative aspects. For example, electrical applications are much safer to use now than in the early days of electrification. Though we came close to disaster, since the Second World War the international political system has managed to contain the threat of mass destruction from nuclear weapons.

    However, the accelerating pace of change and the power of new technologies mean that negative unintended consequences will only become more frequent and more dangerous. What can we do today to help ensure that new technologies make life better, not worse?"

    https://www.weforum.org/agenda/2019/01/how-to-plan-for-technology-future-koenig-elsayed-ali/

    The Quest For Inclusive & Ethical Technology
    Sabelo Mhlambi. 6/10/2019. “The Quest For Inclusive & Ethical Technology.” Interview by Bonnie North. WUWM Milwaukee NPR. See full text. Abstract
    New interview with Technology and Human Rights Fellow Sabelo Mhlambi.

    "Most of us think of technology as a neutral force: objects or processes are designed and implemented to solve problems, and there are no biases, implied or overt, at work. But Sabelo Mhlambi says, not so fast. The computer scientist and researcher says technology cannot be neutral. What gets made, who makes and uses it, and why are all dependent upon our societies — and all societies are biased.

    "Technology will only replicate who we are," he explains. "Our social interactions will still occur online anyway. So, there’s nothing magical about technology where it somehow brings neutrality or brings equality or equity."

    https://www.wuwm.com/post/quest-inclusive-ethical-technology

    Embedding Ethics in Computer Science Curriculum
    Kate Vredenburgh. 1/25/2019. “Embedding Ethics in Computer Science Curriculum.” The Harvard Gazette. See full text. Abstract
    New article in The Harvard Gazette features work by Carr Center Technology and Human Rights Fellow Kate Vredenburgh.

    "A module that Kate Vredenburgh, a philosophy Ph.D. student, created for a course taught by Professor Finale Doshi-Velez asks students to grapple with questions of how machine-learning models can be discriminatory, and how that discrimination can be reduced."

    Stop Surveillance Humanitarianism
    Mark Latonero. 7/11/2019. “Stop Surveillance Humanitarianism.” The New York Times. See full text. Abstract
    Mark Latonero – Carr Center Technology and Human Rights Fellow, and research lead at Data & Society – discusses surveillance humanitarianism for The New York Times.

    A standoff between the United Nations World Food Program and Houthi rebels in control of the capital region is threatening the lives of hundreds of thousands of civilians in Yemen.

    Alarmed by reports that food is being diverted to support the rebels, the aid program is demanding that Houthi officials allow them to deploy biometric technologies like iris scans and digital fingerprints to monitor suspected fraud during food distribution.

    The Houthis have reportedly blocked food delivery, painting the biometric effort as an intelligence operation, and have demanded access to the personal data of aid beneficiaries. The impasse led the aid organization last month to decide to suspend food aid to parts of the starving population — a step once thought of as a last resort — unless the Houthis allow biometrics.

    Read the full article.
