The Carr Center for Human Rights Policy serves as the hub of the Harvard Kennedy School’s research, teaching, and training in the human rights domain. The center embraces a dual mission: to educate students and the next generation of leaders from around the world in human rights policy and practice; and to convene and provide policy-relevant knowledge to international organizations, governments, policymakers, and businesses.
The international standing of the United States has taken a serious hit over the past four years. Former U.S. President Donald Trump’s strident “America first” foreign policy is partly to blame, but so are his attacks on democracy and human rights, both internationally and domestically. Abroad, Trump set the cause of human rights back by embracing authoritarians and alienating democratic allies. At home, he launched an assault on the electoral process, encouraged a failed insurrection at the U.S. Capitol, and systematically undermined civil rights protections, leaving his successor to grapple with multiple, overlapping human rights crises. As if that were not enough, a host of other problems await, from the pandemic to increasing competition with China and the overall decline of American power.
Far too often, global supply chains distribute value in ways that contribute to income inequality and the uneven accumulation of wealth. Despite a surge of innovations to address this problem—such as fair trade, corporate social responsibility, and creating shared value—the issue of value distribution persists as a pressing priority for the international development and business communities. This article offers a first attempt at theorizing profit sharing as a potential mechanism for more equitable value distribution in global value chains. Drawing on two in-depth, multi-method case studies of companies that share profits in the coffee sector, we develop eight theoretical propositions about the applicability and efficacy of profit sharing as a tool for redistribution. Our research suggests that profit sharing can distribute value without requiring suppliers to compromise price stability, profit maximization, value creation, or alternative economic opportunities. This conclusion challenges extant theory, which asserts (based on studies of fair trade certification, direct trade, and solidarity trade) that such tradeoffs are typically necessary or inevitable. We also extend the literature on profit sharing: whereas prior work examines firm-level attempts to maximize productivity and minimize dissent, we theorize profit sharing’s fitness for redistributive objectives in the context of value chains. Our findings imply that, in some contexts, companies may be able to increase prices and improve income stability without requiring suppliers to compromise other economic priorities.
Human rights impact assessments (HRIAs) have recently emerged as a way for technology companies to identify, mitigate, and remedy the potential risks and harms of artificial intelligence (AI) and algorithmic systems. The purpose of this paper is to assess whether HRIAs are a tool fit for purpose for AI. Will HRIAs become an effective tool of AI governance that reduces risks and harms? Or will they become a form of AI “ethics washing” that permits companies to hide behind a veneer of human rights due diligence and accountability? This paper finds that HRIAs of AI are still in their infancy. Simply conducting such assessments with the usual methods will miss the mark for AI and algorithmic systems, as demonstrated by the failures of the HRIA of Facebook in Myanmar. Facebook commissioned an HRIA after UN investigators found that genocide had been committed in the country, yet the assessment did not adequately address the most salient human rights impacts of Facebook’s presence and product in Myanmar. HRIAs must therefore be updated if they are to be applied to AI and algorithmic systems. An HRIA for AI should be framed as an analysis of a sociotechnical system in which social and technical factors are inherently intertwined and interrelated. Interdisciplinary expertise is needed to determine the appropriate methods and criteria for the specific contexts in which AI systems are deployed. In addition, HRIAs should be conducted at appropriate times relative to critical stages in the AI development lifecycle and should operate on an ongoing basis as part of a comprehensive human rights due diligence process. Challenges remain, such as developing methods to identify algorithmic discrimination, one of the most salient human rights concerns in assessing AI harms. A mix of voluntary actions and mandatory measures may also be needed to incentivize organizations to incorporate HRIAs for AI and algorithmic systems in a more effective, transparent, and accountable way. The paper concludes with considerations for the technology sector, government, and civil society.
“The Carr Center is building a bridge between ideas on human rights and the practice on the ground. Right now we are at a critical juncture. The pace of technological change and the rise of authoritarian governments are both examples of serious challenges to the flourishing of individual rights. It’s crucial that Harvard and the Kennedy School continue to be a major influence in keeping human rights ideals alive. The Carr Center is a focal point for this important task.”
- Mathias Risse