
Responsibility for algorithmic injustice

Lunch seminar 8 May 2024

Topic: Responsibility for algorithmic injustice

When: 8 May at 12.00-13.15

Where: Online

Speaker: Henrik D. Kugelberg, postdoctoral fellow at the London School of Economics. 

Moderator: Anamaria Dutceac Segesten, Researcher, Strategic Communication, Lund University

Spoken language: English

Abstract

Artificial intelligence systems often produce biased outputs. However, there is widespread disagreement over how this bias should be understood, conceptualised, and measured, and over what kind of responsibility is appropriate for addressing the resulting wrongs. This paper examines two prominent accounts for analysing algorithmic injustices: the local distributive model and the structural injustice framework. The former focuses on developing statistical criteria for measuring unequal algorithmic outputs, whilst the latter highlights systemic societal injustices.

The paper argues that neither view is fully apt for theorising algorithmic injustice. The local distributive model helps us see that algorithmic injustices are at their core distributive, but it overlooks normatively relevant factors; the structural injustice framework shows how injustices often have structural features, whilst leaving insufficient space for individual and corporate responsibility. By synthesising insights from both perspectives, the paper outlines a new way forward for theorising algorithmic justice.

Bio: Henrik D. Kugelberg is a British Academy postdoctoral fellow at the London School of Economics. He works primarily on the political philosophy of artificial intelligence and the digital public sphere. Previously, he was an interdisciplinary ethics fellow at Stanford University and Apple. He holds a DPhil (PhD) from the University of Oxford.