- PAST EVENTS -

ETHICS AND TECHNOLOGY WORKSHOP
(April 26, 2021)
16.00 - 19.00 (CET)

4 pm - 5 pm:


"Algorithmic fairness, controversial variables and conceptual engineering"

Elizabeth K. Stewart (University of South Carolina)

5 pm - 6 pm:


"A conceptual framework for personalized privacy assistants"

Sarah E. Carter

(SFI Centre for Research Training in Digitally-Enhanced Reality (D-Real), Data Science Institute, National University of Ireland, Galway)

NUIG staff page, DSI staff page, LinkedIn

6 pm - 7 pm:

"“Just” accuracy? Procedural fairness requires explainable and accountable algorithms in AI-based medical resource allocations."

Jon Rueda 

(La Caixa INPhNIT Fellow, University of Granada)

Twitter, ResearchGate profile

WIP SESSIONS (Spring 2021)

Session 3: Wednesday 21st of July 2021 - 17:00-18:30 CEST

"Lethal autonomous weapon systems and the responsibility gap: why command responsibility is (not) a solution"

Presenter: Ann-Katrien Oimann - KU Leuven

Abstract:

Modern weapons development constantly seeks ways to inflict maximum damage on targets while minimising the risk to the operator. As a result, there has been a rise in the use of semi-autonomous systems and in research into fully autonomous systems. In recent years, attention has been paid, both in the legal sphere and in philosophy, to the difficulty of allocating responsibility and liability for errors made by lethal autonomous weapon systems (LAWS). Some authors even argue that the increasing level of autonomy in weapon systems will lead to so-called 'responsibility gaps'. Very different solutions have been devised to close this gap. One solution that is gaining popularity and is being discussed by both philosophers and legal scholars is the doctrine of command responsibility. The aim of this paper is to contribute to the ongoing debate on attributing responsibility for serious violations of international humanitarian law (IHL) by LAWS by examining whether the doctrine of command responsibility could offer a solution. I will argue that the requirement of a superior-subordinate relationship will be the decisive factor in the success of the analogous application of the doctrine of command responsibility to LAWS.

Session 2: Wednesday 30th of June 2021 - 17:00-18:30 CEST

"From Responsibility Gaps to Responsibility Maps"

Presenter: Fabio Tollon - Bielefeld University

Abstract:

When it comes to socially and politically important issues, it is important that we hold the guilty parties responsible for the harm they inflict. However, it is also essential that the means by which they are held responsible are fair. In the case of manufacturers or engineers, it is normally thought that holding them responsible should meet certain conditions of knowledge (foresight), control, and intention. Conversely, harmed parties should feel that justice has been done and that those responsible for the harm have been held to account, sanctioned, fined, and so on. However, some aspects of AI systems might call into question whether it is indeed fair to hold engineers or manufacturers responsible in this way. Some claim that AI will complicate our ascriptions of moral responsibility, creating a so-called "responsibility gap". In this paper I will argue that there are no backward-looking gaps in responsibility due to AI. The senses of responsibility I discuss are responsibility as liability, attributability, answerability, and accountability. I go through each of these four senses in turn and show that, while AI does indeed complicate our ability to hold agents morally responsible, it does not undermine our ability to do so.

Session 1: Wednesday 9th of June 2021 - 17:00-18:30 CEST

"Vulnerability, Trust and Human-Robot Interaction"

Presenter: Zachary Daus - University of Vienna

Abstract:

Recent attempts to engineer trustworthy robotic systems often conceive of trust in terms of predictability. Accordingly, to trust a robotic system (or a human) is to be able to predict what the robotic system (or human) will do. Design elements of robotic systems that seek to engender trust thus often focus on strategies such as making decision procedures transparent, replicating human movements, and developing trust-building training programs. I argue that all of these design strategies for engendering trust overlook a significant condition for trustworthiness: mutual vulnerability. Humans trust one another not merely as a result of being able to predict the actions of the other, but as a result of being mutually vulnerable to similar risks. Co-workers, for example, trust each other not merely because they can predict each other's actions, but because both are mutually vulnerable to the consequences of the potential failure of their joint work project. The necessary condition of mutual vulnerability for trustworthy relations poses a significant obstacle to the establishment of trustworthy human-robot interaction. This is because robotic systems lack the affective intelligence that is necessary to be vulnerable. Despite the problems posed by mutual vulnerability for the achievement of trust in human-robot interaction, I will nonetheless propose potential solutions. These solutions center on bringing users and creators of robotic systems into greater interaction, so that users of robotic systems can recognize the vulnerability of the creators of robotic systems and how this vulnerability is tied to the success (or failure) of the robotic systems they are using.

Keywords: vulnerability, trust, human-robot interaction