WORK IN PROGRESS SESSIONS
Autumn 2021

 

SESSION 3: November 10, Wednesday (17:30 CET)

“Ethical and Epistemological Roles of AIs in Collective Epistemology”
Presenter: Ori Freiman (Centre for Ethics at the University of Toronto)

(see the abstract and Zoom link below)

Session 1: September 29, Wednesday (17:30 - 19:00 CEST)

Talk Title:

“Missing Ingredients in Artificial Moral Agency”

Presenter:

Zach Gudmunsen (Interdisciplinary Ethics Applied Centre, University of Leeds)

Abstract:
Most people think that artificial systems aren’t ‘full’ moral agents in the way humans are. If that’s so, we ought to be able to isolate what artificial systems would need in order to be ‘full’ moral agents. I consider some recently proposed candidates for that ‘missing ingredient’: consciousness, life history and rationality. I argue that none is particularly convincing and propose ‘autonomy’ as an improved candidate. Autonomy has a messy history but, suitably clarified, seems a good approximation of what artificial systems need to be ‘full’ moral agents.

Session 2: October 20, Wednesday (17:30 - 19:00 CEST)

Talk Title:

"Artificial Moral Agency: Why machines need social emotions"

Presenter:

Dilara Boga (Central European University, Vienna)

Abstract:

In recent years, the discussion of the criteria of moral agency for robots has become one of the significant issues for the future of artificial intelligences (AIs). Although there is no consensus on which properties are sufficient to make robots moral agents, many agree that robots need certain properties to be proper moral agents (such as consciousness, autonomy, or sentience). Interestingly, however, there is no substantial account claiming that what robots lack, with respect to morality, is emotions. Here, I will claim that robots lack, in particular, social emotions. Based on the theory of the evolution of cooperation, I will claim that sociality is fundamental to morality. During the process of group formation and social communication, many cultures (if not all) consider cooperative behaviours ‘morally good’ or ‘ethical’ or ‘good for humankind’ because cooperative actions resolve conflicts or problems. The moral agent in a collective faces a moral choice to cooperate or compete, based on the tension between her self-interest and the group's interest. Social emotions play a role in the moral agent's choice when this tension occurs. To have social interactions with humans as ‘moral equals’ and to make moral decisions, robots need social emotions; otherwise, they are moral ‘tools’ rather than moral ‘agents’.

Session 3: November 10, Wednesday (17:30 - 19:00 CET)

Talk Title:

“Ethical and Epistemological Roles of AIs in Collective Epistemology”

Presenter:

Ori Freiman (Centre for Ethics at the University of Toronto)

https://www.orifreiman.info

Abstract:

Collective epistemology is the branch of social epistemology that deals with the epistemic states of groups, collectives, and corporate agents. Despite the central role AI technologies play in the generation and transmission of knowledge, analytic philosophical theory has largely overlooked the role of technologies in general, and of AIs in particular, in collective epistemic phenomena such as group knowledge and group belief. First, I argue that this is due to an anthropocentric assumption: that only humans can be considered members of an epistemic group. I identify this assumption in the main debates within collective epistemology and show that all sides in these debates hold it. Second, I argue vigorously against the anthropocentric assumption, since it prevents the inclusion of technological artifacts in groups, despite their influence on the epistemic and ethical outcomes of the group. Third, I rethink the conditions of membership in a collective and suggest a compelling alternative: that membership of an epistemic group is, in principle, open to anyone, and indeed anything, such as non-human epistemic agents (i.e. technologies), that shape the epistemological and ethical outcomes of a group. Fourth and last, I use the suggested alternative to propose an account that assigns epistemological and ethical responsibility to a hybrid collective as a group and, within a hybrid collective, to its individual members.

Session 4: December 1, Wednesday

(slot available)

We hold a WIP session* every three weeks for people to present their ideas.

Keep in mind that this does not have to be a structured paper.

It can be an idea for a chapter or a concept you are troubled with!

*We request that presenters try to attend as many sessions as they can, to promote collaboration and consistency in the feedback.

IF YOU'D LIKE TO GIVE A PRESENTATION,

please fill out this Google Form: https://forms.gle/xsiRUHakxhnD1p9V7

WIP SESSIONS (Spring 2021)

For detailed information on the presentations (abstracts, papers, etc.), please check the 'Past Events' page of this website.

Session 1: Wednesday 9th of June 2021 - 17:00-18:30 CEST

"Vulnerability, Trust and Human-Robot Interaction"

Presenter: Zachary Daus - University of Vienna

Session 2: Wednesday 30th of June 2021 - 17:00-18:30 CEST

"From Responsibility Gaps to Responsibility Maps"

Presenter: Fabio Tollon - Bielefeld University

Session 3: Wednesday 21st of July 2021

"Lethal autonomous weapon systems and the responsibility gap: why command responsibility is (not) a solution"

Presenter: Ann-Katrien Oimann - KU Leuven
