The Uppsala Vienna AI Colloquium is a series of colloquium-style talks on philosophical issues surrounding AI technology.
Each talk will focus on a specific issue of relevance to AI systems (e.g., intelligence, agency, responsibility, etc.) and will be delivered by an expert with a research background on the topic. The intended audience of the talks is philosophically informed individuals with an interest in the philosophy of artificial intelligence.
The schedule of talks for the 2025-26 academic year is as follows:
Does consciousness require biology, or can systems made of other materials be conscious? I develop an argument for the view that it is (nomologically) possible that some non-biological creatures are conscious, including conventional, silicon-based AI systems. The argument assumes the iterative natural kind (INK) strategy, according to which one should investigate consciousness by treating it as a natural kind that iteratively explains observable patterns and correlations among potentially consciousness-relevant features. It is based on the insight that we can already anticipate that future developments would give us reasons to attribute consciousness to some non-biological creatures. According to the argument, an idealized scientific investigation – based on the INK strategy – would deliver the result that some possible non-biological creatures are conscious, and the outcome of such an ideal application corresponds to what is actually the case. My argument for the former premise rests on the claim that theoretical virtues and pre-theoretical principles support attributing consciousness to psychological duplicates, i.e., non-biological, silicon-based creatures that share the coarse-grained functional organization of humans.
The question of whether a machine – a computer, a robot, or any other form of artificial system – could be sentient is certainly entertaining; no end of science fiction deals with the question, sometimes very engagingly. But why is the question of artificial sentience (or “awareness”, or “consciousness”) raised in science, and why invest public funding in this research? Is conscious AI possible at all, or even desirable?
5 December 2025, 17:30 CET: Mona Simion, “AI: Explainability vs. Trustworthiness” (co-authored with Chris Willard-Kyle)
Here is a very popular view of what rational user trust in AI requires:

The Explanation View of AI Trust: User rational trust in AI requires an explanation of why the AI has reached the conclusion it has.

We think the Explanation View of AI Trust is wrong. It is not true of trust in general that rational trust (even typically) requires understanding why, and it is not the case that AI communication generates any special normative requirement that there be an explanation why that grounds rational trust. This does not mean we think there is nothing to be gained by XAI; we prefer explainability, all else being equal. But understanding how to increase trust in AI (when appropriate) requires the right diagnosis. To understand how to increase trust in AI, we think it is better to focus not on AI explainability but on AI trustworthiness. That is, in this talk I will defend what we call the Simple View:

The Simple View of AI Trust: User rational trust in AI requires AI trustworthiness.
All talks will be held online except the final colloquium talk of the academic year, which will take place in person, either in Uppsala or in Vienna.
To participate, send an email expressing your interest to nikhil.mahant[AT]filosofi.uu.se, and we will add you to the mailing list.