Talks in Academic Year 2024-25:
25 October 2024, 1830 Hrs CET: Charles Rathkopf, “Hallucination, justification, and the role of generative AI in science”
Generative AI models are now being used to create synthetic climate data to improve the accuracy of climate models, and to construct virtual molecules which can then be synthesized for medical applications. But generative AI models are also notorious for their disposition to “hallucinate.” A recent Nature editorial defines hallucination as a process in which a generative model “makes up incorrect answers” (Jones, 2024). This raises an obvious puzzle. If generative models are prone to fabricating incorrect answers, how can they be used responsibly? In this talk I provide an analysis of the phenomenon of hallucination, paying special attention to diffusion models trained on scientific data (rather than transformers trained on natural language). The goal of the paper is to work out how generative AI can be made compatible with reliabilist epistemology. I draw a distinction between parameter-space and feature-space deviations from the training data, and argue that hallucination is a subset of the latter. This allows us to recognize a class of cases in which the threat of hallucination simply does not arise. Among the remaining cases, I draw an additional distinction between deviations that are discoverable by algorithmic means and those that are not. I then argue that if a deviation is discoverable by algorithmic means, reliability is not threatened, and that if the deviation is not so discoverable, then the generative model that produced it will be relevantly similar to other discovery procedures and can therefore be accommodated within the reliabilist framework.
15 November 2024, 1830 Hrs CET: Mihaela Constantinescu, “Generative AI avatars and responsibility gaps”
In this talk I address the extent to which digital and robotic generative AI avatars that represent individual persons complicate the responsibility gaps opened by increasingly autonomous AI systems. I argue that using GenAI avatars requires us to give up some degree of agency in terms of control and knowledge, which are precisely the two main criteria widely used to ascribe moral responsibility. The use of digital and physical GenAI avatars therefore opens new responsibility gaps, which concern the exact nature of the relationship between human users and their avatars powered by generative AI, and which can aptly be called “proximity gaps”.
13 December 2024, 1830 Hrs CET: Fabio Tollon and Ann-Katrien Oimann, “Responsibility Gaps and Technology”
Recent work in philosophy of technology has come to bear on the question of responsibility gaps. Some authors argue that the increase in the autonomous capabilities of decision-making systems makes it impossible to properly attribute responsibility for AI-based outcomes. In this article we argue that one important, and often neglected, feature of recent debates on responsibility gaps is how this debate maps onto old debates in responsibility theory. More specifically, we suggest that one of the key questions still at issue is the significance of the reactive attitudes, and how these ought to feature in our theorizing about responsibility. We therefore provide a new descriptive categorization of different perspectives with respect to responsibility gaps. Such reflection can provide analytical clarity about what is at stake between the various interlocutors in this debate. The main upshot of our account is the articulation of a way to frame this ‘new’ debate by drawing on the rich intellectual history of ‘old’ concepts. By regarding the question of responsibility gaps as being concerned with questions of metaphysical priority, we see that the problem of these gaps lies not in any advanced technology, but rather in how we think about responsibility.
17 January 2025, 1830 Hrs CET: Patrick Butlin, “AI Assertion in 2025”
While LLMs’ capacity for semantic understanding has been widely debated, less attention has been paid to whether they or other AI systems can perform speech acts. The speech act of assertion involves not only producing outputs with descriptive functions, but also making substantive commitments to the aptness (perhaps truth) of these outputs, according to the norms of assertion. This entails that only entities that can be sanctioned for breaching these norms can make assertions. In ‘AI Assertion’ (with Emanuel Viebahn), we argued that this means that current AI systems cannot assert. In this talk, I will present our arguments and briefly consider whether anything has changed since we wrote the paper.
Link to ‘AI Assertion’: https://osf.io/preprints/osf/pfjzu
21 February 2025, 1730 Hrs CET: Giovanni Sileno, “The case for Normware”
With the digitalization of society, debates and research efforts relating computational systems to regulation have increased considerably. Yet most arguments and solutions refer to established computational/formal frameworks, rather than targeting more fundamental mechanisms. Aiming to go beyond this conceptual limitation, I will elaborate on taking “normware” as an explicit additional stance — complementary to software and hardware — for the interpretation and design of artificial devices, highlighting the opportunities of normware-centred engineering, as well as the problems it brings to the foreground.
21 March 2025, 1730 Hrs CET: Peter Königs, “Negativity bias in AI ethics”
Flipping through the major journals in the ethics of technology, one gets the impression that the rise of AI is an ethical catastrophe. The big debates in AI ethics almost invariably revolve around problems, while the positive aspects of AI are rarely talked about. Among ethicists, there is a ‘rising tide of panic about robots and AI’ (John Danaher), with AI-optimists generally hailing from outside philosophy.
In my presentation, I challenge the pessimistic sentiment within AI ethics by suggesting that it stems from a problematic negativity bias within the discipline. The problem, in a nutshell, is that AI ethicists have little choice but to come up with ethical concerns if they want to have a career. The incentives faced by AI ethicists must be assumed to lead to a systematic exaggeration of ethical problems with AI.
If this is correct, one takeaway is that AI is probably not as ethically problematic as the AI ethics community makes it out to be. We possess incriminating higher-order evidence regarding the community’s ability to correctly estimate how problematic AI is. It provides us with reason to assume that AI ethicists are ‘over-diagnosing’ ethical problems with AI, which entitles us to more positivity. Another lesson is that we should consider tweaking the incentives within the system to correct this dysfunction.
25 April 2025, 1730 Hrs CET: Tom McClelland, “Consciousness, Comprehension and Creativity in AI”
Is AI capable of creativity? This question is bound up with other challenging questions about the capacities of artificial systems. Human creativity typically involves some conscious experience of the creative project and some comprehension of the domain in which one is being creative. But are consciousness and comprehension necessary conditions of creativity? And, if so, what are the prospects of AI satisfying those conditions? I explore the role of consciousness and comprehension in the three stages of creativity – preparation, incubation and evaluation – and consider the challenges of attributing consciousness and comprehension to AI. I argue that although consciousness is not necessary for evaluation as such, it is plausibly necessary for certain kinds of evaluation. Doubts about artificial consciousness then entail doubts about certain kinds of artificial creativity.
30 May 2025, 1730 Hrs CET: Julian Hauser, “AI am I: Personal assistants and the self”
The integration of AI personal assistants into our daily lives promises to radically transform how we experience and represent ourselves. While technology’s ability to extend human agency has been widely discussed, AI assistants introduce a novel phenomenon: they can be simultaneously experienced as part of the self and as an other with whom we converse. Through an analysis of a near-future scenario involving an AI personal assistant, I show how these technologies can become transparently integrated into our perception and action, contribute to self-knowledge, and help us shape ourselves into who we want to be. At each stage, we encounter a peculiar duality: the AI assistant functions both as equipment that disappears from conscious awareness (becoming part of the pre-reflective sense of self) and as an interlocutor who provides an ‘insider’s outsider perspective’ on who we are. Rather than seeing this as undermining selfhood, I argue that this novel form of self-relation — which I call the ‘self-as-other’ — may enhance our ability to know and shape ourselves. The paper thus contributes to debates about extended cognition and the impact of technology on human selfhood by identifying a novel way in which technology may transform self-experience: not through radical enhancement or replacement, but through the introduction of an other that is simultaneously experienced as self.
11 July 2025, 1730 Hrs CET: Steven S. Gouveia, “Abductive Medical AI: a Solution to the Trust Gap?”
The application of AI in Medicine (AIM) is producing health practices that are more reliable, accurate, and efficient than Traditional Medicine (TM) by assisting with part or all of medical decision-making, for instance through deep learning for diagnostic imaging, the design of treatment plans, or preliminary diagnosis. Yet most of these AI systems are pure “black boxes”: the practitioner understands the inputs and outputs of the system but has no access to what happens “inside” it and cannot offer an explanation, creating an opaque process that culminates in a Trust Gap at two levels: (a) between patients and medical experts; (b) between the medical expert and the medical process itself. This creates a “black-box medicine”, since the practitioner ought to rely (epistemically) on AI systems that are more accurate, fast, and efficient but are not (epistemically) transparent and do not offer any kind of explanation. In this seminar, we aim to analyze a potential solution to the Trust Gap in AI Medicine. We argue that a specific approach to Explainable AI (xAI) can succeed in reintroducing explanations into the discussion by focusing on how medical reasoning relies on social and abductive explanations, and on how AI can potentially reproduce this kind of abductive reasoning.