DAWID KASPROWICZ
Talking about philosophy means talking about the art of reasoning. How this is done can differ, but the goal is always to find arguments that lead to true statements, or at least to statements with a high probability. Such statements should also be understandable and traceable by one or more agents, thanks to a certain capability that the art of reasoning presupposes. In a metaphysical sense, this might be a divine creativity, the Logos, or reason itself. In an epistemological sense, however, there are rules for relating beliefs, experiences, or theoretical hypotheses in order to infer statements that claim truth or a high probability. In a more practical sense, we can also look for arguments to infer the best possible way to act in a certain situation. We then argue in favor of an answer to the practical question: “What should we do?” The art of reasoning thus differs from rhetoric in that reasoning aims at true statements that stay true, while rhetoric comprises techniques for convincing other agents of an opinion. Plato’s dislike of the Sophists is one of philosophy’s best-known examples of this distinction.

Dawid Kasprowicz
Dawid Kasprowicz is a research assistant at c:o/re, where he coordinates the fellowship program. In his habilitation, he develops a phenomenological approach to the philosophy of the computational sciences, based on the concept of experience and a theoretical intersection of the phenomenology of technology and the philosophy of science. His main research fields include the theory and history of embodiment, phenomenology, human-robot interaction, and the philosophy of computer simulation.
However, today, in the age of AI tools, AI fakes, and AI hallucinations, not only has the task of distinguishing between reasoning and rhetoric become challenging, but it is also hard to know who reasons at all. This is a difficult question, since a human reasoner is an epistemic agent that has beliefs. These beliefs not only form the basis of the reasoning process (e.g., as premises or expectations), but they also represent a systematic unity that is updated during that process. Can an epistemic agent be called a reasoner when it is devoid of beliefs? AI outputs may resemble human reasoning, but can they be about anything if they are not based on beliefs? Is there any reasoning at all if we are not dealing with a subject that has intentionality? These questions were at the center of the one-day workshop Reasoning in the Age of AI at c:o/re on December 16, 2026. The workshop put the question of reasoning agents to the forefront and presented answers ranging from the history of philosophical reasoning to questions of practical reasoning in the case of joint actions.
In his opening contribution, Markus Pantsar (Aachen) discussed the example of “large reasoning model” AI platforms, such as the large language model (LLM) GPT-5. One of GPT-5’s advertised features is the promise to retrace “chains of thought” as part of its capability to reason. Being able to retrace the steps of inferred statements is indeed an important characteristic in the formal understanding of reasoning. However, as Pantsar showed, such chains of thought are just another output generated by the LLM, rather than a revelation of the process behind the original output. The AI system may follow valid rules and make correct inferences, but evidence from the generated outputs alone does not tell us whether it has followed this or that rule. This problem is particularly relevant when assessing AI failures, such as hallucinations and reasoning errors. Such failures are often taken as direct evidence of a lack of reasoning capacities in AI systems, while human failures in reasoning are seen as characteristic of reasoners rather than as disqualifying. To deal with such problems, Pantsar called for identifiable criteria for reasoning that are independent of the physical (e.g., biological vs. non-biological) characteristics of the subject. This is where the “Epistemology of AI,” as Pantsar calls it, should start in the case of reasoning. Pantsar’s suggested criterion was “robustness”: a system’s consistently high performance across a variety of circumstances.

In the second talk, philosopher of technology Jaqueline Bellon (Tübingen) presented a variety of use cases of AI, especially ChatGPT. Similar to Pantsar, she argued that these AI systems give answers in chats that seem adequate but at the same time make us think: “How did the machine reason?” According to Bellon, this first thought turns out to be a trap, especially if we consider the growing number of examples in which answers on OpenAI platforms such as ChatGPT make no sense at all or even become abstruse contributions to the discussion, a phenomenon that has been subsumed under the label of AI hallucinations. While Pantsar highlighted the semantic side of these failures in the outcomes of reasoning, Bellon focused on the technological side in generative neural networks. There, one main challenge for designers of AI algorithms is to model latent space states effectively. Latent spaces help to reduce the complexity and redundancy of data. A system with good latent space algorithms, for instance an LLM, can quickly learn how to omit irrelevant data and even adjust its search for the most relevant data samples to improve its robustness or its predictions. For humans, however, this kind of statistical knowledge is still hard to understand, since we expect semantic relations, which are only one possible outcome of the search for latent space states. Therefore, as Bellon concluded, research on the design of latent space algorithms can help to explain the large semantic gaps in the outputs of LLMs.
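To make the notion of a latent space a bit more concrete, here is a minimal sketch (my illustration, not an example from Bellon’s talk): a linear autoencoder built from principal component analysis, which compresses redundant high-dimensional data into a two-dimensional latent space and reconstructs it almost losslessly.

```python
import numpy as np

# Minimal illustration of a latent space: a linear autoencoder via PCA.
# 200 samples of 10-dimensional data that secretly vary along only two
# directions, i.e., the data are highly redundant.
rng = np.random.default_rng(0)
hidden_factors = rng.normal(size=(200, 2))        # the "true" latent causes
mixing = rng.normal(size=(2, 10))                 # spread them over 10 dims
data = hidden_factors @ mixing + 0.01 * rng.normal(size=(200, 10))

# Encoder: project onto the top two principal directions (the latent space).
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ vt[:2].T                  # 10-D point -> 2-D latent code

def decode(z):
    return z @ vt[:2] + mean                      # 2-D latent code -> 10-D point

# Redundant 10-D data survive the round trip through the 2-D latent space:
reconstruction = decode(encode(data))
print("max reconstruction error:", np.abs(data - reconstruction).max())  # ~ noise level
```

The point of the sketch is only that a latent space discards redundancy while keeping what is needed for reconstruction; the nonlinear latent spaces of generative models work on the same principle at vastly larger scale.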
A perspective from practical reasoning was offered by Jakob Ohlhorst (Aachen) in his talk on “Theory of Mind, (joint) Reasoning, and Large Language Models.” What should we do, asked Ohlhorst, if we share a goal with an AI system but have different attitudes on how to reach it? The theory of joint action covers many scenarios in which a shared goal can be attained. However, we can hardly know whether another agent has the same intentionalities as we do, let alone a technological agent. But, as Ohlhorst showed, different modes of reasoning can also be understood as an argument for joint action, for instance when they complement each other in reaching a common goal. In that case, the black box of machine intentionalities would not represent an obstacle. However, as Ohlhorst noted, what AI systems such as LLMs lack is the experience of a speech act as the embodied performance of a goal-seeking agent. Speech acts can involve facial expressions and gestures, but also a situatedness that helps to improve the mutual adjustment of two reasoning agents, something that can barely happen with written text, as in ChatGPT.
In the afternoon session, the focus shifted towards what could be a history of philosophical reasoning with regard to AI. Pirmin Stekeler-Weithofer (Leipzig) first made clear that reason has to be distinguished from reasoning. While reason names the overall capability to make truth-related statements, reasoning comprises different modes of reflecting on how to arrive at truth-seeking statements. This can happen in a more rational and schematized way, as in the act of calculating or deducing. But reasoning is not exhausted by these schematized modes, as Stekeler-Weithofer emphasized. There is a limit to reasoning about the formalization of knowledge, and to the amount of knowledge that can be formalized (for human agents). Arguing with Hegel, Stekeler-Weithofer pointed out the relationship between the idea of one reality and the domain of our truth-asserting statements: in such statements, reality turns into a “conceptually evaluated possibility.” While this offers one answer to the problem of meta-reasoning posed by Hegel (among others), we still have to accept that, in this kind of reasoning, the human is taken out of the loop, and we deal with concepts that should be valid independently of time and space. These meta-concepts are presupposed, but they are no longer objects of experience.
In the second historical contribution, philosopher of mathematics and phenomenologist Stefania Centrone (Munich) took up Stekeler-Weithofer’s question of the formalization of reasoning by going back to Greek antiquity. There we find a crucial epistemological transition in reasoning: from Plato’s dialectical approach of seeking the best arguments for true statements to the demand for universal rules that guide the process of reasoning, as stated in Aristotle’s famous Topics, where even the way to conduct a dispute is given in a formalized schema. As Centrone argued, in these ancient modes of formalized reasoning, a variation of concepts also resulted in a variation of reasoning, even if the concepts had their explicit definitions, as Plato had already demanded. This changed in the modern age with Hobbes, as Centrone showed, and reached its climax with Leibniz’s idea of a mathesis universalis, a discipline purely based on algebraic operations of reasoning. From there on, one can observe a growing tension between the language of logic and the language of the calculus, culminating in Bertrand Russell’s and Alfred N. Whitehead’s Principia Mathematica, which grounded mathematical operations in a logicist theory. Turing, too, tried to find an answer to this tension between logic and language. In a sense, the Turing machine was the materialization of such an answer, since both the input data and the machine program (today better known as software) are formalized via finite signs that can be operated on indefinitely. But, Centrone pointed out, this represents only one mode of reasoning, one that opens up a calculatory solution space. The question then would be whether all semantic possibilities can be represented within the horizon of mechanically calculable possibilities.
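As an illustrative aside (my sketch, not Centrone’s example): in a Turing machine, the program is itself nothing but a finite table of signs, on a par with the data on the tape, and this is what makes mechanical operationalization possible.

```python
# A minimal Turing machine sketch. The "program" is itself finite data:
# a transition table of signs, just like the tape it operates on.
# Entry: (state, read symbol) -> (symbol to write, head move, next state).
# This toy machine appends one stroke to a unary number (n -> n + 1).
program = {
    ("scan", "1"): ("1", +1, "scan"),   # move right over the existing strokes
    ("scan", "_"): ("1", +1, "halt"),   # write one more stroke, then halt
}

def run(program, tape, state="scan", head=0):
    cells = dict(enumerate(tape))       # sparse tape; "_" denotes a blank cell
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

print(run(program, "111"))  # unary 3 -> "1111" (unary 4)
```

Finitely many signs and rules suffice, yet the machine can be run on inputs of any length; in that sense the finite signs are operationalized without limit.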

In the last talk of the workshop, “What is the Reason Behind Artificial Reasoning?”, Daniel Wenz (Aachen) reminded us that the reason we want to talk about artificial reasoning instead of artificial intelligence is that the latter concept misleadingly links systems of information processing to phenomena and concepts like consciousness and understanding. Nevertheless, to be able to talk about artificial reasoning still presupposes an epistemological criterion that singles out a specific group of such systems. Accordingly, the task is to find general features of systems that have no epistemic qualities in themselves but that actualize our epistemic abilities when we interact with them. This links the search for a minimal condition for calling something a system of artificial reasoning to the task of finding the preconditions that enable us to interact with such systems successfully. At first glance, this seems to imply a kind of “natural” (non-artificial) reasoning comprising the (semantic) abilities to ask questions about something, to reformulate these questions so that they can be processed by a system of artificial reasoning, and to interpret the output of the system in such a way that it can be used to answer the original question. Wenz argued that this picture is (still) based on the flawed idea of a clear-cut difference between natural and artificial reasoning, which ultimately leads either to the notion of an insurmountable gap between the two or to a conflation of both concepts that renders the difference they are supposed to mark meaningless. He discussed the historical origins of this picture and its modern counterparts, and concluded with the introduction of an alternative approach, which he applied in the context of combinatorics-based systems of artificial reasoning.
Taking these contributions together, the workshop showed that a variety of approaches to the modes of reasoning is not only possible but highly desirable. The challenge for current philosophy of technology and science lies in finding more case-driven, conceptual, and historical examples that do more than expose the limits of AI reasoning. There is much more to gain: shedding light on the basic philosophical practice of reasoning itself, which has many more dimensions than delivering the right result for a given question or making appropriate inferences. In doing so, philosophy does not have to compare human with mechanical reasoning all the time, but can focus instead on the imperfections and improvements of reasoning when it comes to the joint action of finding the best solution to a given problem.
