“We can look forward to four more exciting years” – Interview with Gabriele Gramelsberger and Stefan Böschen

The Käte Hamburger Kolleg: Cultures of Research (c:o/re) at RWTH Aachen University will begin its second funding phase in May 2025. The German Federal Ministry of Education and Research (BMBF) will fund the center for another four years. With the start of the second phase, KHK c:o/re directors Gabriele Gramelsberger and Stefan Böschen look back, reflect on the achievements and developments of the past four years, and set out the goals and expectations for the coming years.
Looking back over the past four years: What were the highlights of the first funding phase?
It is already a highlight that c:o/re is the first Käte Hamburger Kolleg at a technical university and will probably remain the only one. It is also the only center for advanced studies in history, philosophy, and sociology of science and technology worldwide. The first funding phase was a development phase. This development has been successfully completed. We got a wonderful location for this Kolleg on Theaterstraße and the best possible team. We have had great fellows in all four cohorts, with whom we have developed exciting intellectual perspectives in very different ways. In addition, we have been able to organize a large number of events, networks, and collaborations both within and outside RWTH. Therefore, there are plenty of highlights to report on from the first funding phase.

What are the lessons learned, especially with regard to the interdisciplinary exchange with the fellows?
Experience shows how demanding this collaboration ultimately is, but also how fruitful. You wouldn’t necessarily expect different branches of the humanities and social sciences to work together with the natural sciences and technology, but they do. We also work intensively with our colleagues from the natural sciences and engineering. That’s why we’ve developed various formats, such as lab talks, to make these collaborations easier. We’ve also developed projects with some fellows that are now being carried out in cooperation with the Human Technology Center (HumTec). The most important lesson learned, however, is certainly that this type of collaboration not only requires more time and a more relaxed attitude, but also that specific opportunities should be created so that the work can lead to joint results. Goal-orientation fuels interdisciplinary cooperation. In the “Software Group” (a working group at the KHK c:o/re – editor’s note), for example, an article was written with many fellows from a wide range of disciplines and published in Nature Computational Science.
What do you enjoy most about working at the KHK c:o/re?
It’s just wonderful that the KHK gives us a platform that allows us such unusual freedom in our research. This has to do with a number of important boundary conditions. First, generous funding from the German Federal Ministry of Education and Research (BMBF) allows us to invite a large number of fellows from all over the world every year to work with us on fundamental questions in science. Second, we have a great team that not only supports the work but also enables us to work together on our research goals. Finally, we receive exceptional support from the Rectorate of RWTH Aachen University, which regards the work of the Kolleg as an important asset for its strategy for excellence.
What are the goals of the second funding phase?
In the second funding phase, we are taking seriously the feedback from last year’s evaluation of our research group. The evaluation went very well and encouraged us to sharpen our research profile further. This will enable us to produce results of even greater relevance and visibility. Against this background, we are pursuing two lines of research, one dealing with the digitization of research (“Varieties of the Digital”) and the other with the cosmopolitization of science (“Varieties of Science”). These lines of inquiry are not only significant in their own right, but also allow us to advance the program of an integrated interdisciplinary methodology of science studies itself, which we bundle under the heading “Expanded STS”. Furthermore, we will link “Expanded STS” to the historical reflection of computing, philosophy of science, and STS. We are working on two book series. One will consist of three volumes dealing with the history, philosophy, and sociology of computing and computational science. The other will consist of two volumes on Expanded STS.
What are you looking forward to?
We can look forward to four more exciting years. We will certainly cultivate even more freedom for individual and joint research than we have done so far. In addition, the Kolleg allows us to further develop and strengthen our international networks related to our research topics. In this way, we hope not only to achieve insightful research results, but also to support the development of a special epistemic culture at our University. This is based on the ideal of tailor-made, integrated, interdisciplinary research practices for understanding science itself, but also for finding more targeted solutions to collective problems.

What challenges do you see in the current research landscape and how does the KHK c:o/re address them?
There are a number of significant changes in the research landscape. These can be described using the triad of transformation, transformation of science, and transformative research. These changes challenge our self-image as researchers, but also the institutionalized self-understanding of science. Although science represents an institutionalized special space for the production of epistemically sound knowledge, it is also increasingly caught up in the maelstrom of contemporary transformations. Making these transformations analyzable in terms of their structure and dynamics is the central concern of our research at the Kolleg.
What is your current research focus and how does it relate to your work at the KHK?
Gabriele Gramelsberger: My research focuses on a long-term narrative of the digitization of science as part of the philosophy of computer science. The history of computer science and computational science on the one hand and current developments towards AI on the other are linked in order to better understand today’s “digitality”. In my view, digitality began long before the invention of the digital computer in the 1940s. Digitality is the result of the operationalization of the mind in modern philosophy in the 18th century. In the 19th century, mathematics took over this project, and in the 20th century, engineering did. With this broader perspective, we can better integrate the humanities and social sciences into the current understanding of the digital, which is dominated by science and technology. Above all, we need to better understand the cultural impact of software, which has become the general infrastructure of research and everyday life. Its cultural impact is based on the fact that programming has introduced a new and very powerful way of using written language that not only describes operations but also executes them. It is, nevertheless, a product of written language, one worthy of being archived as cultural heritage and researched by historians, philosophers, and sociologists of science and technology.
Stefan Böschen: My research focuses on a wide range of issues in the sociology of science and expanded STS (science and technology studies). Of particular importance are the different forms of collaborative research in a variety of settings. These typically relate to a wide range of fields of innovation and transformation (from neuromorphic computing to a DC-driven energy transition). In this context, concepts of research infrastructures (such as living labs) or those of the analysis of innovation and transformation processes (such as innovation ecosystems) can be further developed. This also creates highly productive new interfaces with research at the Kolleg. For example, the form and dynamics of living labs can be examined in their region-specific differences and thus investigated with regard to the differences brought about by varieties of science (cultural-institutional varieties of scientificity).
What is your “culture of research”? How would you describe the way you conduct research?
Gabriele Gramelsberger: My research culture combines epistemic and historical research in order to better understand current developments. The historical dimension encompasses a wide range of interdisciplinary practices and aspects that I am interested in. Therefore, the Kolleg is the perfect place for me to live my research culture.
Stefan Böschen: My research culture can be characterized by the combination of engineering (I am a trained chemical engineer) and sociology. This has not only given me a keen interest in technology assessment and science and technology studies, but also a great enjoyment of interdisciplinary collaboration at the interfaces between very different disciplines. The productive connection between the various disciplines of science studies plays a particularly inspiring role for me.
The interview was conducted by Jana Hambitzer.
Quo vadis, Cultures of Research?

ALIN OLTEANU AND THE C:O/RE TEAM
The Käte Hamburger Kolleg: Cultures of Research (c:o/re) had reason to celebrate, having completed its first four-year funding cycle and successfully entered a second. The center is funded by the German Federal Ministry of Education and Research (BMBF) within its framework program for the humanities and social sciences, “Shaping the Future”. On March 25-27, 2025, we were delighted to get together for a conference devoted to the specific yet encompassing theme of this center, namely Cultures of Research, which, we dare say, has recently become a more prominent academic topic due to the center’s efforts.
Who are we? All of us – c:o/re team members and fellows, both current and alumni, together with the scientific advisory board that has steered the center’s activities. Almost all c:o/re fellows who have carried out research here over the past four years were present. This enabled a fascinating intersectional and inter-paradigmatic academic dialogue, the very kind that constitutes the object of Cultures of Research. In sessions chaired by the c:o/re team, fellows and scientific advisory board members presented their research in approximately 40 talks. It was a most enjoyable opportunity to discuss, in hindsight, what emerged from four years of sustained academic work, having started from scratch, and how we see the center evolving in the future.

Alin Olteanu
Alin Olteanu is an Associate Professor of Semiotics at Shanghai International Studies University. Until July 2024, he worked as a postdoctoral researcher and publications coordinator at the KHK c:o/re.
Many of us, team members and alumni fellows, deem the conference not just useful, but necessary. c:o/re has become an important dimension in the work of several of us, intellectually and institutionally. As such, gathering together is as important as the regular meetings of many themed academic associations. c:o/re has opened new career opportunities and perspectives for several of us. The center was formative and instrumental in the professional development of many, not just fostering the next step on a linear trajectory, such as from postdoc to tenure, but also enabling shifts in research focus, such as from engineering to science and technology studies. A small minority of alumni fellows has even found long-term academic placement at RWTH Aachen University. Even for such colleagues, who never fully left the center, the conference was needed, to reconnect with others. Many remarked that it was particularly interesting to have the chance to be in dialogue with the scientific advisory board in a collective, transparent, and friendly setting.

c:o/re directors Professors Gabriele Gramelsberger and Stefan Böschen started off the conference, welcoming what was a heterogeneous but familiar gathering. They shared their views on the first four years of this center, the main research topics that channel its work, and how these evolved. This ushered in the first keynote, “Historicizing Epistemology” by Hans-Jörg Rheinberger, a fitting way to start off a Cultures of Research conference, setting the frame for further conversation.

The conference was structured thematically in eight panels under three main c:o/re study foci, as follows. To address the theme of Change of research practices, we organized the panels Dealing with Complexity and Digitalization of Science. The theme Organizational transformations in science was addressed through panels on Lifelikeness, “Expanded STS” & Euregio, Freedom of Research, and Art and Research. The Historical and intercultural comparison of varieties of science was organized into the panels Historicizing Science and Varieties of Science. This thematic organization results from a dialectic that is both top-down and bottom-up, following the research center’s rationale and mission as they have been channeled, over time, through the research it produced, one step at a time.

Being part of the c:o/re team, we feel privileged to be in a position to listen to the various studies that have emerged from this research center, observing how they have shaped the center and how its research topics have changed over time. To illustrate, for someone who has been a part of this four-year effort throughout, it was fascinating to listen to dialogues among the scientific advisory board with and across four generations of fellows, who seldom knew one another. This was not just a meeting of individual scholars, but of academic groups that crystallized during their respective fellowships, each having developed its own research subculture. In this exercise, we saw first-hand the importance of institutional academic funding structured in this Käte Hamburger Kolleg format. Until now, we had worked with these scholars individually and in well-focused formats, such as thematically organized fellow cohorts.

Our festive conference opened the doors to intersectional dialogue, releasing the strictly focused, however interdisciplinary, work of individuals and clusters within c:o/re into a productive and creative chaos. As some fellows attest, while at first glance the range of topics brought together under the roof of the center, as seen in this conference, may seem unrelated, they connect very well epistemologically. It is this facilitation of interdisciplinary research that positioned some fellows to discover that the issues they tackle are of interest beyond the disciplinary confines within which they each operate.

We see c:o/re as having enabled new and unexpected quo vadis reflections on Cultures of Research, something we can observe in the topic of “Expanded STS”, a c:o/re coinage that is drawing growing attention as an anticipatory consideration of scientific and technological futures. Indeed, we contend that the conference panel dedicated to Expanded STS demonstrated how much STS is shaped by ‘othering’ and internal demarcation between disciplines (especially the sociology and philosophy of science). At the same time, however, our conversations revealed not only that a multitude of approaches co-exist, dealing with these boundaries differently and more productively, but also that a growing scholarly community is willing to explore new interdisciplinary avenues for cooperation.

We do not want to give the wrong impression that the research carried out at c:o/re is free of contradiction or even controversy – far from it. The conference saw plenty of contradictory arguments and contestations among the speakers, in a way that attests to two matters important for any research institute: (1) this center is a platform for free academic debate, and (2) the approaches it hosts are epistemically compatible (that two positions on a topic are contradictory implies that they are mutually relevant). In fact, the one claim on which we found total agreement is that Freedom of Research is currently one of the most important issues for academia, as well as for society broadly. All fellows, team members, and scientific advisory board members see the urgent need to freely (!) discuss the freedom of scholars in the current context, in which sociotechnical shifts have consequences for the freedom of speech and expression.

Of course, the discussion of what freedom in research is, how it is practiced, and how it should be supported institutionally was fiery, encompassing a broad variety of perspectives. Overall, there is agreement that this is what an exercise in academic freedom looks like: we are free and institutionally enabled to contradict each other. We note that the Cultures of Research conference took place shortly after a new US administration started exercising pressure on scientists and universities. Political pressure on academia will undoubtedly constitute a main concern for c:o/re in its second cycle of funding, shaping its future development, as we hope and anticipate it will shape the future development of philosophical and social inquiry on technology in general.

Unless otherwise noted, photos by Christian van’t Hoen.
The program with all speakers and titles of the conference can be found in this document.
Emotional AI in the Japanese and German Workplace: Exploring Cultural Diversity in AI Ethics as Variety of Science

STEFAN BÖSCHEN, MASAHIKO HARA, PETER MANTELLO AND ALIN OLTEANU
The Käte Hamburger Kolleg: Cultures of Research (c:o/re) is a partner on the project Emotional AI in the Japanese and German Workplace: Exploring Cultural Diversity in AI Ethics, led by Professor Peter Mantello of Ritsumeikan Asia Pacific University (APU) and funded by the Japan Society for the Promotion of Science. The project has been ongoing for about a year, and in January 2025, c:o/re director Stefan Böschen and Alin Olteanu visited Professor Mantello and conducted a first field study in Japan for this project.

As its title indicates, this project examines developments in Emotional AI from a comparative perspective between Japan and Germany. Such research is, of course, conducted in the context of increasingly rapid developments in the field of artificial intelligence (AI). So far, these developments have occurred in waves: phases of great innovative momentum have alternated with phases in which the topic appeared dormant. The recent technological platform of large language models such as ChatGPT suggests that the field is now entering a phase of lasting and disruptive development. The great geopolitical competition between world regions is exemplified by the sudden appearance of China’s DeepSeek. These developments raise questions about how technology develops under specific cultural-institutional framework conditions. With its AI Act, the European Union has issued the strictest, risk-based regulation of AI, striking a balance between protection against technology and industrial development, while the United States and Japan appear, at least at present, reluctant to lay down a concrete regulatory policy, preferring instead a market-led approach.
Our project offers the opportunity to reflect not only on specific challenges for science studies but also, importantly, on the increasingly quantified workplace, yielding insights that can feed into the varieties of science discussions pioneered at c:o/re. The following blog post documents the journey of c:o/re director Stefan Böschen and former team member Alin Olteanu through Japan, together with Principal Investigator Peter Mantello and Co-Investigator Hiroshi Miyashita of Chuo University, with the local assistance of Masahiko Hara, an alumni fellow of c:o/re who graciously gave his time to accompany us from Tokyo to Beppu. It outlines some of the places we visited and people we met along the way, as well as insights we gained on this whirlwind one-week journey.
Tokyo, Monday, January 13th – AI Workshop

On the 13th of January, a workshop on the Future of AI was held at Ritsumeikan University’s Tokyo Campus at Sapia Tower. Bringing together stakeholders from the private sector, academia, and non-governmental organizations, the workshop explored various risks and opportunities of AI development in Japan. Speakers included researcher Nicole Müller from the German Institute for Japanese Studies (DIJ), who spoke about the implications of Emotional AI and Extended Reality; Imam Habib, managing director of Menlo Park, a venture capital firm, who described the regulatory challenges facing AI start-ups in the field of healthcare; Professor Hiroshi Miyashita, speaking as a data privacy expert, who examined emerging legal issues surrounding the nascent but rapidly growing field of neurotechnology; and Dennis Tesolat, a spokesman for General Union, Japan’s largest labor advocacy group, who spoke about the increasing employer-employee conflicts arising as a growing number of Japanese companies adopt AI management systems.

Tokyo, Tuesday, January 14th – German Institute for Japanese Studies and TeamLab Planets
On the morning of the 14th of January, we visited the German Institute for Japanese Studies (DIJ) and held discussions with its Deputy Director, sociologist Barbara Holthus, exploring various avenues for short- and longer-term collaboration between the DIJ, c:o/re, and Japanese universities. Having identified various research avenues of common interest, we agreed to meet regularly in the future. The trio of DIJ, c:o/re, and APU brings a set of complementary competences to bear on the comparative study of research cultures and their technological evolution.

Later that day, before catching Japan’s famous Shinkansen high-speed train to our next destination, Kyoto, we took the opportunity of a mid-day hiatus to visit TeamLab Planets, an interactive museum providing customized AI-powered art experiences. Utilizing state-of-the-art technology, TeamLab Planets offers visitors seven different types of multi-sensory, fully immersive artistic environments. Not only did we all find this immersive experience very relaxing, we also noted how inspiring and motivating it can be, especially for science and technology scholars.
Kyoto, Wednesday, January 15th – Kyoto University’s Disaster Prevention Research Institute and School of Informatics

On the 15th of January, we ventured to Kyoto University to meet researchers at the Disaster Prevention Research Institute (DPRI). Here we learned from various faculty members of the institute about the latest research developments in natural hazard reduction and integrated strategies utilizing state-of-the-art modeling software for disaster loss reduction. We were certainly surprised at the institute’s multidisciplinary embrace of artistic interventions: a presentation by Naoko Tosa, a resident artist at DPRI, described her fascinating and thought-provoking clothing fabric designs, which weave in digital sensors and actuators activated by cellular emergency warnings. Afterwards, we were invited to visit her studio, where we had a chance to take a closer look at the technical and conceptual aspects of her creative process in designing ‘disaster couture’.

The day concluded with a visit to the Department of Informatics at Kyoto University, where we met Professor Jawad Haqbeen, an Afghan AI researcher whose work focuses on the use of generative AI for language acquisition in developing countries such as Afghanistan and Nepal. We discussed plans for future collaboration, particularly on matters of education, language learning, and technological literacies.
Osaka, Thursday, January 16th – NTT’s Brain Lab at Osaka University

On Thursday, we headed to Osaka to visit one of Japan’s leading research centers for brain science, NTT’s Human Information Science Laboratory (HISF) at Osaka University. Employing a multidisciplinary approach to neuroscience, HISF brings together some of the nation’s top scientists from the fields of information science, psychology, and neuroscience. Utilizing cutting-edge technologies, HISF researchers study the mechanisms underlying human perception, cognition, emotion, and movement, with a current focus on understanding how environmental and social information is processed in the human body and brain. These findings are expected to serve as the basis for future information technologies that are conceptually new and user-friendly. During the presentations, we learned about technology development in a surprising way, especially from the “Yuragi Group”. This group develops AI according to the distinctive cultural principle of yuragi (fluctuation), which not only enables new technical options but also allows specific value judgments to be realized, notably regarding the transparency of AI. The group has arrived at a unique form of AI programming that differs from deep learning approaches in relevant parameters. In this way, an element of transparency is built into the AI, as the map of affordances (the AI’s cognitive system, so to speak; our formulation) can be transferred to each successive system. This approach relies on a cultural repertoire and the formation of epistemic heuristics.
Beppu, Friday, January 17th – Ritsumeikan Asia Pacific University
On Friday the 17th, the team along with Professor Masahiko Hara (Institute of Science, Tokyo) gave talks at Professor Mantello’s home institution, Ritsumeikan Asia Pacific University in Beppu, located on the east coast of Kyushu Island in the south of Japan. Targeting a primarily undergraduate audience, Stefan Böschen gave an informative lecture on the importance of science and technology studies. Alin Olteanu engaged students with a talk on the semiotics of Digital Nomadism and Masahiko Hara on his experimental artistic interventions into intelligent interfaces that read human emotions.
Outlook
As liberal democracies, Japan and Germany share fundamental values such as freedom, democracy, and the rule of law. They also agree that accountability, transparency, human rights, and privacy should be built into AI. Yet where the EU/Germany wants a top-down, government-led approach to mitigate AI harms, Japan (at least at the moment) prefers an industry- and sector-led approach to give the technology a good chance to grow. At the government level, the annual Japan/Germany ICT Policy dialog forum promotes the need for common rules on AI. Importantly, Japanese labor law is influenced by the German legal context. But while German law on AI in the workplace is becoming increasingly precise and restrictive, current Japanese law is vague and ambiguous. Thus, it is important to refer to the development of both EU and German law when comparing Emotional AI in the workplace. As Co-Investigator Hiroshi Miyashita argues, “Japan is well-known for importing foreign laws, but a patchwork of copy/paste does not work well in Japan”.
At the same time, our collaborative efforts to date suggest that Japan could offer a third way by heuristically exploring a space of AI development that seeks to create harmonious human-machine relationships, with a focus on AI that preserves human dignity. Keeping this in mind, the journey goes on, and we look forward to further opportunities to collaborate with Professor Mantello’s project, as well as with the DIJ and other stakeholders from both the private and public sectors. It is interesting to note that the Japanese government is fueling the evolutionary dynamic in this field by creating far-reaching exchange opportunities for incoming researchers.
Photo Credits: Peter Mantello
“[T]here really isn’t a clear distinction between the analog and the digital” – Interview with Lori Emerson

As part of the interdisciplinary workshop “After Networks: Reframing Scale, Reimagining Connections”, which will take place at the SuperC of RWTH Aachen University on April 16 and 17, 2025, media scholar Lori Emerson will come to Aachen and give a keynote speech about her new book “Other Networks: A Radical Technology Sourcebook” (Anthology Editions, 2025). We asked her a few questions in advance to get a better understanding of how she thinks about and works with networks.

Lori Emerson
Lori Emerson is an Associate Professor of Media Studies and Associate Chair of Graduate Studies at the University of Colorado Boulder. She is also the Founding Director of the Media Archaeology Lab. Find out more on her website.
In your new book Other Networks: A Radical Technology Sourcebook, you present networks that existed before or outside of the internet, digital as well as analog. What would you say do all of these different models of networks have in common?
Many of the networks in Other Networks began as small experiments by a few individuals that didn’t necessarily have aspirations to make sure these networks had a global reach, or that these networks could not be replicated by other individuals, or that they would overtake every other kind of communication at a distance. More, because of their relative simplicity, most of these networks can be recreated today for small groups of people. My hope is that, along with my colleague Dr. libi striegl who is the Managing Director of the Media Archaeology Lab (the lab I direct and that has supported a lot of the research behind Other Networks), we will continue to create “recipes” for building small “other networks.” We have already published a small pamphlet called Build Your Own Mini FM Transmitter that very clearly and carefully walks people who have no background at all in electronics through the process of making what’s basically a micro-broadcasting station.

Do you have a favorite network example?

I am fond of all the networks in Other Networks! But one particular network I like to talk about is an example of an imaginary network: the pasilalinic-sympathetic compass, also referred to as the ‘snail telegraph.’ This network was created by French occultist Jacques-Toussaint Benoît to demonstrate that snails are capable of instantaneously and wirelessly transmitting messages to each other across any distance. Benoît’s theory was that, in the course of mating, snails exchange so-called “sympathetic fluids”, creating a lifelong telepathic bond that enables them to communicate with each other. He believed he could induce snails to transmit messages faster and more reliably than the wired telegraph: by placing a snail on top of a letter and prodding it with an electric charge, the snail would transmit the letter to another snail placed at some distance. The pasilalinic-sympathetic compass itself consisted of twenty-four wooden structures, each containing a zinc bowl, cloth soaked in copper sulphate, and a snail glued to the bottom of the bowl. Benoît unsuccessfully demonstrated the snail telegraph to Jules Allix, a journalist from La Presse, in October 1850.
What is the materiality of a network? In your new book, you separated them into four categories: Wireless, Wired, Hybrid and Imaginary. How was the process of organizing the networks into these different categories? Do you think it helps to visualize how a network can be materialized?
Creating a taxonomy for organizing these other networks was the most challenging part of writing Other Networks and, like any taxonomy, the system I settled on is still far from perfect. It took me many months to come up with a system for organizing networks according to their underlying infrastructure that didn’t simply replicate historiographic conventions of accounting for technological inventions chronologically or by inventor. In other words, I wanted to underscore that networks emerge, disappear, and re-emerge slightly reconfigured or recombined over and over again; they are also rarely the work of a single person. More, accounting for networks in terms of chronology or “inventor” usually distracts us from seeing networks as material. Today, despite all the excellent work that has been done to reveal the material underpinnings of the internet (from its undersea cables to cable landing stations, etc.), the vast majority of people still don’t know where the internet is, how it works, or where it came from. It might as well be immaterial! By contrast, I wanted to make it clear in Other Networks how, for example, there’s radio in the internet; that radio is part of the electromagnetic spectrum; that, even though we can’t see it, the electromagnetic spectrum is a ubiquitous natural resource; and that we as individuals and as communities can learn how to access this natural resource. Perhaps it’s old fashioned to say so, but I still believe that understanding the materiality of networks and how they work empowers us to build our own networks.

Where do you see the future of networks? In the digital or analog space?
One thing that became clear to me in the course of doing research for Other Networks is that there really isn’t a clear distinction between the analog and the digital like I was taught in graduate school. Telegraph communications that use, for example, Morse code and that are transmitted over telegraph or telephone wires are digital in the sense that they are pulses of electricity, in much the same way that digital computers use pulses of electricity to indicate 1s and 0s. In this sense, I think the future of networks is less about whether they’re analog or digital and more about whether they are built for small, local communities; whether they are cooperatively owned rather than corporate-owned; whether they can be maintained over the long run without resorting to blackboxing; and, finally, whether they have built-in structures to resist surveillance, tracking, and monetization.
Thank you so much for the interview, Lori!
More information about the workshop, the program and registration can be found on this website.
Header photo: Jenna Maurice
Towards a Philosophy of Digitality: Gabriele Gramelsberger was awarded the K. Jon Barwise Prize

DAWID KASPROWICZ
On Thursday, January 9, 2025, KHK c:o/re director Gabriele Gramelsberger gave a lecture at the 121st annual meeting of the American Philosophical Association (APA), Eastern Division, in New York. Her lecture, titled “Philosophy of Digitality: The Origin of the Digital in Modern Philosophy,” was given on the occasion of the K. Jon Barwise Prize, awarded to her by the APA in 2023 for her significant and sustained contributions to philosophy and computing.

Photo Credits: American Philosophical Association
Robin Hill, a computer scientist from the University of Wyoming and a longtime member of the APA, introduced Prof. Gramelsberger and chaired the session. Named after the American mathematician and philosopher K. Jon Barwise, the prize has honored scholars since 2002 for their lifelong efforts in the disciplines of philosophy and computing, especially in the fields of artificial intelligence and computer ethics. Alongside Prof. Gramelsberger, who received the prize for 2023, the Israeli philosopher Oron Shagrir from the Hebrew University of Jerusalem received the Barwise Prize for 2024. Former winners of the prize include well-known philosophers such as Daniel Dennett, David Chalmers, and Jack Copeland. Gabriele Gramelsberger is the third woman to win this award.

Prof. Gramelsberger’s lecture consisted of two parts: in the first, she introduced her conception of a philosophy of digitality rooted in the modern age; in the second, she highlighted some current challenges for philosophers in describing digitality as a socio-cultural phenomenon. It is not common in philosophy to relate the digital to thinkers of the modern age. Accordingly, Prof. Gramelsberger began her talk with a schema of how a prehistory of the digital could be written – a history that does not start with machines and technological objects, but with a reinterpretation of writings such as René Descartes’ Discours de la méthode from 1637. In this classical book, Descartes did not only introduce a procedure for separating right from wrong in scientific judgment. According to Prof. Gramelsberger, he was also one of the first to systematically describe thinking as a cognitive process, a process that can be divided into several steps that build on each other. Instead of only considering the correct inference from premises (as in syllogistic reasoning), Descartes also conceived of thinking as a series of discrete steps that one has to execute appropriately in order to split a bigger problem into several smaller ones. It is this discrete and procedural way of describing thinking, Prof. Gramelsberger argued, that we also find in the papers of the AI pioneers Allen Newell and Herbert Simon and their General Problem Solver.

While Descartes introduced the first discretization of cognitive processes, Leibniz went further and described cognitive operations with a symbolic system. This artificial language, consisting of arithmetic, algebra, and logic, was meant to establish the adequation between object and concept, between the relations of objects and judgments about them. In this sense, Leibniz not only introduced a symbolic order in which possible experiences of the real world could be formulated; he was also able to replace a qualitative, substance-oriented description of being with a formal and quantitative one. This equivalence of being with formal calculus allowed him to extend the conditions of possible experience into the transcendence of mathematical operations. From here, Prof. Gramelsberger argued, it is only a short step to rule-based cognitive operations that can also be externalized – and this is exactly what pioneers of digital computers such as Charles Babbage did in the 19th century (see also Gramelsberger 2023, pp. 40-44).


Today, such mechanized operations are executed billions of times within seconds. Considering, as Prof. Gramelsberger highlighted, that there are more than five billion smartphones in the world, a philosophy of digitality also has to respond to digital cultures and their objects as an everyday experience of most people. In this regard, Prof. Gramelsberger presented a more critical and phenomenological approach in the second half of her talk. The operation of digital machines beneath our “phenomenological threshold” represents, on the one hand, a challenge for a philosophy of digitality, but on the other hand also a risk for the wellbeing of users. Referring to the German concept of “cultural techniques” (Kulturtechniken) (Krämer and Bredekamp 2013), Prof. Gramelsberger illustrated that in cultural techniques such as writing, one always operates with discretized symbols – whether alphabetic or arithmetic. The fundamental difference with digital machines lies in the affective mode by which they address us, as the Barwise awardee explained. Most often, the goal of social media communication is to raise emotions, but the resources for doing so are affects triggered beneath our threshold of intentional attention. At the end of her talk, Prof. Gramelsberger pointed sharply to a threatening constellation in which the human being has lost the ability to be “eccentric,” as the German philosopher Helmuth Plessner called it. Instead, in the age of an affective smartphone culture and massive data storage (often owned by private companies), the human being becomes centric again, staying in one place to work through a myriad of affectively loaded communications that keep them in a loop of creating ever more data.

In his response to Prof. Gramelsberger’s talk, Zed Adams from the New School for Social Research in New York extracted three leading questions: the relation of the analog and the digital, the status of the copy in the age of the digital, and the challenge of describing the affective regime of our current smartphone culture. Adams’ invitations to dig deeper into the challenges of a “Philosophy of Digitality” were taken up vividly by the audience. The distinction between affect and emotion in particular evoked discussion, as did the challenge of describing the cultural impact of technologies such as AI with philosophical tools. A first answer was to find ways of describing the less complex yet emotionally overwhelming modes of address we can observe in the use of social media apps. This could be a first step toward better understanding how machines in the age of AI recentralize us as human beings – or decentralize us as the contingent result of data management.
Gabriele Gramelsberger. 2023. Philosophie des Digitalen. Zur Einführung. Junius: Hamburg.
Sybille Krämer and Horst Bredekamp. 2013. Culture, Technology, Cultural Techniques – Moving Beyond Text. In: Theory, Culture & Society 30(6): 20-29. DOI: 10.1177/0263276413496287
European Dialogue: Freedom of Research and the Future of Europe in Times of Uncertainty

JANA HAMBITZER
During a day-long symposium, part of the Freedom of Research: A European Summit – Science in Times of Uncertainty, speakers and panelists explored various aspects of freedom of research and the future of Europe in the context of ongoing global crises and conflicts.
“We should not think that freedom is self-evident. Freedom is at danger in every moment, and it is fragile”. With these cautionary words, Prof. Dr Thomas Prefi, Chairman of the Charlemagne Prize Foundation, welcomed the participants of the symposium on freedom of research, which took place at the forum M in the city center of Aachen on November 5, 2024.

As part of the Freedom of Research: A European Summit – Research in Times of Uncertainty, the Foundation of the International Charlemagne Prize of Aachen, the Knowledge Hub and the Käte Hamburger Kolleg: Cultures of Research (c:o/re) of RWTH Aachen University jointly provided an interdisciplinary platform to discuss the crucial role of freedom in scientific, social and political contexts concerning the future of Europe with researchers, policymakers, business representatives and the public.
The aim was to critically explore different forms and practices of implementing freedom of research in line with European principles and in support of democratic governance and societal benefits. The thematic focus of the symposium was on dealing with the numerous complex crises of our time – from military conflicts to right-wing populism – as well as addressing challenges associated with new technologies such as AI and the metaverse.
Humanity and Collaboration in the Age of Emerging Technologies
The strategic importance of freedom in fostering innovation and maintaining democratic values in a globally competitive landscape was emphasized by Wibke Reincke, Senior Director and Head of Public Policy at Novo Nordisk, and Dr Jakob Greiner, Vice President of European Affairs at Deutsche Telekom AG. From an industry perspective, both speakers underscored the need for open societies that invest in innovation to ensure the continuity and growth of democratic principles.
The emergence of the metaverse and other cutting-edge technologies were discussed by Jennifer Baker, Reporter and EU Tech Influencer 2019, Elena Bascone, Charlemagne Prize Fellow 2023/24, Nadina Iacob, Digital Economy Consultant at the World Bank, and Rebekka Weiß, LL.M., Head of Regulatory Policy, Senior Manager Government Affairs, Microsoft Germany. The panelists pointed out the essential role of human-centered approaches and international collaboration in addressing the ethical and societal challenges associated with new technologies, and in shaping the metaverse according to European ideals.

The inherent tension between technological progress and the preservation of research freedom was highlighted by Prof. Dr Gabriele Gramelsberger, Director of the Käte Hamburger Kolleg: Cultures of Research (c:o/re), who raised the question of how AI is changing research. Prof. Dr Holger Hoos, computer scientist at RWTH Aachen University and a leading researcher in Machine Learning, stated that publicly funded academic institutions must remain free from any influence of money and market pressure to foster cutting-edge research motivated solely by intellectual curiosity. Prof. Dr Benjamin Paaßen, Junior Professor for Knowledge Representation and Machine Learning at Bielefeld University, further argued that AI in research and education should only be used as a tool to complement human capabilities, rather than replace them.

Conflicts over Academic Freedom and the Role of Universities
The de facto implementation of academic freedom worldwide was presented by Dr Lars Lott from the research project Academic Freedom Index at the Friedrich-Alexander-University Erlangen-Nuremberg. In a 50-year comparison, from 1973 to 2023, he illustrated a significant improvement of academic freedom in countries worldwide. However, looking from an individual perspective, the opposite is true: almost half of the world’s population lives in countries where academic freedom is severely restricted due to the rise of populist and authoritarian regimes.
Dr Dominik Brenner from the Central European University in Vienna reported firsthand on the forced relocation of the Central European University (CEU) from Budapest to Vienna and noted that such restrictions of academic freedom are an integral part of illiberal policies. Dr Ece Cihan Ertem from the University of Vienna provided another example of increasing authoritarianism in academic institutions by discussing the government's suppression of academic freedom at Turkey’s Bogazici University. Prof. Dr Carsten Reinhardt from Bielefeld University warned of modern efforts in our societies to restrict academic freedom through fake news and alternative facts. From a historical perspective, these are fundamental attacks on the basis of truth-finding, comparable to developments during the Nazi regime in Germany.
Another pressing issue, the precariousness of academic employment in Germany, was highlighted by Dr Kristin Eichhorn from the University of Stuttgart and co-founder of the #IchBinHanna initiative, protesting against academic labor reforms that disadvantage early and mid-career researchers. She pointed out that the majority of faculty work on fixed-term contracts, which significantly restricts researchers’ ability to exercise their fundamental right to academic freedom due to tendencies to suppress both structural and intellectual criticism.

How to deal with these challenges? Prof. Dr Stefan Böschen, Director of the Käte Hamburger Kolleg: Cultures of Research (c:o/re), stressed that political assumptions and politically motivated conflicts can make academic discourse more difficult. However, it is important to foster dialogue once a common basis for discussion has been established. Frank Albrecht from the Alexander von Humboldt Foundation advocated for greater efforts in science diplomacy and the vital role of academic institutions in international relations. Miranda Loli from the Robert Schuman Center for Advanced Studies, the European University Institute in Florence, and Charlemagne Prize Fellow 2023/24, emphasized the need for universities to act as reflexive communities that engage critically with the processes that shape academic freedom while recognizing their potential as informal diplomatic actors.

Research as a Basis for European Conflict Resolution
The intersection of academic freedom and conflict resolution was explored in a discussion between Dr Sven Koopmans, EU Special Representative for the Middle East Peace Process, and Drs René van der Linden, former President of the Parliamentary Assembly of the Council of Europe and Dutch diplomat, moderated by Dr Mayssoun Zein Al Din, Managing Director of the North Rhine-Westphalian Academy for International Politics in Bonn. They argued that research is essential for understanding and resolving global conflicts and emphasized the role of the EU as a key player in international peace efforts. The two discussed the challenges of assessing conflicts from a European perspective, particularly the differing opinions of member states, and highlighted the EU’s economic power as a crucial factor in international peace efforts. Dr Koopmans emphasized the importance of an optimistic outlook, stating: “Let’s work on the basis – that there is a peace that we may one day achieve. It maybe sounds very difficult […], but you know: Defeat is not a strategy for success.”

The symposium underlined the critical importance of protecting freedom in research, science, and diplomacy. The discussions made clear that academic freedom is neither given nor a permanent state; rather, it requires continuous vigilance and proactive efforts to preserve. The collective message from the symposium reinforced that science in times of uncertainty can be navigated through regulation and governance for innovation, a strong European and international academic community, and independent universities as safe places to ensure the future of a democratic, secure and progressive Europe.
Photo Credits: Christian van’t Hoen
The Freedom We Stand For

RWTH KNOWLEDGE HUB
RWTH’s Freedom Late Night event brought a vibrant mix of guests to the Ludwig Forum, offering talks, discussions, performances, and entertainment that celebrated diverse perspectives on freedom.

“Why not cook a pot of soup and share it with your neighbors?” Publicist Marina Weisband’s suggestion at RWTH’s second Late Night event was one of the many unconventional ideas presented to bridge divides within society.
On Monday evening at the Ludwig Forum für Internationale Kunst, RWTH hosted a dynamic, entertaining, and insightful program on the theme of freedom. Moderated by journalist Claudia Kleinert and poetry slammer Luca Swieter, the event featured guests from culture, politics, sports, and academia, including Marina Weisband, actress Luise Befort, podcaster Dr. Ulf Buermeyer, former national soccer player Andreas Beck, and Borussia Mönchengladbach’s chief data analyst, Johannes Riegger.
Discussions across three stages explored freedom from sporting, cultural, scientific, philosophical, political, and social perspectives. Musical and artistic highlights included a specially choreographed performance by the dance ensemble Maureen Reeor & Company, the lively Popchorn pop choir, and the RWTH Big Band.
Throughout the evening, the unique setting of the Ludwig Forum underscored the importance of unity and the need to avoid societal divides. As Weisband noted, “With a bowl of soup in hand, engage with your neighbors to confront populist narratives together. Take the liberty to try something a bit daring now and then.”

The complexities of today’s reality were echoed by Dr. Domenica Dreyer-Plum from RWTH’s Institute of Political Science, who observed that while many people are frustrated with the current political and social climate and are tempted to protest or support extremist parties, “the AfD only seemingly has an answer to the big questions.”
For the academic guests, discussions naturally turned to freedom in research. Professor Verena Nitsch, head of RWTH’s Institute of Industrial Engineering and Ergonomics and chair of the University’s Ethics Commission, emphasized that the Commission’s role is not to restrict research, “but to train researchers to anticipate risks”.

“We live in times where technology is powerful, but wisdom is lacking,” added Professor Stefan Böschen, spokesperson for RWTH’s Human Technology Center and co-director of the “Cultures of Research” Käte Hamburger Center, highlighting the ethical challenges posed by AI and advanced technology.
Former judge and podcaster Dr. Ulf Buermeyer offered a practical take on restoring trust in politics: “We need substantial investment in railways and infrastructure like bridges. People need to see and feel that progress is happening. We can’t just talk our way out of this crisis.”
For actress Luise Befort (Club der roten Bänder, Der Palast), freedom is something many take for granted: “I am allowed to work in my profession – unlike so many women around the world.” Befort sees this as a profound privilege she does not take lightly.
Professional footballers, however, face a more limited kind of freedom. Johannes Riegger, chief data analyst at Bundesliga club Borussia Mönchengladbach, and former national player Andreas Beck (VfB Stuttgart, Besiktas Istanbul) shared anecdotes about the intense monitoring they undergo. Beck described how their movements on the field are tracked with advanced technology, making performance data highly transparent. Yet, according to Riegger, the level of surveillance is even greater in the United States, where athletes in major leagues are subjected to round-the-clock monitoring. By comparison, the monitoring in Germany is seen as manageable and part of the job.

A diverse lineup of speakers shared their insights on freedom and technology. Among them, Luise Befort; queer artist Lukas Moll, who warned that “technology can discriminate, and algorithms can reinforce stereotypes”; Frank Albrecht of the Humboldt Foundation, who reflected on “the privilege of living in a country like Germany, where academic freedom is highly valued”; screenwriter Jana Forkel, who said, “When it comes to creative work like screenwriting, AI poses no threat yet – this is where human input remains essential”; Volucap CEO Sven Bliedung von der Heide, who noted, “At Volucap, we’re pioneering new possibilities in film production, though our goal isn’t to replace actors entirely”; and author Betül Hisim, who observed, “AI can be a source of inspiration but is far from replacing the essence of what makes us human.”
The RWTH Late Night event was organized by the RWTH Knowledge Hub as part of the Freedom of Research Summit, a collaboration between the Stiftung Internationaler Karlspreis zu Aachen, the Knowledge Hub, and the Cultures of Research Käte Hamburger Center.
The RWTH Knowledge Hub is a vital instrument for transferring knowledge to society. “Knowledge isn’t only created at RWTH; it’s essential that we also share it with society – as we are doing tonight with the Late Night,” said Professor Matthias Wessling, Vice-Rector for Research Transfer at RWTH.

Despite their diverse perspectives, all the speakers agreed on one message: that freedom and democratic values require active effort. To quote Goethe: “This is the highest wisdom that I own; freedom and life are earned by those alone who conquer them each day anew.”
Photo Credits: Christian van’t Hoen
After Memory: Recalling and Foretelling across Time, Space, and Networks

NATHALIA LAVIGNE
AFTER MEMORY: An introduction to the long-term project co-developed by KHK c:o/re Junior Fellow Nathalia Lavigne, followed by a brief report on the symposium that took place last October in Karlsruhe, gathering specialists from the arts, science, and technology to discuss the temporal, spatial, and social dimensions of digital memory today.
What comes after memory? I came across this question in one of the first drafts of the project AFTER MEMORY, developed together with the researchers Lisa Deml and Víctor Fancelli, while writing the opening remarks for the symposium AFTER MEMORY: Recalling and Foretelling across Time, Space, and Networks. The event took place in October (between the 23rd and the 26th) at the ZKM | Center for Art and Media and at the Karlsruhe University of Arts and Design (HfG) in Karlsruhe. Over three and a half days, we had the chance to speculate about the temporal, spatial, and social dimensions of digital memory in an intense and vivid program – the first stage of this long-term project, which will continue in the coming years with an exhibition and other formats.

Nathalia Lavigne
Nathalia Lavigne [she/her] works as an art researcher, writer and curator. Her research interests involve topics such as social documentation and circulation of images on social networks, cultural criticism, museum and media studies and art and technology.
This initial question still resonates, even if it’s hard to come up with an answer. Maybe it should be asked in a different way. It’s hard to imagine what comes after memory, since afterness is what has been lacking in recent times. Trapped, as we are, in an endless present, our perception of time obliterated by information overload, it is hard to find any escape route that allows us to imagine what is about to come.
If modernism was marked by the ‘present future’ and many futuristic utopias, the end of the Cold War changed this perspective, shifting the focus to a ‘present past’ (Huyssen 2000). From autobiographies to the creation of different kinds of museums, from the emergence of new historiographical narratives to the reinvention of traditions, memory has become a trivial word, counted in the form of increasingly unlimited bytes. More recently, with the instantaneous mediation of reality and new archiving formats created by anyone, the goal of ‘total remembrance’, as Andreas Huyssen defined it, has become unquestionable – although increasingly unattainable.
Different from other historical moments, we seem to be stuck in the present now. In a way, this shouldn’t be so bad: it is, after all, the only temporal condition we can know. It is in the present that memories are constantly updated, and that we conceive in our imagination what is about to come. There are probably positive effects in shifting focus away from the much-evoked future or past, which at other times diverted our attention from what is happening now. But this is not what we can say based on our experience of being constantly “stuck on the platform,” to borrow the title of Geert Lovink’s recent book. If we have reached the end of “an era of possibilities and speculation,” as he affirms, what is the emergency exit from this reality in which platforms have closed off any chance of collective imagination (p. 42)?
If temporal fragmentation is far from new, it is hard to deny that the internet complex (Crary 2022) has made this feeling stronger. While our lives are displayed to us as thematic galleries assembled by automated digital systems whose rules we are unaware of, what happens in the present remains indecipherable and imperceptible. And under the circumstances imposed by Covid-19, when the immersive experience of screens became the default perception, this effect was even stronger.
Needless to say, many of the ideas behind After Memory have their roots in what we lived through during the pandemic, when most of us experienced some episode of memory blur or digital amnesia. Although the impact of Covid-19 on our cognitive system is still unclear, recent studies reveal deficits in people’s performance a year or more after infection. The lockdowns themselves left marks too, since spatial memory is essential to how we recollect events. And if time perception was especially obliterated during the pandemic, this feeling is inseparable from the well-known time-space compression that has always been related to capitalist expansion (Harvey 2012). But how different is this process nowadays, when the rise of generative AI, for instance, has created a new understanding of memory, making us confront a past that never really existed, as Andrew Hoskins has recently pointed out?

Photo Credits: Markus Breigt, KIT
Unmapping Landscapes, Endless Instants and Speculative (off-line) Networks
From some of these ideas, we developed the structure of the After Memory symposium in three sections, each investigating an essential aspect of the conception and actualisation of memory: space (Unmapping Landscapes), time (Endless Instants), and communication (Speculative Networks). Each day was dedicated to one of these programs and started with a workshop, which took place in a post-war modernist pavilion with glass walls, surrounded by a garden. Blankets on the floor invited participants to sit in a circle or to lie down as they saw fit. In some cases, the activities were interspersed with moments of meditation – either guided by sound or accompanied by a breathing technique such as Pranayama. In the end, we noticed how these morning sessions played an important role in how the participants connected to each other, becoming more open to elaborating new ideas in a nonjudgmental atmosphere.

Photo Credits: Markus Breigt, KIT
When we were first offered this venue for the workshops, the fact that there was no internet available was initially a concern. A wifi connection might be required for some activities, especially considering that networks and the digital sphere were among the umbrella terms of the program. But we decided to keep the Pavilion in spite of that. On a more personal note, I am tempted to think that this was actually one reason people were able to build connections that would continue beyond that moment. After this experience, I was more inclined to agree with the bold statement of Jonathan Crary in the opening of Scorched Earth – Beyond the Digital Age to a Post-Capitalist World: “If there is to be a liveable and shared future on our planet it will be a future offline, uncoupled from the world-destroying systems and operations of 24/7 capitalism” (2022, p. 1).
In recent decades, social media has interwoven itself into the art system. Although the potential of the visual art field for creating connections predates the rise of these platforms, their constant use has made it nearly impossible for artists, cultural institutions, or audiences to avoid them, even as the controversies around how these platforms operate have become more evident. At a moment when we are talking about the end of the fantasy that Web 2.0 would be a democratic environment, especially due to the problematic ties between platforms and authoritarian populism, it is crucial to imagine alternative ways of connecting that do not depend exclusively on them.

Photo Credits: Markus Breigt, KIT
During my fellowship at the Käte Hamburger Kolleg: Cultures of Research (c:o/re), I am interested in mapping how artists have developed disruptive and speculative forms of networks from the mid-1990s to the present, and also, as a curator, in helping to implement projects that can contribute to generating new communication systems.
And if it is still not clear what comes after memory, or when, it seems important to experience these enquiries together, enabling memories to be updated more deeply through different understandings about time, space and, especially, communication.
Further reading and references:
Crary, Jonathan. 2022. Scorched earth: Beyond the digital age to a post-capitalist world. Verso Books: New York.
Harvey, David. 2012. “From space to place and back again: Reflections on the condition of postmodernity.” In: Mapping the futures, edited by John Bird, Barry Curtis, Tim Putnam and Lisa Tickner. Routledge: London, pp. 2-29.
Hoskins, Andrew. 2024. “AI and memory.” In: Memory, Mind & Media 3: e18.
Huyssen, Andreas. 2000. “En busca del tiempo futuro.” In: Revista Puentes 1(2), pp. 12-29.
Lovink, Geert, et al. 2022. Extinction internet: our inconvenient truth moment. Institute of Network Cultures: Amsterdam.
Can nuclear history serve as a laboratory for the regulation of artificial intelligence?

ELISABETH RÖHRLICH
Artificial intelligence (AI) seems to be the epitome of the future. Yet the current debate about the global regulation of AI is full of references to the past. In his May 2023 testimony before the US Senate, Sam Altman, the CEO of OpenAI, cited the successful creation of the International Atomic Energy Agency (IAEA) as a historical precedent for technology regulation. The IAEA was established in 1957, during a tense phase of the Cold War.

Calls for global AI governance have increased since the 2022 launch of ChatGPT, OpenAI’s text-generating AI chatbot. The rapid advancements in deep learning techniques evoke high expectations for the future uses of AI, but they also provoke concerns about the risks inherent in its uncontrolled growth. Beyond very specific dangers—such as the misuse of large language models for voter manipulation—a more general concern about AI as an existential threat—comparable to the advent of nuclear weapons and the Cold War nuclear arms race—is part of the debate.

Elisabeth Röhrlich
Elisabeth Röhrlich is an Associate Professor at the Department of History, University of Vienna, Austria. Her work focuses on the history of international organizations and global governance during the Cold War and after, particularly on the history of nuclear nonproliferation and the International Atomic Energy Agency (IAEA).
From nukes to neural networks
As a historian of international relations and global governance, I was struck by the dynamics of the current debate about AI regulation. As a historian of the nuclear age, I was curious. Are we witnessing AI’s “Oppenheimer moment,” as some have suggested? Policymakers, experts, and journalists who compare the current state of AI with that of nuclear technology in the 1940s suggest that AI has a similar dual-use potential for beneficial and harmful applications—and that we are at a similarly critical moment in history.
Some prominent voices have emphasized analogies between the threats posed by artificial intelligence and nuclear technologies. Hundreds of AI and policy experts signed a Statement on AI Risk that placed the control of artificial intelligence on a par with the prevention of nuclear war. Sociologists, philosophers, political scientists, STS scholars, and other experts are grappling with the question of how to develop global instruments for the regulation of AI and have used nuclear and other analogies to inform the debate.

There are popular counterarguments to the analogy. When the foundations of today's global nuclear order were laid in the mid-1950s, risky nuclear technologies were largely in states' hands, while today's development of AI is driven much more by industry. Others have argued that there is "no hard scientific evidence of an existential and catastrophic risk posed by AI" that is comparable to the threat of nuclear weapons. The atomic bombings of Hiroshima and Nagasaki in August 1945 drastically demonstrated the horrors of nuclear war; there is no similar testimony for the potential existential threats of AI. However, the narrative that the shock of Hiroshima and Nagasaki convinced world leaders to stop the proliferation of nuclear weapons is too simple.
Don’t expect too much from simple analogies
At a time of competing visions for the global regulation of artificial intelligence (the world's first AI act, the EU Artificial Intelligence Act, just entered into force in August 2024), a broad and interdisciplinary dialog on the issue seems critical. In this interdisciplinary dialog, history can help us understand the complex dynamics of global governance and scrutinize simple analogies. Historical analysis can place the current quest for AI governance in the long history of international technology regulation, which goes back to the 19th century. In 1865, the International Telegraph Union was founded in Paris: the new technology demanded cross-border agreements. Since then, every major technological innovation has spurred calls for new international laws and organizations, from civil aviation to outer space and from stem cell technologies to the internet.
For the founders of the global nuclear order, the prospect of nuclear energy looked just as uncertain as the future of AI appears to policymakers today. Several protagonists of the early nuclear age believed that they could not prevent the global spread of nuclear weapons anyway. After the end of World War II, it took over a decade to build the first international nuclear authority.
In my recent book Inspectors for Peace: A History of the International Atomic Energy Agency, I followed the IAEA's evolution from its creation to its more recent past. As the history of the IAEA's creation shows, building technology regulation is never just about managing risks; it is also about claiming leadership in a certain field. In the early nuclear age, just as today with AI, national, regional, and international actors competed in laying out the rules for nuclear governance. US President Dwight D. Eisenhower presented his 1953 proposal to create the IAEA, the famous "Atoms for Peace" initiative, as an effort to share civilian nuclear technology and to prevent the global spread of nuclear weapons. At the same time, however, it was an attempt to legitimize the development of nuclear technologies despite their risks, to divert public attention from the military to the peaceful atom, and to shape the newly emerging world order.

Simple historical analogies tend to underestimate the complexity of global governance. Take, for instance, the argument that there are hard lines between the peaceful and the dangerous uses of nuclear technology, while such clear lines are missing for AI. Historically, most nuclear proliferation crises centered on opposing views of where that line lies. The thresholds between harmful and beneficial uses do not simply come with a certain technology; they are the result of complex political, legal, and technical negotiations and learning processes. The development of the nuclear nonproliferation regime shows that it was not the most foolproof instruments that were implemented, but those that states (or other involved actors) were willing to agree on.
History offers lessons, but does not provide blueprints
Nuclear history offers more differentiated lessons about global governance than the focus on the pros and cons of the nuclear-AI analogy suggests. Historical analysis can help us understand the complex conditions of building global governance in times of uncertainty. It reminds us that the global order and its instruments are in a continuous process of change and that technology governance competes with (or supports) other policy goals. If we compare nuclear energy and artificial intelligence to inform the debate about AI governance, we should avoid ahistorical juxtapositions.
The Leibniz Puzzle

GABRIELE GRAMELSBERGER
When I was invited to give a lecture on Leibniz as a forerunner of today's artificial intelligence at the Leibniz Library in Hannover, where most of his manuscripts are kept and edited, I had the opportunity to see some excerpts from his vast oeuvre. Prof. Michael Kempe, head of the research department of the Leibniz Edition, gave me some insights into the practice of editing Leibniz's writings. Leibniz literally wrote on every piece of paper he could get his hands on. He left behind hundreds of thousands of notes: he would write various notes on a large sheet of handmade paper and then cut it up himself to sort the individual notes thematically, a kind of early note box. However, many of his notes he never actually sorted, leaving behind a jumble of snippets.

How do you deal with a jumble of 100,000 snippets?
Nowadays, Artificial Intelligence (AI) technology is used to put together the "puzzle," as Michael Kempe calls it. Supported by MusterFabrik Berlin, which specializes in such material cultural heritage puzzles, the snippets are being reassembled and reveal many surprises. For example, a snippet with Leibniz's idea on "Motum non esse absolutum quiddam, sed relativum …" (Fig. 2, front/back side) showed a fragment of a geometric drawing. However, snippet 22, preserved in box LH35, 12, 2, was not completed by any other snippet in this box. The notes had been sorted by hand in the late 19th century by the historian Paul Ritter (Ritter catalog) as a basis for a later edition. Ritter's catalog was a first attempt to bring some order to the scattered notes. Now, more than a hundred years later, AI technology is bringing new connections and affiliations to light. Snippet 43, shown in Figure 3 (front/back side), completed this part of the puzzle. It was located in box LH35, 10, 7 and had never before been connected to snippet 22.


Trains of thought made visible
“What these recombined snippets tell us,” says Michael Kempe, “is how Leibniz’s thinking worked. He used writing to organize and clarify his thoughts. He wrote all the time, from morning, just after waking up, until late at night. And he often used drawings to illustrate, but also to test his ideas. He changed the sketches and thus further developed his train of thought.” Combined snippets 22/43 are such an example. While writing about the relativity of motion, Leibniz made some geometric sketches of the motion of the planets and added some calculations (fig. 4b).


Leibniz’s contributions to AI
An interesting side aspect is that the AI technology used to solve the Leibniz puzzle is based on a modern version of Leonhard Euler’s polyhedron equation, which was inspired by Leibniz’s De Analysi situs. De Analysi situs, in turn, was the topic of my talk the day before on the influence of Leibniz’s ideas on AI technology. So it all fitted together very well. Leibniz’s contributions to AI were, however, manifold. His contributions to computation alone were outstanding: he developed a dyadic (binary) calculation system and an arithmetic mechanism (the Leibniz wheel), which remained in use until the beginning of the 20th century, and he directed the construction of a four-species arithmetic machine. Yet his contribution to a calculus of logic was even more significant, because for it he had to overcome sensory intuition and develop an abstract intuition based solely on symbolic data. De Analysi situs was precisely about this abstract stance, which came into use only in 19th-century symbolic logic. Furthermore, De Analysi situs is considered a precursor of topology, which inspired Euler’s polyhedron equation expressing topological forms as graphs. Graphs, in turn, play a crucial role in AI for network analysis of all types of data points and relationships. This closes the circle from Leibniz to AI.
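Euler’s polyhedron equation mentioned here states that for every convex polyhedron, vertices minus edges plus faces equals two (V − E + F = 2). As a small illustrative sketch (the example solids and the code are mine, not part of the Leibniz edition project):

```python
# Euler's polyhedron formula: V - E + F = 2 for every convex polyhedron.
# A quick check for three classic solids, given as (vertices, edges, faces).
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
}

for name, (v, e, f) in solids.items():
    chi = v - e + f  # the Euler characteristic
    assert chi == 2, name
    print(f"{name}: {v} - {e} + {f} = {chi}")
```

The same quantity, generalized to graphs and surfaces, is the topological invariant that underlies the graph-based matching mentioned above.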
De Analysi situs (1693)
How did Leibniz overcome sensory intuition and develop an abstract intuition based solely on data points and relationships? The text begins with the following sentences: “The commonly known mathematical analysis is one of quantities, not of position, and is thus directly and immediately related to arithmetic, but can only be applied to geometry in a roundabout way. Hence it is that from the consideration of position much results with ease which can be shown by algebraic calculation only in a laborious manner” (Leibniz, 1693, p. 69). Leibniz criticized the limited arithmetic operativity of algebraic analysis (addition, subtraction, multiplication, division, square root) and called for expanding these operations through an analytical method for geometry and geometric positions.
This expansion was the following: “The figure generally contains, in addition to quantity, a certain quality or form, and just as that which has the same quantity is equal, so that which has the same form is similar. The theory of similarity or of forms extends further than mathematics and is derived from metaphysics, although it is also used in mathematics in many ways and is even useful in algebraic calculus. Above all, however, similarity comes into consideration in the relations of position or the figures of geometry. A truly geometrical analysis must therefore apply not only equality and proportion […] but also similarity and congruence, which arise from the combination of equality and similarity” (p. 71).
Leibniz blamed the philosophers, who were content with vague definitions. And now comes the decisive step: he proposed an exact definition of the concept of similarity. He writes: “I have now, by an explanation of the quality or form which I have established, arrived at the determination that similar is that which cannot be distinguished from one another when observed by itself” (pp. 71-72). Thus, he replaced similarity with indistinguishability and argued that indistinguishability only requires the comparison of data “salva veritate.” He thereby established a concept of indistinguishability which can “be derived from the symbols by means of a secure computation and proof procedure” (p. 76), which is the basis of all data operations to this day.
With this algorithm, Leibniz hoped that “all the questions for which the faculty of perception is no longer sufficient can be pursued further, so that the calculus of position described here represents the complement of sensory perception and, as it were, its completion. Furthermore, in addition to geometry, it will also permit hitherto unknown applications in the invention of machines and in the description of the mechanisms of nature” (p. 76). It is an algorithm intended to help recognize similarities purely on the basis of data. Today we call this clustering, and it is the central strategy of unsupervised learning, i.e., a method for discovering similarity structures in large data sets.
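The step from similarity-as-indistinguishability to modern clustering can be made concrete with a minimal sketch. The following is an illustrative k-means implementation in pure Python (the data points, function name, and parameters are invented for the example); it groups points solely by their mutual similarity, with no labels given in advance:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means sketch: group 2D points by similarity alone,
    with no labels given in advance (unsupervised learning)."""
    centers = points[:k]  # naive deterministic initialization for the sketch
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its most "similar" (nearest) center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # Move each center to the mean of its cluster.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Two visibly separated groups; the algorithm recovers them from the data alone.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
groups = kmeans(data, k=2)
print(groups)
```

A real project would use a library implementation such as scikit-learn’s KMeans; the point of the sketch is only that “similar” is operationalized as “close in the data,” very much in the spirit of Leibniz’s comparison of data.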
References and further reading
De Risi, Vincenzo: The Analysis Situs 1712-1716, Geometry and Philosophy of Space in the Late Leibniz, Basel: Birkhäuser 2006.
Gramelsberger, Gabriele: Operative Epistemologie. (Re-)Organisation von Anschauung und Erfahrung durch die Formkraft der Mathematik, Hamburg: Meiner 2020. Open access URL: https://meiner.de/operative-epistemologie-15229.html
Gramelsberger, Gabriele: Philosophie des Digitalen zur Einführung, Hamburg: Junius 2023.
Kempe, Michael: Die beste aller möglichen Welten: Gottfried Wilhelm Leibniz in seiner Zeit, München: S. Fischer 2022.
Leibniz, Gottfried W.: De analysi situs (1693), in: Philosophische Werke (ed. by Artur Buchenau and Ernst Cassirer), vol. 1, Hamburg: Meiner 1996, pp. 69–76. (All quotes translated by DeepL.)
Ziegler, Günter M., Blatter, Christian: Euler’s polyhedron formula — a starting point of today’s polytope theory, Write-up of a lecture given by GMZ at the International Euler Symposium in Basel, May 31/June 1, 2007. URL: https://www.mi.fu-berlin.de/math/groups/discgeom/ziegler/Preprintfiles/108PREPRINT.pdf