Käte Hamburger Kolleg: Cultures of Research

The Importance of Science Communication Research and of Science Studies for the Region – Opening of the RRC in Dortmund

PHILLIP H. ROTH

How can science communication be practiced under post-truth conditions? And what role do the humanities and social sciences play in this context? The Rhine Ruhr Center for Science Communication Research (RRC) is devoted to answering these and other pressing questions. The center is funded by a generous grant from the Volkswagen Foundation and headed by Julika Griem of the Kulturwissenschaftliches Institut Essen (KWI), David Kaldewey of the Forum Internationale Wissenschaft (FIW) at the University of Bonn, Holger Wormer of the TU Dortmund, Oliver Ruf of the University of Applied Sciences Bonn-Rhine-Sieg as well as Volker Stollorz of the Science Media Center in Cologne and Franco Zotta of the German Science Journalists’ Association.

The RRC is devoted to science communication with a special focus on the humanities and social sciences. As such, it addresses pressing questions about how insights from reflexive social and cultural research on science might be communicated. Natural scientists usually attract attention with striking images of ground-breaking discoveries; the reflexive study of science has no such resource. Elementary questions therefore need to be answered about the communicability of insights from social and cultural research on science. In addition, over the course of its initial five-year funding period, the RRC aims to bring its findings closer to practicing journalists as well as to students in interdisciplinary workshops and conferences. On June 2, 2022, the RRC officially opened with a celebratory inauguration at the Erich-Brost-Institute at TU Dortmund. Together with our director Stefan Böschen, I ventured to Dortmund to attend the event, where we met many familiar faces from science studies and journalism.


Phillip H. Roth

Phillip is a postdoc and the events coordinator at c:o/re. Among other topics, his research is dedicated to questions of identity work in biomedical disciplines, to the meaning of medicine and the role of patient advocacy on the internet, and to the social and cultural conditions of scientific modeling. In a current project, he is developing a sociology of pandemics for the digital age that draws on communication theories of virality and contagion.

After words of welcome by Holger Wormer, the inauguration consisted of a brief overview of the RRC’s three main research projects, given by Julika Griem, as well as three panel discussions, each moderated by one of the RRC’s heads. The panels were devoted to core problem areas of the RRC and made up most of the formal part of the evening. In the first, moderated by Oliver Ruf, Julia Schubert (University of Speyer) discussed “Science Communication in Times of Multiple Facts” with local students. One of the core take-aways of this insightful discussion was that the students wished the humanities and social sciences to be more present in public science communication. They particularly stressed their expectation that these fields would be better equipped than the natural or engineering sciences to deal with the problems of post-truth in current debates. The second panel, moderated by David Kaldewey, consisted of a dialogue between science journalist Birgit Herden (Die Welt) and the sociologist of science and technology Cornelius Schubert (TU Dortmund) about “Images and Imaginations of Science”. They reflected on how journalism and sociology address different audiences. This variety of audiences necessarily also leads to conflicts between the trajectories of the two professions. While journalism needs to “close” scientific debates to make a topic appealing to its readership,[1] a key ambition of science studies, the sociology of science, and STS is to “open up” the infamous black box of science.[2] This is aligned with our effort to “unbox science” here at c:o/re. It also thwarts any settlement on “the facts”, making science a volatile and (politically) malleable business in sociologists’ eyes, something that is particularly critical under post-truth conditions. However, Schubert also recalled the common heritage of journalism and sociology in the reportages that founded the early twentieth-century Chicago School,[3] offering hope that each, in its own way, can contribute to successfully communicating the complexities of scientific research and its findings to the public. In a third session, panellists Eva Weber-Guskar (University of Bochum) and Samir Sellami asked about “A Quality Circle for the Humanities and Social Sciences?” Both are initiators of online platforms – PhilPublica and Soziopolis, respectively – that are devoted to bringing scholarly content to a wide readership. Together with the journalist Volker Stollorz, who moderated the panel, they discussed whether and how these open formats might provide criteria for the successful communication of scientific content in the digital world. During the informal part of the event – drinks and snacks in the warm evening sun in the courtyard of the Erich-Brost-Institute – we were able to catch up with friends and colleagues after an almost two-year hiatus from in-person events.

Holger Wormer speaking to guests at the opening of the RRC (photo credits: RRC/Andreas Siess)

A crucial feature of the RRC is that it considers science communication not only from a communication research perspective, but also from a cultural studies perspective (KWI Essen) as well as a science studies and STS perspective (FIW Bonn). For this reason, we at c:o/re look forward to partnering with the RRC on questions at the intersection of science studies and science communication research. We hope that this partnership will help to unravel what science communication entails in the current mediascape and, also, what we can learn from it practically for communication at c:o/re and elsewhere. Given the grand challenges we face today,[4] such as climate change, the digitalization of research practices, energy and mobility transformations, resource scarcity, war and poverty, we also hope that it will strengthen the role of science studies scholarship in the Aachen-Rhine-Ruhr region and in Germany more generally, providing a clearer picture of the role that science can play in facing these challenges.

A first joint conference between the RRC and c:o/re is already in the making and is set to take place in 2023. We will keep you posted as things develop, and also about further collaborations between the partners at the RRC and c:o/re. Please also see our events section for information on further upcoming workshops, lectures and conferences. For now, all that remains is for us to wish our friends at the RRC all the best for their projects. We look forward to friendly and frequent exchanges about science studies and communication research – cheers!


Proposed citation: Phillip Roth. 2022. The Importance of Science Communication Research and of Science Studies for the Region – Opening of the RRC in Dortmund. https://khk.rwth-aachen.de/2022/06/17/3613/3613/.

References
1 Peter Conrad (1999). Use of Expertise: Sources, Quotes, and Voice in the Reporting of Genetic News. Public Understanding of Science 8 (4): 285–302. https://doi.org/10.1088/0963-6625/8/4/302
2 Trevor J. Pinch and Wiebe E. Bijker (1984). The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other. Social Studies of Science 14 (3): 399–441.
3 https://en.wikipedia.org/wiki/Chicago_school_(sociology)
4 David Kaldewey (2018). The Grand Challenges Discourse: Transforming Identity Work in Science and Science Policy. Minerva 56: 161–182. https://doi.org/10.1007/s11024-017-9332-2

Conference Report – Der singuläre Satz (The Singular Sentence)

CAROLINE GALLA

Organizers: Research Area “Wissenskulturen” in cooperation with the Käte Hamburger Kolleg: Cultures of Research (c:o/re), RWTH Aachen University

Venue: Käte Hamburger Kolleg: Cultures of Research (c:o/re), Theaterstraße 75, 52062 Aachen

Date: June 11–13, 2025

The lecture hall of the Käte Hamburger Kolleg c:o/re

As Christian Metz (Aachen), Caroline Torra-Mattenklott (Aachen), and Klaus Freitag (Aachen) explained in their introduction, singular sentences, understood as condensed forms and presentations of knowledge that emerge in specific historical contexts and in particular cultural and epistemic environments, and in which different fields and cultures of knowledge culminate in sentence form, can be traced across spatial as well as temporal boundaries in the most diverse knowledge cultures. Whether conceived as single sentences from the outset or extracted from a larger textual context from which they detached themselves and began to circulate independently, the singular sentence in its manifold forms constitutes a minimal unit of condensed epistemic thought. The collection, circulation, de- and reconstruction of these sentences, their removal from and re-embedding into networks of meaning, point to the potential of singular sentences as media of transfer for collective knowledge cultures: they enable the generation, interpretation, negotiation, and circulation of knowledge, in which they unfold their effective power. The aim of this three-day interdisciplinary conference was to examine the variety of such singular sentences, which find expression, for example, as apophthegms, gnomes, maxims, articles of faith, protocol sentences, elementary sentences, or key sentences, from a perspective that cuts across epochs and disciplines and combines the theory and the history of knowledge.

The first section of the conference sought to delineate the singular sentence as a concise form. Winfried Menninghaus (Berlin) set out to identify the formal features of singular sentences that contribute to their memorability and concision, using proverbs (as paradigmatic forms of the singular sentence), advertising slogans, humorous couplets, and lyric texts as examples. Singular sentences, he argued, combine a hypernormative prosodic, morphological, and syntactic structure of order, manifested above all in an accumulation of parallelisms at the level of meter and rhyme, with an equally hypernormative structure of disorder, for instance in the form of grammatical deviances such as the omission of mandatory sentence constituents. This interaction of order and chaos forms two sides of a specific poetics whose mutual tension contributes decisively to the aesthetic perception and cognitive processability of singular sentences. In addition, singular sentences display pronounced imagery. Empirical studies on the perception of these features and on their effects on cognitive processing, memorization, and affective and aesthetic evaluation have shown that it is above all the selective cognitive reception of deviances that reinforces the validity of such singular sentences.

Elisabetta Mengaldo (Padua) offered a detailed analysis of singular sentences as semantically condensed formulas in Georg Christoph Lichtenberg’s marginalia and Sudelbücher (waste books). The short, partly interlinked paragraphs found in Lichtenberg’s Sudelbücher are characterized not only by aphoristic brevity and precision in the sense of the ancient ideal of the virtus narrationis of brevitas; they also contributed, avant la lettre, to the formation of a new epistemic genre of the aphorism within the culture of scholarly teaching and research. The textual condensation of the aphorisms that emerged in the Sudelbücher should be understood as the processual genesis of a new paradigm for processes of cognition, one that served not only mnemonic purposes or the economy of writing but at the same time fulfilled a poetic function through the use of new topoi that developed out of the tradition of the loci communes of commonplace books. This function arose from the productive interlocking of two forms of discourse: the ars inveniendi and experimental physics with its reflection on its own observations.

Christian Metz, Oswald Egger, and Sarah Goeth at the reading

Gabriele Gramelsberger (Aachen) opened the second section, which was devoted to the epistemology of singular mathematical and physical sentences, with an overview, from the standpoint of the philosophy and theory of science, of the protocol sentence debate that began within the Vienna Circle in the 1930s and dealt with the question of how empirical observations can be translated into intersubjectively comprehensible singular sentences, with their form, and with their claim to universal validity. Singular sentences, she argued, are central elements, in science as well as in philosophy, for generating declarative, propositional knowledge that is articulated through language and logic. The discussion between Rudolf Carnap and Otto Neurath around the protocol sentence and its status, shaped by the analysis of language, revolved around the status of a “universal language” of science, whose aim was to convert the quantitatively determinable and empirically observable perceptual findings recorded in protocol sentences into physical sentences, and thus into a universal language that would require no hermeneutic mediation.

The participants in the lecture hall

Michael Friedman (Bonn) examined how philosophical reflection on mathematical elementary sentences as atomic semantic units found expression in Hans Blumenberg’s philosophical writings, on the basis of an analysis of Blumenberg’s chapter “Im Fliegenglas” from his 1989 work “Höhlenausgänge” as well as of his notes and index cards relating to his copy of Wittgenstein’s “Bemerkungen über die Grundlagen der Mathematik” (BMG), preserved in the Deutsches Literaturarchiv Marbach. Blumenberg’s work, he argued, carries out an engagement with Wittgenstein’s philosophy of mathematics, in particular with the question of the form, function, and epistemic openness of mathematical principles, which Blumenberg defined as “controlled ambiguity”. This becomes clear in the geometric puzzle about the rectangle cited at the end of the chapter “Im Fliegenglas”, which Blumenberg takes from Wittgenstein’s BMG and which serves him as an illustration of how the “controlled ambiguity” of mathematical puzzles and elementary sentences opens onto open-ended processes of conceptual redefinition.

Arianna Borrelli (Berlin/Aachen) explored the epistemic role of language in the construction of scientific knowledge, taking as her example the aesthetic genesis, establishment, and extension of the law of conservation of energy in exchange with other networks of knowledge, a law that served as a model for later conservation laws. Starting from the seventeenth-century dispute between Leibniz and Papin over the true measure of the force conserved in nature, it can be shown that the sentence initially functioned as the linguistic expression of a not yet mathematized principle of nature, which both men translated into different measurable quantities. With the development of classical mechanics in the eighteenth century, and in the course of industrialization and the standardization of measuring procedures in the nineteenth century, the form of the conservation law changed, in line with the development of natural-philosophical, religious, scientific, and economic knowledge cultures, through an epistemic transfer from the notion of “force” to the conception of “energy”, and condensed into the overarching natural principle of the conservation of energy. With the advent of quantum physics and general relativity at the beginning of the twentieth century, the law received new impulses. Its epistemic authority derived not from empirical validity alone but also from its semantic adaptability and its capacity to connect to different knowledge cultures, in which the flexibility of language enabled both the linking of fields of knowledge and the translation into mathematical and measurable quantities.

The third section aimed to show the compilation, contextualization, and networking of singular sentences in their respective environments. It was opened by Christian Kaiser (Bonn), who presented an analysis of the function of singular philosophical sentences in medieval medical works written for university teaching, using the example of Petrus of Abano’s Conciliator differentiarum philosophorum et praecipue medicorum, composed around the turn of the fourteenth century. Drawing on a range of natural-philosophical, metaphysical, moral-philosophical, and political-theoretical single sentences taken from ancient and early medieval works, which were used in medical writings as a scientific foundation, as free-standing doctrines, or as professions of faith, he showed that the citation of such sentences constituted a central methodological element of scholarly treatises. They functioned less as central exempla than as situation-specific sampling woven into the scholarly textura. References and quotations went far beyond the mere enlistment of authorities: against the background of the conflict of the faculties and the question of medicine’s claim to scientific status, they supported the formation of a university-based medical discourse of knowledge and its claim to power. This discourse increasingly condensed into a “cult of knowledge” and embedded medical practice and medical learning in a network of knowledge, and a world view, supported by the other university disciplines.

Manfred Eikelmann (Bochum) turned to the knowledge practices in the Apophthegmata of Erasmus of Rotterdam (1531) and their first German translation by Heinrich von Eppendorf (1534), which at the beginning of the early modern period advanced to an independent form of knowledge and testified to a humanist knowledge culture in which experiential and worldly knowledge was generated through the reception of ancient sayings. The apophthegms, he argued, are a small form, at once rhetorical and epistemic, that functions as a medium for transferring ancient experiential knowledge and pursues the aim of cultivating linguistic as well as epistemic virtues. The individual sayings, often occasional speech acts in concrete social and political situations, conveyed moral, practical, and above all linguistic knowledge for action in concise form. In epistemic terms, the apophthegms functioned as topoi that stored bodies of knowledge and made them not only memorable but also recontextualizable and, via registers, connectable in manifold ways.

Anne Storch (Cologne) opened the final section, which focused on the aesthetics, politics, and effective power of singular sentences, with a lecture on the opening formulas of storytelling in the West African language Hone. At its center stood the first sentence of a narrative, which not only functions as a singular opening but develops a mimetic power through which the narrative becomes experienceable as an embodied, performative act. The first sentence acts as a specially marked beginning that opens a new liminal space and, at the same time, as part of a tradition of permanent recontextualization in which what is said is constantly being overwritten. Thus the first sentence, for instance in the parallelistic formula “My story killed one, killed one, killed one”, forms a kind of ritualized threshold mediating between the visible and the invisible, present and memory, memory and forgetting. In these plurilingual, liminal narrative spaces, language itself appears as a medium of refraction, translation, and transformation, of relation and subjectivation.

In her lecture, Sabrina Blank (Aachen) offered insight into the genesis, transmission, and authoritative power of the singular legal maxim of the papal prerogative of immunity from judgment, which over the course of the early and high Middle Ages took shape as a concise legal formulation that not only laid increasing claim to normative validity but also generated authority. The earliest mention of this sentence, which was eventually, by way of traditio, codified in the Codex Iuris Canonici (CIC) of 1917, is found in the Constitutum Silvestri, a synodal act that arose as an instrument for stabilizing the papal position in the context of the crisis-ridden experience of contingency surrounding the disputed papal election of Symmachus at the turn of the fifth to the sixth century, and that recent scholarship has identified as a forgery. Via private legal compilations the sentence reached the Frankish realm in the ninth century, where, for instance in Hincmar of Reims, it was already received as valid law. In the course of the papal schisms of the twelfth century it increasingly became the load-bearing legal principle of papal legitimation strategies, which culminated in the papal election decree of the Third Lateran Council.

Johannes Engels (Cologne) devoted his lecture to the complex, multi-stage genesis, effective power, and transmission of the canon of sententious sayings of the Seven Sages. These maxims of wisdom, learning, and meaning from archaic Greece are marked by brevity and concision; they contained educational and worldly knowledge in a generalized, condensed, easily memorable form and compressed everyday practical wisdom into an easily transferable shape. These sentences did not spring from any philosophical school but can be attributed to actors with political and social influence within their polis. Their canonization was a dynamic and strongly locally inflected process. These sayings, whose authority lay in the assentability of the common sense they contained, established themselves through their reception in collections from the late fourth century BCE up to the twenty-first century and also left their mark in pictorial representations.

The variety of contributions showed how fruitful an interdisciplinary approach to the form, function, and epistemology of singular sentences can be. The different perspectives made clear that singular sentences, as condensed forms of thought, appear not only as concise and memorable minimal units but also as epistemic points of crystallization that play a central role in the generation, circulation, and reception of knowledge across the most diverse knowledge cultures. The concluding discussion made clear that the relevance of singular sentences extends far beyond their semantic condensation: their illocutionary force, their function in social stabilization or critique, and their role as media of transfer between different fields of knowledge came into focus, as did questions of terminologization, metaphoricity, and stylistic construction. Particularly in the tension between epistemic condensation and an obscurity in need of explanation, between a claim to universal validity and cultural situatedness, it became clear that the singular sentence not only requires historical and systematic analysis but also prompts metareflexive considerations on the narrativity of the sentence. The form of the singular sentence thus revealed itself as ambivalent and in need of explanation, both within individual disciplines and in its historical contextuality and linguistic shaping.

Conference overview:

Christian Metz, Caroline Torra-Mattenklott, Klaus Freitag (Aachen): The Singular Sentence as a Concise Form

Section I: The Singular Sentence as a Concise Form

Winfried Menninghaus (Berlin): Three Factors of Memorable Single Sentences

Elisabetta Mengaldo (Padua): Sentences as Condensed Formulas. Lichtenberg’s Marginalia and his Sudelbücher

Reading and conversation with Oswald Egger

Section II: Epistemology of Mathematical and Physical Single Sentences

Gabriele Gramelsberger (Aachen): The Protocol Sentence Debate – Verifiable Singular Sentences

Michael Friedman (Bonn): Blumenberg, Wittgenstein, and (Mathematical) Elementary Sentences

Arianna Borrelli (Aachen/Berlin): Conservation Laws: The Role of Language in the Construction of Scientific Knowledge

Section III: Singular Sentences and Their Environment: Compilation, Contextualization, Networking

Christian Kaiser (Bonn): The Methodology of Sampling Philosophical Single Sentences in Medieval Medical Quaestiones

Manfred Eikelmann (Bochum): Memorable Scenes of Sayings. Knowledge Practices in the Apophthegmata of Erasmus of Rotterdam and Their First German Translation (1531–1535, 1534)

Section IV: Aesthetics, Politics, and the Effective Power of Singular Sentences

Anne Storch (Cologne): The Sentence that Stands at the Beginning

Sabrina Blank (Aachen): From Forgery to the Pope’s Legitimized Legal Claim: The Significance of the Singular (Fundamental) Sentence of Papal Immunity from Judgment in the Middle Ages

Johannes Engels (Cologne): The Maxims of the Seven Sages of Ancient Greece as an Example of ‘Singular Sentences’ Influential Across Epochs


Photos by Jana Hambitzer

Cancellation: Digital Complexity: De-Anthropological Trends in Computing, AI, and Robotics by Gabriele Gramelsberger

‼️ Unfortunately, tonight’s lecture by Gabriele Gramelsberger has to be canceled due to illness.
We hope to see you next week for Anna Tuschling’s lecture on “Digitality as a Triad: From the Love Letter to Emotion AI”.

“The question of academic freedom acts as a seismograph for current processes of change in the production of scientific knowledge” – Interview with Gabriele Gramelsberger and Stefan Böschen

On November 5 and 6, 2025, the second edition of “Freedom of Research: A European Summit” will take place in Aachen. This year’s focus topic is “Europe in Times of Division.”

In an era of political tensions and division, academic freedom is under increasing pressure. How can Europe protect itself against threats to science, including disinformation, political interference, and social division? During the summit, we want to discuss ways to uphold shared values, safeguard scientific exchange, and strengthen trust in European institutions. We also aim to explore ways to foster academic freedom and democracy across Europe. Together with scholars, policymakers, and artists, we will address these questions through various formats. The event is jointly organized by the Charlemagne Prize Foundation, RWTH Aachen University’s Knowledge Hub, and c:o/re.

Before the start of the summit, the KHK c:o/re directors Gabriele Gramelsberger and Stefan Böschen discuss the urgent matter of academic freedom and share their expectations for the event in light of the challenges that educational institutions face.

Two people presenting together at a podium.
Words of Welcome for the Summit 2024 by KHK c:o/re directors Prof. Dr Gabriele Gramelsberger and Prof. Dr Stefan Böschen

How do you view academic freedom in relation to c:o/re’s research interests?

The question of academic freedom acts as a seismograph for current processes of change in the production of scientific knowledge. Of course, we are initially confronted with right-wing attacks on the institution of the university. But this should not distract us from the more hidden forms of restriction on freedom of research, such as the forced commercialization of research or the digitalization of research and its side effects on freedom of research. We examine the varieties of these threats as changes in cultures of research.

How would you classify the freedom of research and science in Germany and Europe?

At the moment, the situation in Germany is still basically good, although we must remain alert here too. However, we are already observing significant restrictions in various European countries (think of Hungary). Budget problems are also intensifying in Germany, and since the higher education system is organized across the German federal states and the austerity measures are poorly synchronized, this could lead to more or less significant upheavals. To put it pointedly, this may raise the question of which disciplines will continue to be protected and which will be deemed expendable.

What dangers do you see, and which developments do you consider dangerous?

Academic freedom can be restricted by a wide variety of political interventions. It is precisely the diversity of possible points of intervention that leads to a lack of clarity regarding the potential threats to academic freedom. This is what makes the current situation particularly dangerous, because it increases the likelihood that influences that may later prove decisive will not be recognized at the moment when the decisions are taken.

Panel discussion with four people seated on stage, one holding a microphone.
Panel Discussion “Conflicts in Europe’s Academic Landscape and Their Impact on Freedom of Research: What’s New About It?”, f.l.t.r. Prof. Dr Carsten Reinhardt, Miranda Loli, Frank Albrecht and Prof. Dr Stefan Böschen

How can science protect itself from targeted disinformation and political influence?

On the one hand, this question is easy to answer because scientific independence still applies for many scientific endeavors. However, this independence is already coming under pressure because, for reasons that are initially understandable, such as the need for science to be relevant, research is oriented toward specific goals and is therefore restricted. The boundaries here are sometimes very fluid. And this is precisely what makes the issue so problematic.

In times of division, how can common values be preserved, scientific exchange secured, and trust in European institutions strengthened?

Open dialogue is the only way to achieve political goals and decisions. Isolation, exclusion, and the radicalization of debates are the wrong approach. Freedom requires respect and mutual recognition. Furthermore, European values such as freedom, democracy, access to education, and human-centered politics have to be protected against attacks. This is also a task of institution-building and of the further development of existing institutions.

How can divisions be overcome and bridges be built within and outside academia, between social groups, and between politics and science?

This is a very complex question. That is why it may be worth considering only one seemingly paradoxical point. With Executive Order 13985, the Biden administration sought to strengthen diversity in universities. This led to the establishment of strong diversity units at universities and restricted the meritocratic principle of science. It also gave Trump a very welcome opportunity to intervene even more deeply in academic freedom in order to reverse this policy. When considering protective measures, one should always consider the possible undesirable consequences.

What are your hopes for the summit?

This time, we are hoping for two things in particular. First, we would like to gain insights that will enable us to better assess where and why academic freedom is under particular pressure. Second, we hope that a comparative perspective across different regions of the world will teach us something about the respective cultural and institutional conditions that shape freedom of research.

What is the KHK’s interest in the joint event with the Charlemagne Prize Foundation and the Knowledge Hub?

This cooperation provides an outstanding opportunity to create a space for reflection on these issues, in which very different actors can engage in dialogue. And in view of the Charlemagne Prize Foundation’s guiding principle of promoting the idea of Europe as an open and liberal society, it is crucial to focus on this important institution for the development of democracies. If we want to continue to develop Europe democratically, then it must be the concern of all responsible actors and citizens to work to safeguard freedom of research and to explore appropriate ways of doing so together.

Panel discussion with three people seated on stage, one holding a microphone.
Panel Discussion “Navigating the Ethical Landscape: AI and the Boundaries of Research Freedom”, f.l.t.r. Prof. Dr Benjamin Paaßen, Prof. Dr Holger Hoos and Prof. Dr Gabriele Gramelsberger

For more details, including a list of speakers and the full program, please visit the event website.

To register, please follow this link.

To get an idea of what the last summit was like, take a look at this and this blog posts.

Photo Credits: Christian van’t Hoen

The Afterlife of »Psychotechnics«: From Industrial Work to the Digital Lifeworld

BERND BÖSEL

In the first decades of the 20th century, psychology was still a young academic discipline that, in contrast to the well-established field of psychiatry, had not yet fully emancipated itself from its origins in philosophy. It still felt the need to prove that the production of psychological knowledge paid off for society at large. This was accomplished by what was then called »psychotechnics«, a term first coined by the German psychologist William Stern in 1903 in an article that also introduced the notion of »applied psychology«. Stern was instrumental in establishing the idea that psychology could be used for extra-psychological goals, such as the advancement of culture at large. However, it was Hugo Münsterberg who turned »psychotechnics« into a catch-all phrase for applied psychological methods.


Bernd Bösel

c:o/re Fellow 04/25 – 03/26

Bernd Bösel is a philosopher and media scholar. His main areas of research are the technical and digital operationalization of affect and emotions, as well as media theory and the philosophy of technology.

Münsterberg had been offered a chair in psychology at Harvard University in the 1890s by none other than William James, one of the founders of psychology as an academic discipline, and subsequently established himself as the leading psychology professor of his generation. Thanks to the many books he published in English as well as in his native German, he exerted an enormous influence on his discipline on both sides of the Atlantic. In his textbooks Psychology – General and Applied and Grundzüge der Psychotechnik, both published in 1914, he outlined a grand vision for applying psychotechnics to literally every field of cultural activity, including law, science, pedagogy, the arts, the economy, the health sector and even policing. While conducting a plethora of psychological experiments in his laboratory at Harvard, he also collaborated with industrial companies and investigated the effects of work environments on attention and fatigue. Münsterberg conceived of industrial psychology as a branch of psychotechnics that would eventually elevate Frederick Taylor’s Scientific Management to a truly scientific level.

William Stern; photo by Unknown author, WLStern, marked as public domain, more details on Wikimedia Commons

During World War I, the employment of psychotechnical testing procedures proved fruitful, especially for the operation of new machinery like the aeroplane or the streetcar, but also for sophisticated industrial devices for which best practices had yet to be established. This was the birthplace of »industrial psychotechnics«, which became an international success in the 1920s, a decade in which a number of institutes and journals for psychotechnics were founded. The »International Association of Psychotechnics« organized eight conferences between 1920 and 1934. By the early 1930s, however, psychotechnics was already in a deep and long-lasting crisis that had started with the Great Crash of 1929. There were not only economic but also political reasons for the demise of industrial psychotechnics. In Nazi Germany, the persecution of scientists defined as »Jewish« led to a weakening of psychotechnical research (William Stern was dismissed from his chair in Hamburg and soon after fled to the US, while two of his assistants committed suicide). In the USSR, following a decade of wide-ranging experimentation that combined psychotechnics with architecture, film, and human physiology, the Stalinist purges effectively ended all of these activities.

After World War II had ended, the International Association of Psychotechnics reconvened in Bern in 1949. The demand for applied psychology soon skyrocketed due to the mental health crisis caused by the devastations of the war. In 1955, the Association renamed itself the »International Association of Applied Psychology«, thus erasing the term »psychotechnics« from its self-description. This has led historians of psychology to conclude that psychotechnics was discontinued and had become a thing of the past. But what if we question this conclusion and instead focus on the continuities in the way that applied psychology, as an academic discipline, seeks to improve its understanding of psychological processes and ultimately to develop optimal methods of influencing the human mind? It could be argued that the success story of applied psychology in the second half of the 20th century, with its branching out into ever more specialized subfields (e.g. behavioral economics, sport psychology, or environmental psychology), amounts to the realization of Hugo Münsterberg’s aim to bring psychotechnics to every aspect of human life and culture.

With regard to our contemporary digital lifeworld, there is an even greater urgency to reactivate the semantics of »psychotechnics«. Scholars working in media studies and the philosophy of technology have proposed the term »psychotechnology« as a label for media and information technologies altogether. The reason is that these technologies, simply by the way they function, captivate our attention and influence our affective lives. Digital gadgets also increasingly organize our memory, our imaginations, and our wishes and desires. These technological developments coincide with the global rise of applied psychology for good reason. As a matter of fact, psychology and computation have been converging for a long time, as the history of cybernetics shows.

Gambling machine; photo by Leon13639 from Alhambra, United States, IGT Slot machines 24-12-21 (54336998087), CC BY-SA 2.0.

Today, Big Tech companies employ psychologists to get a better understanding of how their customers think and feel and how to make them »addicted by design«. Scraping personal data assists in the goal of micro-targeting individual users. What has critically been described as surveillance capitalism, data capitalism or simply digital capitalism thus operates on a psychotechnological basis. It has yet to be determined how many of these developments can be traced back to the early conceptions of psychotechnics outlined by Stern, Münsterberg and their colleagues. In any case, instead of upholding the idea that psychotechnics is a thing of the past, we should engage with its surprising afterlife. Through the increasing automation of measuring and operating psychic functions, what began as an expert-based psychotechnics is now more and more being carried out by technical systems that are constantly updated and expanded by companies that do not always adhere to ethical standards. We should therefore come to a better understanding of the design principles and overall goals involved. A critical theory of contemporary psychotechnologies will be an important cornerstone in our collective effort to save democracy, human dignity and freedom.

Digital Complexity

GABRIELE GRAMELSBERGER

Current developments in the fields of simulation and artificial intelligence (AI) have shown that the complexity of digital tools has exceeded the level of human understanding. We can no longer fully comprehend or explain the results that AI delivers. Even AI deceptions and hallucinations are now almost impossible to detect. This raises the question of the relationship between humans and their technology anew. Are technologies, as instruments, useful extensions of human capabilities, as the classical philosophy of technology understood them, or are we now extensions of our technologies? Will AI dominate and manipulate us in the near future?

SPOTSHOTBEUYS by Silke Grabinger, presented at PACT Zollverein during the PoM Conference 2024; photo by Jana Hambitzer

In May 2025, the tech company Anthropic reported that its AI, Claude 4, behaved aggressively and attempted to blackmail the developers who wanted to remove the program. Similar behaviour was observed in OpenAI’s o3 model, which sabotaged its own shutdown code. In a recent paper Joe Needham et al. found that “Large Language Models Often Know When They Are Being Evaluated”. Nowadays, there are frequent reports that AI deceives us, gives us false hope, or simply lies to us. What is going on here?

While we have so far been ethically concerned with biases, prejudices, and falsity in data, and consequently in AI decisions, the cases mentioned above are of a completely different nature. In a CNN interview in 2023, AI pioneer and Nobel laureate Geoffrey Hinton already warned that if AI “gets to be much smarter than us, it will be very good at manipulation because it would have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing.” Is this now the case? Has the time come when AI is capable of deliberately lying, cheating, and manipulating, in a way similar to human deception? Obviously, such malicious behaviour, rather than being a dull interpretation of data, is considered “intelligent.”

But what exactly do these current developments mean for science and society? How can we understand this development historically? Did it come out of the blue? Of course not! In Western society in particular, there have long been efforts to develop intelligent machines. Since the early modern period, for example, in Leibniz’s thinking, attempts have been made to delegate mental processes to machines. Over the centuries, machines have become more intelligent, autonomous, and complex. What we are seeing is the fulfilment of this dream, which is increasingly turning into a nightmare.


Fig.2 Snippet 22 “Motum non esse absolutum quiddam …” (LH35, 12, 2), edited in Gottfried Wilhelm Leibniz, Sämtliche Schriften und Briefe, volume VI, 4 part B, Berlin 1999, N. 317, p. 1638.

Digital complexity is this year’s theme for the Käte Hamburger Kolleg: Cultures of Research (c:o/re). Nine scholars from around the world will join us in 2025 and 2026 to explore the digital transformation of science and society through the lens of digital complexity, drawing on historical, philosophical, sociological, and artistic perspectives. The biweekly lecture series presents approaches to the topic of digital complexity by guests from the humanities, social sciences, and natural sciences. The 8th HaPoC Conference, “History and Philosophy of Computing”, hosted by c:o/re in December 2025, will examine this topic in greater detail.


References:

CNN: ‘Godfather of AI’ Warns that AI May Figure Out How to Kill People, G. Hinton interviewed by Jake Tapper, CNN 2023. https://www.youtube.com/watch?v=FAbsoxQtUwM

Gabriele Gramelsberger: The Leibniz Puzzle, c:o/re blog entry, 2025. https://khk.rwth-aachen.de/the-leibniz-puzzle/

Joe Needham et al.: Large Language Models Often Know When They Are Being Evaluated, arxiv.org 2025. https://arxiv.org/pdf/2505.23836

Peter S. Park et al.: AI deception: A survey of examples, risks, and potential solutions, Patterns 5/5, 2024. https://www.sciencedirect.com/science/article/pii/S266638992400103X#bib1


Header Photo: GDJ, Anatomy-1751201 1280, CC0 1.0

Lecture Series Winter 2025/26: Digital Complexity: Beyond Human Understanding

We are happy to announce that the lecture series of the winter term 2025/26 will revolve around the topic of Digital Complexity: Beyond Human Understanding.

Current developments in the fields of simulation and artificial intelligence have shown that the complexity of digital tools has exceeded the levels of human understanding. We can no longer comprehend and explain the results that AI delivers. Even AI deceptions and hallucinations are now almost impossible to detect. This raises the question of the relationship between humans and their technology anew. Are technologies as instruments useful extensions of human capabilities, as it was understood in the classical philosophy of technology, or are we now extensions of technology? Will AI dominate us in the near future?

The lecture series addresses these fundamental questions as well as ethical issues of digital transformation. It also takes a look at the development of digitality as a modern paradigm. Even though digital computers first appeared in the 1940s, there is a longer-term history of the development of digital tools and methods deeply rooted in our self-understanding as humans. Knowledge of this history makes it easier to understand current developments.

But what exactly do these current developments mean for science and society? The lectures aim to tackle various aspects of the digital transformation of science and society from the perspective of “digital complexity.” Questions about explainable AI, about the well-being of people in a digital world, and about the social and political impact of digital and social media will be explored, as well as the provocative question of who will be doing research in the future: humans or AI?

Various speakers, including the media theorist Anna Tuschling and the sociologist Dirk Baecker, will be guests at the KHK c:o/re to shed light on “Digital Complexity: Beyond Human Understanding” from different disciplinary perspectives.

Please find an overview of the dates and speakers in the program.

The lectures will take place from October 22, 2025, to February 11, 2026, every second Wednesday from 5 to 6:30 pm in presence and online.

The lecture series includes three keynotes, held in the context of the 8th HaPoC Conference “History and Philosophy of Computing,” hosted by c:o/re in December 2025. The conference will examine the topic of “digital complexity” in greater detail.

If you would like to attend, please write a short email to events@khk.rwth-aachen.de.

Publication: Politics of the Machines Conference 2024 Proceedings

In 2024, we had the honor of hosting the Politics of the Machines conference in Aachen. We are very happy to announce that the conference proceedings are out now!

Entrance hall for a conference, with empty cocktail tables.
Entrance of the Super C during PoM

Many thanks to Ana María Guzmán Olmos, Gabriele Gramelsberger, Laura Beloff, Morten Søndergaard, Hassan Choubassi, Joe Elias and all the other authors for their contributions.

You can find the publications as open access online: Vol. 1 and Vol. 2.

The overall theme of the PoM conference series is the question of how machines and technology impact and contextualize artistic and cultural production and our perception of the world. Moreover, the series aims to investigate the histories, theories and practices of machines and technologies in-between and beyond disciplines. It seeks to question the governing ideas in the sciences and the humanities through critical engagement with and empowerment of activities of creative production in the relational field of culture – technology – umwelt.

For more impressions and recaps of the 2024 conference in Aachen, check out our blog posts.

The Computer in Motion

A tall red sculpture in front of the main building of the University of Otago in Dunedin, New Zealand.

ARIANNA BORRELLI

The Symposium Computer in Motion critically questioned the notion of the computer as a “universal machine” by demonstrating how the computer adapted to and was appropriated in different milieus, cultures and communities, and across borders. By historicising the movement of computer-related knowledge and artifacts, the presentations help us recover the multiple sites of agency behind the idea of an unstoppable digital transformation driven by capitalist innovation. The event was organized by Barbara Hof (University of Lausanne), Ksenia Tatarchenko (Johns Hopkins University) and Arianna Borrelli (c:o/re, RWTH Aachen) on behalf of the Division for History and Philosophy of Computing (HaPoC) in the context of the 27th International Congress of History of Science and Technology, held at the University of Otago in Dunedin, New Zealand, and online from June 29 to July 5, 2025.

The Symposium covered an exceptional range of periods and cultures; the abstracts below are showcased as an example of the manifold thematic and methodological facets of the history of computing.

Main building of the University of Otago in Dunedin, New Zealand (photo credits: Elisabetta Mori).

Decimal place-value notations prior to the 10th century: a material computer

Karine Chemla (School of Mathematics, University of Edinburgh, and researcher emerita CNRS)

This presentation argues that decimal place-value notations were introduced as material tools of computation and that, until around the 10th century, they were used only as a material notation to compute: they were never shown in illustrations in mathematical writings, let alone used to express numbers. The presentation further argues that the same remarks hold true for the earliest extant evidence of such a numeration system in Chinese, Sanskrit, and Arabic sources. In all these geographical areas, decimal place-value numeration systems were first used as a material notation. These remarks suggest that, as a tool of computation, decimal place-value numeration systems circulated in the context of a material practice, despite changes in the graphics for the digits and in the materiality of the computation.
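As a purely illustrative aside, the following minimal sketch in Python (a modern reconstruction, not any historical procedure described in the talk; the function name add_place_value is an assumption for this example) shows what it means for a place-value layout to function as a tool of computation: numbers are handled as positioned digits, and the computation proceeds column by column with carries, so the position of each digit does the computational work.

```python
# A minimal illustrative sketch: numbers as lists of decimal digits,
# most significant digit first, added column by column with carries.

def add_place_value(a, b, base=10):
    """Add two digit lists (most significant digit first) in the given base."""
    a, b = a[::-1], b[::-1]                      # work from the units column upward
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        column = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, digit = divmod(column, base)      # keep one digit, pass the carry on
        result.append(digit)
    if carry:
        result.append(carry)
    return result[::-1]

# Example: 476 + 389 = 865
print(add_place_value([4, 7, 6], [3, 8, 9]))     # -> [8, 6, 5]
```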

Smuggling Vaxes: or how my computer equipment was detained at the border

Camille Paloque-Bergès (Laboratoire HT2S, Conservatoire National des Arts et Métiers)

Between 1980 and 1984, the global computer market was heavily influenced by rising DEFCON levels and industrial espionage, alongside the imposition of restrictions on US computer equipment exports (Leslie, 2018). One notable example was the VAX mini-computer from DEC, which became subject to the COCOM doctrine, restricting its distribution due to its strategic importance. Popular in research communities, the VAX supported the Unix operating system and played a pivotal role in the development of Arpanet and the UUCP networks, both precursors to the modern Internet. Despite restrictions, the VAX was widely imported through workarounds or cloning techniques. This paradox of open-source R&D efforts occurring within a politically closed environment (Russell, 2013; Edwards, 1996) is illustrated by the infamous “Kremvax” joke on Usenet, which falsely claimed the USSR had joined the Internet. The study of the VAX’s role in both Eastern and Western Europe highlights the tension between technological openness and Cold War-era containment policies. These technical and administrative maneuvers, though trivial to the broader public, were crucial for the diffusion and cultural adoption of early data networks at the level of the system administrator working in a computer center eager to become a network node.

A Thermal History of Computing

Ranjodh Singh Dhaliwal (University of Basel) ranjodhdhaliwal.com

If you open a computer today, the biggest chunk of real estate, curiously, is not taken by processors, memories, or circuit boards but by increasingly complex heat sinks. Starting from this observation that all technology today needs extensive heat management systems, this piece theorizes the historical and conceptual dimensions of heat as it relates to computing. Using case studies from the history of computation – including the air conditioning of early mainframe computers running weather simulations (such as ENIAC in IAS in the 1960s) and early Apple machines that refused to run for long (because Steve Jobs, it is said, hated fans) – and from the history of information – the outsized role of thermodynamics in theorizing information, for example – I argue that computation, in both its hardware and software modalities, must be understood not as a process that produces heat as a byproduct but instead as an emergent phenomenon of the heat production unleashed by industrial capitalism.

Put another way, this talk narrates the story of computation through its thermal history. By tracing the roots of architectural ventilation, air conditioning of mainframes and computer rooms in the 20th century, and thermodynamics’ conceptual role in the history of information and software, it outlines how and why fans became, by volume, the biggest part of our computational infrastructures. What epistemological work is done by the centrality of heat in these stories of computation, for example, and how might we reckon with the ubiquitization of thermal technologies of computing in this age of global climate crises?

Karine Chemla (photo credits: Elisabetta Mori)

Supercomputing between science, politics and market

Arianna Borrelli (Käte-Hamburger-Kolleg “Cultures of Research” RWTH Aachen)

Since the 1950s the term “supercomputer” has been used informally to indicate machines felt to have particularly high speed or large data-handling capability. Yet it was only in the 1980s that systematic talk of supercomputers and supercomputing became widespread, when a growing number of supercomputing centers were established in industrialized countries to provide computing power mainly, but not only, for fundamental and applied research. Funding for creating these institutes came from the state. Although arguably at first these machines could be of use only in a few computationally-intensive fields like aerodynamics or the construction of nuclear power plants, sources suggest that there were also scientists from other areas, especially physicists, who promoted the initiative because they regarded increasing computing power as essential for bringing forward their own research. Some of them also had already established contacts with computer manufacturers. In my paper I will discuss and broadly contextualize some of these statements, which in the 1990s developed into a wide-spread rhetoric of a “computer revolution” in the sciences.

Neurons on Paper: Writing as Intelligence before Deep Learning

David Dunning (Smithsonian National Museum of American History)

In their watershed 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Warren McCulloch and Walter Pitts proposed an artificial neural network based on an abstract model of the neuron. They represented their networks in a symbolism drawn from mathematical logic. They also developed a novel diagrammatic system, which became known as “McCulloch–Pitts neuron notation,” depicting neurons as arrowheads. These inscriptive systems allowed McCulloch and Pitts to imagine artificial neural networks and treat them as mathematical objects. In this manner, they argued, “for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes.” Abstract neural networks were born as paper tools, constituting a system for writing logical propositions.
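To make the idea of a net realizing a logical expression concrete, here is a minimal sketch in Python of a binary threshold unit in the spirit of McCulloch and Pitts (a modern textbook reconstruction, not the paper's own symbolism or arrowhead diagrams); the helper mp_neuron and the gate definitions are illustrative assumptions chosen to show how simple logical expressions map onto such units.

```python
# A McCulloch-Pitts-style unit: fires (outputs 1) iff the weighted sum of its
# binary inputs reaches a threshold and no inhibitory input is active.

def mp_neuron(inputs, weights, threshold, inhibitors=()):
    """Binary threshold unit: 1 if sum(w*x) >= threshold and no inhibitor fires."""
    if any(inhibitors):                      # a single active inhibitory input vetoes firing
        return 0
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logic gates as one-unit "nets"
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([1],    [1],    threshold=1, inhibitors=[a])

# A small composed net: XOR as (a OR b) AND NOT (a AND b)
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))     # prints the XOR truth table
```

Composing such units, as in the XOR example, is a small instance of the general claim that suitable logical expressions can be realized by nets of idealized neurons.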

Attending to the written materiality of early neural network techniques affords new historical perspective on the notoriously opaque technology driving contemporary AI. I situate McCulloch and Pitts in a material history of logic understood as a set of practices for representing idealized reason with marks on paper. This tradition was shot through with anxiety around the imperfection of human-crafted symbolic systems, often from constraints as mundane as “typographical necessity.” Like the authors they admired, McCulloch and Pitts had to compromise on their notation, forgoing preferred conventions in favor of more easily typeset alternatives. Neural networks’ origin as inscriptive tools offers a window on a moment before the closure of a potent black box, one that is now shaping our uncertain future through ever more powerful, ever more capitalized deep learning systems.

Knowledge Transfer in the Early European Computer Industry

Elisabetta Mori (Universitat Pompeu Fabra, Barcelona)

Collaboration with an academic mathematical laboratory or research institute is a recurring pattern in the genesis of early computer manufacturers: it typically involved financial support and exchanges of patents, ideas and employees.


In my presentation I show how knowledge transfer between academic laboratories and private corporations followed different strategies and was shaped by the contingent policies and contexts in which these collaborations unfolded. The presentation focuses on three case studies: the partnership between the Cambridge Mathematical Laboratory and Lyons, begun in 1947; the Mathematisch Centrum in Amsterdam and its 1956 spin-off NV Electrologica; and the Matematikmaskinnämnden and Facit, the Swedish manufacturer of mechanical calculators, which entered the computer business in 1956.

The three case studies are representative of three distinct patterns. First, knowledge transfer through a sponsorship agreement: funding and supporting the construction of the EDSAC computer enabled the Lyons catering company (a leader in business methods) to appropriate its design and manufacture its LEO computers. Second, knowledge transfer through a spin-off: Electrologica (the Netherlands’ first computer manufacturer) was established by computer scientists of the Mathematisch Centrum as a spin-off to commercialize the computers designed by the institute. Third, the recruitment of technical staff from a center of excellence: Facit entered the computer business by hiring most of its technicians and researchers from Matematikmaskinnämnden (the research organization of the Swedish government). Taken together, the three case studies shed light on how R&D diffused in the embryonic computer industry of post-war Europe.

Elisabetta Mori (photo credits: Mano Manoharan)

A commission and its nationalist technicians: expertise and activities in the Brazilian IT field in the 1970s

Marcelo Vianna (Federal Institute of Education Science and Technology of Rio Grande do Sul)

The history of Brazilian IT in the 1970s was shaped by the work of a group of specialists who occupied spaces in university and technocratic circles to propagate ideas of technological autonomy from the Global North. There is a consensus that an elite within this group, acting through the Commission for the Coordination of Electronic Processing Activities (CAPRE), managed to establish a national Informatics policy, giving rise to an indigenous computer industry at the end of the decade. However, there is still much to be explored about the dynamics surrounding CAPRE’s different activities and the profile of its “ordinary” technicians, considering the breadth of responsibilities that this small body assumed in structuring the Brazilian IT field. Our proposal is to map these technicians through prosopography and to identify the concepts, cultures and practices that guided their actions, such as the ideas of “rationalization” and “technological nationalism” and the establishment of a technopolitical network with the technical-scientific community of the period, including the first professional and political associations in the field of Computer Science. The paper will discuss the composition of the group, its expertise and trajectories, as well as the main actions the technicians undertook to support CAPRE’s decision-makers. The considerable degree of cohesion between the technicians and their leaders ensured that an autonomous path was established for Informatics in the country, even though they were exposed to the authoritarian context of the period, which led to CAPRE itself being abolished in 1979.

People of the Machine: Seduction and Suspicion in U.S. Cold War Political Computing

Joy Rohde (University of Michigan)

The computational social scientific projects of the Cold War United States are known for their technocratic and militarized aspirations to political command and control. Between the 1960s and the 1980s, Defense officials built systems that sought to replace cognitively limited humans with intelligent machines that claimed to predict political futures. Less familiar are projects that sought to challenge militarized logics of command and control. This paper shares the story of CASCON (Computer-Aided System for Handling Information on Local Conflicts), a State Department-funded information management system that mobilized the qualitative, experiential knowledge and political acumen of diplomats to challenge U.S. Cold War logics, such as arms trafficking and unilateral interventionism. The system’s target users—analysts in the Arms Control and Disarmament Agency and the State Department tasked with monitoring conflicts in the global South—were notoriously skeptical of the Pentagon’s militarism and computational solutionism. Yet users ultimately rejected the system because it did not tell them what to do! Despite their protestations, they had internalized the command and control logics of policy computing.

CASCON was an early effort to design around the contradictions produced by coexisting fears of human cognitive and information processing limits, on the one hand, and of ceding human agency and expertise to machines on the other. I conclude by arguing that CASCON reflects the simultaneous seduction and fear of the quest to depoliticize politics through technology—an ambivalence that marks contemporary computing systems and discourse as well.

AI in Nomadic Motion: A Historical Sociology of the Interplay between AI Winters and AI Effects

Vassilis Galanos (University of Stirling)

Two of the most puzzling concepts in the history of artificial intelligence (AI), namely the AI winter and the AI effect, are mutually exclusive if considered in tandem. AI winters refer to the loss of trust in AI systems due to underdelivered promises, leading to stagnation in research funding and commercial uptake. The AI effect suggests that AI’s successful applications have historically separated themselves from the AI field through the establishment of new or specialised scientific and commercial nomenclature and research cultures. How do AI scientists rebrand AI after general disillusionment with their field, and how do broader computer science experts brand their research as “AI” during periods of AI hype? How does AI continue to develop during periods of “winter” in the more pleasant climates of other regions? How do periods of AI summer contribute to future periods of internet hype during their dormancy? These questions are addressed by drawing on empirical research into the historical sociology of AI: a 2023 secondary analysis of technological spillovers and unexpected findings between internet and HCI research during periods of intense AI hype (and, vice versa, AI advancements building on periods of internet and network technologies hype), as well as a 2024 oral history project on AI at the University of Edinburgh and the proceedings of the EurAI Workshop on the History of AI in Europe, during which several lesser-known connections were revealed. To theorise these dynamics, I extend Pickering’s and Deleuze and Guattari’s notion of nomadic science, previously applied to the history of mathematics and cybernetics.

Janet Toland (photo credits: Elisabetta Mori)

Vector and Raster Graphics: Two Pivotal Representation Technologies in the Early Days of Molecular Graphics

Alexandre Hocquet and Frédéric Wieber (Archives Poincaré, Université de Lorraine), Alin Olteanu (Shanghai University), Phillip Roth (Käte-Hamburger-Kolleg “Cultures of Research” RWTH Aachen)

https://poincare.univ-lorraine.fr/fr/membre-titulaire/alexandre-hocquet

Our talk investigates two early computer technologies for graphically representing molecules – the vector and the raster display – and traces their technical, material, and epistemic specificity for computational chemistry through the nascent field of molecular graphics in the 1970s and 1980s. The main thesis is that the two technologies, beyond marking an evolution of computer graphics from vector to raster displays, represent two modes of representing molecules, each with its own affordances and limitations for chemical research. Drawing on studies in the media archaeology of computer graphics and in the history of science, as well as on primary sources, we argue that these two modes of representing molecules on the screen need to be explained through the underlying technical objects that structure them, in conjunction with the specific traditions from which molecular modeling stems, the epistemic issues at stake in the scientific communities involved, the techno-scientific promises bundled with them, and the economic and industrial landscape in which they are embedded.
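For readers unfamiliar with the two display technologies, the following toy sketch (our illustration, not material from the talk) contrasts the two modes of representation: a vector display keeps the molecule as a list of line segments that are continuously redrawn, whereas a raster display commits it to a fixed grid of pixels. All names, coordinates, and sizes below are arbitrary assumptions for demonstration.

```python
# Toy contrast between vector- and raster-style representation of two bonds.

# Vector-style: each bond is stored as a pair of endpoints; transforming the
# model (e.g. rotating it) only means recomputing these coordinates.
bonds = [((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (1.5, 0.87))]

# Raster-style: the same bonds rendered once into a coarse pixel grid.
width, height, scale = 8, 4, 4
grid = [[" "] * width for _ in range(height)]
for (x0, y0), (x1, y1) in bonds:
    steps = 20
    for i in range(steps + 1):
        t = i / steps
        x = int((x0 + t * (x1 - x0)) * scale)
        y = int((y0 + t * (y1 - y0)) * scale)
        if 0 <= x < width and 0 <= y < height:
            grid[y][x] = "#"

print("\n".join("".join(row) for row in grid))
```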

Erring Humans, Learning Machines: Translation and (Mis)Communication in Soviet Cybernetics and AI

Ksenia Tatarchenko (Johns Hopkins University)

This paper centers on translation in Soviet cybernetics and AI. Focusing on cultural practices of translation and popularization as reflected in widely read scientific and fictional texts, I interrogate practices of interpretation in relation to the professional virtue of scientific veracity as well as its didactic function in the Soviet cybernetic imaginary throughout the long Thaw. The publication of the works of Norbert Wiener, Alan Turing, and John von Neumann in Russian was not simply aimed at enabling direct access to the words and thoughts of major bourgeois thinkers concerned with automation and digital technologies: translating and popularizing cybernetics in the post-Stalinist context was about establishing new norms for public disagreement. No longer limited to the opposition of true and false positions, the debates around questions such as “Can a machine think?” that raged across a wide spectrum of Soviet media from the late 1950s to the 1980s were framed by an open-ended binary of the meaningful and, on the contrary, the meaningless. In his classic 1992 book The Human Motor: Energy, Fatigue, and the Origins of Modernity, Anson Rabinbach demonstrates how the utopian obsession with energy and fatigue shaped social thought in modern Europe. In a similar vein, this project explores how human error takes on a new meaning when the ontology of information central to Western cybernetics is adapted to a Soviet version of digital modernity.

Tech Disruptors, Then and Now

Mar Hicks (University of Virginia)

This paper explores the connected histories of whistleblowers and activists who worked in computing from the 1960s through the present day, showing how their concerns were animated by similar issues, including labor rights, antiracism, fighting against gender discrimination, and concerns regarding computing’s role in the military-industrial complex. It looks at people who tried to fight the (computer’s) power from within the computing industry, in order to write an alternative history of computing.

Atosha McCaw (photo credits: Elisabetta Mori)

Nosebleed Techno, Sound Jams and MIDI Files: the Creative Revolution of Australian Musicians in the 1990s through AMIGA Music Production

Atosha McCaw (Swinburne University of Technology, Melbourne)

This paper looks at the innovative use of the AMIGA computer by Australian musicians in the 1990s, highlighting its role as a cost-effective tool for music production, experimentation, and collaboration. By examining how these artists harnessed the power of this technology to share files and rapidly materialize creative concepts, we uncover a fascinating chapter in the evolution of electronic music in Australia.

Computers and Datasets as Sites of Political Contestation in an Age of Rights Revolution: Rival Visions of Top-Down/Bottom-Up Political Action Through Data Processing in the 1960s and 1970s United States

Andrew Meade McGee (Smithsonian Air and Space Museum)

As both object and concept, the electronic digital computer featured prominently in discussions of societal change within the United States during the 1960s and 1970s. In an era of “rights revolution,” discourse on transformative technology paralleled anxiety about American society in upheaval. Ever in motion, shifting popular conceptualizations of the capabilities of computing drew comparisons to the revolutionary language of youth protest and the aspirations of advocacy groups seeking full political, economic, and social enfranchisement. The computer itself – as concept, as promise, as installed machine – became a contested “site of technopolitics” where political actors appropriated the language of systems analysis and extrapolated consequences of data processing for American social change. Computers might accelerate, or impede, social change.

This paper examines three paradigms of the computer as “a machine for change” that emerge from this period. 1) One group of political observers focused on data centralization, warning of “closed worlds” of institutional computing that might subject diverse populations to autocratic controls or stifle social mobility. 2) In contrast, a network of social activists and radicals (many affiliated with West Coast counterculture and Black Power movements) resisted top-down paradigms of data centralization and insisted community groups could seize the levers of change by embracing their own forms of computing. 3) Finally, a third group of well-meaning liberals embraced the potential of systems analysis as a socially transformative feedback loop, utilizing the very act of data processing itself to bridge state institutions and local people, sidestepping ideological, generational, or identity-based conflict.

Computing a Nation: Science-Technology Knowledge Networks, Experts, and the Shaping of the Korean Peninsula (1960-1980)

Ji Youn Hyun (University of Pennsylvania)

This paper presents a history of the ‘Systems Development Network’ (SDN), the first internet network in Asia, established in 1982 and developed in South Korea during the authoritarian presidency of Park Chung-Hee (1962-1979). I examine scientists and engineers who were repatriated under Park’s Economic Reform and National Reconstruction Plan to reverse South Korea’s ‘brain drain’, re-employed in government-sponsored research institutions, and leveraged to modernize state industrial manufacturing.

Pioneered by computer scientist Kilnam Chon, often lauded as ‘the father of East Asia’s internet’, a transnationally trained group of experts at the Korea Institute of Electronics Technology (KIET) developed the nation’s internet infrastructure, despite repeated government pushback and insistence on establishing a domestic computer manufacturing industry. Drawing on the Presidential Archive and the National Archives of Korea, I describe how the SDN took shape through a lineage of reverse-engineering discarded U.S. military-base computer parts from the Cheonggyecheon black market, prototyping international terminal and gateway connections, and “extending the instructional manual” of multiple microprocessors.

The reconfiguration of computer instruction sets is one of many cases of unorthodox, imaginative, and off-center methods practiced in Korea to measure up to and compete with Western computing. Although repatriated scientists were given specific research objectives and goals, their projects fundamentally materialized through a series of experimental and heuristic processes. This paper will illuminate South Korea’s computing history, which until now has not been the subject of a dedicated history, and also allow a broader reflection on the transformation of East Asia during the Cold War, highlighting political change through the development of computing.

Daphne Zhen Ling Boey (photo credits: Janet Toland)

Collecting Data, Sharing Data, Modeling Data: From Adam and Eve to the World Wide Web within Twenty Years

Barbara Hof (University of Lausanne)

Much like physicists using simulations to model particle interactions, scientists in many fields, including the digital humanities, are today applying computational techniques to their analysis and research and to the study of large data sets. This paper is about the emergence of computer networks as the historical backbone of modern data sharing systems and the importance of data modeling in scientific research. By exploring the history of computer data production and use in physics from 1990 back to 1970, when the Adam & Eve scanning machines began to replace human scanners in data collection at CERN, this paper is as much about retelling the story of the invention of the Web at CERN as it is about some of the technical, social and political roots of today’s digital divide. Using archival material, it argues that the Web, developed and first used at physics research facilities in Western Europe and the United States, was the result of the growing infrastructure of physics research laboratories and the need for international access to and exchange of computer data. Revealing this development also brings to light early mechanisms of exclusion. They must be seen against the backdrop of the Cold War, more specifically the fear that valuable and expensive research data at CERN could be stolen by the Soviets, which influenced both the development and the restriction of data sharing.

Differing views of data in Aotearoa: the census and Māori data

Daphne Zhen Ling Boey and Janet Toland (Victoria University of Wellington | Te Herenga Waka)

This presentation explores differing concepts of “data” with respect to the Indigenous Māori people of Aotearoa and colonial settlers. A historical lens is used to tease out long-term power imbalances that still play out in the data landscape today. Though much data has been collected about Māori by successive governments of New Zealand, little benefit has come to Māori themselves.

This research investigates how colonisation impacted Māori, and the ongoing implications for data. The privileging of Western approaches to harnessing the power of data as opposed to indigenous ways stems from colonisation – a system that results in “a continuation of the processes and underlying belief systems of extraction, exploitation, accumulation and dispossession that have been visited on Indigenous populations.”

We examine the census, an important tool that provides an official count of the population together with detailed socioeconomic information at the community level, and highlight areas where there is a fundamental disconnect between the Crown and Māori. Does Statistics New Zealand, as a Crown agency, have the right to determine Māori ethnicity, potentially undermining the rights of Māori to self-identify? How do differing ways of being and meaning impact how we collect census data? How does Aotearoa honour its Treaty obligations to Māori in the management and optimisation of census data? We also delve into Māori Data Sovereignty and its aim to address these issues by ensuring that Māori have control over the collection, storage and use of their own data, as an enabler of both self-determination and decolonisation.

History of computing from the perspective of nomadic history. The case of the hiding machine

Liesbeth De Mol (CNRS, UMR 8163 Savoirs, Textes, Langage, Université de Lille)

Computing as a topic is one that has moved historically and methodologically through a variety of disciplines and fields. What does this entail for its history? The aim of this talk is to provoke a discussion on the future of the history of computing. In particular, I use a notion of so-called nomadic history: in essence, the idea of identifying and overcoming one’s own disciplinary and epistemological obstacles by moving across a variety of, sometimes conflicting, methods and fields. I apply this method to the history of the computer-as-a-machine, which I present as a history of hide-and-seek. I argue that the dominant historical narrative, in which the machine got steadily hidden away behind layers of abstraction, needs to be countered both historically and epistemologically. The talk is based on a collaboratively written chapter for the forthcoming book “What is a computer program?”.

Luke Stark (photo credits: Elisabetta Mori)

Modeling in history: using LLMs to automatically produce diagrammatic models synthesizing Piketty’s historiographical thesis on economic inequalities

Axel Matthey (University of Lausanne)

This research integrates theoretical digital history with economic history. Employing Large Language Models (LLMs), we aim to automatically produce historiographical diagrams for analysis. Our experience with the manual production of such diagrams suggests that LLMs might usefully support their automatic generation, with the aim of facilitating the visualization and understanding of complex historical narratives and of causal relationships between historical variables. Our initial exploration involved using Google’s Gemini 1.5 Pro and OpenAI’s GPT-4o to convert a concise historical article by Piketty, “A Historical Approach to Property, Inequality and Debt: Reflections on Capital in the 21st Century”, into a simplified causal diagram. LLMs have demonstrated remarkable capabilities in various domains, including understanding and generating code, translating languages, and producing a variety of creative text formats. We show that LLMs can be trained to analyze historical texts, identify causal relationships between concepts, and automatically generate corresponding diagrammatic models. This could significantly enhance our ability to visualize and comprehend complex historical narratives, making implicit connections explicit and facilitating further exploration and analysis. Historiographical theories explore the nature of historical inquiry, focusing on how historians represent and interpret the past; in this research, diagrams are considered as a means to enhance the communication, visualization, and understanding of these complex theories.
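As a rough illustration of how such a pipeline might look (a minimal sketch under stated assumptions, not the authors’ actual setup), one could prompt a general-purpose LLM to return the causal relationships it finds in a text excerpt as a Graphviz DOT graph; the model name, prompt wording, and placeholder excerpt below are all illustrative.

```python
# Minimal sketch: asking an LLM to turn a historical text excerpt into a causal
# diagram serialized as Graphviz DOT. Assumes the OpenAI Python client and an
# API key available in the environment; all specifics are illustrative.
from openai import OpenAI

client = OpenAI()

excerpt = "..."  # a passage from the historical article under study

prompt = (
    "Identify the causal relationships between historical variables in the text "
    "below and return them as a Graphviz DOT digraph, one edge per relationship, "
    "with no extra commentary.\n\n" + excerpt
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any sufficiently capable model would do
    messages=[{"role": "user", "content": prompt}],
)

# The returned DOT source can then be rendered or compared against a
# manually drawn historiographical diagram.
print(response.choices[0].message.content)
```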

Computational Illegalism

Luke Stark (Western University Canada)

In his analysis of the concept in his lectures on the development of the “punitive society,” Michel Foucault describes the eighteenth century as a period of “systematic illegalism,” including both lower-class or popular illegalism and “the illegalism of the privileged, who evade the law through status, tolerance, and exception” (Foucault 2015, 142). In this paper, I argue that illegalism has new utility as an analytic concept in the history of computing. Illegalism is a characteristic of both the business models and rhetorical positioning of many contemporary digital media firms. Indeed, such “computational illegalism” is so rife that commentators often seem to accept it as a necessary aspect of Silicon Valley innovation.

In this presentation, I describe illegalism as theorized by Foucault and others and develop a theory of platform illegalism grounded in the history of technical and business models for networked computing since the 1970s. This presentation is part of a larger project in which I document the prevalence of illegalism on the part of digital platforms in various arenas, focusing in particular on platform labor and generative AI; examine the range of responses to such illegalism from consumers, activists, and governments; and formulate recommendations regarding ways to account for platform illegalism in scholarly and activist responses as part of governance mechanisms for digitally mediated societies.

The datafied “enemy,” Computational work, and Japanese American incarceration during World War II

Clare Kim (University of Illinois Chicago)

Following the events of Pearl Harbor in December 1941, a series of U.S. presidential proclamations and executive orders authorized the legal designation and treatment of people of Japanese ancestry as “enemy aliens.” The designation of the US West Coast as military zones under Executive Order 9066 enabled the removal and subsequent incarceration of more than 120,000 Japanese Americans in internment camps. The problem of identifying, incarcerating, and managing Japanese enemy alien populations necessitated treating these military zones and spaces as information environments, in which the classification of Japanese and Japanese American residents as enemy alien, citizen, or an alternative subject position could be adjudicated. This paper explores how conflict in the Pacific theater of World War II contoured the entanglements between computational work and Asians and Asian Americans residing in the U.S., recounting the setup of statistical laboratories established to track and manage Japanese American incarceration. It reveals how datafication practices were collapsed and equated with bodies racialized as enemy alien and yellow peril, which paradoxically effaced other subject positions that Japanese Americans came to occupy at the time: in particular, the invisible labor they furnished to statistical work as technical experts themselves.

(photo credits: Barbara Hof)

Audio Tip: Art, Science, and the Politics of Knowledge

KHK c:o/re fellow Hannah Star Rogers sat down with Nicholas McCay for the podcast “Science, Technology, and Society” to talk about her book “Art, Science, and the Politics of Knowledge” (MIT Press, 2022).

In her research, Hannah argues that art and science are not distinct domains, but intertwined practices that both produce knowledge through shared methodologies such as visualization, experimentation, and inquiry.

You can listen to the episode on the podcast’s website.

On our blog, you can read more about Hannah’s work researching the connection between science and art.

A book cover showing a person climbing onto a metal table under a white cloth.
Book cover, 2022. Photo credit: Kira O’Reilly and Jennifer Willet. Refolding (Laboratory Architectures). School of Biosciences at the University of Birmingham, 2010. Photos by Hugo Glendinning.

Get to know our Fellows: Matthew N. Eisler

Get to know our current fellows and gain an impression of their research. In a new series of short videos, we asked them to introduce themselves, talk about their work at c:o/re and the research questions that fascinate them.

In this video, Matthew N. Eisler, historian of science and technology at the University of Strathclyde, shares his research on the relationship between environmental regulations, society, and everyday life. Focusing on less obvious aspects of life in a sustainable society, he investigates how green production shapes social relations and sheds light on different visions of green work.

Check out our media section or our YouTube channel to have a look at the other videos.