Category: c:o/re-Blog

Can nuclear history serve as a laboratory for the regulation of artificial intelligence? 

ELISABETH RÖHRLICH

Artificial intelligence (AI) seems to be the epitome of the future. Yet the current debate about the global regulation of AI is full of references to the past. In his May 2023 testimony before the US Senate, Sam Altman, the CEO of OpenAI, cited the creation of the International Atomic Energy Agency (IAEA) as a historical precedent for technology regulation. The IAEA was established in 1957, during a tense phase of the Cold War.

The International Atomic Energy Agency (here: the IAEA’s Board of Governors Room in 1974), which was established in 1957, has been discussed as a potential model for a future international body to regulate AI. (Credits)

Calls for global AI governance have increased since the 2022 launch of ChatGPT, OpenAI’s text-generating AI chatbot. The rapid advancements in deep learning techniques raise high expectations for the future uses of AI, but they also provoke concerns about the risks inherent in its uncontrolled growth. Alongside very specific dangers, such as the misuse of large language models for voter manipulation, a more general concern about AI as an existential threat, comparable to the advent of nuclear weapons and the Cold War nuclear arms race, is part of the debate.


Elisabeth Röhrlich

Elisabeth Röhrlich is an Associate Professor at the Department of History, University of Vienna, Austria. Her work focuses on the history of international organizations and global governance during the Cold War and after, particularly on the history of nuclear nonproliferation and the International Atomic Energy Agency (IAEA).

From nukes to neural networks

As a historian of international relations and global governance, I was struck by the dynamics of the current debate about AI regulation. As a historian of the nuclear age, I was curious: are we witnessing AI’s “Oppenheimer moment,” as some have suggested? Policymakers, experts, and journalists who compare the current state of AI with that of nuclear technology in the 1940s suggest that AI has a similar dual-use potential for beneficial and harmful applications, and that we are at a similarly critical moment in history.

Some prominent voices have emphasized analogies between the threats posed by artificial intelligence and nuclear technologies. Hundreds of AI and policy experts signed a Statement on AI Risk that placed the control of artificial intelligence on a par with the prevention of nuclear war. Sociologists, philosophers, political scientists, STS scholars, and other experts are grappling with the question of how to develop global instruments for the regulation of AI and have used nuclear and other analogies to inform the debate.

Does artificial intelligence pose existential risks comparable to nuclear weapons? (Credits)

There are popular counterarguments to the analogy. When the foundations of today’s global nuclear order were laid in the mid-1950s, risky nuclear technologies were largely in states’ hands, whereas today’s development of AI is driven much more by industry. Others have argued that there is “no hard scientific evidence of an existential and catastrophic risk posed by AI” comparable to the threat of nuclear weapons. The atomic bombings of Hiroshima and Nagasaki in August 1945 drastically demonstrated the horrors of nuclear war; there is no similar testimony for the potential existential threats of AI. However, the narrative that the shock of Hiroshima and Nagasaki convinced world leaders that they needed to stop the proliferation of nuclear weapons is too simple.

Don’t expect too much from simple analogies

At a time of competing visions for the global regulation of artificial intelligence (the world’s first AI act, the EU Artificial Intelligence Act, just entered into force in August 2024), a broad and interdisciplinary dialogue on the issue seems critical. In this interdisciplinary dialogue, history can help us understand the complex dynamics of global governance and scrutinize simple analogies. Historical analysis can place the current quest for AI governance in the long history of international technology regulation, which goes back to the 19th century. In 1865, the International Telegraph Union was founded in Paris: the new technology demanded cross-border agreements. Since then, every major technological innovation has spurred calls for new international laws and organizations, from civil aviation to outer space, from stem cell technologies to the internet.

For the founders of the global nuclear order, the prospect of nuclear energy looked just as uncertain as the future of AI appears to policymakers today. Several protagonists of the early nuclear age believed that they could not prevent the global spread of nuclear weapons anyway. After the end of World War II, it took over a decade to build the first international nuclear authority.

In my recent book Inspectors for Peace: A History of the International Atomic Energy Agency, I follow the IAEA’s evolution from its creation to its more recent past. As the history of the IAEA’s creation shows, building technology regulation is never just about managing risks; it is also about claiming leadership in a certain field. In the early nuclear age, just as today with AI, national, regional, and international actors competed in laying out the rules for nuclear governance. US President Dwight D. Eisenhower presented his 1953 proposal to create the IAEA, the famous “Atoms for Peace” initiative, as an effort to share civilian nuclear technology and to prevent the global spread of nuclear weapons. But at the same time, it was an attempt to legitimize the development of nuclear technologies despite their risks, to divert public attention from the military to the peaceful atom, and to shape the newly emerging world order.

US President Eisenhower delivering his “Atoms for Peace” speech to the UN General Assembly in December 1953. (Credits)

Simple historical analogies tend to underestimate the complexity of global governance. Take, for instance, the argument that there are hard lines between the peaceful and the dangerous uses of nuclear technology, while such clear lines are missing for AI. Historically, most nuclear proliferation crises centered on opposing views of where that line lies. The thresholds between harmful and beneficial uses do not simply come with a certain technology; they are the result of complex political, legal, and technical negotiation and learning. The development of the nuclear nonproliferation regime shows that it was not the most fool-proof instruments that were implemented, but those that states (or other involved actors) were willing to agree on.

History offers lessons, but does not provide blueprints

Nuclear history offers more differentiated lessons about global governance than the focus on the pros and cons of the nuclear-AI analogy suggests. Historical analysis can help us understand the complex conditions of building global governance in times of uncertainty. It reminds us that the global order and its instruments are in continuous flux and that technology governance competes with (or supports) other policy goals. If we compare nuclear energy and artificial intelligence to inform the debate about AI governance, we should avoid ahistorical juxtapositions.

The Leibniz Puzzle

GABRIELE GRAMELSBERGER

Invited to give a lecture on Leibniz as a forerunner of today’s artificial intelligence at the Leibniz Library in Hannover, where most of his manuscripts are kept and edited, I had the opportunity to see some excerpts from his vast oeuvre. Prof. Michael Kempe, head of the research department of the Leibniz Edition, gave me some insights into the practice of editing Leibniz’s writings. Leibniz literally wrote on every piece of paper he could get his hands on. He left behind hundreds of thousands of notes: he would write various notes on a large sheet of handmade paper and then cut it up himself to sort the individual notes thematically, a kind of early note box. Many of his notes, however, he never actually sorted, leaving behind a jumble of snippets.

Fig. 1: Leibniz Library, Hannover, Germany

How do you deal with the jumble of 100,000 snippets?

Nowadays, artificial intelligence (AI) technology is used to put together the “puzzle”, as Michael Kempe calls it. With support from MusterFabrik Berlin, which specializes in such material cultural heritage puzzles, the snippets are being reassembled piece by piece and reveal many surprises. For example, a snippet with Leibniz’s idea on “Motum non esse absolutum quiddam, sed relativum …” (Fig. 2, front/back side) showed a fragment of a geometric drawing. However, snippet 22, preserved in box LH35, 12, 2, was not completed by any other snippet in this box. The notes had been sorted by hand in the late 19th century by the historian Paul Ritter (Ritter catalog) as a basis for a later edition. Ritter’s catalog was a first attempt to bring some order to the scattered notes. Now, more than a hundred years later, AI technology is bringing new connections and affiliations to light. Snippet 43, shown in Figure 3 (front/back side), completed this part of the puzzle. It was located in box LH35, 10, 7 and had never before been connected to snippet 22.

Fig. 2: Snippet 22 “Motum non esse absolutum quiddam …” (LH35, 12, 2), edited in Gottfried Wilhelm Leibniz, Sämtliche Schriften und Briefe, volume VI, 4, part B, Berlin 1999, N. 317, p. 1638.
Fig. 3: Snippet 43 (LH35, 10, 7), unedited, but digitally available here.
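
The article does not detail how the matching works computationally, and MusterFabrik’s actual pipeline is surely far more sophisticated; but the core idea of proposing candidate matches can be imagined as nearest-neighbour search over shape descriptors of the torn edges. The Python sketch below is purely illustrative, with invented data, names, and threshold:

```python
# Purely illustrative sketch (not MusterFabrik's pipeline): proposing
# candidate snippet matches by nearest-neighbour search over edge
# descriptors. The descriptors here are random stand-ins for features
# one might extract from scanned snippet contours.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
edge_descriptors = rng.normal(size=(1000, 32))  # 1000 edges, 32 features

tree = cKDTree(edge_descriptors)
# For each edge, find its closest counterpart (index 0 is the edge itself):
dist, idx = tree.query(edge_descriptors, k=2)
candidate_pairs = [(i, int(idx[i, 1]))
                   for i in range(len(edge_descriptors))
                   if dist[i, 1] < 1.0]  # threshold chosen arbitrarily
print(f"{len(candidate_pairs)} candidate matches for human review")
```

Any such shortlist would then be checked by the editors, which fits the report above: the algorithm proposes, the philologist disposes.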

Trains of thought made visible

“What these recombined snippets tell us,” says Michael Kempe, “is how Leibniz’s thinking worked. He used writing to organize and clarify his thoughts. He wrote all the time, from morning, just after waking up, until late at night. And he often used drawings to illustrate, but also to test his ideas. He changed the sketches and thus further developed his train of thought.” Combined snippets 22/43 are such an example. While writing about the relativity of motion, Leibniz made some geometric sketches of the motion of the planets and added some calculations (fig. 4b).

Fig. 4a Combined snippets 22 and 43 (front side)
Fig. 4b Combined snippets 22 and 43 (back side)

Leibniz’s contributions to AI

An interesting side aspect is that the AI technology used for solving the Leibniz puzzle is based on a modern version of Leonhard Euler’s polyhedron equation, which was inspired by Leibniz’s De Analysi situs. De Analysi situs, in turn, was the topic of my talk the day before on the influence of Leibniz’s ideas on AI technology. So it all fitted together very well. Leibniz’s contributions to AI were, however, manifold. His contributions to computation alone were outstanding: he developed a dyadic (binary) calculation system and an arithmetic mechanism (the Leibniz wheel), which remained in use until the beginning of the 20th century, and he directed the construction of a four-species calculating machine. Yet his contribution to a calculus of logic was even more significant, because for it he had to overcome sensory intuition and develop an abstract intuition based solely on symbolic data. De Analysi situs was precisely about this abstract stance, which came into use only with 19th-century symbolic logic. Furthermore, De Analysi situs is considered a precursor of topology, which inspired Euler’s polyhedron equation, which expresses topological forms with graphs. Graphs, in turn, play a crucial role in AI for network analysis of all types of data points and relationships. This closes the circle from Leibniz to AI.
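
For readers unfamiliar with it (my addition, standard textbook material): Euler’s polyhedron formula relates the numbers of vertices, edges, and faces of a convex polyhedron, or, in its graph-theoretic version, of a connected planar graph:

```latex
V - E + F = 2, \qquad \text{e.g., for the cube: } 8 - 12 + 6 = 2.
```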

De Analysi situs (1693)

How did Leibniz overcome sensory intuition and develop an abstract intuition based solely on data points and relationships? The text begins with the following sentences: “The commonly known mathematical analysis is one of quantities, not of position, and is thus directly and immediately related to arithmetic, but can only be applied to geometry in a roundabout way. Hence it is that from the consideration of position much results with ease which can be shown by algebraic calculation only in a laborious manner” (Leibniz, 1693, p. 69). Leibniz criticized the limited arithmetic operativity of algebraic analysis (addition, subtraction, multiplication, division, square root) and called for expanding these operations through an analytical method for geometry and geometric positions.

This expansion was the following: “The figure generally contains, in addition to quantity, a certain quality or form, and just as that which has the same quantity is equal, so that which has the same form is similar. The theory of similarity or of forms extends further than mathematics and is derived from metaphysics, although it is also used in mathematics in many ways and is even useful in algebraic calculus. Above all, however, similarity comes into consideration in the relations of position or the figures of geometry. A truly geometrical analysis must therefore apply not only equality and proportion […] but also similarity and congruence, which arise from the combination of equality and similarity” (p. 71).

Leibniz blamed the philosophers, who were content with vague definitions. And now comes the decisive step: he proposed an exact definition of the concept of similarity. He writes: “I have now, by an explanation of the quality or form which I have established, arrived at the determination that similar is that which cannot be distinguished from one another when observed by itself” (pp. 71-72). Thus, he replaced similarity with indistinguishability and argued that indistinguishability requires only the comparison of data “salva veritate.” He thereby established a concept of indistinguishability that can “be derived from the symbols by means of a secure computation and proof procedure” (p. 76), which is the basis of all data operations to this day.

With this algorithm, Leibniz hoped that “all the questions for which the faculty of perception is no longer sufficient can be pursued further, so that the calculus of position described here represents the complement of sensory perception and, as it were, its completion. Furthermore, in addition to geometry, it will also permit hitherto unknown applications in the invention of machines and in the description of the mechanisms of nature” (p. 76). It is an algorithm intended to help recognize similarities purely on the basis of data. Today we call this clustering, and it is the central strategy of unsupervised learning, i.e., a method for discovering similarity structures in large data sets.
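
As a concrete present-day illustration (mine, not drawn from Leibniz or the Leibniz Edition), here is a minimal clustering sketch: points are grouped purely by similarity in the data, with no labels given.

```python
# A minimal illustration of clustering as unsupervised learning:
# grouping data points purely by similarity, with no labels given.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two invented "similarity classes" of points in the plane:
data = np.vstack([rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
                  rng.normal(loc=3.0, scale=0.5, size=(50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
# Points in the same cluster are "indistinguishable" at this level of
# description: similar is what the data cannot tell apart.
print(np.bincount(labels))  # roughly [50 50]
```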

References and further readings

De Risi, Vincenzo: The Analysis Situs 1712-1716, Geometry and Philosophy of Space in the Late Leibniz, Basel: Birkhäuser 2006.

Gramelsberger, Gabriele: Operative Epistemologie. (Re-)Organisation von Anschauung und Erfahrung durch die Formkraft der Mathematik, Hamburg: Meiner 2020. Open access URL: https://meiner.de/operative-epistemologie-15229.html

Gramelsberger, Gabriele: Philosophie des Digitalen zur Einführung, Hamburg: Junius 2023.

Kempe, Michael: Die beste aller möglichen Welten: Gottfried Wilhelm Leibniz in seiner Zeit, München: S. Fischer 2022.

Leibniz, Gottfried W.: De analysi situs (1693), in: Philosophische Werke (ed. by Artur Buchenau and Ernst Cassirer), vol. 1, Hamburg: Meiner 1996, pp. 69–76. (All quotes translated by DeepL.)

Ziegler, Günter M., Blatter, Christian: Euler’s polyhedron formula — a starting point of today’s polytope theory, Write-up of a lecture given by GMZ at the International Euler Symposium in Basel, May 31/June 1, 2007. URL: https://www.mi.fu-berlin.de/math/groups/discgeom/ziegler/Preprintfiles/108PREPRINT.pdf

Towards Expanding STS?

MARCUS CARRIER

On October 9, 2024, KHK c:o/re director Prof. Dr. Stefan Böschen opened the new lecture series “Expanding Science and Technology Studies”. His talk, titled “Towards Expanding STS?”, was aimed at setting the scene for the lecture series and served as a starting point for further reflections on the topic. The talk was mostly devoted to sketching out the problems that, as Prof. Böschen argues, classical Science and Technology Studies (STS) is not equipped to tackle alone. Instead, he argues for an expansion of STS towards other disciplines that investigate science and technology, namely History of Science and Philosophy of Science, to better grasp these problems.

Prof. Böschen started his talk by presenting his own personal starting points for thinking about this topic. First, there is the new concept of “Synthetica”, new forms of life designed by humans, which also played a role in the opening talk by KHK c:o/re director Prof. Gramelsberger for the 2023/2024 lecture series “Lifelikeness”. Prof. Böschen asked whether these Synthetica are epistemic objects or technical objects, and whether STS is equipped to describe the practice around them. Second, he talked about the sustainable development goals. These are very knowledge-intensive, yet the knowledge management has to be done by different countries, which also have to take into account different forms of knowledge and manage a lot of diversity in the system. Third, Prof. Böschen reflected on different formats he has experienced that made him think further about expanding STS: the Temporary University Hambach, which was designed around the structural change in the Rhenish Revier and based on the needs of local people, and the STS Hub 2023 in Aachen, which was designed to bring together different disciplines doing “science on science.”

Stefan Böschen during his talk; photo: Jana Hambitzer

Having set the scene with these personal starting points, Prof. Böschen claimed that there are signs of science changing significantly. First, he concentrated on the cluster of topics around digitization, especially the digitization of problem-solving in science. This cluster includes digital models, used both for scenario-building and for reducing the space of options, where real-world problems must be transformed to become computable, so that the models shape the way of thinking in science. It also includes the digitization of scientific literature, needed to grasp its ever-growing volume, and the digitization of experiments, which can pose challenges for expectations of reproducibility.

In the tension between simplification for the sake of problem-solving and complexification to better understand specific contexts, Prof. Böschen argued that digital tools are steered towards simplification. This, in turn, creates new and specific concerns about the epistemic quality of knowledge produced by these tools and about the way they transform research in practice.

The second cluster of topics that Prof. Böschen sees as a sign of significant change in science is the de-centralization of knowledge production, exemplified in projects like living labs, which were also the subject of a recent talk by Dr. Darren Sharp at the KHK c:o/re. Programs like living labs, where science encourages society to participate in developing solutions for local issues, can take two forms. On the one hand, they can collaboratively explore the status quo and define what should be understood as the “problem” before bringing together local experiences and knowledge as well as scientific knowledge to solve it. On the other hand, living labs can start out with a technological innovation and then look locally for applications and use cases for this innovation. The technology can then be optimized towards local needs.

In both forms of living labs, the important new criterion for knowledge is relevance, which raises the questions of for whom it should be relevant and who defines that. These local solutions and optimizations also face problems of scaling: how can they be scaled up, and are the “problems” still the same at every scaling level? Lastly, how does this impact knowledge production on a deeper level?

According to Stefan Böschen, digital methods and objects of knowledge question the picture of knowledge as a “circulation of references” and the fixed positions of the subject and object of knowledge. Photo: Jana Hambitzer

Both the digitization of science and the de-centralization of knowledge production show that science is in the midst of a transformation, according to Prof. Böschen. There is a need for a relational analysis of epistemic quality and epistemic authority. He shared his intuition that the ideal of reliable scientific knowledge should be preserved and that knowledge production for decision-making processes has an epistemic as well as an institutional side. This, Prof. Böschen argues, cannot be done by any discipline alone but needs collaboration between the sociologically and ethnographically centered STS and more philosophically and historically oriented research on science. Expanded STS, as Prof. Böschen envisions it, should tailor new concepts for analyzing research during transformation.

With this call to action, Prof. Böschen left his audience not with a set program but with a description of problems that call for future interdisciplinary discussion.

On October 30, 2024, the next talk of the lecture series, titled “An IAEA for AI? The Regulation of Artificial Intelligence and Governance Models from the Nuclear Age”, will be given by our fellow Elisabeth Röhrlich. We look forward to continuing the conversation!

Workshop “Epistemology of Arithmetic: New Philosophy for New Times”

The Käte Hamburger Kolleg on Cultures of Research hosted a philosophical workshop on May 16 and 17. It was organized by Markus Pantsar and Gabriele Gramelsberger for good reasons: Gabriele Gramelsberger was the first German philosopher to receive the K. Jon Barwise Prize, while Markus Pantsar’s book “From Numerical Cognition to the Epistemology of Arithmetic” had recently been published by Cambridge University Press as the first book publication by a fellow at the KHK Aachen.

Markus Pantsar: “From Numerical Cognition to the Epistemology of Arithmetic”

The workshop kicked off with a presentation by Markus Pantsar (RWTH Aachen University) on how his book came to be. Its leading question is: how can we use empirical knowledge about numerical cognition to gain a better understanding of arithmetical knowledge? His goal is to combine the philosophy of mathematics with the cognitive sciences to gain a deeper understanding of how we develop and acquire number concepts and their arithmetic. It is fascinating how these concepts develop differently across cultures, even though they are based on universal proto-arithmetical numerical abilities. Indeed, even animals have proto-arithmetical abilities, evidenced by their capacity to differentiate between collections based on numerosity. This leads to an intriguing question: how do we come to develop and acquire number concepts? From an anthropological perspective, numbers are a fundamental aspect of human life in many cultures, yet there are also cultures without numbers. Hence, aside from the evolutionarily developed proto-arithmetical abilities, we also need to focus on the cultural foundations of arithmetic. All this, Pantsar argued, is relevant for the epistemology of arithmetic.

Dirk Schlimm: “Where do mathematical symbols come from?”

Dirk Schlimm from McGill University in Montreal was the next to present. He talked about his recent research project on mathematical notations. Starting from the question of what notations are (according to Peirce), Schlimm presented his newest findings: mathematical notations are sometimes arbitrary, but this is not generally the case. Mathematical symbols may resemble or draw from shapes in the real world, or have other characteristics that connect to our cognitive capacities. The issue is, however, very complex. Mathematical symbols, in particular, serve many purposes, and their use needs to be studied with this in mind. In addition to purely scientific purposes, we should consider how academic practices and political dimensions influence the acceptance and use of notations.

Richard Menary: “The multiple routes of enculturation”

Richard Menary (Macquarie University, Sydney) then gave us insights into his research on enculturation, arguing that there are multiple cultural pathways to developing and acquiring number concepts and arithmetic. Menary calls this the multiple routes model of enculturation. He discussed aspects of Pantsar’s book, especially the developmental path from proto-arithmetical cognition to arithmetical cognition. Menary showed a variety of ways in which this transition can take place, like finger counting and writing and forming numbers on paper. Enculturation through cultural practices has a significant influence on the development of arithmetical abilities, but we should not be fooled into thinking that such enculturation is a uniform phenomenon that always follows similar paths.

Regina Fabry: “Enculturation gone bad: The Case of math anxiety”

Regina Fabry (Macquarie University) showed in her presentation on math anxiety how the relationship between cognition and affectivity needs to be included in accounts of arithmetical knowledge. While accounts of enculturation, like those of Menary and Pantsar, focus on the successful side of things, it is important to acknowledge that processes of enculturation can also go bad. Socio-cultural factors associated with mathematics education can lead to anxiety, which hinders the learning process with long-standing consequences. Empirical studies can contribute to a better understanding of where epistemic injustice may be present, and where there is a strong link to math anxiety. Accounts of arithmetical knowledge drawing from enculturation should be sensitive to such problems, but we can also use research on math anxiety to understand better the role of affectivity in enculturation in general.

Catarina Dutilh Novaes: “Dialogical pragmatism and the justification of deduction”

On Friday, Catarina Dutilh Novaes from Vrije Universiteit Amsterdam discussed her ongoing investigation of the dialogical roots of deduction and posed the question of what, if anything, can justify deductive reasoning. While her book The Dialogical Roots of Deduction offers an analysis of deduction as it is present in cultural practices, the question of its justification is left open. In her talk, she discussed whether pragmatist approaches could fill the gap and ground deduction. She argued that the justification for deduction comes from nothing beyond the pragmatics of the dialogical development of deduction. She supported this claim with a discussion of pragmatist theories of truth and of recent debates on anti-exceptionalism in logic.

Frederik Stjernfelt: “Peirce’s Philosophy of Notations and the Trade-offs in Comparing Numeral Symbol Systems”

The former KHK fellow Frederik Stjernfelt (Aalborg University Copenhagen) talked about his recent studies on Charles S. Peirce’s work on notations, co-conducted with Pantsar. Although better known for his work on logical notation, Peirce was also deeply interested in mathematical notation, including numeral symbol systems. He was eager to find a fitting notation for numbers that is easy to learn and allows easy calculations. Peirce focused in particular on the binary and heximal (base-6) systems, the latter of which he considered superior to our decimal system. Stjernfelt presented Peirce-inspired criteria for different aims of numeral symbol systems, like iconicity, simplicity, and ease of calculation, arguing that the choice of a symbol system comes with trade-offs between them.

Image of Prof. Frederik Stjernfelt
Photo credits: Morten Holtum
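
To get a feel for these trade-offs, one can compare the same numbers across the systems Peirce weighed. The short Python sketch below is merely illustrative and is not drawn from the talk:

```python
# Illustrative only: the same numbers written in the systems Peirce
# compared. Fewer symbol types tend to mean longer numerals, and vice
# versa, one of the trade-offs discussed in the talk.
def to_base(n: int, base: int) -> str:
    digits = "0123456789"
    out = ""
    while n:
        out = digits[n % base] + out
        n //= base
    return out or "0"

for n in (100, 1000):
    print(n, "| binary:", to_base(n, 2),
          "| heximal:", to_base(n, 6),
          "| decimal:", to_base(n, 10))
# 100  | binary: 1100100    | heximal: 244  | decimal: 100
# 1000 | binary: 1111101000 | heximal: 4344 | decimal: 1000
```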

Stefan Buijsman: “Getting to numerical content from proto-arithmetic”

Stefan Buijsman (TU Delft) discussed Pantsar’s account of how humans move from proto-arithmetical abilities to proper arithmetical abilities. Studies of young children suggest that the core cognitive object-tracking system (OTS) and the approximate number system (ANS) can both play a role in this process, but a key stage is acquiring the successor principle (that for every number n, n + 1 is also a number). Buijsman emphasized the role of acquiring the number concept one and its importance in grasping the successor principle, noting that Pantsar’s account could benefit from more focus on the special character of acquiring the first number concept.

Alexandre Hocquet: “Reproducibility, Photoshop, Pubpeer, and Collective Disciplining”

With the talk by Alexandre Hocquet (Université de Lorraine / Archives Henri-Poincaré), the workshop moved from the philosophy of arithmetic to digital and computational approaches to the philosophy of science. Hocquet discussed the Photoshopping of scientific digital images and their use for fraud in academic research, focusing on the Voinnet affair. On this basis, he discussed trust, reproducibility, and the change of scientific methods. With digital images used as evidence, new considerations of transparency are needed to ensure trust in scientific practice.

Photo credits: Jana Hambitzer

Gabriele Gramelsberger and Andreas Kaminski: “From Calculation to Computation. Philosophy of Computational Sciences in the Making”

In the final talk of the workshop, Gabriele Gramelsberger (RWTH Aachen University) and Andreas Kaminski (TU Darmstadt) focused on the computational turn in science. While mathematics has been an indispensable part of science for centuries, the increasing use of computer simulations has replaced arithmetical calculations with Boolean computations. Gramelsberger discussed the cognitive limitations of interpreting non-linear computing systems. Kaminski then considered questions of epistemic, pragmatic, and ethical opacity that arise from these limitations.
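
What it means for arithmetic to be replaced by Boolean computation can be made tangible with the classic textbook full-adder construction (my own illustration, not an example from the talk): addition rebuilt entirely from logical operations.

```python
# Addition rebuilt from pure Boolean logic (XOR, AND, OR): the classic
# full-adder construction underlying digital arithmetic.
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add(x: int, y: int, bits: int = 8) -> int:
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add(23, 42) == 65  # arithmetic recovered from Boolean operations
```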

From bio-ontologies to academic lives: What studying biocuration can tell us about the conditions of academic work

SARAH R. DAVIES

When I arrived at the Käte Hamburger Kolleg in February 2024, my plan was to study bio-ontologies: the systems that are used to categorise and organise biological data. As a Science and Technology Studies (STS) researcher, I had been interested in biocuration for a while, and one key aspect of biocuration work is developing and applying ontologies. Exploring bio-ontologies would, I thought, give me important insights into the practice of biocuration and what it is doing to our understandings of biology, of the organisms and entities that are studied, and of ideas about ‘life’ itself.


Sarah R. Davies

Sarah R. Davies is Professor of Technosciences, Materiality, and Digital Cultures at the Department of Science and Technology Studies, University of Vienna, Austria.
Her work explores the intersections between science, technology, and society, with a particular focus on digital tools and spaces.

I am a social scientist, so delving into the nature of bio-ontologies by looking at natural science and philosophy literature about them was something of a departure for me. What I hadn’t necessarily expected was that doing so would bring me back to more sociological questions, in particular regarding the conditions of academic work. In other words, studying bio-ontologies led me to argue that these systems, which are “axioms that form a model of a portion of (a conceptualization) of reality”[1], are connected to forms of life not just in the context of biological entities, but also with regard to the researchers who create and use them.

Let me rewind a bit. What is biocuration, and what exactly are bio-ontologies? Biocuration is “the process of identifying, organising, correcting, annotating, standardising, and enriching biological data”[2]. Its “primary role … is to extract knowledge from biological data and convert it into a structured, computable form via manual, semi-automated and automated methods.”[3] This is largely done in the context of large data- and knowledgebases (such as FlyBase or UniProt), which are now central to the biosciences. Biocurators work to develop and maintain such databases, for example by reading scientific articles and extracting useful information from them, inputting data into databases, adding metadata and annotating information, and, importantly, creating and using the bio-ontologies I have already mentioned.

Bio-ontologies, then, are a means of classifying and organising biological data. They offer a ‘controlled vocabulary’ (meaning a standardised terminology), but also represent current knowledge about biological entities in that they consist of “a network of related terms, where each term denotes a specific biological phenomenon and is used as a category to classify data relevant to the study of that phenomenon.”[4] Bio-ontologies such as the Gene Ontology therefore offer not only a means of accessing knowledge and data, but also of investigating biological phenomena by creating, as noted on the Gene Ontology’s website, “a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research”.
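
To illustrate what such a ‘network of related terms’ looks like in computational practice, here is a toy sketch of my own. The GO identifiers and names are real Gene Ontology entries, but the structure is heavily simplified (each term’s true parentage is richer than shown), and the code is not taken from any curation tool:

```python
# Toy sketch of an ontology as a network of terms linked by "is_a"
# relations. The GO IDs and names are real Gene Ontology entries;
# the structure is simplified for illustration.
ontology = {
    "GO:0008150": {"name": "biological_process", "is_a": []},
    "GO:0008152": {"name": "metabolic process", "is_a": ["GO:0008150"]},
    "GO:0044237": {"name": "cellular metabolic process",
                   "is_a": ["GO:0008152"]},
}

def ancestors(term: str) -> list[str]:
    """Walk the is_a relations up to the root of the ontology."""
    found, stack = [], list(ontology[term]["is_a"])
    while stack:
        parent = stack.pop()
        found.append(parent)
        stack.extend(ontology[parent]["is_a"])
    return found

# Data annotated with a specific term becomes retrievable under every
# more general term above it:
print(ancestors("GO:0044237"))  # ['GO:0008152', 'GO:0008150']
```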

AI-generated picture of a network by Pixabay.

As I looked into the nature of bio-ontologies, it became clear to me that these organisational systems for biodata are hugely important. They allow researchers in the biosciences to access current knowledge and relevant data (not always easy in the midst of a ‘data deluge’), but they also have epistemic significance. As Sabina Leonelli writes, bio-ontologies “constitute a form of scientific theorizing that has the potential to affect the direction and practice of experimental biology.”[5] The development and application of ontologies to biological data thus renders the contemporary biosciences thinkable, capturing the current state of the art and allowing researchers to extrapolate from that.

Given this significance, it is perhaps somewhat surprising that biocuration, as an area of science, often goes unnoticed by its users and by research funders. As one biocurator told me:

…we are in the background. Even researchers who heavily use these resources [databases], don’t usually know our names and don’t think about us existing. But they love the resource. And that’s actually something we’ve gotten with the booth when we were at conferences. People will come up and be like, oh you are the [resource]! Wow, you are good, awesome. They are kind of shocked that there’s humans there.[6]

Biocurators are not only ‘in the background’, they frequently struggle to get sustained funding for their work, and generally need to build careers through a series of temporary contracts. Perhaps because databases are machine-readable and can be queried automatically, both funders and the researchers who use curated resources often seem to imagine that the work of biocuration can be readily carried out through automated means; in practice, while biocurators make use of automated tools such as text-mining, interpreting scientific literature and annotating data is a highly skilled activity that cannot be easily replicated by AI or other technologies.

Why is biocuration so under-valued despite its epistemic importance? One answer is that biocuration does not fit well with current systems of reward and evaluation within academia. Researchers are, for instance, rewarded for publishing frequently and in high-profile journals, but biocurators produce outputs of a different kind than journal articles: the data- and knowledgebases that they work on. Similarly, gaining research funding is typically seen as a sign of a successful academic, but biocurators’ work does not fit well into the categories that funders use to assess research quality (such as novelty). As Ankeny and Leonelli explain:

Value in science (be it of individual researchers or particular research projects) is largely calculated on the basis of the number of publications produced, the quality of the journals in which those publications appeared, and the impact of the publications as measured by citation indices and other measures: given that [data] donation and curation are still largely unrecognized, the value of these activities correspondingly is limited in part because it cannot be measured using traditional metrics.[7]

Studying bio-ontologies thus led me to consider the lives of their creators, and the conditions under which they work. Despite the epistemic significance of biocuration, it escapes recognition under contemporary ways of crediting and rewarding academic work, something which seems to me deeply unfair. Perhaps, then, we need to find new ways of valuing, funding, and rewarding the wide variety of epistemic contributions made within research, rather than relying on metrics such as the number of publications and citations as the key means of assessing research?


References

[1] Bodenreider, Olivier, and Robert Stevens. 2006. “Bio-ontologies: current trends and future directions.” Briefings in Bioinformatics 7 (3): 256–74. https://doi.org/10.1093/bib/bbl027.
[2] Tang, Y. Amy, Klemens Pichler, Anja Füllgrabe, Jane Lomax, James Malone, Monica C. Munoz-Torres, Drashtti V. Vasant, Eleanor Williams, and Melissa Haendel. 2019. “Ten quick tips for biocuration.” PLoS Computational Biology 15 (5): e1006906. https://doi.org/10.1371/journal.pcbi.1006906.
[3] Quaglia, Federica, Rama Balakrishnan, Susan M Bello, and Nicole Vasilevsky. 2022. “Conference report: Biocuration 2021 Virtual Conference.” Database 2022 (January): baac027. https://doi.org/10.1093/database/baac027.
[4] Leonelli, Sabina. 2012. “Classificatory Theory in Data-intensive Science: The Case of Open Biomedical Ontologies.” International Studies in the Philosophy of Science 26 (1): 47–65. https://doi.org/10.1080/02698595.2012.653119.
[5] Ibid.
[6] Davies, Sarah R., and Constantin Holmer. 2024. “Care, collaboration, and service in academic data work: biocuration as ‘academia otherwise.’” Information, Communication & Society 27 (4): 683–701. https://doi.org/10.1080/1369118X.2024.2315285.
[7] Ankeny, Rachel A., and Sabina Leonelli. 2015. “Valuing Data in Postgenomic Biology: How Data Donation and Curation Practices Challenge the Scientific Publication System.” In Postgenomics: Perspectives on Biology after the Genome, edited by Sarah S. Richardson and Hallam Stevens, 126–49. Duke University Press. https://doi.org/10.1515/9780822375449-008.

Net Zero Precinct Futures: place-based experimentation for sustainability transitions

Darren Sharp opens his lecture in the lecture hall of the KHK c:o/re

On September 11, 2024, Kármán Fellow Dr Darren Sharp gave an overview of Net Zero Precincts, a four-year ARC Linkage project to develop and test a new interdisciplinary approach to help cities achieve net-zero emissions. In this project, Dr Sharp brings together transition management and design anthropology with the goal of transitioning to net-zero carbon emissions in an urban environment.

The starting point of the project is the Net Zero Initiative of Dr Sharp’s home institution, Monash University in Melbourne, Australia, which was the first university in Australia to pledge to become carbon-neutral by 2030. Net Zero Precincts is researching this transition on campus both to facilitate its success and to learn lessons for scaling up such initiatives at the precinct level.

Dr Sharp started by giving an overview of the two stages of the project that are already finished. In the orienting stage, he and his team conducted interviews with, among others, staff, students, representatives of local and state government, and people from NGOs. The goal was to identify the main sustainability challenges, drivers, and uncertainties along the way as the interviewees understood them.

The audience in the lecture hall

In the second stage, which focused on agenda-setting, workshops were used to move from participants’ abstract visions of a net-zero future to concrete ideas for actionable steps and transition pathways. At this stage, it was especially important to take local perspectives, the local landscape, and nature into account.

Finally, Dr Sharp briefly discussed the ongoing stage 3 of the project, which started in April 2024. Here, the pathways and visions identified in stage 2 are used to develop experiments for the Monash campus living lab, where different projects to overcome the identified challenges or reach the set goals are being tried out.

Overall, Dr Sharp argues that scaling up a net-zero project to the precinct level requires a broad perspective. It is not enough to focus on technical innovations to reduce carbon emissions alone. It is also essential to rediscover First Nations’ knowledge systems, to think about small everyday innovations, and to mobilize the community. Challenges to achieving a net-zero future are local and community-specific and must be considered as such.

Darren Sharp in conversation with KHK c:o/re director Stefan Böschen

The Net Zero Precincts project raises fundamental questions that are also of great importance for technical universities. The self-design of universities as living labs is becoming increasingly important under the current transformative conditions of research and innovation, because knowledge contexts and the orientation towards socially desirable results must be intertwined with the forms of academic knowledge production. In addition, in cooperation with the Living Labs Incubator at the Human Technology Center, we were able not only to work on specific research issues in living labs in a workshop, but also to discuss first steps towards developing a global network for living lab research at universities.


Links

Net Zero Precincts: Stage 1 Report (PDF)

Net Zero Precincts: Stage 2 Report (PDF)


Photos by Jana Hambitzer

Reports from the field: a very partial view of EASST4S2024 Amsterdam

BART PENDERS

Social studies of science, or science and technology studies by any other name, may sometimes feel like a small field in which one knows, or knows of, the relevant players on a global level. Attending the combined conference of the European Association for the Study of Science and Technology (EASST) and the Society for Social Studies of Science (4S) then becomes a humbling experience. With over 4100 attendees over the course of the conference, this year’s edition in Amsterdam may have been the biggest ever. The scale of these events is always impressive, and it exposes, without exception, the holes in one’s overview of the community.


Bart Penders

Bart Penders investigates moral, social and technical plurality in research integrity, scientific reform and forms of collaboration across a variety of scientific specializations. He currently holds a position as Associate Professor in ‘Biomedicine and Society’ at Maastricht University.

On the upside, that means there are always new worlds in STS to uncover and engage with, without any real upper limit. The absence of such upper limits is overwhelming and daunting, though. Consider, for instance, that EASST4S2024 had 10 timeslots for parallel sessions, each offering a choice between 50 and 60 parallel sessions. That gives every attendee over 97 quadrillion potential sets of panels to go to, and it has given rise to the custom of not asking fellow attendees How is the conference so far? but instead How is your conference so far?
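
The quadrillion figure is easy to verify: taking the lower bound of 50 sessions in each of the 10 slots, the number of possible itineraries is

```latex
50^{10} = 97\,656\,250\,000\,000\,000 \approx 9.8 \times 10^{16},
```

i.e., just under 98 quadrillion distinct ways to route oneself through the conference.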

Thematically and conceptually, STS is difficult to pin down. EASST4S2024 saw whole collections of sessions on AI and society, participatory approaches to science policy and practice, critical engagement with open science, various panels on psychedelics, music and sound, and so much more. But it is never just talk: experiments with different forms of conferencing have, over the years, created alternative panel formats, which this year ranged from cooking workshops to a whole selection of movies.

The diversity of a conference of this scale cannot be summarized; every attempt is destined to fail. However, some elements are worth mentioning: those at the core of my own route through the conference, and a few that were more plenary, more shared, more collective, snippets of a joint experience.

Let’s start with the shared experience: that of judicious connections between scholars with shared interests; the joy of meeting people you haven’t seen in a while but with whom you share academic pasts, and those you have never met but with whom you may share academic futures. Next to the many plural elements of the conference, there are a number of plenary events for all to share. The scale of the conference did make some of that sharing materially difficult: the largest room at the Free University Amsterdam, which hosted the conference, could only seat roughly a quarter of all attendees. Plenaries were streamed to a number of the conference rooms, where plenary sessions became large-screen televised events.

A group of c:o/re members at the conference

One of the key questions of the first plenary was: how does STS translate into policy? One of the speakers was Dr. Alondra Nelson, who had served as a scientific advisor in the Biden administration and conveyed a twofold message. First, there is a lot STS has to offer policy: the contested themes of our day are where STS excels, and we need not be overly afraid of some instrumentalization of science in policy. Second, in contrast, policy advice does not always leave time for the empirical or conceptual labor to underpin it. What we need, Nelson argued, is a certain Science and Technology Intuition, a reservoir of generic tacit skill and knowledge we can tap from. Uncomfortable, imprecise, but powerful. Brice Laurent expanded on this argument by highlighting that we need to transcend a dualist frame in which science is separate from (the issues of) daily life. Our daily lives are penetrated by science to such an extent that we cannot and should not separate them, and any culture war that seeks to achieve this will inevitably come undone.

Massive conferences also come with honors: people who are remembered for their achievements (a plenary was dedicated to the work of Adele Clarke) and those who are awarded for theirs. The list of prizes both societies grant together is very long, but one worth pointing out is the duo that received the 2024 Bernal Prize: Dutch anthropologist Annemarie Mol and US critical informatics scholar Geoffrey Bowker.

Plenary at the EASST4S2024

The infrastructure of a conference of this scale turns it, in many ways, into an academic festival, with the ability to taste and enjoy the various fruits the community has on offer. This analogy was not lost on the conference organizers, who chose not to host a traditional conference dinner but rather to organize a genuine “Forest Festival” in the Amsterdamse Bos. Next to the various flavors a global academic community has on offer, we were treated to quite literal global flavors under a pleasant sun.

Impression from the Forest Festival

On a more individual note, I managed to attend a plethora of sessions diving into the credibility of scientific collaboration, the role of replication in science and the perspectives STS has to offer on it, how reforms in science happen under conditions of uncertainty, and how science corrects itself, or fails to. I organized some of them, spoke in some of them, and engaged with speakers in others. I asked and was asked regularly Have you written about that? and more often than not, the answer was no. In isolation, that no may be disappointing, but on a more structural level it displays the many unexplored and underexplored paths and potential futures STS conferences offer. As every STS mega-conference does, it has left me exhausted but intellectually reinvigorated. To be overwhelmed is not always a bad thing, but it sure is impressive every time.

Exit of the conference

Photos by Ana María Guzmán Olmos

Towards Technological Solutions to Climate Action from Varieties of Science: Insights from the Narratives of Floods in Kenya and Germany

FREDRICK OGENGA

INTRODUCTION

Over the past few months, Nairobi has been experiencing extreme weather patterns in line with warnings from weather forecasters. This seemingly annual trend began sometime last year with devastating droughts that affected the entire country, with arid and semi-arid regions hit worst. The droughts created food shortages and insecurity of biblical proportions, to the extent that politicians, led by President William Ruto (a champion of climate action), were calling for intercession through national prayers. The droughts led to the deaths of vulnerable women and children and contributed to the loss of livestock and crops, negatively affecting Kenya’s economy through consequent high food prices. Fast forward to this year (2024): another extreme pattern was witnessed, this time characterized by heavy and prolonged rainfall that contributed to floods and mudslides that killed people in cities and villages[1]. It may have appeared to be a Kenyan problem, but the same was witnessed in other parts of the world, in places like Dubai and, most recently, Germany.

Kenya Red Cross members hold on to a safety rope as they wade through flood waters to assess and rescue residents trapped in their homes marooned after a seasonal river burst its banks following heavy rainfall in Kitengela municipality of Kajiado County, near Nairobi, Kenya May 1, 2024; photo: REUTERS/Thomas Mukoya

Of course, these things often appear sensationally on media platforms[2], and for the first time, similar media scenes of animals and property being swept away by floods were witnessed in Kenya, Germany, and Dubai, in developed and developing economies alike. Is climate not the great equalizer? Does this not beg the question of what humanity can borrow from these seemingly similar patterns of events, at least as represented through news media outlets? What kind of agency do these narratives incite, and what do they tell us about our culture of doing things and our own ingenuity? Are there possibilities of positive synergy across cultures, geographical spaces, and tech/media platforms to find solutions for the future of humanity in a world ravaged by climate-induced disasters?

Fredrick Ogenga

Fredrick Ogenga is an Associate Professor of Media and Security Studies at Rongo University and the Founding Director of the Center for Media, Democracy, Peace & Security. He also serves as the CEO of The Peacemaker Corps Foundation Kenya. Ogenga is a Letsema Visiting Fellow at the Institute for Pan-African Thought and Conversation and a Senior Non-resident Research Fellow at the Institute for Global African Affairs, at the University of Johannesburg and the University of the West Indies respectively. He is also an Associate Researcher at the Africa Studies Center, University of Basel, and a Senior Research Associate at Swisspeace. Ogenga is a member of the International Panel on the Information Environment’s (IPIE) Scientific Panel on Information Integrity on Climate Science and Chair of IPIE’s AI and Peacebuilding Scientific Panel. He is a former Southern Voices Network for Peacebuilding Scholar at the Wilson Center, Washington, D.C., and a former Africa Peacebuilding Network fellow. Ogenga is a co-founder of the Varieties of Science Network (VOSN) and will be a Senior Fellow at the KHK c:o/re, RWTH Aachen University, in 2025.

VARIETIES OF SCIENCE NETWORK

These are the tough questions we now face, and to address them, a new view of the different forms in which problem-oriented research is performed seems decisive. The idea of a Varieties of Science Network (VOSN) was therefore born in Basel, Switzerland, by Prof. Stefan Böschen, Director of the Käte Hamburger Kolleg Cultures of Research, RWTH Aachen University, Germany, and Prof. Fredrick Ogenga, Director of the Center for Media, Democracy, Peace and Security, Rongo University, and CEO of The Peacemaker Corps Foundation Kenya. The network seeks to examine the challenges faced globally, from environmental, political, economic, and social to cultural ones, the most prominent being climate change, financial inequalities, political and social upheavals, and pandemics. In this context, humanity continues to display a great level of ingenuity and resilience and has innovated ways of coping and adapting for self-preservation, though not without challenges. Nevertheless, what has been lacking is a higher level of cooperation across cultures and geographical spaces to take advantage of the potential benefits of cross-pollinating local knowledge and expertise at both the local and the global level, as demonstrated by the recent floods witnessed from Nairobi to Dubai and to Aachen in the west of Germany.

These events remind humanity that we are confronted with similar challenges in a seemingly technologically connected world. They challenge the common assumption, evidenced in political conversations globally, that has often defined the boundaries between the global North and South in epistemic frameworks in which the latter has to play catch-up. Central to this conversation has been the idea of coloniality, and within it decoloniality, and the emergence of global communication technologies that have been designed and exploited to maintain and sustain unequal power modalities[3].

This positionality has sustained a global image of Africa on global media platforms as a continent ravaged by disease and disaster (floods, droughts, and pandemics), as seen in the recent floods in Kenya, an image inspired by the coloniality of technology and knowledge and, within that, the centrality of decoloniality vis-à-vis the emergence of global communication systems. Technological systems that seek to sustain a colonial discourse amidst a global environment changed by climate must be resisted at all costs. And so climate change has disrupted the ideological lenses of Western journalistic frames when it comes to the positive image of the West juxtaposed against that of Africa.

Consequently, news of floods in Germany is now given a treatment it would otherwise not receive in comparison to news from ecologies in the global South such as Kenya. The usual sensational narrative of disaster, demonstrated by cows and other valuables being swept away by ravaging floods, is a tired African narrative, and it is therefore a paradox to confront such images in emerging narratives of floods in Germany. Is this not a warning sign and a compelling reason for humanity to forge a united front, in the “we are in this together” or Harambee (togetherness) spirit of pan-African philosophical epistemology?

Against this background, the Varieties of Science Network (VOSN) seeks to tap into a ‘glocal’ knowledge reservoir (local epistemic frameworks) in a bid to bridge the epistemic gaps in knowledge production and dissemination in climate science and other socio-economic, political, and cultural challenges, using research and technology to seek a more coordinated approach to finding solutions to the common scientific questions and challenges facing mankind today. The network is inspired by what is regarded as one of the central topics of the KHK RWTH Aachen, namely Varieties of Science. In doing so, the initiative seeks to uncover the diverse cultural-institutional conditions of epistemic freedom and intellectual democracy across geographical and cultural spaces and multiple disciplines.

The idea is to unravel the productive parts of global North-South conversations in order to overcome colonial burdens. Due to emerging common threats, for example those brought about by climate change as argued above, these traditional global North-South conversations, which have often centered on the coloniality of power dynamics as witnessed in news representations of disasters, are certainly not going to remain the same and are becoming more and more unsustainable. Climate change will create, and is already beginning to shape, a new living space for mankind, and we therefore need to find ways to cooperate with each other. So it is about knowing and creating a new collective order, a new human rights agenda, and an economic order that is fair enough for all people. VOSN intends to bring together people and topics that can contribute to this network to that end.

The network is driven by the goal of better engagement between people and a better understanding of the different conditions of different ecologies, so as to form collaborations that, for example, balance CO2 emissions and energy transitions globally. It also seeks better ways of understanding and guard-railing energy transitions and other transitions, be they political, economic, or socio-cultural, in different ecologies, by examining problem-centered cases such as climate change along with many other topics and issues in different fields and countries that animate varieties of science and allow members to learn from each other. It would further seek to understand how to synergize technologically driven emergency responses to natural disasters such as droughts, famines, floods, and pandemics, as recently witnessed in different geographical spaces and across cultures. For example, on the question of climate, which is the inaugural theme of VOSN: what are the agencies and the emerging ways of knowing (gnosis) and of responding? What are the epistemic questions across cultures? And which kinds of knowledge are seen as important and prioritized?

APPROACH

The agenda will begin with the prominent environmental challenge brought about by climate change, both as the entry point to the VOSN network and as a point of departure for establishing how a more united approach to difficult scientific questions that threaten the self-preservation of humankind (Ubuntu/humanity) can be approached and co-designed in a manner that respects local cultures (Cultures of Research), across several cross-cutting public problems or themes.

CLIMATE MITIGATION AND ADAPTATION

As its flagship thematic focus, VOSN will concentrate on the intersection between technology, climate, and peacebuilding across cultures as an entry point to its global collaboration and research agenda, in line with the Käte Hamburger Kolleg Cultures of Research focal area of climate change. This will entail a systematic technical and meta-analysis of the use of technology in climate mitigation across different ecologies, as well as local action research in different ecologies of the global North and South involving local communities, designed to inspire practical interventions by examining how these communities are adapting to the challenges and opportunities of climate change and what resources, technological or otherwise, are at their disposal[4]. This evidence can reveal human ingenuity and show how technological innovations could be a game-changer in climate adaptation, conflict resilience, and peacebuilding for the self-preservation of humanity going forward.

The varieties of science research agenda will also look at how the devastating effects of climate change are inciting new policy interventions that in turn attract mitigation efforts (the political economy of interventions) from different actors (local and international, public and private), particularly carbon credit programs that are not gender- and conflict-sensitive[5]. It will ask how these mitigation efforts bear on local communities’ livelihoods, how they exacerbate conflict pressure points, and what role digital technologies and tools can play in empowering communities to act on climate mitigation and adaptation through alternative livelihoods such as tree planting (greening), for conflict resilience and peacebuilding. The evidence will be used to contribute to the defense of climate science information against climate misinformation and disinformation in social media spaces, and to help influence policy change around climate financing and community-sensitive carbon credit investments in different ecologies such as Kenya and Germany going forward.


[1] Naidoo, D. and Gulati, M. 2022. Understanding Africa’s Climate and Human Security Risks. Policy Brief 170, October 2022. Institute for Security Studies; Tesfaye, B. 2022. Addressing Climate Security in Fragile Contexts. Center for Strategic and International Studies. https://www.csis.org/analysis/addressing-climate-security-fragile-contexts.

[2] Morley, D. 2007. Media, Modernity and Technology: The Geography of the New. London; New York: Routledge.

[3] Feenberg, A. 2014. Democratic Rationalization: Technology, Power and Freedom. In Scharff, R. C. and Dusek, V. (eds.) Philosophy of Technology: The Technological Condition. An Anthology. 2nd edition. Malden, Oxford: Wiley Blackwell; Godin, B., Gaglio, G. and Vinck, D. 2021. Handbook on Alternative Theories of Innovation. Cheltenham: Edward Elgar Publishing.

[4] Yayboke, E., Nzuki, C. and Strouboulis, A. 2022. Going Green while Building Peace: Technology, Climate and Peacebuilding. Center for Strategic and International Studies. https://www.csis.org/analysis/going-green-while-building-peace-technology-climate-and-peacebuilding.

[5] Greenfield, P. 2023. The New ‘Scramble for Africa’: How a UAE Sheikh Quietly Made Carbon Deals for Forests Bigger than UK. The Guardian, Thursday 10th November 2023.

On the promises of AI and listening data for music research


NIKITA BRAGUINSKI

As a c:o/re fellow, I had the rare opportunity to develop and test, in an environment dedicated to the study of science, my ideas about how AI and data can influence music research. Members of the Kolleg and its fellows, many of whom are philosophers of science, formed a rich intellectual circle that inspired me to look at the datafication and technologization of future music research from many new angles. With its intensive and diverse program of talks, lectures, and conferences, the Kolleg also offered ideal opportunities for testing approaches in front of an attentive, thoughtful, critical, and friendly audience. Below, I present brief overviews of the main ideas that I discussed during three talks I gave at the Kolleg.

Profile Image

Nikita Braguinski

Nikita Braguinski studies the implications of technology for musicology and music. In his current work, he aims to discuss challenges posed to human musical theory by recent advances in machine learning.

My first presentation, entitled “The Shifting Boundaries of Music-Related Research: Listening Logs, Non-Human-Readable Data, and AI”, took place on January 16, 2024 during an internal meeting of Kolleg fellows and members. I focused on the promises and problems of using data about music streaming behavior for musical research. Starting from a discussion of how changing technologies of sound reproduction have enabled different degrees of observation of listener behavior, I examined the current split between academic and industrial music research, the availability of data, the problems of current industry-provided metrics such as “danceability”, and the special opportunities offered by existing and future multimodal machine learning (such as systems that use the same internal encoding for both music and text). I also offered examples of descriptive statistics and visualizations made possible by the availability of data on listener behavior. These visualizations of large listening datasets, which I was able to create thanks to my access to the RWTH high performance computing cluster, included, among others, an illustration of how users of online streaming services tend to listen to new recordings on the day of their release, and an analysis of how likely different age groups are to listen to popular music from different decades (with users aged 60-69 showing almost the opposite musical preferences to those aged 10-19).

Fig. 1: Users of online streaming services often listen to new recordings on the day of their release
(Own diagram. Vertical axis: number of plays. Dataset: LFM-2b, German audience)
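
To make the kind of analysis behind Fig. 1 more concrete, here is a minimal sketch of how plays might be counted relative to release dates and grouped by listener age. This is not the author’s actual code: the column names, the toy data, and the pandas-based approach are assumptions for illustration only; the real LFM-2b dataset is far larger and was processed on a computing cluster.

```python
# A minimal sketch (not the author's actual code) of the analyses described
# above, assuming an LFM-2b-style table of listening events with hypothetical
# columns: "user_age", "play_date", "release_date".
import pandas as pd

# Toy stand-in for a listening-events table.
events = pd.DataFrame({
    "user_age": [17, 19, 64, 66, 15, 68],
    "play_date": pd.to_datetime(
        ["2020-03-06", "2020-03-06", "2020-03-08",
         "2020-04-01", "2020-03-06", "2020-03-07"]),
    "release_date": pd.to_datetime(["2020-03-06"] * 6),
})

# Fig. 1-style view: how many plays fall N days after a recording's release?
events["days_since_release"] = (
    events["play_date"] - events["release_date"]).dt.days
plays_by_day = events.groupby("days_since_release").size()
print(plays_by_day)  # a spike at day 0 would mirror the pattern in Fig. 1

# Age-group view: bucket listeners into decades of age (10-19, 60-69, ...).
events["age_group"] = (events["user_age"] // 10) * 10
plays_by_age = events.groupby("age_group").size()
print(plays_by_age)
```

On real data, the same groupby logic would simply run over millions of rows; the analytical step is identical, only the scale differs.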

Discussing my talk, c:o/re colleagues drew parallels to other academic disciplines such as digital sociology and research on pharmaceutical companies. The topic of addictiveness of online media that I touched upon was discussed in comparison to data-gathering practices in gambling, including the ethics of using such data for research. The political significance of music listening and its connection to emotions was also discussed in relation to the danger of biases in music recommender systems.

My second presentation, entitled “Imitations of Human Musical Creativity: Process or Product?”, took place during the conference “Politics of the Machines 2024. Lifelikeness and Beyond”, which c:o/re hosted. I focused on the question of what AI-based imitations of music actually model – the final product (such as the notation or the audio recording) or the processes that lead to the creation of this product.

In this presentation, I discussed:

1) The distinction between process and product of artistic creation, which, while especially important for discussions on the output of generative AI, currently receives little scholarly attention;

2) How several theories in the humanities (notably, formalism, psychoanalytic literary theory, and the line of AI skepticism connected to the so-called Chinese room argument) stress the importance of the process in artistic creation and cognition;

3) That current endeavors in generative AI, though impressive from the point of view of the product, do not attempt to imitate the processes of creation, dissemination, and reception of art, literature, or music, nor do they imitate historical, cultural, or economic environments in which these processes take place;

4) Finally, because the data on which generative AI systems operate carries traces of past processes, the product of these systems remains connected to the processes, even if no conscious effort is made by the creators of these systems to imitate the processes themselves.

Fig. 2: An image of the Textile Cone, a sea snail with a striking pattern on its shell. I used this picture to illustrate how a full process-based imitation of the shell’s pattern would need to include imitation of all the snail’s life processes, as well as of its living environment. (Image: “Conus textile 7” by Harry Rose. https://www.flickr.com/photos/macleaygrassman/9271210509. CC-BY: https://creativecommons.org/licenses/by/2.0/)

A conference participant commented that, for commercial companies, avoiding the imitation of all these processes is a deliberate strategy, because the imitation has to be cheaper to produce than the original, process-based artifact.

My third presentation at the Kolleg, “Life-Like Artificial Music: Understanding the Impact of AI on Musical Thinking”, took place on June 5, 2024 as a lecture in the c:o/re Lifelikeness lecture series. Here, I addressed the likelihood (or unlikelihood) that major shifts in musicological terminology will result from the academic use of AI. Starting with an overview of various competing paradigms of musical research, I drew attention to possible upcoming problems in justifying the validity of currently existing musicological terminology. The salient point here is that AI systems based on machine learning are capable of imitating historical musical styles without recourse to explicitly stated rules of music theory, while humans need such rules to learn to imitate those styles. Moreover, the ability of machine learning systems to learn the internal structures of music directly from audio (skipping the notation stage on which most human music theory operates) has the potential to call into question the validity and usefulness of music theory as currently taught.

Having stated these potential problems, I turned to a current example, a research paper [1] in which notions of Western music theory were compared to the internal representations learned by an AI system from music examples. Using this paper as a starting point for my argument, I asked whether it could be possible in principle to also use such an approach to come up with new, maybe better, musicological terminology. I pointed to the problems of interpreting the structures learned by machine learning systems and of the likely incompatibility of such structures (even if successfully decoded) with the human cognitive apparatus. To illustrate this, I referred to the use, by beginner players of the game of Go, of moves made by AI systems. Casual players are normally discouraged from copying the moves of professional human players because they cannot fully understand these moves’ underlying logic and thus cannot effectively integrate them into their strategy.
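
The probing approach mentioned above can be sketched schematically. The following is not the method of the cited paper but a generic, minimal illustration of the idea: the embeddings, the major/minor labels, and the injected signal are all synthetic, and the probe is a plain logistic regression.

```python
# A schematic sketch of "probing": testing whether a music-theoretic concept
# is linearly readable from a model's internal representations. All data here
# is synthetic; no real music model is involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hidden-state embeddings for 200 musical excerpts (dim. 64),
# constructed so that one direction weakly encodes a "mode" label.
labels = rng.integers(0, 2, size=200)          # 0 = minor, 1 = major
embeddings = rng.normal(size=(200, 64))
embeddings[:, 0] += 2.0 * labels               # inject a readable signal

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
# High accuracy suggests the concept is encoded in the representations;
# it does not show that the model "uses" the concept as a theorist would.
```

The interpretive gap discussed in the talk shows up here as well: even a perfectly accurate probe only locates a correlate of an existing human concept, leaving open whether genuinely new structures in the embeddings could ever be translated into usable terminology.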

In the following discussion, one participant drew attention to the fact that new technologies often lead to a change in what is seen as a valid research contribution, devaluing older types of research outcomes and creating new ones. Another participant argued that a constant process of terminological change takes place in disciplines at all times and independently of a possible influence of a new technology, such as machine learning.

Overall, my c:o/re fellowship offered, and continues to offer, an ideal opportunity to develop and discuss new ideas for my inquiry into the future uses and problems of AI and data in music research, which has resulted, in addition to the three presentations mentioned above, in talks given at the University of Bonn, Maastricht University, and a music and AI conference at the University of Hong Kong.


[1] N. Cosme-Clifford, J. Symons, K. Kapoor and C. W. White, “Musicological Interpretability in Generative Transformers,” 4th International Symposium on the Internet of Sounds, Pisa, Italy, 2023.

Installations and Art at LOGOI and PACT – PoM Recap #4

It has been more than a month since c:o/re hosted the PoM conference “Lifelikeness & beyond”. As this sizeable and, while still new, already renowned conference produced many lively discussions in a creative interrogation of the dialogue between the life sciences and technology studies, we want to share our retrospective reflections on it through a series of focused posts.

Alongside the main PoM program of keynotes, talks, lectures, and workshops, the conference was accompanied by art and installations displayed at the LOGOI Institute for Philosophy and Discourse in Aachen. Also part of the conference, the choreographic centre PACT Zollverein in Essen presented the program ‘life.like’, which consisted of six artistic positions in the form of performance, installation, discourse, and sound.

These contributions showed in various ways how philosophical, technical, and bioscientific topics can be artistically conceived and realized. They enabled critical dialogue and reflection on artistic methods and results among artists, scientists from different disciplines, and the public.

If you would like to learn more about any of the contributions, take a look at the PoM program and life.like.


LOGOI


‘life.like’ at PACT Zollverein


Unless otherwise indicated, photos by Jana Hambitzer