
On the promises of AI and listening data for music research


NIKITA BRAGUINSKI

As a c:o/re fellow, I had the uniquely advantageous opportunity to develop and test, in an environment dedicated to the study of science, my ideas about how AI and data can influence music research. Members of the Kolleg and its fellows, many of whom are philosophers of science, formed a rich intellectual circle that inspired me to look at the datafication and technologization of future music research from many new angles. With its intensive and diverse program of talks, lectures, and conferences, the Kolleg also offered ideal opportunities for testing approaches in front of an attentive, thoughtful, critical, and friendly audience. Below, I present brief overviews of the main ideas that I discussed during three talks I gave at the Kolleg.


Nikita Braguinski

Nikita Braguinski studies the implications of technology for musicology and music. In his current work, he aims to discuss challenges posed to human musical theory by recent advances in machine learning.

My first presentation, entitled “The Shifting Boundaries of Music-Related Research: Listening Logs, Non-Human-Readable Data, and AI”, took place on January 16, 2024 during an internal meeting of Kolleg fellows and members. I focused on the promises and problems of using data about music streaming behavior for musical research. Starting from a discussion of how changing technologies of sound reproduction enabled differing degrees of observation of listener behavior, I addressed the current split between academic and industrial music research, the availability of data, the problems of industry-provided metrics such as “danceability”, and the special opportunities offered by existing and future multimodal machine learning (such as systems that use the same internal encoding for both music and text). I also offered examples of descriptive statistics and visualizations made possible by the availability of data on listener behavior. These visualizations of large listening datasets, which I was able to create thanks to my access to the RWTH high-performance computing cluster, included an illustration of how users of online streaming services tend to listen to new recordings on the day of their release, and an analysis of the likelihood of different age groups to listen to popular music from different decades (with users in the age group 60-69 showing almost the opposite musical preferences to those in the age group 10-19).

Fig. 1: Users of online streaming services often listen to new recordings on the day of their release
(Own diagram. Vertical axis: number of plays. Dataset: LFM-2b, German audience)
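The kind of analysis behind Fig. 1 can be illustrated with a minimal sketch: grouping plays by the number of days elapsed between a recording’s release and the listening event. The record format, field names, and values below are invented for illustration and are not the actual LFM-2b schema.

```python
from collections import Counter
from datetime import date

# Toy listening events: (play_date, release_date).
# Invented for illustration; real listening logs are far richer.
plays = [
    (date(2020, 3, 6), date(2020, 3, 6)),   # played on release day
    (date(2020, 3, 6), date(2020, 3, 6)),   # another release-day play
    (date(2020, 3, 7), date(2020, 3, 6)),   # one day after release
    (date(2020, 3, 20), date(2020, 3, 6)),  # two weeks after release
]

def plays_by_days_since_release(events):
    """Histogram: days elapsed since release -> number of plays."""
    return Counter((played - released).days for played, released in events)

hist = plays_by_days_since_release(plays)
print(hist[0])  # number of plays on the day of release
```

Applied to a full listening log and plotted over the first weeks after release, such a histogram would yield a diagram of the shape shown in Fig. 1.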

Discussing my talk, c:o/re colleagues drew parallels to other academic disciplines, such as digital sociology and research on pharmaceutical companies. The addictiveness of online media, which I touched upon, was discussed in comparison to data-gathering practices in gambling, including the ethics of using such data for research. The political significance of music listening and its connection to emotions was also discussed in relation to the danger of biases in music recommender systems.

My second presentation, entitled “Imitations of Human Musical Creativity: Process or Product?”, took place during the conference “Politics of the Machines 2024. Lifelikeness and Beyond”, which c:o/re hosted. I focused on the question of what AI-based imitations of music actually model – the final product (such as the notation or the audio recording) or the processes that lead to the creation of this product.

In this presentation, I discussed:

1) The distinction between the process and the product of artistic creation, which, while especially important for discussions of the output of generative AI, currently receives little scholarly attention;

2) How several theories in the humanities (notably, formalism, psychoanalytic literary theory, and the line of AI skepticism connected to the so-called Chinese room argument) stress the importance of the process in artistic creation and cognition;

3) That current endeavors in generative AI, though impressive from the point of view of the product, do not attempt to imitate the processes of creation, dissemination, and reception of art, literature, or music, nor do they imitate historical, cultural, or economic environments in which these processes take place;

4) Finally, that because the data on which generative AI systems operate carries traces of past processes, the product of these systems nonetheless remains connected to those processes, even if the creators of these systems make no conscious effort to imitate the processes themselves.

Fig. 2: An image of the Textile Cone, a sea snail with a striking pattern on its shell. I used this picture to illustrate how a full process-based imitation of the shell’s pattern would need to include imitation of all the snail’s life processes, as well as of its living environment. (Image: “Conus textile 7” by Harry Rose. https://www.flickr.com/photos/macleaygrassman/9271210509. CC-BY: https://creativecommons.org/licenses/by/2.0/)

A conference participant commented that, for commercial companies, avoiding the imitation of all these processes is a deliberate strategy, because the imitation has to be cheaper to produce than the original, process-based artifact.

My third presentation at the Kolleg, “Life-Like Artificial Music: Understanding the Impact of AI on Musical Thinking”, took place on June 5, 2024 as a lecture in the c:o/re Lifelikeness lecture series. Here, I addressed the likelihood (or unlikelihood) that major shifts in musicological terminology will result from the academic use of AI. Starting with an overview of various competing paradigms of musical research, I drew attention to possible upcoming problems of justifying the validity of currently existing musicological terminology. The salient point here is that AI systems based on machine learning are capable of imitating historical musical styles without recourse to explicitly stated rules of music theory, while humans need the rules to learn to imitate those styles. Moreover, the ability of machine learning systems to learn the internal structures of music directly from audio (skipping the notation stage on which most human music theory operates) has the potential to call into question the validity and usefulness of music theory as currently taught.

Having stated these potential problems, I turned to a current example: a research paper [1] in which notions of Western music theory were compared to the internal representations that an AI system had learned from musical examples. Using this paper as a starting point, I asked whether such an approach could, in principle, also be used to arrive at new, perhaps better, musicological terminology. I pointed to the problems of interpreting the structures learned by machine learning systems and to the likely incompatibility of such structures (even if successfully decoded) with the human cognitive apparatus. To illustrate this, I referred to how beginner players of the game of Go use moves made by AI systems. Casual players are normally discouraged from copying the moves of professional players because they cannot fully understand the moves’ underlying logic and thus cannot effectively integrate them into their own strategy.

In the following discussion, one participant drew attention to the fact that new technologies often lead to a change in what is seen as a valid research contribution, devaluing older types of research outcomes and creating new ones. Another participant argued that a constant process of terminological change takes place in disciplines at all times and independently of a possible influence of a new technology, such as machine learning.

Overall, my c:o/re fellowship offered, and continues to offer, an ideal opportunity to develop and discuss new ideas for my inquiry into the future uses and problems of AI and data in music research. In addition to the three presentations mentioned above, this work has resulted in talks given at the University of Bonn, Maastricht University, and a music and AI conference at the University of Hong Kong.


[1] N. Cosme-Clifford, J. Symons, K. Kapoor, and C. W. White, “Musicological Interpretability in Generative Transformers,” in Proceedings of the 4th International Symposium on the Internet of Sounds, Pisa, Italy, 2023.

(Re)Discover “Objects of Research”

Now in the third year of its Fellowship Program, c:o/re is accumulating a remarkable variety of perspectives revolving around its main focus: research on research.

Questions tackled in this lively research environment are highly interesting and exciting and, as such, complex. The meeting of distinct research cultures may stir curiosity, but it may also leave one wondering what the other is even talking about. What are they studying?

To offer an insightful glimpse into the lively dialogues here, bridging and reflecting on diverse academic cultures, we have started the blog series “Objects of Research”.
We asked current and former c:o/re fellows and academic staff to show us an object that is most relevant to their research in order to understand how they think about their work.

In 12 contributions, we witnessed the personal connections researchers have to the objects that shape their work. We now invite you to revisit the individual contributions and explore the world of research once again.


“For the past two decades, I have had a leading role in developing the neuronal network simulator NEST. This high-quality research software can improve research culture by providing a foundation for reliable, reproducible and FAIRly sharable research in computational neuroscience. Together with colleagues, I work hard to establish “nest::simulated()” as a mark of quality for research results in the field. Collaboration in the NEST community is essential to this effort, and many great ideas have come up while sharing a cup of coffee.“


“This is a notebook my Mom gave me. She had it as a kind of leftover from a shopping tour and she thought that it might be of use for my work. And of course, she was right. And as you know, research always starts with a good question that attracts attention.”


“I guess many academics would share some variant of this image: a careful arrangement of computer equipment, coffee, notepads, pens, and the other detritus that lives on (my) desk.

For me it’s important that the technical equipment is shown in conjunction with the paper notebook and pens. I’m fussy about all of these things – it’s distracting when my computer set-up isn’t what I’m used to, and I need to use very specific pens from a particular store – but ultimately my thinking lives in the interactions between them.

My colleagues and I are working on an autoethnographic study of knowledge production, and notice that (our) creative research work often emerges as we move notes and ideas from paper to computer (and back again).”


“I use mechanical pencils (like the one in the photo) to highlight, annotate, question, clarify, or reference things I read in books. This helps me digest the arguments, ideas, and discourses I deal with in my historical and sociological research. I also have software for annotating and organizing PDFs on my iPad as well as a proper notebook for excerpting and writing down ideas. However, I’ve found that the best way for me to connect my reading practices with my thoughts is through the corporeal employment of a pencil on the physical pages of a book.”


“As part of the work I do at KHK c:/ore, as well as extending beyond that, I collect empirical data. In my case, that data consists of records of interviews with scientists and others. Those records can be notes, but they can also be integral recordings of the conversations.

Relying on technology for the production of data is what scientists do on a daily basis. With that comes a healthy level of paranoia around that technology. Calibrating measurement instruments, measurement triangulation, and comparisons to earlier and future records all help us to alleviate that paranoia. I am not immune and my coping mechanism has been, for many years, to take a spare recording device with me.

This is that spare, my backup, and thereby the materialisation of how to deal with moderate levels of technological paranoia. It is not actually a formal voice recorder, but an old digital music player I have had for 15 years, the Creative Zen Vision M. It has an excellent microphone, abundant storage capacity (30 gigabytes) and, quite importantly, no remote access options. That last part is quite important to me, because it ensures that the recording cannot enter the ‘cloud’ and be accessed by anyone but me. Technologically, it is outdated. It no longer serves its original purpose: I never listen to music on it. Instead, it has donned a new mantle as a research tool.”


“When asked about the fundamental object for my research practice, I immediately thought of my computer, which seemed the obvious answer given that I read, study, and write on it most of the time.

Upon further reflection, however, I realized that on my computer, I just manage the initial and final phases of my research, namely gathering information and studying on the one hand, and writing papers on the other.

Yet, between these two phases, there is a crucial intermediate step that truly embodies the essence of research for me: the reworking, systematization, organization, and re-elaboration of what I have read and studied, as well as the formulation of new ideas and hypotheses. These processes never occur on the computer but always on paper.

Therefore, the essential objects for my research are notebooks, sticky notes, notepads, pens, and pencils.”


“As I research Hegel’s logic and how he understands life as a logical category necessary to make nature intelligible, I work closely with his texts. On the other hand, the stickers on my laptop remind me of the need to look at reality and regularly question the relevance of my research for understanding current social phenomena. In this sense, I think I remain a Hegelian, because for Hegel one can only fully understand an object of research by looking at both its logical concept and how it appears in reality. However, I think that in order to look at current political and social phenomena, we need to go beyond Hegel’s racist and sexist ideas, which run through his views on social organization. And none of this would be possible without a good cup of coffee and/or a Club-Mate!”


“The 3D replica of my teeth that stands on my desk reminds me of two important things. First, a model is what we make of it. The epistemic value of modelling lies in interpretation, which depends on but is not defined by representation. I make something very different of (a replica of) teeth than a dentist and an archaeologist do.

Secondly, and not any less important, this replica reminds me to smile, and I hope that it might inspire colleagues to smile, too, when they see it on my desk.
To tell a smile from a veil, as Pink Floyd ask us to, we need to know that a smile is infinitely more important than scientific modelling. If scientific modelling does not lead to smiling, it is of no value. A smile is a good metonymy to be reminded by.”


“There is a joke about which faculty is cheaper for the university. Mathematics is very cheap because all they need is just pencils and erasers. But philosophy is even cheaper because they don’t even need erasers.

My favorite and indispensable object is the rOtring 600 mechanical pencil. It shows that social science is closer to mathematics than to philosophy. Of course, social scientists often need more than pencil and eraser: they have to collect and process data from the real world. But this processing is greatly facilitated by the ability to write and erase your observations.

In my work, I deal with the transcripts of human-machine communication, and I use the rOtring 600, which has a built-in eraser, a lot. It’s useful not only because of the eraser, but also because it’s designed to stay on the table and not break, even in very demanding circumstances like a train journey. And it gives me the feeling that I am making something tangible with it, because it reminds me of engineers or designers producing blueprints for objects and machines.”


“A pen and a notebook are essential for my research. They help me think. It’s not at all about the words I write. I rarely read them again. Scribbling is just an act that helps me stack ideas on top of each other and do all the complicated thinking and connection-building.

I also turn to scribbling in my notebook when I am stuck in the writing process. There is often a time after the first rough draft of the paper when some ideas stop flowing smoothly or don’t fit very well with the main argument. I turn to the notebook and start writing the main ideas, deliberating how they support each other.

This is all especially interesting since a lot of my research is about extended cognition, which is the idea that we sometimes employ external resources such that part of our thinking happens outside our body (in these resources).”


“Spending a few weeks in Argentina, in front of my desk, a Post Office building. A nice futuristic architectural concept, degraded by its construction materials, support of a communication antenna, appropriated by pigeons as a dovecote: a hybrid object.”


“By saying that I study ‘artful intelligence’, which I mean only as a half joke, I take seriously the propositions to my career as a media scholar that…

1. As the first image suggests, human artfulness can be found all around, such as this snapshot of a wall on a side street not far from the Cultures of Research at the RWTH.

2. Sometimes architectural masterpieces that represent more than the sharp angles of twentieth-century modernism are all about us, such as this bus stop on the way to Cultures of Research in Aachen. Any study of science and technology has to ask, what does it mean? Sources do not speak for themselves.

3. Sometimes artificial intelligence is best found in letting people be people, such as a doodle here in a sketchbook. Straight lines do not always precipitate straightness.

4. I study how science, technology, and artificial intelligence have been understood in different times and places, such as this remote-controlled robot that failed in the immediate aftermath of the Chernobyl explosion in 1986 in Soviet Ukraine, which helps unstiffen, enliven, and sober our imagination of what may already be the case today and could be the case tomorrow.”


Thank you for joining us on this journey. We look forward to sharing more insights and stories with you!

“Humans haven’t necessarily made the best choices for our world.” – Interview with Peter Mantello on Emotionalized Artificial Intelligence

In February 2024, a collaboration with colleagues from Ritsumeikan Asia Pacific University on the topic of Emotionalized Artificial Intelligence (EAI) began. Professor Peter Mantello (Ritsumeikan Asia Pacific University) leads a three-year project funded by the Japan Society for the Promotion of Science, on which c:o/re is a partner, that will compare attitudes in Japan and in Germany toward EAI in the workplace. This explorative pathway contributes to the c:o/re outlook on Varieties of Science. You can find the whole project description on our website.

In the interview below, Peter Mantello explains what EAI is, how the project will consider AI ethics and why the comparison of German and Japanese workplaces is particularly insightful. We thank him for this interview and look forward to working together.


Peter Mantello

c:o/re short-term
Senior Fellow (11-17/2/2024)

Peter Mantello is an artist, filmmaker and Professor of Media Studies at Ritsumeikan Asia Pacific University in Japan. Since 2010, he has been a principal investigator on various research projects examining the intersection between emerging media technologies, social media artifacts, artificially intelligent agents, hyperconsumerism and conflict.

What is Emotionalized Artificial Intelligence (EAI)? What does this formulation entail differently than ‘Emotional’ AI?

Emotional AI is the commercial moniker of a sub-branch in computer science known as affective computing. The technology is designed to read, monitor, and evaluate a person’s subjective state. It does this by measuring heart rate, respiration rate, skin perspiration levels, blood pressure, eye movement, facial micro-expressions, gait, and word choice. It involves a range of hardware and software. This includes cameras, biometric sensors and actuators, big data, large language models, natural language processing, voice tone analytics, machine learning, and neural networks. Emotionalized AI appears in two distinct forms: embodied (care/nursing robots, smart toys) and disembodied (chatbots, smartphone apps, wearables, and algorithmically coded spaces). 
I think the term ’emotionalized’ AI better encompasses the ability of AI not just to read and recognize human emotion but also to simulate it and respond in an empathic manner. Examples of this can be found in therapy robots, chatbots, smart toys, and holograms. EAI allows these forms of AI to communicate in a human-like manner.

What is emotionalized AI used for and for what is it further developed?

Currently, emotionalized AI can be found in automobiles, smart toys, healthcare (therapy robots/ doctor-patients conversational AI) automated management systems in the workplace, advertising billboards, kiosks and menus, home assistants, social media platforms, security systems, wellness apps and videogames. 

What forms of ethical work practices and governance do you have in mind? Are there concrete examples?

There are a range of moral and ethical issues that encompass AI. Many of these are similar to those raised by conventional uses of AI, such as concerns about data collection, data management, data ownership, algorithmic bias, privacy, agency, and autonomy. But what is specific about emotionalized AI is that the technology pierces through the corporeal exterior of a person into the private and intimate recesses of their subjective state. Moreover, because the technology targets non-conscious data extracted from a person’s body, they may not be aware of, or consent to, the monitoring.

Where do you see the importance of cultural diversity in AI ethics?

Well, it raises important issues confronting the technology’s legitimacy. First, the emotionalized AI industry is predominantly based in the West, yet its products are exported to many world regions. Not only are the data sets used to train the algorithms limited primarily to Westerners, but they also rely largely on the American psychologist Paul Ekman’s ‘universality of emotions’ theory, which suggests that there are six basic emotions and that they are expressed in the same manner across all cultures. This is untrue. Thanks to a growing number of critics who have challenged the reliability and credibility of face analytics, Ekman’s theory has been discredited. However, this has not stopped many companies from designing their technologies on Ekman’s debunked templates. Second, empathetic surveillance in certain institutional settings (school, office, factory) could lead to emotional policing, where being ‘normal’ or ‘productive’ will require people to be always ‘authentic’, ‘positive’, and ‘happy’. I’m thinking of possible dystopian Black Mirror scenarios, like the episode “Nosedive”.
Third, exactly what kind of values do we want AI to have – Confucian, Buddhist, Western Liberal? 

Do you expect to find significant differences between the Japanese and German workplace?

Well, it’s important to understand the multiple definitions of the workplace. Workplaces include commercial vehicles, ridesharing, remote workspaces, hospitals, restaurants, and public spaces, not just brick-and-mortar white-collar offices. 
Japan and Germany share common work-culture features, but each society also has historically different attitudes toward human resource management, what constitutes a ‘good’ worker, loyalty, corporate responsibility to workers, worker rights and unions, and precarity. The two cultures also differ in how they express their emotions, raising questions about the imposition of US and European emotion analytics in the Japanese context.

Peter Mantello presenting the project “Emotional AI in the Japanese and German Workplace: Exploring Cultural Diversity in AI Ethics” during a talk at c:o/re.

How will the research proceed?

The first stage of the research will be to map the ecology of emotion analytics companies in the West and East. This includes visits to trade show exhibits, technology fairs, start-up meetings, etc. The second stage will be interviews. The third stage will include a series of design fiction workshops targeted at key stakeholders. Throughout all of these stages, we will be holding workshops in Germany and Tokyo, inviting an interdisciplinary mix of scholars, practitioners, civil liberties advocates, and industry people.

What do you think will be the most important impact of this project?

We are at a critical juncture in defining and deciding how we want to live with artificial intelligence. Certainly, everyone talks about human-centric AI, but I don’t know what that means, or if that’s the best way forward. Humans haven’t necessarily made the best choices for our world. If we try to make AI in our own image, it might not turn out right. What I hope this project brings are philosophical insights that will better inform the values we need to encode into AI, so that it serves the best interests of everyone, especially those who will be most vulnerable to its influence.

What inspired you to collaborate with c:o/re?

My inspiration to collaborate with c:o/re stems from my growing interest in phenomenological aspects of human-machine relations. For the past three years, my research has focused primarily on empirical studies of AI. The insights gained from this were very satisfying, albeit they also opened the door to larger, more complex questions that could only be examined from a more theoretical and philosophical perspective. After a chance meeting with Alin Olteanu at a semiotics conference, I was invited to attend a c:o/re workshop on software in 2023. I realized then that the KHK’s interdisciplinary and international environment would be a perfect place for an international collaborative research project.

How a well-crafted brain model has influenced research

HANS EKKEHARD PLESSER

On the tenth anniversary of the publication of a model of a cortical microcircuit by Potjans and Diesmann (2014), 14 experts in computational neuroscience and neuromorphic computing will meet at the KHK c:o/re, at RWTH Aachen, to discuss their experiences in working with this model.

This event is a unique opportunity to gain insights into the effect that the Potjans-Diesmann model has had on computational neuroscience as a discipline. In light of the model’s success, the participants will reflect on why active model sharing and re-use is still not common practice in computational neuroscience.


Hans Ekkehard Plesser

Hans Ekkehard Plesser is an Associate Professor at the Norwegian University of Life Sciences.
His work focuses on simulation technology for large-scale neuronal network simulations and reproducibility of research in computational neuroscience.

Computational neuroscience, the field dedicated to understanding brain function through modelling, is dominated by small models of parts of the brain designed to explain the results of a small set of experiments, for example animal behavior in a particular task. Such ad hoc models often set aside much of what is known about the details of connection structures in brain circuits, which limits their explanatory power. Furthermore, these models are often implemented in general-purpose programming languages such as C++, Matlab, or Python and shared as collections of source code files. This makes it difficult for other scientists to re-use these models, because they need to inspect low-level code to verify what the code actually does. One might thus say that these models are formally, but not practically, FAIR (findable, accessible, interoperable, reusable).

The model of the cortical microcircuit published by Potjans and Diesmann (2014) pioneered a new approach, quite different from previous practice. For good reasons, this new approach made the model remarkably popular in computational neuroscience. Based on a well-documented analysis of existing anatomical and physiological data, Potjans and Diesmann provided a bottom-up crafted model of the neuronal network found under one square millimeter of cortical surface. Their paper describes the literature, data analysis, and data modeling on which the model is based, and provides a precise definition of the model.

Schematic illustration of the microcircuit model of early sensory cortex in van Albada et al. (2018). Copyright: Creative Commons

In addition to their theoretical definition of the model, they created an implementation executable with the domain-specific high-level simulation tool NEST. They complemented this implementation with thorough documentation on how to work with the model. They even created an additional implementation of the model in the PyNN language, so that the model can be run automatically on a wide range of neuronal simulation tools, including neuromorphic hardware systems.

These efforts have led to a wide uptake of the model in the scientific community. Hundreds of scientific publications have cited the Potjans-Diesmann model. Several groups in computational neuroscience have integrated it in their own modelling efforts. The model has also had a key role in driving innovation in neuromorphic and GPU-based simulators by providing a scientifically relevant standard benchmark for correctness and performance of simulators.

References

Potjans, T. C., Diesmann, M. 2014. The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cerebral Cortex 24(3): 785-806.

van Albada S. J., Rowley A. G., Senk, J., Hopkins, M., Schmidt, M., Stokes, A.B., Lester, D.R., Diesmann, M., and Furber S.B. 2018. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Frontiers in Neuroscience. 12:291.


Workshop “Art, Science, the Public”

On 16 and 17 February 2024, the workshop “Art, Science, the Public” took place at the KHK c:o/re in cooperation with the project “Computer Signals: Art and Biology in the Age of Digital Experimentation“, a research collaboration between artists, biologists, and humanities scholars in which c:o/re director Gabriele Gramelsberger has long been involved.

Together with representatives and colleagues from the research group “Computer Signals”, PACT Zollverein and RWTH Knowledge Hub, different formats and practices of science communication, in particular those that experiment with artistic forms, were discussed. The aim of the workshop was to exchange ideas and best practice examples on the interface between science and art and the associated communication challenges.

Prof. Hannes Rickli provides insights into the “Computer Signals” research project.
Prof. Gabriele Gramelsberger and c:o/re research associate Ana María Guzmán talking about science communication at KHK c:o/re

A special highlight was the sound work by Valentina Vuksic, a transdisciplinary associate of the project “Computer Signals”. During the workshop, Valentina set up an installation in which the archive of sounds produced by the research project could be explored.

The sound archive collected by the project “Computer Signals” can be visited here: https://archiv.computersignale.zhdk.ch.

In the evening, the workshop concluded with a live performance by Valentina, in which she presented artistic formats that began as straightforward audifications of computational processes, with little aesthetic consideration at first, and yet took on a double life as musical works outside their original context.

The electromagnetic, electric and mechanical recordings originate from the research infrastructure of Hans Hofmann’s biological laboratory at UT Austin and from the underwater observatory RemOS in Kongsfjorden, Spitsbergen, operated by Philipp Fischer (Alfred Wegener Institute for Polar and Marine Research). The audio material stays unprocessed; it is merely re-arranged and layered. The sonic works set out from digital data generation as part of scientific procedures and take a specific course outlined by a series of sonic extracts.

Valentina Vuksic during her live performance
A special place for a special event: the cellar of c:o/re

Here you can listen to excerpts from Valentina’s work that she presented that evening

Photos and videos by Jana Hambitzer

Header picture: RemOs1, Archiv Stereometrie (15. 9. 2012 – 16. 6. 2020), 2022. Detail view of the photo installation at the exhibition “Daten lauschen” in the Deutsches Schifffahrtsmuseum, Bremerhaven, 2022. Photo print on polycarbonate panels, 135,168 image pairs, 2.32 × 1.59 × 60 m. Photograph: Marc Latzel.

Recap: Lecture Series “Lifelikeness”

As the 2023/2024 winter term comes to an end, we look back on this semester’s lecture series on “Lifelikeness”. Having started with c:o/re director Gabriele Gramelsberger’s lecture on October 25th, 2023, the series concluded recently with a lecture by Michael Friedman on February 7th, 2024.

Over the course of seven captivating lectures, we delved deep into debates on the concept of “Lifelikeness”, exploring its various dimensions, implications and interpretations. Throughout the series, we had the opportunity to hear from seven different speakers, learning what “Lifelikeness” means to them and sharing the joy of hosting them at KHK c:o/re in Aachen.

Let’s come together to summarize this intriguing lecture series.

Life from scratch – Gabriele Gramelsberger
Prof. Dr. Gabriele Gramelsberger

Opening the “Lifelikeness” lecture series on October 25th, c:o/re director Prof. Dr. Gabriele Gramelsberger gave a talk entitled “Life from scratch”. We got to explore the fascinating world of synthetic biology and the quest to create life from scratch.

This lecture provided an insightful introduction to the history of re-genesis, followed by an overview of current practices of programming life in synthetic biology. It concluded with some reflections on the proliferating “domain of synthetica.”

For a more detailed overview of the talk, you can visit our blog post on the first talk of the “Lifelikeness” series.

Robot, a Laboratory “Animal”: Producing Knowledge through and about Human-Robot Interaction – Andrei Korbut
Dr. Andrei Korbut

For the second lecture, we were delighted to listen to c:o/re Junior Fellow Dr. Andrei Korbut, who explored the use of robots (primarily humanoid) in robotics laboratories to produce knowledge about human–robot interaction (HRI). 

The lecture introduced a conceptual framework for studying robots as contemporary laboratory “animals”, based on the notion of different types of lifelikeness that can be ascribed to humanoid robots. It convincingly argued that robots, unlike other types of laboratory “living instruments”, allow for a much closer connection between tools and objects in knowledge production because they hinder their being perceived as “natural objects”.

For a better insight, have a look at our blog post on the second talk of the “Lifelikeness” series here.

When asked to reflect on what the word “Lifelikeness” evoked for him, Andrei Korbut offered the following answer:

“When I hear the word ‘lifelikeness’, I think of imitation. For me, lifelikeness is something that is produced, but also something that is imputed. People are very good at seeing life in the inanimate world around them, but they are also very interested in creating the illusion of life. This makes lifelikeness not only a perceptual but also a cultural phenomenon. Hence, there are many forms of lifelikeness, serving many purposes, from entertainment to curing disease.”

Neuromorphic Computing: Inspiration from the Brain for Future AI Technologies – Emre Neftci
Prof. Dr. Emre Neftci

Is it possible to emulate the brain’s efficiency and robustness? Will such brain-inspired solutions enhance state-of-the-art AI algorithms or will they lead to yet different problematizations? In our third lecture, Prof. Dr. Emre Neftci shed light on these questions from the perspective of brain-inspired “neuromorphic computing”, explaining how current AI was shaped by neuroscience, what stands in the way of emulating the brain, and the potential benefits of taking a deep dive into how life shaped computation.

In response to our question about his initial impressions of the word “Lifelikeness,” he answered:

“In the neuromorphic computing research area, ‘lifelikeness’ is a central and hotly debated question. How much of the brain does one need to emulate to build intelligent machines? Do we need to closely mimic biology, or can we get away with mimicking the brain’s (so far unknown) computing principles? Foundational models such as chatGPT seem to indicate the latter, but at a huge cost in energy and hardware. However, if we hope for such powerful models to run on our daily devices, being closer to biology may be necessary.”

Art’s Mediation as Remediation: On Some Artworks and their reuses of Toxic Materials – Esther Leslie
Prof. Esther Leslie

In the fourth lecture, Prof. Esther Leslie drew on the various ways in which Walter Benjamin and T. W. Adorno addressed both the assault on nature in the name of progress and the possibility – or significance – of art in and after catastrophe, examining a number of contemporary art practices as a working through of art as a form of mediation.

Leslie looked at this mediation from multiple perspectives – between nature and culture, between world and self, between politics and aesthetics – and at its connections to toxic materials, in order to reflect on the transmutational capacities of art practice.

The talk was embedded in a workshop on “Toxic Material(itie)s: Eco-Material Entanglements in Art” that took place at KHK c:o/re, organized by, among others, Alumni Fellow Dr. Kyveli Mavrokordopoulou.

Towards an Ecology of Technoscience – Massimiliano Simons
Dr. Massimiliano Simons

The goal of our fifth lecture, by Dr. Massimiliano Simons on January 10th, was the development of a general framework for how and why technoscience can be characterized by a fascination with self-organization and loss of control.

Simons introduced a case on newly emerging technologies that involve a loss of control in scientific research: scientists do not have full control over the outcome but grant the system under study a level of autonomy. Machine learning in data science was highlighted as an obvious case, where a problem – often in the form of discriminating between types of data – is solved not by rational design, but by letting a self-learning algorithm find patterns.

In his talk, Simons focused on the life sciences, and particularly the method of directed evolution in synthetic biology, which is said to follow similar lines: solving a set of problems – how to design specific molecules or enzymes – not by rational design, but by creating a context in which natural selection solves the problem.

Reflecting on his initial associations with the term “Lifelikeness,” he stated:

“For me, lifelikeness refers to systems that show behavior to uphold a certain norm, making a distinction between desirable and undesirable states. What would make it really alive is the subsequent capacity to autonomously create its own and novel norms.”

Flowers for Agouti: Epigenetics and the Genealogy of Uplift – Ben Woodard
Dr. Ben Woodard

In the sixth lecture, Dr. Ben Woodard joined us with a talk examining how recent discussions of epigenetics complicate the too-hasty equation of cognition and agency, both within humanity and across species, through which the concept of uplift is championed as an anti-Darwinian politics of Eurocentric teleology.

According to Woodard, the notion of uplift, particularly as proliferated under the banner of transhumanism, carries racist undertones. Like many concepts in science and technology studies, uplift carries views stemming from science fiction and social justice. It refers both to the raising up of one species by another and to a historical (and often racially codified) way of speaking about how one group can be raised above others within limiting structural conditions. He underlined that while these notions seem disparate, they in fact have a shared history that hybridizes fictional and non-fictional aspirations for future humanity as well as for the origins of civilization.

He answers the question of what he associates with “Lifelikeness” as follows:

“Lifelikeness to me suggests behavior that appears purposive – something seems alive when it is actively seeking out new environments as well as changing its current environment to suit its needs.”

Bio-inspired Materials and Dreams of Inspiration – Michael Friedman
Dr. Michael Friedman

Dr. Michael Friedman took the stage for the seventh and last lecture of the “Lifelikeness” series to discuss whether the dream of taking inspiration from nature for manufactured active materials is not a revision of a much older view, or rather metaphor: to read, and finally write, the ‘book’ of nature, as the scientific analysis of organic materials may lead to the fabrication of these synthetic, bio-inspired active materials.

The model for these newly developed ‘active’ and ‘bio-inspired’ materials, considered as entities that are able to ‘sense’ and respond to their environment, often consists of organic materials, such as the grown wood of trees or bone formation in living organisms. In this sense, the scientists are ‘inspired’ by nature, Michael Friedman stressed.

When invited to share his spontaneous associations with the concept of “Lifelikeness”, he offered his perspective in the following way:

“If one thinks on the recent advancements in materials sciences, then this term certainly underlines human’s dream to create the impossible.”

We are looking forward to Michael Friedman starting his Senior Fellowship at KHK c:o/re in April 2024.


Stay informed about all upcoming c:o/re events and projects, including our next lecture series for Summer Term 2024, by subscribing to our newsletter!

Photos by Jana Hambitzer

Unfelt Threshold: Art Installation at c:o/re

On 30 January 2024, the art installation “Unfelt Threshold” by the Japanese artist Aoi Suwa was opened at the Käte Hamburger Kolleg: Cultures of Research (c:o/re).

Composition of the art installation “Unfelt Threshold”; photo by Aoi Suwa

During the live performance of the installation and the following discussion with c:o/re Senior Fellow Masahiko Hara on “Fluctonomous Emergence”, the audience was able to experience how machines react to the unpredictable, unknown behaviour of materials, e.g. changing incidence of light. This is where Masahiko Hara’s research comes in, focusing on the integration of art strategies into science and technology based on the emergent functions of autonomous systems that exhibit fluctuant behaviour.

Masahiko Hara talks about “Fluctonomous Emergence”;
photo by Jana Hambitzer

In her project, Aoi Suwa is indirectly linking together various pieces and exhibits that she has produced over the years. The idea behind it emerged out of the concept of “shiki-iki” (識閾, lit. “threshold of consciousness”), which also informed Shiki-iki (Border, 2011), the first installation that Suwa formally presented to begin her career. Composed of a tank of water partitioned with a piece of clear plastic, the work was activated when the viewer poured ink into the water, producing a dynamic effusion of colour that gradually revealed the presence of the initially imperceptible boundary.

Aoi Suwa speaks during the opening of her art installation;
photo by Jana Hambitzer

Suwa has since continued to employ such experimental techniques to create works focused on phenomena that can only be witnessed in situ, developing what could be described as an approach aimed at perceiving thresholds that emerge through the process of traversing back and forth between the realms of the perceivable/imperceivable and conscious/unconscious.

Suwa’s approach can be likened to the way one might attempt to gradually visualize the orbit of a celestial body as it makes its periodic passes or a drawing as it takes shape through the iterative actions of marking and erasing. Through this project, she aims to explore its potential as a way of giving expression to the complexity of our current age, which eludes description in terms of simplistic binaries.

c:o/re fellows and staff exploring the installation; photo by Ana María Guzmán

The installation can be viewed until 22 February 2024 by prior registration with events@khk.rwth-aachen.de.


Interview with Aoi Suwa

Could you please introduce yourself?

My name is Aoi Suwa. I come from Tokyo, Japan. I’m an artist, and I’m also a PhD student in fine arts at Tokyo University of the Arts. I mainly focus on creating installation works like this one, and I am also very interested in the relationship between Art and Science.

What is your installation about?

I named this work Patched Phase. It means that when we see something, it is not just one thing. It’s very complicated. We have a consciousness, but for me, consciousness also has a more complicated structure. I wanted to express such a structure. It is invisible, but when I see it, I imagine the subject’s chaotic dynamics shaping it.

Where do you take your inspiration from?

My starting point was when I was an elementary school student and met a science teacher. She was a very nice person, and she showed me very interesting chemical reactions, with the liquid’s colour changing again and again. I was very surprised, and I got some inspiration from those chemical reactions.

I often get so much inspiration from chemical phenomena and natural phenomena in the sciences. I really like science, but, I mean, I want to see science. It’s very difficult to explain, but this is the important thing: of course, I want to understand science, but not only that – I want to feel science. That’s why I often get new inspiration from natural phenomena.

photo by Ana María Guzmán
photo by Aoi Suwa

Marketplace Engineers at Work: How Dynamic Airline Ticket Pricing Came into Being

GUILLAUME YON

If you have recently been online looking up flights, you may have noticed that airfare prices are always in flux. What online shoppers usually do not know is that these dynamic price changes are enabled by large and intricate technological systems powered by cutting-edge science and technology.

These systems were first deployed by airlines in the United States in the 1980s, and to this day they remain an object of intense scientific and technological research and development. When deploying such systems in airlines at scale, engineers and scientists blend statistics and probability, mathematical optimization, computer science, and economics to implement sophisticated business strategies.

Profile Image

Dr. Guillaume Yon

Guillaume Yon is a historian of economics, who researches and teaches how the ideas that shaped our economic thinking emerged. He is particularly interested in the economic knowledge produced by engineers working in industry.

In the talk I delivered at the Käte Hamburger Kolleg ‘Cultures of Research’ on December 13th, I focused on what is often considered as the first of these systems: DINAMO. DINAMO stands for dynamic inventory and maintenance optimizer. It was developed at American Airlines and was fully operational in 1988. Similar systems were implemented at other major airlines in the United States around the same time, and these systems came to be known to specialists as ‘revenue management’ systems.

What was the problem that American Airlines’ engineers had to solve? As the airline industry was being deregulated in the U.S. (a process completed in 1978), American Airlines’ marketing department came up with a new strategy, which had two connected components.

The idea was to offer multiple price points for the same seats in the same class of service on the same flight. At the time, aircraft had two classes of service, first and coach. Coach was the second class and the main cabin – closer to second class on European trains these days than to today’s economy cabins. In coach, American Airlines’ flights were regularly departing half-empty – hence the idea of stimulating new demand from people travelling for leisure, in order to fill these empty seats and avoid the associated loss in revenue. Before deregulation, air travel was a luxury product, and American Airlines was not alone in thinking that there was an untapped and potentially huge new market out there: middle-class families going on vacation, college students coming back home, young couples going away for the weekend, senior citizens visiting their children and grandchildren. However, these new leisure travelers were price sensitive – hence the need for a price discount to attract them and fill the empty seats.

The second component was to prevent American Airlines’ existing business customers, who travelled in coach too, from buying at the discounted price. Business customers were less price sensitive than the new leisure travelers, as they traveled on company money, and were willing to pay more for the same seat in coach. If business customers could buy the discounted fare, the new strategy would only result in a new source of revenue loss, this time on the business travelers’ side. American Airlines’ marketing department therefore came up with the idea of tying the discounted prices to restrictions. For instance, American Airlines’ Ultimate Super Saver, a fare launched in 1985, was cheaper than the full fare for the same seats in coach. However, it was available only up to 30 days before departure (the so-called ‘advance purchase requirement’), had a steep cancellation fee, and was available only to those buying a round-trip ticket with a Saturday night stay. Business travelers could not abide by those restrictions: they tended to book later and wanted to spend the weekends with their families. Therefore, even when discounted fares were available for a flight, business travelers would carry on buying seats on the same flight at a higher price.

The outcome of American Airlines’ new pricing strategy was that for a given flight – from A to B, with a given departure date in the future – the seats in coach were offered at different prices with different restrictions (the lower the price, the more stringent the restrictions). These different ‘fare classes’ were available for sale at the same time. This new marketing strategy was a tremendous success for American Airlines, and it played an important role in turning air travel into mass transportation.

TV Ad for American Airlines’ Ultimate Super Saver Fare in 1985. Note the mention of ‘round trip purchase’ and that ‘restrictions may apply’.

This tremendous success from a revenue perspective turned into a nightmare from a business process perspective. At American Airlines, hundreds of new revenue management analysts were hired, and they were struggling. Each revenue management analyst had a set of flights to manage. They needed to decide, for each flight, how many seats should be made available for sale in each fare class in order to obtain, at departure, the mix of passengers that maximizes revenue. That decision first needed to be made a year before departure, when the flight opened for booking. In the mid-1980s, there were at least three different fare classes in coach (the full fare, the Ultimate Super Saver, and a Super Saver in between), in addition to the first class, on each flight. Worse, American Airlines had re-organized its network after deregulation as a hub-and-spoke, in order to efficiently serve more destinations domestically and internationally. Each path in the network with a connection at the hub also had at least three fare products in coach. For the local traffic, if the analyst allocated too many seats to the lowest fares, it could displace high-paying business travelers. But allocating too few seats to the lowest fares could mean departing with empty seats, if high-paying demand did not materialize late in the booking process. Simultaneously, for the same flight, the analysts needed to decide what the revenue-maximizing mix of local and connecting traffic was. Was it best to protect one more seat for a high-paying business passenger on that flight, or to put one more discounted passenger on the same seat but with a connection to an expensive long-haul flight? It depended on the price each of those two passengers paid, the likelihood of each passenger showing up for booking, and how full each of the two flights was. Humans could not possibly make all these decisions efficiently at scale. Therefore, around 1982/1983, American Airlines management tasked its operations research department with automating the process.

The hub-and-spoke problem: if the analyst focuses only on maximizing revenue on the Los Angeles–Dallas flight, and that flight is expected to be quite full, they might offer too many full fares for sale, displacing potential passengers on discounted fares who are continuing their journey to Miami – or even further away. The Los Angeles–Miami via Dallas low-fare passenger might be more profitable for the airline than full-fare passengers travelling only from Los Angeles to Dallas, in particular if the Dallas–Miami flight has lots of empty seats. To put it simply, if you are operating a hub-and-spoke network, it does not make any sense to lock people out of your hub – and in particular, to lock people out of long-haul flights because of high local traffic to the hub. Moving from the flight level to the whole network level made the revenue optimization problem unmanageable by humans. This picture presents only a sample network: in the mid-1980s, in Dallas, American Airlines passengers could connect not just with two other flights, as in this sample network, but with 30+ other flights, including very profitable long-haul international flights.
Source: Smith, Leimkuhler and Darrow (1992) ‘Yield Management at American Airlines’ Interfaces 22 (1), pp. 8-31.
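The two-fare core of this trade-off was later formalized in the revenue management literature as Littlewood’s rule: keep selling discounted seats as long as the discount fare exceeds the full fare multiplied by the probability that full-fare demand will exceed the seats still unsold. Here is a minimal sketch in Python – the fares, the demand distribution, and the function name are invented for illustration, and this is textbook theory, not American Airlines’ actual code:

```python
from statistics import NormalDist

def littlewood_protection(full_fare, discount_fare, mean_full, sd_full):
    """Seats to protect for full-fare demand ~ Normal(mean_full, sd_full).

    Protect seats up to the level y* at which the expected marginal
    revenue of one more protected seat, full_fare * P(D_full > y*),
    drops to the discount fare.
    """
    z = NormalDist().inv_cdf(1 - discount_fare / full_fare)
    return max(0, round(mean_full + sd_full * z))

# A 150-seat coach cabin: full fare $400, a $150 Ultimate-Super-Saver-style
# discount, and full-fare demand roughly Normal(mean=60, sd=15).
protect = littlewood_protection(400, 150, 60, 15)
discount_limit = 150 - protect  # booking limit for the discount class
```

Protect too many seats and the flight risks departing with empty ones; protect too few and high-paying business passengers are displaced. DINAMO had to resolve exactly this dilemma, but across many fare classes and whole network itineraries rather than two fares on one flight.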

To automate the process, operations researchers started thinking from the technology actually available: SABRE (for semi-automated business research environment). SABRE was big tech at the time. It was the first global electronic commerce infrastructure, allowing travel agents to sell tickets through a dedicated terminal connected to American Airlines’ inventory in real time over telephone lines. SABRE was also an amazing database, as it recorded the number of bookings in each fare class on each flight. However, for revenue management analysts, this deluge of data was overwhelming.

A travel agent with a SABRE terminal, date unknown. With this terminal, travel agents could remotely check, on behalf of a customer, whether a fare was available in American Airlines’ central inventory, and book the seat. This was the airline’s ‘store front’, and the customer-facing end of the first global electronic commerce infrastructure.
Source: IBM; https://www.ibm.com/history/sabre; Last Accessed Jan. 2024.

American Airlines’ engineers aimed at overcoming the limitations of human decision-making through automation. To do so, they needed to redesign SABRE, which was simultaneously an information system (a database recording bookings in each fare class at the flight level) and a distribution infrastructure (a marketplace). They asked: how could it be expanded and turned into a pricing system, able to manage which fares were available for sale on each flight from a network flow management perspective?

The articulation of that problem is historically significant. American Airlines’ operations researchers sought to solve a business problem – the implementation of a sophisticated new pricing strategy aimed at making pricing more dynamic, more market-responsive, more granular. But they did not look for the theoretically optimal solution. Instead, they sought to deploy a new technology, starting from an already existing one and identifying the constraints and opportunities it offered. This existing technology (SABRE) was a global electronic commerce infrastructure, i.e. the marketplace itself, coupled with a large database on bookings, i.e. on customers’ purchasing behavior.

I spent most of the talk narrating how American Airlines’ operations researchers came to a solution. I tried to show how their thinking was shaped by the details of the distribution infrastructure: how airlines’ products were sold to customers through a computerized system, i.e. the features of the marketplace itself. I also tried to show how their thinking was shaped by the data (availability and size) and the computing power they had access to.

I argued that the crucial step to the solution was nothing spectacular, just a hack in SABRE called ‘virtual nesting’. This hack enabled the management at the flight level of the availability of the connecting fare classes, when working with two new components plugged into SABRE. First, an automated demand forecast, powered by statistical and probabilistic approaches, extracted the historical booking data in each fare class in each flight from SABRE, and then provided an expected revenue for each ‘virtual bucket’ on a flight. The expected revenue of a bucket meant the average price of the range of fare classes clustered in the bucket, weighted by the probability of having that many customers booking in that bucket. Second, an algorithm allocated a number of seats to each bucket of fare classes, given the average expected revenue for each bucket; this component was called the optimizer. The mathematics supporting the optimization were not trivial. American Airlines’ operations researchers used mathematical programming approaches which belonged to the standard toolbox of operations research at the time. However, these tools needed to be creatively applied to the specific problem at hand, accounting in particular for the limitations in computing power. This required the development of completely new heuristics. Overall, using mathematical programming to make pricing more dynamic, more market responsive, and much more fine-grained than it had ever been before in any industry, was an important innovation. And it all hinged on a hack in SABRE.
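The optimizer’s bucket-level allocation can be illustrated with an EMSR-style calculation, a later textbook heuristic in the same spirit (DINAMO’s actual mathematics differed, and all fares and demand figures below are invented): buckets are sorted from the highest expected revenue down, and each protection level is set where the expected marginal revenue of the aggregated higher buckets drops to the next bucket’s average fare.

```python
from statistics import NormalDist

def protection_levels(buckets, capacity):
    """EMSR-b-style protection levels (illustrative, not DINAMO itself).

    `buckets` lists (avg_fare, mean_demand, sd_demand) tuples sorted from
    the highest-revenue virtual bucket down.  Returns, for each boundary,
    the number of seats protected for all buckets above it.
    """
    levels = []
    agg_mean, agg_var, revenue = 0.0, 0.0, 0.0
    for i in range(len(buckets) - 1):
        fare, mu, sd = buckets[i]
        agg_mean += mu                      # aggregate higher-bucket demand
        agg_var += sd * sd
        revenue += fare * mu
        avg_fare = revenue / agg_mean       # demand-weighted fare so far
        next_fare = buckets[i + 1][0]
        z = NormalDist().inv_cdf(1 - next_fare / avg_fare)
        y = agg_mean + agg_var ** 0.5 * z   # aggregate protection level
        levels.append(min(capacity, max(0, round(y))))
    return levels

# Three hypothetical virtual buckets on a 150-seat cabin: full fares,
# mid-range Super Savers, deep-discount Ultimate Super Savers.
buckets = [(420, 30, 10), (280, 45, 15), (150, 80, 25)]
levels = protection_levels(buckets, 150)
# levels[0]: seats protected for the top bucket alone;
# levels[1]: seats protected for the top two buckets together.
```

The resulting levels cap how many seats the cheaper buckets may sell – exactly the kind of decision the revenue management analysts had previously been making by hand, flight by flight.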

DINAMO opened decades of intense research and development to improve the ‘hack’, the forecasting, and the optimization. The underlying logic is still in use today, at least at the largest network airlines. It directly inspired marketplace engineers in many industries, from Amazon to Uber, from hotels to concert ticket sellers. It features prominently in the training of the future generation of marketplace engineers. And if your local supermarket uses digital price tags on the shelves, it is likely running a version of it too.

Sources

The knowledge produced by marketplace engineers is not widely shared beyond the community of specialists. Furthermore, it is very practical and operational, and for that reason not fully codified in the scientific literature. Therefore, the main sources for my research are interviews with the engineers and scientists who built these systems in airlines (50 people interviewed so far, and the list is still open!). I asked them how they proceeded, what resources they had, what environment they were working in, what their thought process was, and what their path to the solution looked like. My interviewees also walked me through the technical literature they have produced, in particular technical presentations delivered at an industry forum called AGIFORS, the Airline Group of the International Federation of Operational Research Societies. This talk on DINAMO drew on my broader research project, in which I study the practices, forms of reasoning, and ways of thinking of the engineers and scientists who built revenue management systems in the airline industry, from the origins in the 1980s to today. On DINAMO, the interested reader can start with the great paper published by its three main inventors: Smith, Leimkuhler and Darrow (1992) ‘Yield Management at American Airlines’, Interfaces 22 (1), pp. 8-31.


Proposed citation: Guillaume Yon. 2024. Marketplace Engineers at Work: How Dynamic Airline Ticket Pricing Came into Being. https://khk.rwth-aachen.de/2024/01/31/9192/marketplace-engineers-at-work-how-dynamic-airline-ticket-pricing-came-into-being/.

c:o/re Highlights of 2023: A look back

When we look back on the year 2023 at c:o/re, we can think of many great lectures, workshops and projects that we were able to realise in collaboration with our fellows and scientists from Aachen and around the world.

In this blog post, we take joy in remembering some of them:

Lecture Series

In the summer semester, the lecture series of c:o/re took place on the topic of “Complexity”. In seven lectures, the concept of complexity was examined from various disciplinary perspectives. You can reread some of the questions discussed in this and this blog post.

Emre Neftci during his lecture “Neuromorphic Computing: Inspiration from the Brain for Future AI Technologies” on November 22.

The lecture series for the winter semester 2023/24 began on 25 October 2023 with the topic “Lifelikeness”. Every two weeks until 7 February 2024, c:o/re fellows and guest speakers discussed representations and imitations of life in its many forms. You can find all the dates in the program and impressions of some of the talks in the blog posts here and here.

Workshops
The Navigating Interdisciplinarity workshop took off at Marsilius Kolleg at Heidelberg University.

On 19 and 20 January 2023, the Marsilius Kolleg Heidelberg hosted the workshop “Navigating Interdisciplinarity”, which was organised in collaboration with CAPAS Heidelberg, the Marsilius Kolleg and c:o/re. The event brought together interdisciplinary research groups, mainly, but not only, from the humanities and social sciences to discuss the complex challenges of interdisciplinarity in the academic setting. The participants were able to discuss terms such as “complexity”, “security” or “collapse” as key aspects of interdisciplinary cooperation and research. You can read more about the workshop in this blog post.

Historicising STS participants (left to right): Benjamin Peters, Salome Rodeck, Arianna Borrelli, Kyveli Mavrokordopoulou, Lisa Onaga.

On 14 and 15 March 2023, the workshop “Turning points in reflections on science and technology: Toward historicising STS” took place at c:o/re. The aim of this event was to analyse the turning points in the intellectual history of Science and Technology Studies (STS) in the course of the 20th and 21st centuries. The meeting illustrated the interdisciplinary and multi-perspective study of STS that is being conducted at c:o/re. Director Gabriele Gramelsberger and the fellows Ben Peters, Clarissa Lee, Kyveli Mavrokordopoulou, Jan C. Schmidt and Arianna Borrelli organised the event. They were joined by several renowned early career researchers from the STS field, such as Lisa Onaga (Max Planck Institute for History of Science, Berlin), Carsten Reinhardt (Bielefeld University), Salome Rodeck (Max Planck Institute for History of Science, Berlin), Vanessa Bateman (Maastricht University) and Andreas Kaminski (Darmstadt Technical University). A recap of the workshop can be found here.

Stefan Böschen introduced the notion of Varieties of Science as one of the guiding inquiries of c:o/re in Bucharest.

Following on from the success of the first workshop “Varieties of Science: Patterns of Knowledge” in December 2022 at the Universidad Nacional Autónoma de México (UNAM) in Mexico City, the networking trips to institutes of science and transformation research were continued as part of the Varieties of Science activities. From 5 to 6 May 2023, a group of fellows and staff as well as the directors travelled to the Institute of Philosophy at the University of Bucharest for the workshop “Varieties of Science 2. European Traditions of Philosophy of Science: Unexpected Varieties” to discuss the differences in research cultures with international colleagues. You can read more about the talks and discussions in this blog post.

Due to the summer weather, the participants of Thomas Haigh’s workshop “So What Was Artificial Intelligence, Anyway” moved to the centre’s garden.

Another workshop, “So What Was Artificial Intelligence, Anyway”, took place from 13 to 14 June 2023 at c:o/re in collaboration with Dr. Thomas Haigh. In the first part of the workshop, the history and philosophy of AI and digitalisation were discussed, with both the directors and the fellows contributing their own research topics. In the second part, Thomas Haigh presented the manuscript of his new book on the history of AI, which was followed by a discussion of the historically very heterogeneous concept of AI and the possibilities of standardising the various AI practices as a brand.

Conferences
Participants of the conference “Wissenschaften des Konkreten” in the lecture hall.

From 15 to 17 February 2023, c:o/re hosted the international conference “Wissenschaften des Konkreten”, organised by Caroline Torra-Mattenklott, Christiane Frey, Yashar Mohagheghi and Sergej Rickenbacher from the Institute for Germanic and General Literary Studies at RWTH Aachen. The concept of the conference was not only, as the title indicates, inspired by Claude Lévi-Strauss’ notion of the “Science of the Concrete”, but also followed his assumption that sensual and experimental interaction with the things of the environment is the origin of the “wild spirit” as well as of modern science. A summary of the event can be found here.

Together with the Rhine Ruhr Center for Science Communication (RRC), c:o/re directors Gabriele Gramelsberger and Stefan Böschen organised the conference “Nowhere(to)land? What science studies contributes to science communication” from 14 to 16 June 2023 in Bonn. Exciting questions and topics of science communication were discussed, including how the research fields of Science and Technology Studies (STS) and Science Communication Studies could be linked more closely in the future in order to meet the special communication requirements of science research, and how social participation in research could be implemented. The lectures and discussions also provided insightful impulses for science communication at c:o/re.

STS Hub
Participants during Ulrike Felt’s keynote “Infrastructuring circulations: On the tacit governance of contemporary academic knowing spaces”.

From 15 to 17 March 2023, a new format for the interdisciplinary linking of different science research communities was launched for the first time: STS-Hub.de. This format, conceived as a biennial conference with changing hosts, provides an opportunity for networking and exchange among STS researchers from German-speaking countries and a connection between different disciplines and areas of specialization. c:o/re played a leading role in the local organisation. In over 65 individual panels, the approximately 300 participants found ample room for discussion. The overarching conference theme of “Circulations” resonated and was addressed in various topics ranging from experimental democracy and science communication to ethics and art. In addition to traditional panels, there were also innovative formats such as walkshops and fishbowl discussions. The desired non-hierarchical exchange between researchers from different backgrounds was thus promoted and achieved. The conference was framed by two keynotes: one by Ulrike Felt, short-term fellow at c:o/re in March 2023, on the infrastructuring of circulation in the field of science, and one by Susann Wagenknecht on leaks in circular infrastructures and markets. You can read more about the conference on our blog here.

“Leonardo” Project
Dr Andrei Korbut discussed Human-Robot Interaction (HRI) in his lecture on November 8, which was part of the “Leonardo” module.

This winter semester, c:o/re’s participation in the interdisciplinary courses of the “Leonardo” project continues with the module “Engineering Life: Imaginaries of Lifelikeness” at RWTH Aachen University, which is organised jointly with the fellows. The module is aimed at interested M.A. students from all subject groups who can earn credit points for their degree programme. On this website, you can find more information about the “Leonardo” project.

Summer School
Participants of the International Semiotics Summer School in Prague 2023, gathered in front of the Faculty of Humanities (Charles University).

c:o/re was one of the main organisers of the International Semiotics Summer School in Prague, “Visual Metaphors”, which took place from 23 to 28 July 2023 in cooperation with Palacký University Olomouc and Charles University. Through various lectures and presentations, the Summer School explored visual metaphors and the epistemological changes brought about by the current technological revolution. 80 students from various European universities took part. In this blog post, RWTH Aachen students wrote about their experiences during the Summer School.

Social Media

On X (formerly Twitter), c:o/re remains very active, announcing events, reporting live from conferences and talks, and giving updates on everything that is happening at the centre.

We are still figuring out whether leaving X and using alternatives such as Bluesky is an option. Have you already moved your Social Media activities off X?

Impressions of the c:o/re Instagram account.

Since September 2023, c:o/re has also been active on Instagram in order to give some insights into the work happening at the centre in a more tangible and less text-heavy way through photos and videos. We have also just registered with LinkedIn.

You are cordially invited to follow us on our social media accounts.

Video series: c:o/re shorts

We have started a new video series: c:o/re shorts. Get to know our current fellows and gain an impression of their research. In short videos, they introduce themselves, talk about their work at c:o/re, the impact of their research on society and give book recommendations. Take a look at the first two videos:

Blogposts

We would like to thank all the authors who contributed texts to our blog in 2023.
We invite you to read through them:

  1. What makes an ideal robot girlfriend?
  2. The notebook pt. 3: “For 20 years, I haven’t used a pen” – a computer nerd’s confession
  3. Research in Times of War – “Scientific Life Somehow Goes on…”
  4. Research in Times of War – “The War Added One More Factor – the War Itself”
  5. On Aryeh Ludwig Strauss: a German-Hebrew poet from Aachen
  6. Supercharge the real-world impact of Research, Innovation and Enterprise with brand building methodologies
What is coming in 2024?

The Käte Hamburger Kolleg: Cultures of Research (c:o/re) is offering ten research fellowships for international scholars from the humanities, social sciences or STS as well as from natural, life and technical sciences for the academic year 2024/25. The fellowships can start between June and October 2024. The Call for Applications for 2024/25 is still open until December 31, 2023.

For the beginning of 2024, relationships with the arts will be strengthened with two upcoming collaborations. 

In February 2024, a workshop on “Art, Science, the Public Sphere”, in collaboration with the research project “Computer Signals. Art and Biology in the Age of Digital Experimentation II” from Zurich University of the Arts, will be held at c:o/re. As part of this, the artist Valentina Vuksic will give “Listening to Science”, an artistic performance on the sonification of data, which will be accessible to students and researchers at RWTH as well as to citizens of Aachen.

In April 2024, the experimental conference “Politics of the machines: Lifelikeness & beyond” will take place in Aachen, which seeks to bring together researchers and practitioners from a wide range of fields across the sciences, technology and the arts to develop imaginaries for possibilities that are still to be realized and new ideas of what the contingency of life is. You can find more information about POM on their website.

You can subscribe to our newsletter to stay updated with all projects and events happening at c:o/re in 2024.

We are looking forward to everything that awaits c:o/re in the coming year. Stay tuned!

Robot, a Laboratory “Animal”: Andrei Korbut on how robots produce knowledge in laboratories

On November 8, Dr. Andrei Korbut warned that he would disappoint philosophers, sociologists and roboticists in what he delivered as the second lecture of the c:o/re Lifelikeness series. He failed to disappoint any of them. The Lifelikeness c:o/re lecture series addresses a public even broader and more diverse than previous c:o/re lectures, as it now also engages postgraduate students from a vast array of disciplines through the Projekt Leonardo.

‘Animacy’ vs. ‘Lifelikeness’: Dr. Korbut discusses Voss (2021). Photographer: Jana Hambitzer

Dr. Korbut discussed Human-Robot Interaction (HRI) from a perspective enabled by construing robots as laboratory “animals”. He invited the audience to reflect on this view by watching a famous 2016 video produced by Boston Dynamics, which shows a researcher (physically) obstructing a robot from completing its task of picking up an object. Dr. Korbut asked the audience what they feel when watching this scene: do they feel sorry for the robot, and is the human bullying it? He explained that the feelings humans might have when watching such a scene are purposefully employed in laboratory studies on HRI. This led Dr. Korbut to note that HRI is currently one of the fastest growing and most dynamic subfields in robotics, raising salient questions in fields like communication studies, psychology and design. Particularly given the multidisciplinary branching that it implies, it is important to note that robotics is not exclusively academic: HRI has a strong commercial stake.

Robot Pepper, a semi-humanoid robot manufactured by SoftBank Robotics to identify emotions. Source and copyright: Wikimedia Commons, CC.

Dr. Korbut explicated the conceptual framework for studying robots as contemporary laboratory “animals”, inspired by various notions of the types of lifelikeness that can be ascribed to humanoid robots. He argued that robots allow for a closer connection between tools and objects in knowledge production than other types of laboratory “living instruments” because robots are not perceived as “natural objects”. In this line of inquiry, Dr. Korbut’s c:o/re fellowship project focuses on the robot Pepper, designed by SoftBank Robotics. Pepper is a 1.2-meter-tall mobile humanoid robot with 20 degrees of freedom and 20 sensors, microphones and actuators. It can process and synthesize speech in natural language, and it is commercially promoted as capable of recognizing basic human emotions. Its appearance is deliberately “cute” and genderless because it is designed to be interacted with by humans in offices, cultural institutions, homes and medical settings. It is currently one of the most popular machines in robotics laboratories globally.

As such, Dr. Korbut is now exploring Pepper’s “academic career”, from manufacturer to publication, with the robotics lab as the crucial passage point. In this inquiry, by pondering the knowledge that Pepper produces, Dr. Korbut is bringing together but also transcending the disciplinary limitations of laboratory studies in general, studies of robotics laboratories, and laboratory animal studies. For the laboratory sciences, Dr. Korbut takes Karin Knorr-Cetina‘s notion of epistemic cultures as a guiding optic, where “Laboratory sciences subject natural conditions to a ‘social overhaul’ and derive epistemic effects from the new situation” (Knorr-Cetina 1999, p. 28). Further, he draws an insight from, but also argues for expanding, Andreas Bischof’s view that when roboticists “laboratize”, they reduce “the complexity and contingency of social situations” (Bischof 2017: 225, 229). This leads him to observe the importance of Voss’ apparently paradoxical remark that “the practice of representing the robot as both an inanimate object and an animate being is an integral and constructive aspect of roboticists’ work”. At this point, Dr. Korbut remarks on the relevance of the term lifelikeness. Via this term, the discussion is construed in terms of simultaneously attributing and avoiding the attribution of lifelikeness to machines.

Dr. Korbut advocates employing the term lifelikeness, rather than animacy (as in Voss 2021), in this debate because it enables drawing parallels between robotics studies and laboratory animal studies. While lifelikeness may mislead, because it suggests that roboticists impute “life” to their machines, it also opens up some mitigating possibilities by indicating that “life”, in this discourse, is defined pragmatically, in the context of “laboratory life”, as a property of the object used in the laboratory to produce knowledge. As such, robots are closer to laboratory animals, such as mice and Drosophila, than to the wooden idols of animistic practices described by cultural anthropologists.

Dr. Korbut argues for a theory that construes robots as “animals” of a very specific kind. Because they are detached from the laboratory environment much more than animals like mice or Drosophila, roboticists can secure a tighter link between tool and object. This link, Dr. Korbut argues, is based on roboticists’ ability to procure and exploit three types of lifelikeness that can be attributed to robots, all of which come down to considering the body as moving, interacting and manipulating.

In this light, Dr. Korbut considers that humans empathise with robots not because we identify with them but because the particular configuration of a robot’s hull, its programming, movements, and material environment, corresponds to a recognizable type of lifelikeness. In brief, in the laboratory, robots hinder their being perceived as “natural objects”.

References

Bischof, A. 2017. Soziale Maschinen bauen: Epistemische Praktiken der Sozialrobotik. Transcript.

Knorr-Cetina, K. 1999. Epistemic cultures: How the sciences make knowledge. Harvard University Press.

Voss, L. 2021. More than machines? The Attribution of (In)Animacy to Robot Technology. Transcript.

Dr. Andrei Korbut discusses a 2016 video produced by Boston Dynamics. Photographer: Jana Hambitzer