“Humans haven’t necessarily made the best choices for our world.” – Interview with Peter Mantello on Emotionalized Artificial Intelligence
In February 2024, c:o/re began a collaboration with colleagues from Ritsumeikan Asia Pacific University on the topic of Emotionalized Artificial Intelligence (EAI). Professor Peter Mantello (Ritsumeikan Asia Pacific University) leads a three-year project funded by the Japan Society for the Promotion of Science, on which c:o/re is a partner, that will compare Japanese and German attitudes to EAI in the workplace. This explorative pathway contributes to the c:o/re outlook on Varieties of Science. You can find the whole project description on our website here.
In the interview below, Peter Mantello explains what EAI is, how the project will consider AI ethics and why the comparison of German and Japanese workplaces is particularly insightful. We thank him for this interview and look forward to working together.
Peter Mantello
c:o/re short-term Senior Fellow (11–17/2/2024)
Peter Mantello is an artist, filmmaker and Professor of Media Studies at Ritsumeikan Asia Pacific University in Japan. Since 2010, he has been a principal investigator on various research projects examining the intersection between emerging media technologies, social media artifacts, artificially intelligent agents, hyperconsumerism and conflict.
What is Emotionalized Artificial Intelligence (EAI)? What does this formulation entail differently than ‘Emotional’ AI?
Emotional AI is the commercial moniker of a sub-branch in computer science known as affective computing. The technology is designed to read, monitor, and evaluate a person’s subjective state. It does this by measuring heart rate, respiration rate, skin perspiration levels, blood pressure, eye movement, facial micro-expressions, gait, and word choice. It involves a range of hardware and software. This includes cameras, biometric sensors and actuators, big data, large language models, natural language processing, voice tone analytics, machine learning, and neural networks. Emotionalized AI appears in two distinct forms: embodied (care/nursing robots, smart toys) and disembodied (chatbots, smartphone apps, wearables, and algorithmically coded spaces).
I think the term ’emotionalized’ AI better captures the ability of AI not just to read and recognize human emotion but also to simulate it and respond in an empathic manner. Examples of this can be found in therapy robots, chatbots, smart toys, and holograms. EAI allows these forms of AI to communicate in a human-like manner.
What is emotionalized AI used for and for what is it further developed?
Currently, emotionalized AI can be found in automobiles, smart toys, healthcare (therapy robots and doctor–patient conversational AI), automated management systems in the workplace, advertising billboards, kiosks and menus, home assistants, social media platforms, security systems, wellness apps and video games.
What forms of ethical work practices and governance do you have in mind? Are there concrete examples?
There are a range of moral and ethical issues that encompass AI. Many of these are similar to concerns raised by conventional uses of AI, such as data collection, data management, data ownership, algorithmic bias, privacy, agency, and autonomy. But what is specific about emotionalized AI is that the technology pierces through the corporeal exterior of a person into the private and intimate recesses of their subjective state. Moreover, because the technology targets non-conscious data extracted from a person’s body, individuals may not be aware of, or have consented to, the monitoring.
Where do you see the importance of cultural diversity in AI ethics?
Well, it raises important issues confronting the technology’s legitimacy. First, the emotionalized AI industry is predominantly based in the West, yet its products are exported to many world regions. Not only are the data sets used to train the algorithms drawn primarily from Westerners, but they also rely largely on the famed American psychologist Paul Ekman’s ‘universality of emotions’ theory, which holds that there are six basic emotions that are expressed in the same manner across all cultures. This is untrue, and thanks to a growing number of critics who have challenged the reliability and credibility of face analytics, Ekman’s theory has been discredited. However, this has not stopped many companies from designing their technologies on Ekman’s debunked templates. Second, empathic surveillance in certain institutional settings (school, office, factory) could lead to emotional policing, where to be ‘normal’ or ‘productive’ will require people to be always ‘authentic’, ‘positive’, and ‘happy’. I’m thinking of possible dystopian Black Mirror scenarios, like the episode “Nosedive”.
Third, exactly what kind of values do we want AI to have – Confucian, Buddhist, Western Liberal?
Do you expect to find significant differences between the Japanese and German workplace?
Well, it’s important to understand the multiple definitions of the workplace. Workplaces include commercial vehicles, ridesharing, remote workspaces, hospitals, restaurants, and public spaces, not just brick-and-mortar white-collar offices.
Japan and Germany share common work culture features, but each society also has historically different attitudes to human resource management relationships, what constitutes a ‘good’ worker, loyalty, corporate responsibility to workers, worker rights and unions, and precarity. The two cultures also differ in how they express their emotions, raising questions about the imposition of US and European emotion analytics in the Japanese context.
How will the research proceed?
The first stage of the research will be to map the ecology of emotion analytics companies in the West and East. This includes visits to trade show exhibits, technology fairs, start-up meetings, etc. The second stage will consist of interviews. The third stage will include a series of design fiction workshops targeted at key stakeholders. Throughout these stages, we will be holding workshops in Germany and Tokyo, inviting an interdisciplinary mix of scholars, practitioners, civil liberties advocates and industry people.
What do you think will be the most important impact of this project?
We are at a critical juncture in defining and deciding how we want to live with artificial intelligence. Certainly, everyone talks about human-centric AI, but I don’t know what that means, or if that’s the best way forward. Humans haven’t necessarily made the best choices for our world. If we try to make AI in our own image, it might not turn out right. What I hope this project brings are philosophical insights that will better inform the values we need to encode into AI, so it serves the best interests of everyone, especially those who will be most vulnerable to its influence.
What inspired you to collaborate with c:o/re?
My inspiration to collaborate with c:o/re stems from my growing interest in phenomenological aspects of human-machine relations. For the past three years, my research has focused primarily on empirical studies of AI. The insights gained from this were very satisfying, albeit they also opened the door to larger, more complex questions that could only be examined from a more theoretical and philosophical perspective. After a chance meeting with Alin Olteanu at a semiotic conference, I was invited to attend a c:o/re workshop on software in 2023. I realized then that KHK’s interdisciplinary and international environment would be a perfect place for an international collaborative research project.
Lecture Series Summer 2024: Lifelikeness
Due to great interest, the lecture series in the 2024 summer semester will once again be devoted to the topic of “Lifelikeness”.
Various speakers, including the sociologist Hannah Landecker (University of California, Los Angeles) and the historian of science Friedrich Steinle (TU Berlin), will be guests at the KHK c:o/re and shed light on “Lifelikeness” from different disciplinary perspectives.
Please find an overview of the dates and speakers in the program.
The lectures will take place from May 8 to July 3, 2024, every second Wednesday from 5 to 6:30 pm, in person and online.
An exception is the lecture by Hannah Landecker, which she will give as part of the interdisciplinary conference “Politics of the Machines” on Tuesday, April 23, 2024, from 5:30 to 7 pm in the Generali Saal of the Super C.
If you would like to attend the lectures, please send a short email to events@khk.rwth-aachen.de.
Program: PoM Conference in Aachen
Programmable biosensors, life-like robotics and other artificial models – the present and the future are dominated by new phenomena in the life sciences. How can the challenges, opportunities and uncertainties associated with these advances be addressed?
The transdisciplinary conference series “PoM – Politics of the Machines”, which will take place from April 22 to 25, 2024 at the Super C at RWTH Aachen University (Templergraben 57, 52062 Aachen) under the title “Lifelikeness & beyond”, will explore this question. At the interface of science and art, the conference aims to stimulate reflection on the comprehensive connections that shape our perception of the world.
International researchers and practitioners from various fields of science, technology and art will come together to discuss socio-cultural concepts of the future, the interaction between human and machine and ideas of the living and non-living in different formats.
The main program from 22 to 25 April will take place in Aachen in the Super C of the RWTH Aachen University and in the LOGOI Institute.
Super C: Templergraben 57, 52062 Aachen
LOGOI Institute: Jakobstraße 25a, 52064 Aachen
You can register with this form.
Further information on the schedule can be found in this program.
You can find a longer version with all abstracts in this program.
On Thursday, April 25, Dr. Jürgen Kippenhan will give a talk on “Artificial intelligence and the sensory structures of human speech, thought and action” as part of the PoM conference at the LOGOI Institute, Jakobstraße 25a, 52064 Aachen.
As part of the conference, the choreographic centre PACT Zollverein in Essen will realize the accompanying programme ‘life.like’ on 26 and 27 April 2024, which consists of six artistic positions in the form of performance, installation, discourse and sound.
‘Lifelikeness & beyond’ is the fourth edition of the “Politics of the Machines” conference series, founded by Laura Beloff (Aalto University Helsinki) and Morten Søndergaard (Aalborg University Denmark) and organized in collaboration with RWTH Aachen University, LOGOI Institute for Philosophy and Discourse and PACT Zollverein in Essen.
Objects of Research: Sarah R. Davies
For today’s edition of the “Objects of Research” series, c:o/re Senior Fellow Sarah R. Davies gives an insight into her desk set up. As a professor of Technosciences, Materiality, and Digital Cultures, her work focuses on the intersections between science, technology, and society, with a particular focus on digital tools and spaces.
“I guess many academics would share some variant of this image: a careful arrangement of computer equipment, coffee, notepads, pens, and the other detritus that lives on (my) desk.
For me it’s important that the technical equipment is shown in conjunction with the paper notebook and pens. I’m fussy about all of these things – it’s distracting when my computer set-up isn’t what I’m used to, and I need to use very specific pens from a particular store – but ultimately my thinking lives in the interactions between them.
My colleagues and I are working on an autoethnographic study of knowledge production, and notice that (our) creative research work often emerges as we move notes and ideas from paper to computer (and back again).”
Would you like to find out more about our Objects of Research series at c:o/re? Then take a look at the pictures by Benjamin Peters, Andoni Ibarra, Hadeel Naeem, Alin Olteanu, Hans Ekkehard Plesser, Ana María Guzmán, Andrei Korbut, Erica Onnis, Phillip H. Roth, Bart Penders and Dawid Kasprowicz.
Workshop “Art, Science, the Public”
On 16 and 17 February 2024, the workshop “Art, Science, the Public” took place at the KHK c:o/re in cooperation with the project “Computer Signals: Art and Biology in the Age of Digital Experimentation“, a research collaboration between artists, biologists and humanities scholars in which c:o/re director Gabriele Gramelsberger has long been involved.
Together with representatives and colleagues from the research group “Computer Signals”, PACT Zollverein and the RWTH Knowledge Hub, participants discussed different formats and practices of science communication, in particular those that experiment with artistic forms. The aim of the workshop was to exchange ideas and best-practice examples at the interface between science and art and on the communication challenges associated with it.
A special highlight was the sound work by Valentina Vuksic, a transdisciplinary associate of the project “Computer Signals”. During the workshop, Valentina set up an installation format in which the archive of sounds, produced by the research project, could be explored.
In the evening, the workshop concluded with a live performance by Valentina, in which she presented artistic formats that stem from straightforward audifications of computational processes, created at first with little aesthetic consideration, yet which took on a double life as musical works outside their original context.
The electromagnetic, electric and mechanical recordings originate from the research infrastructure of Hans Hofmann’s biological laboratory at UT Austin and from the underwater observatory RemOS in Kongsfjorden, Spitsbergen, run by Philipp Fischer (Alfred Wegener Institute for Polar and Marine Research). The audio material remains unprocessed; it is merely re-arranged and layered. The sonic works set out from digital data generation as part of scientific procedures and take a specific course outlined by a series of sonic extracts.
Here you can listen to excerpts from Valentina’s work that she presented that evening.
Photos and videos by Jana Hambitzer
Header picture: RemOs1, Archiv Stereometrie (15.9.2012 – 16.6.2020), 2022. Detail view of the photo installation in the exhibition «Daten lauschen» at the Deutsches Schifffahrtsmuseum, Bremerhaven, 2022. Photo print on polycarbonate panels, 135,168 image pairs, 2.32 x 1.59 x 60 m. Photograph: Marc Latzel.
Objects of Research: Bart Penders
Here is the new edition of our “Objects of Research” series. c:o/re Senior Fellow Dr. Bart Penders provides an insight into his research work and introduces an important tool for it:
“As part of the work I do at KHK c:/ore, as well as extending beyond that, I collect empirical data. In my case, that data consists of records of interviews with scientists and others. Those records can be notes, but they can also be integral recordings of the conversations.
Relying on technology for the production of data is what scientists do on a daily basis. With that comes a healthy level of paranoia around that technology. Calibrating measurement instruments, measurement triangulation, and comparisons to earlier and future records all help us to alleviate that paranoia. I am not immune and my coping mechanism has been, for many years, to take a spare recording device with me.
This is that spare, my backup, and thereby the materialisation of how I deal with moderate levels of technological paranoia. It is not actually a formal voice recorder, but an old digital music player I have had for 15 years, the Creative Zen Vision M. It has an excellent microphone, abundant storage capacity (30 gigabytes) and, crucially, no remote access options. That last part matters to me because it ensures that the recording cannot enter the ‘cloud’ or be accessed by anyone but me. Technologically, it is outdated. It no longer serves its original purpose: I never listen to music on it. Instead, it has donned a new mantle as a research tool.”
Would you like to find out more about our Objects of Research series at c:o/re? Then take a look at the pictures by Benjamin Peters, Andoni Ibarra, Hadeel Naeem, Alin Olteanu, Hans Ekkehard Plesser, Ana María Guzmán, Andrei Korbut, Erica Onnis and Phillip H. Roth.
Objects of Research: Phillip H. Roth
For this edition of the “Objects of Research” series, c:o/re postdoc and event coordinator Dr. Phillip H. Roth shows a picture of his favorite research tool. He is currently working on a book/habilitation project that will be a media history of preprints in science.
“I use mechanical pencils (like the one in the photo) to highlight, annotate, question, clarify, or reference things I read in books. This helps me digest the arguments, ideas, and discourses I deal with in my historical and sociological research. I also have software for annotating and organizing PDFs on my iPad as well as a proper notebook for excerpting and writing down ideas. However, I’ve found that the best way for me to connect my reading practices with my thoughts is through the corporeal employment of a pencil on the physical pages of a book.”
Would you like to find out more about our Objects of Research series at c:o/re? Then take a look at the pictures by Benjamin Peters, Andoni Ibarra, Hadeel Naeem, Alin Olteanu, Hans Ekkehard Plesser, Ana María Guzmán, Andrei Korbut and Erica Onnis.
Inaugurating the collaboration of c:o/re and Ritsumeikan University on Emotionalized Artificial Intelligence
We are delighted to be commencing a collaboration on Emotionalized Artificial Intelligence with colleagues at Ritsumeikan Asia Pacific University. Professor Peter Mantello (Ritsumeikan Asia Pacific University) leads a project funded by the Japan Society for the Promotion of Science, on which c:o/re is a partner, that over the coming three years will compare Japanese and German attitudes to Emotionalized Artificial Intelligence in the workplace. This explorative pathway contributes to the c:o/re outlook on Varieties of Science.
Hosted as a short-term fellow at c:o/re, Professor Peter Mantello inaugurated this collaboration on February 15th by presenting the rationale and framework of the project.
Get to know our Fellows: Bart Penders
Get to know our current fellows and gain an impression of their research.
In a new series of short videos, we asked them to introduce themselves, talk about their work at c:o/re, the impact of their research on society and give book recommendations.
You can now watch the fifth video of Dr. Bart Penders, PhD in Science and Technology Studies and Associate Professor in ‘Biomedicine and Society’ at Maastricht University, on our YouTube channel:
Check out our media section or our YouTube channel to have a look at the other videos.
Marketplace Engineers at Work: How Dynamic Airline Ticket Pricing Came into Being
GUILLAUME YON
If you have recently been online looking up flights, you may have noticed that airfare prices are constantly in flux. What online shoppers usually do not know is that these dynamic price changes are enabled by large and intricate technological systems powered by cutting-edge science and technology.
These systems were first deployed by airlines in the United States in the 1980s, and to this day they remain an object of intense scientific and technological research and development. When deploying such systems at scale, engineers and scientists blend statistics and probability, mathematical optimization, computer science, and economics to implement sophisticated business strategies.
Dr. Guillaume Yon
Guillaume Yon is a historian of economics, who researches and teaches how the ideas that shaped our economic thinking emerged. He is particularly interested in the economic knowledge produced by engineers working in industry.
In the talk I delivered at the Käte Hamburger Kolleg ‘Cultures of Research’ on December 13th, I focused on what is often considered as the first of these systems: DINAMO. DINAMO stands for dynamic inventory and maintenance optimizer. It was developed at American Airlines and was fully operational in 1988. Similar systems were implemented at other major airlines in the United States around the same time, and these systems came to be known to specialists as ‘revenue management’ systems.
What was the problem that American Airlines’ engineers had to solve? As the airline industry was being deregulated in the U.S. (a process completed in 1978), American Airlines’ marketing department came up with a new strategy, which had two connected components.
The idea was to offer multiple price points for the same seats in the same class of service on the same flight. At the time, aircraft had two classes of service, first and coach. Coach was the second class and main cabin, closer to second class on European trains today than to today’s economy cabins. In coach, American Airlines’ flights were regularly departing half-empty; hence the idea, in order to fill these empty seats and avoid the associated loss in revenue, of stimulating new demand from people travelling for leisure. Before deregulation, air travel was a luxury product, and American Airlines was not alone in thinking that there was an untapped and potentially huge new market out there: middle-class families going on vacation, college students coming back home, young couples going away for the weekend, senior citizens visiting their children and grandchildren. However, these new leisure travelers were price sensitive, hence the need for a discount to attract them and fill the empty seats.
The second component was to prevent American Airlines’ existing business customers, who travelled in coach too, from buying at the discounted price. Business customers were less price sensitive than the new leisure travelers, as they traveled on company money; they were willing to pay more for the same seat in coach. If business customers could buy the discounted fare, the new strategy would only create a new source of revenue loss, this time from the business travelers’ side. American Airlines’ marketing department therefore came up with the idea of tying discounted prices to restrictions. For instance, the American Airlines Ultimate Super Saver, a fare launched in 1985, was cheaper than the full fare for the same seats in coach. However, it was available only up to 30 days before departure (the so-called ‘advance purchase requirement’), carried a steep cancellation fee, and required a round-trip ticket with a Saturday night stay. Business travelers could not abide by those restrictions: they tended to book later and wanted to spend weekends with their families. Therefore, even when discounted fares were available on a flight, business travelers would carry on buying seats on the same flight at a higher price.
The outcome of American Airlines’ new pricing strategy was that for a given flight – from A to B, with a given departure date in the future – the seats in coach were offered at different prices with different restrictions (the lower the price, the more stringent the restrictions). These different ‘fare classes’ were available for sale at the same time. This new marketing strategy was a tremendous success for American Airlines, and it played an important role in turning air travel into mass transportation.
This tremendous success from a revenue perspective turned into a nightmare from a business process perspective. At American Airlines, hundreds of new revenue management analysts were hired, and they were struggling. Each revenue management analyst had a set of flights to manage. They needed to decide, for each flight, how many seats should be made available for sale in each fare class in order to obtain, at departure, the mix of passengers that maximizes revenue. That decision first needed to be made a year before departure, when the flight opened for booking. In the mid-1980s, there were at least three different fare classes in coach (the full fare, the Ultimate Super Saver, and a Super Saver in between), in addition to first class, on each flight. Worse, American Airlines had re-organized its network after deregulation as a hub-and-spoke, in order to serve more destinations efficiently, both domestically and internationally. Each path in the network with a connection at the hub also had at least three fare products in coach. For the local traffic, if the analyst allocated too many seats to the lowest fares, it could displace high-paying business travelers. But allocating too few seats to the lowest fares could mean departing with empty seats, if high-paying demand did not materialize late in the booking process. Simultaneously, for the same flight, the analysts needed to decide what the revenue-maximizing mix of local and connecting traffic was. Was it better to protect one more seat for a high-paying business passenger on that flight, or to put one more discounted passenger in the same seat with a connection to an expensive long-haul flight? It depended on the price each of those two passengers paid, the likelihood of each passenger showing up to book, and how full each of the two flights was. Humans could not possibly make all these decisions efficiently at scale.
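The core trade-off each analyst faced — protect a seat for a possible late-booking business passenger, or sell it now at a discount — can be illustrated with Littlewood’s rule, a classic two-fare result from the revenue management literature. This is a textbook sketch, not American Airlines’ actual procedure, and the fares and demand figures below are invented for the example:

```python
import math

def poisson_sf(k, lam):
    """P(demand > k) for Poisson-distributed demand with mean lam."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))
    return 1.0 - cdf

def littlewood_protection(full_fare, discount_fare, mean_full_demand, capacity):
    """Smallest protection level for the full fare: keep protecting seats as
    long as the expected revenue of one more protected seat (full fare times
    the chance that full-fare demand exceeds the seats already protected)
    beats the certain discount-fare revenue."""
    for y in range(capacity + 1):
        if full_fare * poisson_sf(y, mean_full_demand) < discount_fare:
            return y
    return capacity

# Invented example: $400 full fare, a $150 Super-Saver-style discount,
# an average of 20 full-fare bookings expected, 100 coach seats.
protected = littlewood_protection(400.0, 150.0, 20, 100)
print(f"protect {protected} seats, open {100 - protected} to the discount fare")
```

On these numbers the rule protects roughly a fifth of the cabin for late, high-paying demand; raise the discount fare or lower the expected business demand and the protection level shrinks accordingly.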
Therefore, around 1982/1983 American Airlines management tasked its operations research department with automating the process.
To automate the process, operations researchers started from the actually available technology: SABRE (semi-automated business research environment). SABRE was big tech at the time. It was the first global electronic commerce infrastructure, allowing travel agents to sell tickets through a dedicated terminal connected to American Airlines’ inventory in real time via telephone transactions. SABRE was also an amazing database, as it recorded the number of bookings for each fare class on each flight. However, for revenue management analysts, this deluge of data was overwhelming.
American Airlines’ engineers aimed at overcoming the limitations of human decision-making through automation. To do so, they needed to redesign SABRE, which was simultaneously an information system (a database recording bookings in each fare class at the flight level) and a distribution infrastructure (a marketplace). They asked: how could it be expanded and turned into a pricing system, able to manage which fares were available for sale on each flight from a network flow-management perspective?
The articulation of that problem is historically significant. American Airlines’ operations researchers sought to solve a business problem, the implementation of a sophisticated new pricing strategy, which aimed at making pricing more dynamic, more market-responsive, more granular. But they did not look for the theoretically optimal solution. Instead, they sought to deploy a new technology. To do so, they started from an already existing technology, identifying the constraints and opportunities it offered. This already existing technology (SABRE) was a global electronic commerce infrastructure, i.e. the marketplace itself, coupled with a large database on bookings, i.e. customers’ purchasing behavior.
I spent most of the talk narrating how American Airlines’ operations researchers came to a solution. I tried to show how their thinking was shaped by the details of the distribution infrastructure: how airlines’ products were sold to customers through a computerized system, i.e. the features of the marketplace itself. I also tried to show how their thinking was shaped by the data (availability and size) and the computing power they had access to.
I argued that the crucial step to the solution was nothing spectacular, just a hack in SABRE called ‘virtual nesting’. This hack enabled the management at the flight level of the availability of the connecting fare classes, when working with two new components plugged into SABRE. First, an automated demand forecast, powered by statistical and probabilistic approaches, extracted the historical booking data in each fare class in each flight from SABRE, and then provided an expected revenue for each ‘virtual bucket’ on a flight. The expected revenue of a bucket meant the average price of the range of fare classes clustered in the bucket, weighted by the probability of having that many customers booking in that bucket. Second, an algorithm allocated a number of seats to each bucket of fare classes, given the average expected revenue for each bucket; this component was called the optimizer. The mathematics supporting the optimization were not trivial. American Airlines’ operations researchers used mathematical programming approaches which belonged to the standard toolbox of operations research at the time. However, these tools needed to be creatively applied to the specific problem at hand, accounting in particular for the limitations in computing power. This required the development of completely new heuristics. Overall, using mathematical programming to make pricing more dynamic, more market responsive, and much more fine-grained than it had ever been before in any industry, was an important innovation. And it all hinged on a hack in SABRE.
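The interplay of the two components — a forecast assigning each virtual bucket an expected average fare and a mean demand, and an optimizer allocating seats to buckets — can be sketched with the EMSR-b heuristic, a standard textbook descendant of this line of work. This is emphatically a simplified illustration, not DINAMO’s actual optimizer, and the bucket fares and forecasts below are invented:

```python
import math

def poisson_sf(k, lam):
    """P(demand > k) for Poisson-distributed demand with mean lam."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))
    return 1.0 - cdf

def emsr_b_limits(buckets, capacity):
    """Nested booking limits for virtual buckets, highest average fare first.
    buckets: list of (expected_avg_fare, mean_demand) pairs, as a demand
    forecast might supply them. For each lower bucket, seats are protected
    for the aggregate of all higher buckets (demand-weighted average fare)
    as long as their expected marginal revenue beats the lower fare."""
    limits = [capacity]  # the top bucket may sell the whole cabin
    for j in range(1, len(buckets)):
        agg_demand = sum(d for _, d in buckets[:j])
        agg_fare = sum(f * d for f, d in buckets[:j]) / agg_demand
        protection = 0
        while (protection < capacity and
               agg_fare * poisson_sf(protection, agg_demand) >= buckets[j][0]):
            protection += 1
        limits.append(capacity - protection)
    return limits

# Invented forecast for three virtual buckets on a 120-seat coach cabin.
buckets = [(420.0, 18), (260.0, 25), (140.0, 60)]
print(emsr_b_limits(buckets, capacity=120))
```

The nesting means a booking request for a low bucket is accepted only while seats remain above the protection reserved for all higher-revenue buckets combined — the same flight-level logic the ‘virtual nesting’ hack grafted onto SABRE.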
DINAMO opened decades of intense research and development to improve the ‘hack’, the forecasting, and the optimization. The underlying logic is still in use today, at least at the largest network airlines. It directly inspired marketplace engineers in many industries, from Amazon to Uber, from hotels to concert ticket sellers. It features prominently in the training of the future generation of marketplace engineers. And if your local supermarket uses digital price tags on the shelves, it is likely using a version of it too.
Sources
The knowledge produced by marketplace engineers is not widely shared beyond the community of specialists. Furthermore, it is very practical and operational, and for that reason not fully codified in the scientific literature. Therefore, the main sources for my research are interviews with the engineers and scientists who built these systems at airlines (50 people interviewed so far, and the list is still open!). I asked them how they proceeded, what resources they had, what environment they were working in, what their thought process was, and how they arrived at the solution. My interviewees also walked me through the technical literature they have produced, in particular technical presentations delivered at an industry forum called AGIFORS, the Airline Group of the International Federation of Operational Research Societies. The talk on DINAMO drew on my broader research project, in which I study the practices, forms of reasoning, and ways of thinking of the engineers and scientists who built revenue management systems in the airline industry, from the origins in the 1980s to today. On DINAMO, the interested reader can start with the great paper published by its three main inventors: Smith, Leimkuhler and Darrow (1992), ‘Yield Management at American Airlines’, Interfaces 22 (1), pp. 8–31.
Proposed citation: Guillaume Yon. 2024. Marketplace Engineers at Work: How Dynamic Airline Ticket Pricing Came into Being. https://khk.rwth-aachen.de/2024/01/31/9192/marketplace-engineers-at-work-how-dynamic-airline-ticket-pricing-came-into-being/.