In February 2024, c:o/re began a collaboration with colleagues from Ritsumeikan Asia Pacific University on the topic of Emotionalized Artificial Intelligence (EAI). Professor Peter Mantello (Ritsumeikan Asia Pacific University) leads a three-year project, funded by the Japan Society for the Promotion of Science and with c:o/re as a partner, that will compare attitudes in Japan and Germany towards EAI in the workplace. This explorative pathway contributes to the c:o/re outlook on Varieties of Science. You can find the whole project description on our website here.
In the interview below, Peter Mantello explains what EAI is, how the project will consider AI ethics and why the comparison of German and Japanese workplaces is particularly insightful. We thank him for this interview and look forward to working together.
Peter Mantello
c:o/re short-term Senior Fellow (11-17/2/2024)
Peter Mantello is an artist, filmmaker and Professor of Media Studies at Ritsumeikan Asia Pacific University in Japan. Since 2010, he has been a principal investigator on various research projects examining the intersection between emerging media technologies, social media artifacts, artificially intelligent agents, hyperconsumerism and conflict.
What is Emotionalized Artificial Intelligence (EAI)? How does this formulation differ from 'Emotional AI'?
Emotional AI is the commercial moniker of a sub-branch of computer science known as affective computing. The technology is designed to read, monitor, and evaluate a person's subjective state. It does this by measuring heart rate, respiration rate, skin perspiration levels, blood pressure, eye movement, facial micro-expressions, gait, and word choice. It involves a range of hardware and software, including cameras, biometric sensors and actuators, big data, large language models, natural language processing, voice tone analytics, machine learning, and neural networks. Emotionalized AI appears in two distinct forms: embodied (care/nursing robots, smart toys) and disembodied (chatbots, smartphone apps, wearables, and algorithmically coded spaces).
I think the term 'emotionalized' AI better encompasses the ability of AI not just to read and recognize human emotion but also to simulate it and respond in an empathic manner. Examples of this can be found in therapy robots, chatbots, smart toys, and holograms. EAI allows these forms of AI to communicate in a human-like manner.
What is emotionalized AI used for, and towards what ends is it being further developed?
Currently, emotionalized AI can be found in automobiles, smart toys, healthcare (therapy robots and doctor-patient conversational AI), automated management systems in the workplace, advertising billboards, kiosks and menus, home assistants, social media platforms, security systems, wellness apps, and videogames.
What forms of ethical work practices and governance do you have in mind? Are there concrete examples?
There is a range of moral and ethical issues surrounding AI. Many of these are similar to conventional usages of AI, such as concerns about data collection, data management, data ownership, algorithmic bias, privacy, agency, and autonomy. But what is specific about emotionalized AI is that the technology pierces through the corporeal exterior of a person into the private and intimate recesses of their subjective state. Moreover, because the technology targets non-conscious data extracted from a person's body, people may not be aware of the monitoring or able to consent to it.
Where do you see the importance of cultural diversity in AI ethics?
Well, it raises important issues confronting the technology's legitimacy. First, the emotionalized AI industry is predominantly based in the West, yet its products are exported to many world regions. Not only are the data sets used to train the algorithms drawn primarily from Westerners, but they also rely largely on the American psychologist Paul Ekman's 'universality of emotions' theory, which suggests there are six basic emotions that are expressed in the same manner across all cultures. This is untrue: a growing number of critics have challenged the reliability and credibility of face analytics, and Ekman's theory has been discredited. However, this has not stopped many companies from designing their technologies on Ekman's debunked templates. Second, empathetic surveillance in certain institutional settings (school, office, factory) could lead to emotional policing, where to be 'normal' or 'productive' people will be required to be always 'authentic', 'positive', and 'happy'. I'm thinking of possible dystopian Black Mirror scenarios, such as the episode 'Nosedive'.
Third, exactly what kind of values do we want AI to have – Confucian, Buddhist, Western Liberal?
Do you expect to find significant differences between the Japanese and German workplace?
Well, it's important to understand that the workplace has multiple definitions. Workplaces include commercial vehicles, ridesharing, remote workspaces, hospitals, restaurants, and public spaces, not just brick-and-mortar white-collar offices.
Japan and Germany share common features of work culture, but each society also has historically different attitudes towards human resource management, what constitutes a 'good' worker, loyalty, corporate responsibility to workers, worker rights and unions, and precarity. The two cultures also differ in how they express emotions, raising questions about the imposition of US and European emotion analytics in the Japanese context.
How will the research proceed?
The first stage of the research will be to map the ecology of emotion analytics companies in the West and the East. This includes visits to trade show exhibits, technology fairs, start-up meetings, etc. The second stage will be interviews. The third stage will include a series of design fiction workshops targeted at key stakeholders. Throughout all of these stages, we will be holding workshops in Germany and in Tokyo, inviting an interdisciplinary mix of scholars, practitioners, civil liberties advocates, and industry people.
What do you think will be the most important impact of this project?
We are at a critical juncture in defining and deciding how we want to live with artificial intelligence. Certainly, everyone talks about human-centric AI, but I don't know what that means, or if that's the best way forward. Humans haven't necessarily made the best choices for our world. If we try to make AI in our own image, it might not turn out right. What I hope this project brings are philosophical insights that will better inform the values we need to encode into AI, so it serves the best interests of everyone, especially those who will be most vulnerable to its influence.
What inspired you to collaborate with c:o/re?
My inspiration to collaborate with c:o/re stems from my growing interest in phenomenological aspects of human-machine relations. For the past three years, my research has focused primarily on empirical studies of AI. The insights gained from this were very satisfying, but they also opened the door to larger, more complex questions that could only be examined from a more theoretical and philosophical perspective. After a chance meeting with Alin Olteanu at a semiotics conference, I was invited to attend a c:o/re workshop on software in 2023. I realized then that the KHK's interdisciplinary and international environment would be a perfect place for an international collaborative research project.