Towards Expanding STS?
MARCUS CARRIER
On October 9, 2024, KHK c:o/re director Prof. Dr. Stefan Böschen opened the new lecture series “Expanding Science and Technology Studies”. His talk, titled “Towards Expanding STS?”, was aimed at setting the scene for the lecture series and served as a starting point for further reflections on the topic. The talk was largely devoted to sketching out problems that, as Prof. Böschen argues, classical Science and Technology Studies (STS) is not equipped to tackle alone. Instead, he argues for expanding STS towards other disciplines that investigate science and technology, namely the history of science and the philosophy of science, to better grasp these problems.
Prof. Böschen started his talk by presenting his own personal starting points for thinking about this topic. First, there is the new concept of “Synthetica”, or new forms of life designed by humans, which also played a role in the opening talk by KHK c:o/re director Prof. Gramelsberger for the 2023/2024 lecture series “Lifelikeness”. Prof. Böschen asked whether these Synthetica are epistemic objects or technical objects, and whether STS is equipped to describe the practice around them. Second, he talked about the sustainable development goals. These are very knowledge-intensive, but at the same time the knowledge management has to be done by different countries, which also have to take into account different forms of knowledge and manage a great deal of diversity in the system. Third, Prof. Böschen reflected on different formats he has experienced that made him think further about expanding STS: the Temporary University Hambach, which was designed around the structural change in the Rhenish Revier and based on the needs of local people, and the STS Hub 2023 in Aachen, which was designed to bring together different disciplines doing “science on science.”
After having set the scene with these personal starting points, Prof. Böschen claimed that there are signs that science is changing significantly. First, he concentrated on the cluster of topics around digitization, and especially the digitization of problem-solving in science. This cluster includes digital models, used both for scenario building and for reducing the space of options; real-world problems must be transformed to become computable, and the models thereby shape the way of thinking in science. The cluster also includes the digitization of scientific literature, needed to grasp its ever-growing volume, as well as the digitization of experiments, which can pose challenges for expectations of reproducibility.
In the tension between simplification for the sake of problem-solving and complexification for the sake of better understanding specific contexts, Prof. Böschen argued that digital tools are steered towards simplification. This, in turn, creates new and specific concerns about the epistemic quality of the knowledge produced by these tools and about the way they transform research in practice.
The second cluster of topics that Prof. Böschen sees as a sign of significant change in science is the de-centralization of knowledge production, exemplified in projects like living labs, which were also part of a recent talk by Dr. Darren Sharp at the KHK c:o/re. Programs like living labs, in which science encourages society to participate in creating solutions for local issues, can take two forms. On the one hand, they can collaboratively explore the status quo and define what should be understood as the “problem” before bringing together local experience and knowledge as well as scientific knowledge to solve it. On the other hand, living labs can start out with a technological innovation and then look locally for applications and use cases for this innovation. The technology can then be optimized towards local needs.
In both forms of living labs, the important new criterion for knowledge is relevance, which raises the questions of for whom it should be relevant and who defines that. These local solutions and optimizations also face problems of scaling: how can they be scaled up, and do the “problems” remain the same at every level of scale? Lastly, how does this impact knowledge production at a deeper level?
Both the digitization of science and the de-centralization of knowledge production show, according to Prof. Böschen, that science is in the midst of a transformation. There is a need for a relational analysis of epistemic quality and epistemic authority. He shared his intuition that the ideal of reliable scientific knowledge should be preserved and that knowledge production for decision-making processes has an epistemic as well as an institutional side. This analysis, Prof. Böschen argues, cannot be done by any discipline alone but needs collaboration between the sociologically and ethnographically centered STS and more philosophically and historically oriented research on science. Expanded STS, as Prof. Böschen envisions it, should tailor new concepts for analyzing research during transformation.
With this call to action, Prof. Böschen left his audience not with a set program but with a description of problems that call for future interdisciplinary discussions.
On October 30, 2024, the next talk of the lecture series, titled “An IAEA for AI? The Regulation of Artificial Intelligence and Governance Models from the Nuclear Age”, will be given by our fellow Elisabeth Röhrlich. We look forward to continuing the conversation!
Workshop “Epistemology of Arithmetic: New Philosophy for New Times”
The Käte Hamburger Kolleg on Cultures of Research hosted a philosophical workshop on May 16 and 17. It was organized by Markus Pantsar and Gabriele Gramelsberger for good reasons: Gabriele Gramelsberger was the first German philosopher to receive the K. Jon Barwise Prize, while Markus Pantsar’s book “From Numerical Cognition to the Epistemology of Arithmetic” had recently been published by Cambridge University Press as the first book publication by a fellow at the KHK Aachen.
Markus Pantsar: “From Numerical Cognition to the Epistemology of Arithmetic”
The workshop kicked off with a presentation by Markus Pantsar (RWTH Aachen University) on how his book came to be. The leading question is: how can we use empirical knowledge about numerical cognition to gain a better understanding of arithmetical knowledge? His goal is to combine the philosophy of mathematics with the cognitive sciences to gain a deeper understanding of how we develop and acquire number concepts and their arithmetic. It is fascinating how these concepts develop differently across cultures, even though they are based on universal proto-arithmetical numerical abilities. Indeed, even animals have proto-arithmetical abilities, evidenced by their ability to differentiate between collections based on numerosities. This leads to an intriguing question: how do we come to develop and acquire number concepts? From an anthropological perspective, numbers are a fundamental aspect of human life in many cultures, yet there are also cultures without numbers. Hence, aside from the evolutionarily developed proto-arithmetical abilities, we also need to focus on the cultural foundations of arithmetic. All this, Pantsar argued, is relevant for the epistemology of arithmetic.
Dirk Schlimm: “Where do mathematical symbols come from?”
Dirk Schlimm from McGill University in Montreal was the next to present. He talked about his recent research project on mathematical notations. Starting from the question of what notations are (according to Peirce), Schlimm introduced his newest findings: mathematical notations are sometimes arbitrary, but this is not the case generally. Mathematical symbols may resemble or draw from shapes in the real world, or have other characteristics that connect to our cognitive capacities. The issue is, however, very complex. Mathematical symbols, in particular, serve many purposes, and their use needs to be studied with this in mind. In addition to purely scientific purposes, we should consider how academic practices and political dimensions influence the acceptance and use of notations.
Richard Menary: “The multiple routes of enculturation”
Richard Menary (Macquarie University, Sydney) then gave us insights into his research on enculturation, arguing that there are multiple cultural pathways to developing and acquiring number concepts and arithmetic. Menary calls this the multiple routes model of enculturation. He discussed aspects of Pantsar’s book, especially the developmental path from proto-arithmetical cognition to arithmetical cognition. Menary showed a variety of ways in which this transition can take place, such as finger counting or writing and forming numbers on paper. Enculturation through cultural practices has a significant influence on the development of arithmetical abilities, but we should not be fooled into thinking that such enculturation is a uniform phenomenon that always follows similar paths.
Regina Fabry: “Enculturation gone bad: The Case of math anxiety”
Regina Fabry (Macquarie University) showed in her presentation on math anxiety how the relationship between cognition and affectivity needs to be included in accounts of arithmetical knowledge. While accounts of enculturation, like those of Menary and Pantsar, focus on the successful side of things, it is important to acknowledge that processes of enculturation can also go bad. Socio-cultural factors associated with mathematics education can lead to anxiety, which hinders the learning process with long-lasting consequences. Empirical studies can contribute to a better understanding of where epistemic injustice may be present and where there is a strong link to math anxiety. Accounts of arithmetical knowledge drawing on enculturation should be sensitive to such problems, but we can also use research on math anxiety to better understand the role of affectivity in enculturation in general.
Catarina Dutilh Novaes: “Dialogical pragmatism and the justification of deduction”
On Friday, Catarina Dutilh Novaes from Vrije Universiteit Amsterdam discussed her ongoing investigation into the dialogical roots of deduction and posed the question of what, if anything, can justify deductive reasoning. While her book The Dialogical Roots of Deduction offers an analysis of deduction as it is present in cultural practices, the question of its justification is left open. In her talk, she discussed whether pragmatist approaches could fill the gap and ground deduction. She argued that the justification for deduction comes from nothing beyond the pragmatics of the dialogical development of deduction. She supported this claim with a discussion of pragmatist theories of truth and of recent debates on anti-exceptionalism in logic.
Frederik Stjernfelt: “Peirce’s Philosophy of Notations and the Trade-offs in Comparing Numeral Symbol Systems”
The former KHK fellow Frederik Stjernfelt (Aalborg University Copenhagen) talked about his recent studies on Charles S. Peirce’s work on notations, co-conducted with Pantsar. Although better known for his work on logical notation, Peirce was also deeply interested in mathematical notation, including numeral symbol systems. He was eager to find a fitting notation for numbers that is easy to learn and allows easy calculations. Peirce focused in particular on the binary and heximal (base-6) systems, the latter of which he considered superior to our decimal system. Stjernfelt presented Peirce-inspired criteria for different aims of numeral symbol systems, like iconicity, simplicity, and ease of calculation, arguing that the choice of a symbol system comes with trade-offs between them.
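For readers who want to see what the same quantity looks like in these different systems, here is a minimal, purely illustrative Python sketch (my own addition, not part of Stjernfelt’s talk) that converts integers into binary, heximal, and decimal notation:

```python
def to_base(n: int, base: int) -> str:
    """Write a non-negative integer n in positional notation, for bases up to 10."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))  # least significant digit first
        n //= base
    return "".join(reversed(digits))

# The same quantities written in the three systems Peirce compared:
for n in (7, 36, 100, 1000):
    print(f"decimal {n:>4}  =  heximal {to_base(n, 6):>5}  =  binary {to_base(n, 2):>10}")
```

Even this toy comparison hints at the trade-offs Stjernfelt described: binary strings are long but use a maximally simple digit inventory, while heximal and decimal are more compact at the cost of more symbols to learn.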
Stefan Buijsman: “Getting to numerical content from proto-arithmetic”
Stefan Buijsman (TU Delft) discussed Pantsar’s account of how humans get from proto-arithmetical abilities to proper arithmetical abilities. Studies of young children suggest that the core cognitive object-tracking system (OTS) and the approximate number system (ANS) can both play a role in this process, but a key stage is acquiring the successor principle (that for every number n, n + 1 is also a number). Buijsman emphasized the role of acquiring the number concept one and its importance in grasping the successor principle, noting that Pantsar’s account could benefit from more focus on the special character of acquiring this first number concept.
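For readers who want the principle spelled out, here is a minimal formal sketch in the style of the Peano axioms; this is a standard textbook rendering added for illustration, not necessarily the formulation used by Pantsar or Buijsman:

```latex
% Successor principle, Peano-style (standard formulation, for illustration only)
\begin{align*}
  &\forall n \in \mathbb{N}: \; S(n) \in \mathbb{N}
     && \text{every number has a successor } S(n) = n + 1,\\
  &\forall n, m \in \mathbb{N}: \; S(n) = S(m) \rightarrow n = m
     && \text{distinct numbers have distinct successors,}\\
  &\forall n \in \mathbb{N}: \; S(n) \neq 0
     && \text{zero is not the successor of any number.}
\end{align*}
```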
Alexandre Hocquet: “Reproducibility, Photoshop, Pubpeer, and Collective Disciplining”
With the talk by Alexandre Hocquet (Université de Lorraine / Archives Henri-Poincaré), the workshop moved from the philosophy of arithmetic to digital and computational approaches to the philosophy of science. Hocquet discussed the Photoshopping of scientific digital images and their use for fraud in academic research, focusing on the Voinnet affair. On this basis, he discussed the topics of trust, reproducibility, and the change of scientific methods. With the use of digital images as evidence, new considerations of transparency are needed to ensure trust in scientific practice.
Gabriele Gramelsberger and Andreas Kaminski: “From Calculation to Computation. Philosophy of Computational Sciences in the Making”
In the final talk of the workshop, Gabriele Gramelsberger (RWTH Aachen University) and Andreas Kaminski (TU Darmstadt) focused on the computational turn in science. While mathematics has been an indispensable part of science for centuries, the increasing use of computer simulations has replaced arithmetical calculations with Boolean computations. Gramelsberger discussed the cognitive limitations of interpreting non-linear computing systems. Kaminski then considered questions of epistemic, pragmatic, and ethical opacity that arise from these limitations.
From bio-ontologies to academic lives: What studying biocuration can tell us about the conditions of academic work
SARAH R. DAVIES
When I arrived at the Käte Hamburger Kolleg in February 2024, my plan was to study bio-ontologies: the systems that are used to categorise and organise biological data. As a Science and Technology Studies (STS) researcher, I had been interested in biocuration for a while, and one key aspect of biocuration work is developing and applying ontologies. Exploring bio-ontologies would, I thought, give me important insights into the practice of biocuration and what it is doing to our understandings of biology, of the organisms and entities that are studied, and of ideas about ‘life’ itself.
Sarah R. Davies
Sarah R. Davies is Professor of Technosciences, Materiality, and Digital Cultures at the Department of Science and Technology Studies, University of Vienna, Austria.
Her work explores the intersections between science, technology, and society, with a particular focus on digital tools and spaces.
I am a social scientist, so delving into the nature of bio-ontologies by looking at natural science and philosophy literature about them was something of a departure for me. What I hadn’t necessarily expected was that doing so would bring me back to more sociological questions, in particular regarding the conditions of academic work. In other words, studying bio-ontologies led me to argue that these systems, which are “axioms that form a model of a portion of (a conceptualization) of reality”[1], are connected to forms of life not just in the context of biological entities, but also with regard to the researchers who create and use them.
Let me rewind a bit. What is biocuration, and what exactly are bio-ontologies? Biocuration is “the process of identifying, organising, correcting, annotating, standardising, and enriching biological data”.[2] Its “primary role … is to extract knowledge from biological data and convert it into a structured, computable form via manual, semi-automated and automated methods.”[3] This is largely done in the context of large data- and knowledgebases (such as FlyBase or UniProt), which are now central to the biosciences. Biocurators work to develop and maintain such databases, for example by reading scientific articles and extracting useful information from them, inputting data into databases, adding metadata and annotating information, and – importantly – creating and using the bio-ontologies I have already mentioned.
Bio-ontologies, then, are a means of classifying and organising biological data. They offer a ‘controlled vocabulary’ (meaning a standardised terminology), but also represent current knowledge about biological entities in that they consist of “a network of related terms, where each term denotes a specific biological phenomenon and is used as a category to classify data relevant to the study of that phenomenon.”[4] Bio-ontologies such as the Gene Ontology therefore offer not only a means of accessing knowledge and data, but also of investigating biological phenomena by creating, as noted on the Gene Ontology’s website, “a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research”.
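To make the idea of a ‘network of related terms’ concrete, here is a small, purely illustrative Python sketch of a toy controlled vocabulary; the term identifiers and names are invented for illustration and are far simpler than real ontologies such as the Gene Ontology:

```python
# A toy 'controlled vocabulary': each term is a node, linked by 'is_a' relations.
# Term IDs and names are invented for illustration, not real ontology identifiers.
ontology = {
    "TOY:0001": {"name": "biological process", "is_a": []},
    "TOY:0002": {"name": "cell division", "is_a": ["TOY:0001"]},
    "TOY:0003": {"name": "mitotic cell division", "is_a": ["TOY:0002"]},
}

# Data annotated against the vocabulary (one dataset, tagged with a specific term).
annotations = {"experiment_42": "TOY:0003"}

def ancestors(term_id: str) -> set[str]:
    """All broader terms reachable from term_id via is_a links."""
    found: set[str] = set()
    stack = [term_id]
    while stack:
        for parent in ontology[stack.pop()]["is_a"]:
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

# A query for the broader term 'cell division' also retrieves experiment_42,
# because its annotation is a descendant of that term.
query = "TOY:0002"
hits = [d for d, t in annotations.items() if t == query or query in ancestors(t)]
print(hits)  # ['experiment_42']
```

The point of the sketch is the query at the end: because terms are linked by ‘is_a’ relations, data annotated with a specific term can also be retrieved through its broader parent terms, which is part of what gives bio-ontologies their epistemic significance.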
As I looked into the nature of bio-ontologies, it became clear to me that these organisational systems for biodata are hugely important. They allow researchers in the biosciences to access current knowledge and relevant data (not always easy in the midst of a ‘data deluge’), but they also have epistemic significance. As Sabina Leonelli writes, bio-ontologies “constitute a form of scientific theorizing that has the potential to affect the direction and practice of experimental biology.”[5] The development and application of ontologies to biological data thus renders the contemporary biosciences thinkable, capturing the current state of the art and allowing researchers to extrapolate from it.
Given this significance, it is perhaps somewhat surprising that biocuration, as an area of science, often goes unnoticed by its users and by research funders. As one biocurator told me:
…we are in the background. Even researchers who heavily use these resources [databases], don’t usually know our names and don’t think about us existing. But they love the resource. And that’s actually something we’ve gotten with the booth when we were at conferences. People will come up and be like, oh you are the [resource]! Wow, you are good, awesome. They are kind of shocked that there’s humans there.[6]
Biocurators are not only ‘in the background’; they also frequently struggle to get sustained funding for their work, and generally need to build careers through a series of temporary contracts. Perhaps because databases are machine-readable and can be queried automatically, both funders and the researchers who use curated resources often seem to imagine that the work of biocuration can be readily carried out through automated means; in practice, while biocurators make use of automated tools such as text-mining, interpreting scientific literature and annotating data is a highly skilled activity that cannot be easily replicated by AI or other technologies.
Why is biocuration so under-valued despite its epistemic importance? One answer is that biocuration does not fit well with current systems of reward and evaluation within academia. Researchers are, for instance, rewarded for publishing frequently and in high-profile journals, but biocurators produce other kinds of outputs than journal articles – the data- and knowledgebases that they work on. Similarly, gaining research funding is typically seen as a sign of a successful academic, but biocurators’ work does not fit well into the categories that funders use to assess research quality (such as novelty). As Ankeny and Leonelli explain:
Value in science (be it of individual researchers or particular research projects) is largely calculated on the basis of the number of publications produced, the quality of the journals in which those publications appeared, and the impact of the publications as measured by citation indices and other measures: given that [data] donation and curation are still largely unrecognized, the value of these activities correspondingly is limited in part because it cannot be measured using traditional metrics.[7]
Studying bio-ontologies thus led me to consider the lives of their creators, and the conditions under which they work. Despite the epistemic significance of biocuration, it escapes recognition under contemporary ways of crediting and rewarding academic work – something which seems to me to be deeply unfair. Perhaps, then, we need to find new ways of valuing, funding, and rewarding the wide variety of epistemic contributions made within research, rather than relying on metrics such as number of publications and citations as the key means of assessing research?
References
[1] Bodenreider, Olivier, and Robert Stevens. 2006. “Bio-ontologies: current trends and future directions.” Briefings in Bioinformatics 7 (3): 256–74. https://doi.org/10.1093/bib/bbl027.
[2] Tang, Y. Amy, Klemens Pichler, Anja Füllgrabe, Jane Lomax, James Malone, Monica C. Munoz-Torres, Drashtti V. Vasant, Eleanor Williams, and Melissa Haendel. 2019. “Ten quick tips for biocuration.” PLoS Computational Biology 15 (5): e1006906. https://doi.org/10.1371/journal.pcbi.1006906.
[3] Quaglia, Federica, Rama Balakrishnan, Susan M. Bello, and Nicole Vasilevsky. 2022. “Conference report: Biocuration 2021 Virtual Conference.” Database 2022 (January): baac027. https://doi.org/10.1093/database/baac027.
[4] Leonelli, Sabina. 2012. “Classificatory Theory in Data-intensive Science: The Case of Open Biomedical Ontologies.” International Studies in the Philosophy of Science 26 (1): 47–65. https://doi.org/10.1080/02698595.2012.653119.
[5] Ibid.
[6] Davies, Sarah R., and Constantin Holmer. 2024. “Care, collaboration, and service in academic data work: biocuration as ‘academia otherwise.’” Information, Communication & Society 27 (4): 683–701. https://doi.org/10.1080/1369118X.2024.2315285.
[7] Ankeny, Rachel A., and Sabina Leonelli. 2015. “Valuing Data in Postgenomic Biology: How Data Donation and Curation Practices Challenge the Scientific Publication System.” In Postgenomics: Perspectives on Biology after the Genome, edited by Sarah S. Richardson and Hallam Stevens, 126–49. Duke University Press. https://doi.org/10.1515/9780822375449-008.
Net Zero Precinct Futures: place-based experimentation for sustainability transitions
On September 11, 2024, Kármán Fellow Dr Darren Sharp gave an overview of Net Zero Precincts, a four-year ARC Linkage project to develop and test a new interdisciplinary approach to helping cities achieve net-zero emissions. In this project, Dr Sharp brings together transition management and design anthropology with the goal of transitioning to net-zero carbon emissions in an urban environment.
The starting point of the project is the Net Zero Initiative of Dr Sharp’s home institution, Monash University (Melbourne, Australia), the first university in Australia to pledge to become carbon-neutral by 2030. Net Zero Precincts is researching this transition on campus both to facilitate its success and to learn lessons for scaling up such initiatives at the precinct level.
Dr Sharp started by giving an overview of the two stages of the project that are already finished. In the orienting stage, Dr Sharp and his team made use of interviews with, among others, staff, students, representatives of local and state government, and people from NGOs. The goal was to identify the main sustainability challenges, drivers, and uncertainties along the way, as they were understood by the interviewees.
In the second stage, which focused on agenda-setting, workshops were used to move from participants’ abstract visions of a net-zero future to concrete ideas for actionable steps and transition pathways. It was especially important at this stage to take local perspectives, the local landscape, and nature into account.
Finally, Dr Sharp briefly discussed the ongoing stage 3 of the project, which started in April 2024. Here, the pathways and visions identified in stage 2 are used to develop experiments for the Monash campus living lab, in which different projects to overcome the identified challenges or reach the set goals are tried out.
Overall, Dr Sharp argues that the process of scaling up a net-zero project to the precinct level requires a broad perspective. It is not enough to focus on technical innovations to reduce carbon emissions alone. Instead, it is also essential to rediscover First Nations’ knowledge systems, to think about small everyday innovations, and to mobilize the community. Challenges to achieving a net-zero future are local and community-specific, and they must be considered as well.
The Net Zero Precincts project raises fundamental questions that are also of great importance for technical universities. The self-design of universities as living labs is becoming increasingly important under the current transformative conditions of research and innovation. This is because knowledge contexts and the orientation towards socially desirable results must be intertwined with the forms of academic knowledge production. In addition, in cooperation with the Living Labs Incubator at the Human Technology Center, we were able not only to work on specific research issues in living labs in a workshop, but also to discuss first steps towards developing a global network for living lab research at universities.
Links
Net Zero Precincts: Stage 1 Report (PDF)
Net Zero Precincts: Stage 2 Report (PDF)
Photos by Jana Hambitzer
Reports from the field: a very partial view of EASST4S2024 Amsterdam
BART PENDERS
Social studies of science, or science and technology studies by any other name, may sometimes feel like a small field in which one knows, or knows of, the relevant players on a global level. Attending the combined conference of the European Association for the Study of Science and Technology (EASST) and the Society for Social Studies of Science (4S) then becomes a humbling experience. With over 4100 attendees over the course of the conference, this year’s edition in Amsterdam may have been the biggest ever. The scale of these events is always impressive and without exception displays the holes in one’s overview of the community.
Bart Penders
Bart Penders investigates moral, social and technical plurality in research integrity, scientific reform and forms of collaboration across a variety of scientific specializations. He currently holds a position as Associate Professor in ‘Biomedicine and Society’ at Maastricht University.
On the upside, that means that there are new worlds in STS to uncover and engage with, without a real upper limit. The absence of these upper limits is overwhelming and daunting, though. Consider, for instance, that EASST4S2024 had 10 timeslots for parallel sessions, each offering a choice of between 50 and 60 sessions. That gives every attendee over 97 quadrillion potential sets of panels to go to, and has given rise to the custom of not asking fellow attendees “How is the conference so far?” but instead “How is your conference so far?”
Thematically and conceptually, STS is difficult to pin down. EASST4S2024 saw whole collections of sessions on AI and society, participatory approaches to science policy and practice, critical engagement with open science, various panels on psychedelics, music and sound, and so much more. But it is never just talk – experiments with different forms of conferencing have, over the years, created alternative panel formats, which this year ranged from cooking workshops to a whole selection of movies.
The diversity of a conference of this scale cannot be summarized. Every attempt is destined to fail. However, there are elements worth mentioning to me: some that formed the core of my route through the conference, and a few that were more plenary, more shared, more collective – snippets of a joint experience.
Let’s start with the shared experience – that of judicious connections between scholars with shared interests; the joy of meeting people you haven’t seen in a while but with whom you share academic pasts, and those whom you never met but with whom you may share academic futures. Next to the many plural elements of the conference, there were a number of plenary events for all to share. The scale of the conference did make some of that sharing materially difficult: the largest room at the Vrije Universiteit Amsterdam, which hosted the conference, could only seat roughly a quarter of all attendees. Plenaries were streamed to a number of the conference rooms, where plenary sessions became large-screen televised events.
One of the key questions of the first plenary was: how does STS translate into policy? One of the speakers was Dr. Alondra Nelson, who had served as a scientific advisor in the Biden administration and conveyed a twofold message. First, there is a lot STS has to offer policy: the contested themes of our day are where STS excels, and we need not be overly afraid of some instrumentalization of science in policy. Second, and in contrast, policy advice does not always leave time for the empirical or conceptual labor to underpin it. What we need, Nelson argued, is a certain Science and Technology Intuition, a reservoir of generic tacit skill and knowledge we can tap from. Uncomfortable, imprecise, but powerful. Brice Laurent expanded on this argument by highlighting that we need to transcend a dualist frame in which science is separate from (the issues of) daily life. Our daily lives are penetrated by science to such an extent that we cannot, and should not, separate them, and any culture war that seeks to achieve this will inevitably come undone.
Massive conferences also come with honors: people who are remembered for their achievements (a plenary dedicated to the work of Adele Clarke) and those who are awarded for their achievements. The list of prizes both societies grant together is very long, but one worth pointing out is the duo that received the 2024 Bernal Prize: Dutch anthropologist Annemarie Mol and US critical informatics scholar Geoffrey Bowker.
The infrastructure of a conference of this scale turns it, in many ways, into an academic festival, with the ability to taste and enjoy the various fruits the community has on offer. This analogy was not lost on the conference organizers, who chose not to host a traditional conference dinner but rather to organize a genuine “Forest Festival” in the Amsterdamse Bos. Next to the various flavors a global academic community has on offer, we were treated to quite literal global flavors under a pleasant sun.
On a more individual note, I managed to attend a plethora of sessions diving into the credibility of scientific collaboration, the role of replication in science and what perspectives STS has to offer, how reforms in science happen under conditions of uncertainty, and how science corrects itself – or not. I organized some of them, spoke in some of them, and engaged with speakers in others. I asked, and was asked regularly, “Have you written about that?”, and more often than not, the answer was no. In isolation, that no may be disappointing, but on a more structural level it displays the many unexplored and underexplored paths and potential futures STS conferences offer. As every STS mega-conference does, it has left me exhausted but intellectually reinvigorated. To be overwhelmed is not always a bad thing, but it sure is impressive every time.
Photos by Ana María Guzmán Olmos
Towards Technological Solutions to Climate Action from Varieties of Science: Insights from the Narratives of Floods in Kenya and Germany
FREDRICK OGENGA
INTRODUCTION
In the past few months, Nairobi has been experiencing extreme weather patterns in line with warnings from the weatherman. This trend, which seems to recur annually, began sometime last year with devastating droughts that affected the entire country, with arid and semi-arid regions worst hit. The droughts created food shortages and insecurity of biblical proportions, to the extent that politicians, led by the President, William Ruto (a champion of climate action), were calling for intercession through national prayers. They led to the deaths of vulnerable women and children and contributed to the loss of livestock and crops, negatively affecting Kenya’s economy through consequent high food prices. Fast forward to this year (2024), and another extreme pattern was witnessed, this time characterized by heavy and prolonged rainfall that contributed to floods and mudslides that killed people in cities and villages[1]. It may have appeared like a Kenyan problem, but the problem was witnessed in other parts of the world, in places like Dubai and, most recently, Germany.
Of course, these things often appear sensationally on media platforms[2], and for the first time, similar media scenes of animals and property being swept away by floods in Kenya, Germany and Dubai were witnessed in both developed and developing economies. Is climate not the great equalizer? Does this not raise the question of what humanity can learn from these seemingly similar patterns of events, at least as represented through news media outlets? What kind of agency does this narrative incite, and what does it tell us about our culture of doing things and our own ingenuity? Are there possibilities of positive synergy across cultures, geographical spaces and tech/media platforms to find solutions for the future of humanity in a world ravaged by climate-induced disasters?
Fredrick Ogenga
Fredrick Ogenga is an Associate Professor of Media and Security Studies at Rongo University and the Founding Director of the Center for Media, Democracy, Peace & Security. He also serves as the CEO of The Peacemaker Corps Foundation Kenya. Ogenga is a Letsema Visiting Fellow at the Institute for Pan-African Thought and Conversation and a Senior Non-resident Research Fellow at the Institute for Global African Affairs, at the University of Johannesburg and the University of the West Indies respectively. He is also an Associate Researcher at the Africa Studies Center, University of Basel, and a Senior Research Associate at Swisspeace. Ogenga is a member of the International Panel on the Information Environment’s (IPIE) Scientific Panel on Information Integrity on Climate Science and Chair of IPIE’s AI and Peacebuilding Scientific Panel. He is also a former Southern Voices Network for Peacebuilding Scholar at the Wilson Center, Washington, D.C., and a former Africa Peacebuilding Network fellow. Ogenga is a co-founder of the Varieties of Science Network (VOSN) and will be a Senior Fellow at the KHK c:o/re, RWTH Aachen University, in 2025.
VARIETIES OF SCIENCE NETWORK
These are the tough questions we now face, and to address them a new view of the different forms in which problem-oriented research is performed seems decisive. The idea of a Varieties of Science Network (VOSN) was therefore born in Basel, Switzerland, by Prof. Stefan Böschen, Director of the Käte Hamburger Kolleg Cultures of Research, RWTH Aachen University, Germany, and Prof. Fredrick Ogenga, Director of the Center for Media, Democracy, Peace and Security, Rongo University, and CEO of the Peacemaker Corps Foundation Kenya. The network seeks to examine the challenges faced globally, from environmental, political and economic to social and cultural ones, the most prominent being climate change, financial inequalities, political and social upheavals, and pandemics. In this context, humanity continues to display a great level of ingenuity and resilience and has innovated ways of coping and adapting for self-preservation, though not without challenges. Nevertheless, what has been lacking is a higher level of cooperation across cultures and geographical spaces to take advantage of the potential benefits of cross-pollinating local knowledge and expertise at both the local and the global level, as demonstrated by the recent floods witnessed from Nairobi to Dubai to the west of Germany, in Aachen.
The latter is a reminder to humanity that we are confronted with similar challenges in a seemingly technologically connected world, challenges that call into question the common assumptions, evident in political conversations globally, that have often defined the boundaries between the global North and South in epistemic frameworks in which the latter has often had to play catch-up. Central to this conversation has been the idea of coloniality and, within that, decoloniality, as well as the emergence of global communication technologies which have been designed and exploited to maintain and sustain unequal power modalities[3].
This positionality has sustained a global image of Africa on global media platforms as a continent ravaged by disease and disaster (floods, droughts and pandemics), as seen in the coverage of the recent floods in Kenya – an image inspired by the coloniality of technology and knowledge and, within that, by the centrality of decoloniality vis-à-vis the emergence of global communication systems. Technological systems designed to sustain a colonial discourse – a discourse that falls short amidst a global environment changing due to climate change – must be resisted at all costs. And so, climate change has disrupted the ideological lenses of Western journalistic frames when it comes to the positive image of the West juxtaposed against that of Africa.
Consequently, news of floods in Germany is now given the same treatment as news from ecologies in the global South such as Kenya, which it otherwise would not receive. The usual sensational narrative of disaster, demonstrated by cows and other valuables being swept away by ravaging floods, is a tired African narrative, and it is therefore a paradox to confront such images in emerging narratives of floods in Germany. Is this not a warning sign and a compelling reason for humanity to forge a united front – the “we are in this together” or Harambee (togetherness) spirit of pan-African philosophical and epistemic underpinnings?
Against this background, the Varieties of Science Network (VOSN) seeks to tap into a ‘glocal’ knowledge reservoir (local epistemic frameworks) in a bid to bridge the epistemic gaps in knowledge production and dissemination in climate science and other socio-economic, political and cultural challenges, using research and technology to develop a more coordinated approach to finding solutions to common scientific questions and challenges facing humankind today. The network is inspired by what is regarded as one of the central topics of the KHK at RWTH Aachen, namely Varieties of Science. In doing so, this initiative seeks to uncover the diverse cultural-institutional conditions of epistemic freedom and intellectual democracy across geographical and cultural spaces and multiple disciplines.
The idea is to unravel the productive parts of global North–South conversations in order to overcome colonial burdens. Due to emerging common threats, for example those brought about by climate change as argued above, these traditional North–South conversations, which have often centered on the coloniality of power dynamics as witnessed in news representations of disasters, are certainly not going to remain the same in future and are becoming more and more unsustainable. Climate change will create, and is beginning to shape, a new world living space for humankind, and therefore we need to find ways to cooperate with each other. So it is about knowing and creating a new collective order, a new human rights agenda, and an economic order that is fair enough for all people. VOSN intends to bring together people and topics that can contribute to this network to that end.
It is driven by better engagement between people and a better understanding of the different conditions across ecologies and worlds, in order to form collaborations that, for example, balance CO2 reductions and energy transitions globally. It also seeks better ways of understanding and guard-railing energy transitions and other forms of transitions, be they political, economic, or socio-cultural, in different ecologies, by examining problem-centered cases such as climate change and many other topics and issues in different fields and countries that would animate varieties of science and allow members to learn from each other. It would seek to understand how to synergize technologically driven emergency responses to natural disasters such as drought, famine, floods and pandemics, as recently witnessed in different geographical spaces across cultures. For example, on the question of climate, which is the inaugural theme for VOSN: what are the agencies and the emerging different ways of knowing (gnosis) and responding? What are the epistemic questions across cultures, and which kinds of knowledge are seen as important and prioritized?
APPROACH
The agenda will begin with the prominent environmental challenge brought about by climate change, as both the entry point to the VOSN network and a point of departure for establishing how a more united approach to difficult scientific questions that threaten the self-preservation of humankind (Ubuntu/humanity) can be developed and co-designed in a manner that respects local cultures (cultures of research), along with several cross-cutting public problems or themes.
CLIMATE MITIGATION AND ADAPTATION
As its flagship thematic focus, VOSN will concentrate on the intersection between technology, climate, and peacebuilding across cultures as an entry point to our global collaboration and research agenda, in line with the Käte Hamburger Kolleg Cultures of Research focal area of climate change. This will entail a technical, systematic meta-analysis of the use of technology in climate mitigation across different ecologies, as well as local action research in ecologies in the global North and South involving local communities, to inspire practical interventions by examining how communities are adapting to climate change challenges and opportunities and the kinds of resources at their disposal (technological or otherwise)[4]. This evidence would be able to reveal human ingenuity and how tech innovations could be a game-changer in climate adaptation, conflict resilience and peacebuilding for the self-preservation of humanity going forward.
The varieties of science research agenda will also look at how the devastating effects of climate change are inciting new policy interventions that are in turn attracting mitigation efforts (the political economy of interventions) from different actors (local and international, public and private), particularly carbon credit programs that are not gender- and conflict-sensitive[5]. It will consequently examine how these mitigation efforts affect local communities’ livelihoods, how they exacerbate conflict pressure points, and the role therein of digital technologies and tools in empowering communities to act for climate mitigation and adaptation through alternative livelihoods such as tree planting (greening), for conflict resilience and peacebuilding. The evidence will therefore be used to contribute to the defense of climate science information, as opposed to climate misinformation and disinformation on social media spaces, and to help influence policy change around climate financing and community-sensitive carbon credit investments in different ecologies such as Kenya and Germany going forward.
[1] Naidoo, D. and Gulati, M. 2022. Understanding Africa’s Climate and Human Security Risks. Policy Brief 170. October 2022. Institute for Security Studies; Tesfaye, B. 2022. Addressing Climate Security in Fragile Contexts. Center for Strategic and International Studies, https://www.csis.org/analysis/addressing-climate-security-fragile-contexts.
[2] Morley, D. 2007. Media, Modernity and Technology- The Geography of the New. London: New York: Routledge.
[3] Feenberg, A. “Democratic Rationalization: Technology, Power and Freedom.” In Scharff, R. C. and Dusek, V. (eds.) 2014. Philosophy of Technology: The Technological Condition – An Anthology, 2nd Edition. Malden, Oxford: Wiley Blackwell; Godin, B., Gaglio, G. and Vinck, D. 2021. Handbook on Alternative Theories of Innovation. Cheltenham: Edward Elgar Publishing.
[4] Yayboke, E., Nzuki, C. and Strouboulis, A. 2022. Going Green while Building Peace: Technology, Climate and Peacebuilding. Center for International and Strategic Studies. https://www.csis.org/analysis/going-green-while-building-peace-technology-climate-and-peacebuilding.
[5] Greenfield, P. 2023. The New Scramble for Africa: How a UAE Sheikh Quietly Made Carbon Deals for Forests Bigger than UK. The Guardian, Thursday 10th November 2023.
On the promises of AI and listening data for music research
NIKITA BRAGUINSKI
As a c:o/re fellow, I had the uniquely advantageous opportunity to develop and test, in an environment dedicated to the study of science, my ideas about how AI and data can influence music research. Members of the Kolleg and its fellows, many of whom are philosophers of science, offered a very rich intellectual circle that inspired me to look at the datafication and technologization of future music research from many new angles. With its intensive and diverse program of talks, lectures, and conferences, the Kolleg also offered ideal opportunities for testing approaches in front of an attentive, thoughtful, critical and friendly audience. Below, I present brief overviews of the main ideas that I discussed during three talks I gave at the Kolleg.
Nikita Braguinski
Nikita Braguinski studies the implications of technology for musicology and music. In his current work, he aims to discuss challenges posed to human musical theory by recent advances in machine learning.
My first presentation, entitled “The Shifting Boundaries of Music-Related Research: Listening Logs, Non-Human-Readable Data, and AI”, took place on January 16, 2024 during an internal meeting of Kolleg fellows and members. I focused on the promises and problems of using data about music streaming behavior for musical research. Starting from a discussion of how changing technologies of sound reproduction enabled differing degrees of observing listener behavior, I discussed the current split between academic and industrial music research, the availability of data, the problems of current industry-provided metrics such as “danceability”, and the special opportunities offered by existing and future multimodal machine learning (like the systems that use the same internal encoding for both music and text). I also offered examples of descriptive statistics and visualizations made possible by the availability of data on listener behavior. These visualizations of large listening datasets, which I was able to create thanks to my access to the RWTH high performance computing cluster, included, among others, an illustration of how users of online streaming services tend to listen to new recordings on the day of their release, and an analysis of the likelihood of different age groups listening to popular music from different decades (with users from the age group 60-69 having almost the opposite musical preferences of the age group 10-19).
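As a purely illustrative sketch of the kind of descriptive statistic mentioned above (the column names and values are my own assumptions, not those of any specific streaming dataset), the share of plays per release decade within each age group could be computed along these lines in Python:

```python
import pandas as pd

# Hypothetical listening log: one row per play event.
# Column names are illustrative; real streaming datasets differ.
logs = pd.DataFrame({
    "age_group":      ["10-19", "10-19", "60-69", "60-69", "60-69", "10-19"],
    "release_decade": ["2020s", "2010s", "1970s", "1980s", "2020s", "2020s"],
})

# Share of plays per release decade within each age group -- the kind of table
# that sits behind an "age group vs. musical era" visualization.
shares = (
    logs.groupby("age_group")["release_decade"]
        .value_counts(normalize=True)
        .rename("share")
        .reset_index()
)
print(shares)
```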
Discussing my talk, c:o/re colleagues drew parallels to other academic disciplines such as digital sociology and research on pharmaceutical companies. The topic of addictiveness of online media that I touched upon was discussed in comparison to data-gathering practices in gambling, including the ethics of using such data for research. The political significance of music listening and its connection to emotions was also discussed in relation to the danger of biases in music recommender systems.
My second presentation, entitled “Imitations of Human Musical Creativity: Process or Product?”, took place during the conference “Politics of the Machines 2024. Lifelikeness and Beyond”, which c:o/re hosted. I focused on the question of what AI-based imitations of music actually model – the final product (such as the notation or the audio recording) or the processes that lead to the creation of this product.
In this presentation, I discussed:
1) The distinction between process and product of artistic creation, which, while especially important for discussions on the output of generative AI, currently receives little scholarly attention;
2) How several theories in the humanities (notably, formalism, psychoanalytic literary theory, and the line of AI skepticism connected to the so-called Chinese room argument) stress the importance of the process in artistic creation and cognition;
3) That current endeavors in generative AI, though impressive from the point of view of the product, do not attempt to imitate the processes of creation, dissemination, and reception of art, literature, or music, nor do they imitate historical, cultural, or economic environments in which these processes take place;
4) Finally, because the data on which generative AI systems operate carries traces of past processes, the product of these systems remains connected to the processes, even if no conscious effort is made by the creators of these systems to imitate the processes themselves.
A conference participant commented that for commercial companies avoiding the imitation of all these processes is a deliberate strategy because their imitation has to be cheaper than the original process-based artifact.
My third presentation at the Kolleg, “Life-Like Artificial Music: Understanding the Impact of AI on Musical Thinking”, took place on June 5, 2024 as a lecture in the c:o/re Lifelikeness lecture series. Here, I addressed the likelihood (or unlikelihood) of major shifts in musicological terminology resulting from the academic use of AI. Starting with an overview of various competing paradigms of musical research, I drew attention to possible upcoming problems of justifying the validity of currently existing musicological terminology. The salient point here is that AI systems based on machine learning are capable of imitating historical musical styles without recourse to explicitly stated rules of music theory, while humans need the rules to learn to imitate those styles. Moreover, the ability of machine learning systems to learn the internal structures of music directly from audio (skipping the notation stage on which most of human music theory operates) has the potential to call into question the validity and usefulness of music theory as currently taught.
Having stated these potential problems, I turned to a current example, a research paper [1] in which notions of Western music theory were compared to the internal representations learned by an AI system from music examples. Using this paper as a starting point for my argument, I asked whether it could be possible in principle to also use such an approach to come up with new, maybe better, musicological terminology. I pointed to the problems of interpreting the structures learned by machine learning systems and of the likely incompatibility of such structures (even if successfully decoded) with the human cognitive apparatus. To illustrate this, I referred to the use, by beginner players of the game of Go, of moves made by AI systems. Casual players are normally discouraged from copying the moves of professional human players because they cannot fully understand these moves’ underlying logic and thus cannot effectively integrate them into their strategy.
In the following discussion, one participant drew attention to the fact that new technologies often lead to a change in what is seen as a valid research contribution, devaluing older types of research outcomes and creating new ones. Another participant argued that a constant process of terminological change takes place in disciplines at all times and independently of a possible influence of a new technology, such as machine learning.
Overall, my c:o/re fellowship offered, and continues to offer, an ideal opportunity to develop and discuss new ideas for my inquiry into the future uses and problems of AI and data in music research, which have resulted, in addition to the three presentations mentioned above, in talks given at the University of Bonn, Maastricht University, and at a music and AI conference at the University of Hong Kong.
[1] N. Cosme-Clifford, J. Symons, K. Kapoor and C. W. White, “Musicological Interpretability in Generative Transformers,” 4th International Symposium on the Internet of Sounds, Pisa, Italy, 2023.
Installations and Art at LOGOI and PACT – PoM Recap #4
It has been more than a month since c:o/re hosted the PoM conference “Lifelikeness & beyond”. As this sizeable and, while still new, already renowned conference produced many lively discussions in a creative interrogation of the dialog between the life sciences and technology studies, we want to share our retrospective reflections on it through a series of focused posts.
Alongside the PoM main program of keynotes, talks, lectures and workshops, the conference was accompanied by art and installations displayed at the LOGOI Institute for Philosophy and Discourse in Aachen. Also part of the conference, the choreographic centre PACT Zollverein in Essen provided the program ‘life.like’, which consisted of six artistic positions in the form of performance, installation, discourse and sound.
These contributions showed in various ways how philosophical, technical and bioscientific topics can be artistically thought and implemented. They enabled critical dialog and reflection on artistic methods and results between artists, scientists from different disciplines and the public.
If you would like to learn more about any of the contributions, take a look at the PoM program and life.like.
LOGOI
‘life.like’ at PACT Zollverein
Unless otherwise indicated, photos by Jana Hambitzer
Algorithms of Late-Capitalism: The Board Game – PoM Recap #3
It has been more than a month since the KHK c:o/re hosted the PoM conference “Lifelikeness & beyond”. As this sizeable and, while still new, already renowned conference produced many lively discussions in a creative interrogation of the dialog between the life sciences and technology studies, we want to share our retrospective reflections on it through a series of focused posts. In two interviews, the artists shared with us insights into their work and creative process. Here, we reflect on the board game Algorithms of Late Capitalism together with Karla Zavala Barreda and Adriaan Odendaal.
What can we learn from the contingency of the community of the living and the non-living? What insights on contingency may transpire from embedding life and non-life within each other? How are factuality and fiction mediated by the imagination in the pursuit of new forms of collective action and of creating collectivities?
Algorithms of Late-Capitalism: The Board Game
by Karla Zavala Barreda and Adriaan Odendaal
During the PoM conference, Karla Zavala Barreda and Adriaan Odendaal from the research & design studio internet teapot hosted a series of guided play-sessions of their new board game “Algorithms of Late-Capitalism”.
In 2021, they conducted a series of experimental workshops as part of the New New Fellowship that brought diverse groups of international participants together to co-design a board game. The purpose of this project was to use board game co-design as a medium through which participants can collectively explore questions around more pluralistic and desirable technological futures. Over the course of several workshop sessions, participants contributed ideas and reflections to the creation of the game, framed by concepts drawn from pluriversal ontological design, intersectional feminism, and digital materialism.
In Algorithms of Late-Capitalism, players become members of a community of cyborgs, reigned over by the first Sentient Machine Cult. This cult has given rise to a formative new algocracy in which society is governed by the organizational logic of rigid data structures and opaque algorithms. The players-as-cyborgs are confronted with a rule-system that places them in a position of systematic exclusion and increasing marginality.
The board game affords different ways of playing: players can integrate themselves into this society by following the formal rules and competing against each other to conform to the logic of the Sentient Machine Cult’s algocracy; or they can subversively coordinate their efforts and attempt to change the system by introducing new rules and winning conditions. By discovering ways to play collaboratively instead of competitively, players are encouraged to explore alternative, convivial, caring, and inherently pluralistic technological futures – as well as possible pathways towards these futures.
By playing the game, conference attendees were able to explore reflections, questions, and ideas encoded into the game fiction and mechanics by the different cohorts of game co-designers.
How did the idea of developing the game come up?
Karla: We have been exploring the medium of board game design for a couple of years, both designing prototypes and playing them. At the same time, we have also been hosting and co-creating zines, so when the New New Fellowship opportunity came up, we thought of it as a chance to merge game design with co-creation methodologies. We also believe that design can foster critical reflection and social transformation. So we wanted participants to think about the absurdities of present-day technology and, through this lens, imagine better futures. As technology users, we all have expertise to share. We want to lower the barriers to technology design, so that everyone can share their experiences and perspectives to help improve things. Through the board game design, we wanted to ask: What can be reimagined to make more inclusive and desirable technological futures?
What is the goal of the game?
Adriaan: The goal is to create an open space for people to contribute to and enrich the process of thinking about technological futures. The game is an exploration of how we can benefit from collaborative processes, instead of following the imperatives of market-driven competition. We want people to explore these critical and conceptual points through low-barrier and playful mediums. Board games are also very social objects: they create social spaces where people can connect and start discussions. By playing, people engage with more inclusive imaginaries of better technological futures. When we think of digital technologies, for example AI, what probably comes to mind are widespread services such as ChatGPT. Big tech companies’ imaginaries dominate the discussions of what technology is and can be. But through co-creating the board game, we explored alternative imaginaries.
Karla: It’s important to empower the broader public to imagine what technology can be and to understand that they should have a say in what technologies get deployed in their cities and societies at large. As a society, we culturally negotiate how technology works; as such, public participation should be fostered. This was the goal of the co-creation workshops that brought this game to life: to give the non-technical public the tools to think through important questions around our increasingly digitized and mediatized societies.
What would be the ideal technological future for you?
Adriaan: There should be more diversity in technology. Smaller, weirder, experimental things. I would wish for a future where technology is curious and diverse and not dominated by a few companies that copy each other.
Karla: A future where communities understand how technology works and have a say in the technologies that impact their lives. To me, understanding that technology is socially constructed is especially important: what we as a community think of certain technologies matters. Technology carries values and worldviews, so there should be more variety and more creative imaginaries around it.
How should things continue with your game?
Karla: We will soon publish it as a print-to-play version. Our aim is for the game to be used as a means to open conversations about technology and its role in our social and intimate lives in diverse settings: from schools and universities to policy making.
The board game is currently available as a free print-to-play version online. You can also follow Karla’s and Adriaan’s work on Instagram.
Would you like to gain further impressions of the PoM conference in Aachen? Then take a look at our interview with Chris Dupuis as well as our recap of the conference days and the accompanying program of art and performances.
Photos by Jana Hambitzer
Dead People Are Liking Things On Facebook – PoM Recap #2
It has been more than a month since the KHK c:o/re hosted the PoM conference “Lifelikeness & beyond”. As this sizeable and, while still new, already renowned conference produced many lively discussions in a creative interrogation of the dialog between life sciences and technology studies, we want to share our retrospective reflections on it through a series of focused posts. In two interviews, the artists shared with us insights into their work and creative process. Here, we reflect on the performance Dead People Are Liking Things On Facebook in conversation with Chris Dupuis.
What can we learn from the contingency of the community of the living and the non-living? What insights on contingency may transpire from embedding life and non-life within each other? How are factuality and fiction mediated by the imagination in the pursuit of new forms of collective action and of creating collectivities?
Dead People Are Liking Things On Facebook
by Chris Dupuis
What happens to our online self after we die? How might this material be used by others, and to what effect? Does this material serve as a valid means of remembering people? Do we remember them as they were or as they wanted to be?
Chris Dupuis asked himself these questions as part of his interactive lecture performance. In this interview, he provides insights into the background to his work and how he deals with death in social media.
Could you please introduce yourself?
Chris: I’m a Canadian writer, curator, and performance maker, based in Brussels.
What is your performance about?
Chris: “Dead People Are Liking Things On Facebook” is a lecture performance where I scroll through the profiles of Facebook friends who have died, discuss how I knew them, and what meaning can be taken from their online afterlives. The show was catalyzed in 2016 when I was scrolling through Facebook and noticed that my friend Will had “liked” Coca-Cola. In one way, this wasn’t strange, as Will actually liked Coca-Cola in real life. Will was a well-known Toronto DJ and queer club promoter in the early 2000s. He was famously sober, but thought everyone needed at least one bad habit, and so at some point, he decided Coke would be his vice. At the same time, it was strange that he had “liked” it in 2016 since at that point he had been dead for six years. How was this possible? The show started with me searching for the answer to that question.
What do you want to show with your performance?
Chris: I think a lot of the experience is up to the audience to interpret. I’m not really trying to “show” anything or make any specific claims. It’s more about raising a series of questions about mortality, social media, and the construction of identity for us to consider together.
What do you want to happen to your online presence after your death?
Chris: Despite having toured this show for several years and being preoccupied with these questions the whole time, I haven’t actually made any decisions about it. But assuming I have an average lifespan, the Internet and human connectivity will probably look radically different than it does now, so it’s difficult for me to imagine what I’ll be concerned with then.
How do you think social media platforms will deal with this type of situation in the future?
Chris: When all of these social media platforms and tech companies were starting out, they weren’t considering where they would be in twenty years. They were thinking about surviving the next six months. As they’ve gradually come to control so much of our lives and our public discourse, I think that some of them (though not all of them) have genuine concerns with how to navigate the future with the power they wield. At the same time, there’s also a question of how many of these companies will be around in the future or whether they will be replaced by AI versions that allow us to live online in very different ways, particularly as they may intersect with VR. What does seem clear is that there needs to be some level of government intervention to regulate these companies as they develop increasingly powerful tools.
Would you like to gain further impressions of the PoM conference in Aachen? Then take a look at our interview with Karla Zavala Barreda and Adriaan Odendaal as well as our recap of the conference days and the accompanying program of art and performances.
Photos by Jana Hambitzer