Joost-Pieter Katoen demystifies probabilistic modeling

Joost-Pieter Katoen, Dawid Kasprowicz, Stefan Böschen

Joost-Pieter Katoen cites Ghahramani (2015: 455): “There are several reasons why probabilistic programming could prove to be revolutionary for machine learning and scientific modelling.”

On July 13th, Professor Joost-Pieter Katoen (RWTH Aachen University) gave the final lecture of the Philosophy of AI: Optimistic and Pessimistic Views series at c:o/re, titled “Demystifying probabilistic programming”. The talk convincingly advocated the usefulness and accuracy of probabilistic inferences as performed by computers. Various types of machine learning, argued Joost-Pieter Katoen, can benefit from being developed through probabilistic programming. The underlying claim is that probabilistic programs are a universal modeling formalism. Far from implying that this could result in software that successfully replaces humans in inferential and decision-making processes, probabilistic programming relies on correct parameterisation, which is an input provided by humans.
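To give readers unfamiliar with the term a concrete picture, the following is a minimal, illustrative sketch of what a probabilistic program looks like: a generative model (here, coin flips with an unknown bias) plus an inference routine that conditions on observed data. The example, its parameters and function names are our own illustrative assumptions, not material from the lecture.

```python
import random

# A tiny probabilistic program: a generative model of coin flips with an
# unknown bias, plus inference by likelihood weighting over prior samples.

def model():
    """Generative model: sample a bias from a uniform prior, then simulate 10 flips."""
    bias = random.random()                      # prior: bias ~ Uniform(0, 1)
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

def infer(observed_heads, num_samples=100_000):
    """Estimate the posterior mean of the bias given the observed number of heads."""
    total, weighted = 0.0, 0.0
    for _ in range(num_samples):
        bias = random.random()                  # sample a candidate bias from the prior
        # Weight the sample by the likelihood of the observation (constant factors cancel).
        weight = bias ** observed_heads * (1 - bias) ** (10 - observed_heads)
        total += weight
        weighted += weight * bias
    return weighted / total

print(model())                     # running the program forward simply simulates data
print(infer(observed_heads=8))     # roughly 0.75: the observed data pull the bias upward
```

Note how the human contribution enters exactly where the talk located it: in choosing the model structure and its parameterisation (the prior, the number of flips), while the machine performs the probabilistic inference.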

The c:o/re team would like to thank Professor Frederik Stjernfelt and Dr. Markus Pantsar for organizing the lecture series Philosophy of AI: Optimistic and Pessimistic Views, which ran throughout the summer semester of 2022.


probabilistic inference

Training of neural networks

References

Ghahramani, Zoubin. 2015. Probabilistic machine learning and artificial intelligence. Nature 521: 452–459.

The language of thought: still a salient issue

Jakub Szymanik – Reverse engineering the language of thought

As part of the Philosophy of AI: Optimistic and Pessimistic Views lecture series, on 25.05.2022 Jakub Szymanik gave a lecture at c:o/re on reverse-engineering the language of thought, problematising thought and computation by exploring the cognitive-scientific notion of a language of thought (“mentalese”). This concept, positing that humans think through logical predicates combined through logical operators, originates in Jerry A. Fodor’s celebrated book (1975), explicitly titled The Language of Thought. In an effort to reverse-engineer the language of thought, Jakub Szymanik considered some recent computational theories, such as those inspired by Jerome Feldman’s work and by neural networks, approached in a fresh manner.

Jakub Szymanik explained that the notion of a language of thought is not easy to avoid: it is often the engine underpinning theories of cognitive models, via notions of complexity and simplicity. The investigation leads to a broad variety of theoretical implications and empirical insights, among which there can be many contradictions. However, the apparently divergent approaches at play here, such as symbolism and enactivism, are not necessarily or entirely irreconcilable. The way forward is to pragmatically pursue the epistemological unification of such theories, as guided by empirical insight.

The syllogisms of mentalese

Are we there yet, are we there yet?

“What will companies do?”

In his talk, part of the Philosophy of AI: Optimistic and Pessimistic Views series, Professor Kim Guldstrand Larsen reflected on how far (or near) we are from developing fully autonomous cars. This is a priority challenge for explainable and verifiable machine learning. The question is not easy to answer directly. One certainty, though, is that the answer lies in the cooperation, or lack thereof, between academia and political agents (municipalities). The mediating agent, which neither of the two seems to favour, stems from industry: what can commercial companies deliver to improve traffic? Companies seem to speak both the language of research and that of politics. How much will we smartify traffic in the next, say, 10 years? The question translates, as Professor Ana Bazzan asked simply, into “What will companies do?”

What companies do, in this regard, will impact not only policy but also academia. Success in delivering smart solutions for traffic is expected to guide curriculum development in computer science programs. For example, commercial solutions will focus teaching on either neural networks, Bayesian networks or automata-based models.

from Kim Guldstrand Larsen’s presentation: ideal adaptive cruise control

Digital justice for all… and letters: Jean Lassègue on Space, Literacy and Citizenship

Jean Lassègue and Phillip Roth discussing Digital Justice

Part of the c:o/re Philosophy of AI: Optimistic and Pessimistic Views series, Jean Lassègue’s talk showed that (digital) literacy is intrinsic to digital justice. His minute comparison of the modern notion of justice and what digital justice may be suggests that, aside from many compatibilities and ways in which digital technology can help juridical processes, there is one point of divergence: the despatialization implied by digitalization. In appearance, digital media take human societies onto despatialized virtual media. However, through an encompassing and thoughtful historical investigation, Jean Lassègue traces the long (cultural) process of despatialization all the way back to the emergence of the alphabet as a dominating means of social representation in the West. The alphabet is the beginning of the social practice of “scanning”, which eventually fostered computation. In light of this long historical process, questions on digital justice invite the problematization of digital literacy, spatialization and, we would add, embodiment.

“… should be about the people” (Ana Bazzan)

Ana Bazzan during her talk on Traffic as a Socio-Technical System.

As part of the Philosophy of AI: Optimistic and Pessimistic Views c:o/re lecture series, Ana Bazzan delivered a very rich talk on “Traffic as a Socio-Technical System: Opportunities for AI”. It is beyond the scope of this entry to cover all the arguments advanced in this talk. Here, we reflect on one matter that we find particularly interesting and inspiring. Namely, two interrelated central ideas in this talk are human-centredness in engineering and the network structuring of human societies. Ana Bazzan’s work contributes to understanding newly emerging social networks, in their many dimensions, and to ushering in the smart city by engineering and designing traffic as a socio-technical system.

Ana Bazzan highlighted the importance of mobility for equality, in many senses of these two words. Simplifying, a transportation system that facilitates mobility is congruent with democratic and transparent institutions. This realization comes from a user-experience manner of thinking or, more broadly, from placing the human at the centre of engineering. Stating that traffic and, in general, engineering “should be about the people”, Ana Bazzan further asked: “How to mitigate traffic problems by means of human-centered modeling, simulation, and control?” This question echoes the rationale of the environmental humanities, as posited by Sverker Sörlin (2012: 788), that “We cannot dream of sustainability unless we start to pay more attention to the human agents of the planetary pressure that environmental experts are masters at measuring but that they seem unable to prevent.” It is highly insightful that many sciences find ways to progress by reflecting on human matters. It may pass unnoticed, as a detail, but very different problematizations stem from thinking of either technical or socio-technical solutions for, say, traffic. The endeavour of engineering solutions in awareness of social and cultural context is typical of “the creative economy”, being rife with creative conflict, as it emerges from “where culture clashes most noisily with economics” (Hartley 2015, p. 80).

In this interrogation, Ana Bazzan referred to Wellington’s argument that we are now in a century of cities, as opposed to the last century, which was a century of nation-states. Indeed, research from various angles shows that digitalization transcends the borders of nation-states, as imagined through print (Anderson 2006 [1983]). In a similar vein, defining industrial revolutions as the merging of energy resources with communication systems, Jeremy Rifkin (2011) argues that, to achieve sustainability, it is necessary to merge renewable energy grids with digital communication networks. This would overcome the dependence on the merger of motorways, powered by fossil fuels, and broadcasting.

Particularly through reinforcement learning, digital technology and AI are instruments of the transition towards smart cities. Smart cities do not merely span a geographical territory but are better identified as (social) networks. They are incarnate in traffic. Ana Bazzan also insists that broadcasting the same information about traffic to all participants in traffic is not useful. Drivers exhibit rational behavior, according to pragmatic purposes. More than simply targeting pragmatically useful information at specific drivers, a smart traffic system or, better, a smart city is constituted by multiagent systems, not by a centralized, unidirectional, top-down transmission of information. As such, while not broadcasting uniformly, this approach is actually anti-individualistic. It makes evident the benefits, particularly the cumulative rewards, of seeking solutions in light of people’s shared and concrete necessities. To apprehend these networks and serve the needs of their actants, Ana Bazzan advocates a decentralized, bottom-up approach. Indeed, this is a characteristic of ‘network thinking’ (Hartley 2015). The resulting networks render obsolete previously imagined community boundaries, revealing, instead, the real problems of people as they find themselves in socioeconomic contexts. The city is these networks, and it becomes what it is according to how they are engineered.
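To make the decentralized, bottom-up idea concrete, here is a minimal sketch of independent reinforcement-learning agents choosing between two routes, each rewarded only by its own travel time under the congestion produced by everyone’s choices. The scenario, numbers and names are our own illustrative assumptions, not code or results from the talk.

```python
import random

# Decentralized multiagent learning for route choice: each driver keeps its own
# Q-table and updates it only from its own experienced travel time, with no
# central controller broadcasting a single recommendation to everyone.

NUM_DRIVERS = 100
ROUTES = [0, 1]                      # two alternative routes
EPSILON, ALPHA = 0.1, 0.1            # exploration rate, learning rate

def travel_time(route, load):
    """Congestion-dependent travel time: a short road that clogs easily (route 0)
    versus a longer road with more capacity (route 1)."""
    free_flow = [10.0, 15.0]
    capacity = [40.0, 60.0]
    return free_flow[route] * (1.0 + (load / capacity[route]) ** 2)

# One Q-table per driver: estimated travel time for each route (a stateless task).
q_tables = [[0.0, 0.0] for _ in range(NUM_DRIVERS)]

for day in range(500):
    # Each driver picks a route on its own, epsilon-greedily on its own estimates.
    choices = [
        random.choice(ROUTES) if random.random() < EPSILON
        else min(ROUTES, key=lambda r: q[r])
        for q in q_tables
    ]
    loads = [choices.count(r) for r in ROUTES]
    # Each driver observes only its own travel time and updates only its own table.
    for q, route in zip(q_tables, choices):
        q[route] += ALPHA * (travel_time(route, loads[route]) - q[route])

print("final split across routes:", [choices.count(r) for r in ROUTES])
```

In this toy setting the drivers, acting only on locally experienced rewards, settle into a division over the two routes that roughly equalizes travel times, which is the sense in which bottom-up multiagent learning can serve shared necessities without uniform broadcasting.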

References

Anderson, Benedict. 2006 [1983]. Imagined communities: Reflections on the origin and spread of nationalism. London: Verso.

Hartley, John. 2015. Urban semiosis: Creative industries and the clash of systems. International Journal of Cultural Studies 18(1): 79–101.

Rifkin, Jeremy. 2011. The Third Industrial Revolution: How lateral power is transforming energy, the economy, and the world. New York: Palgrave Macmillan.

Sörlin, Sverker. 2012. Environmental humanities: why should biologists interested in the environment take the humanities seriously? BioScience 62(9): 788-789.

Technologically enlightened: Interdisciplinary research in robotics or the privilege of being messy

Elena Tosi Brandi and Samuel Bianchini

Today’s c:o/re workshop on Interdisciplinary Research in Robotics and AI, organized by Joffrey Becker, insightfully showcased the mutual relevance of scientific and market research, particularly where design is concerned. It convincingly posited that materiality and physical properties are intrinsic to learning. The talks by Samuel Bianchini, Hugo Scurto and Elena Tosi Brandi showed that learning is a matter of designing. We do not make stuff out of nothing. Elena Tosi Brandi explained that “when you design behaviors, you have to put an object in an environment, a context.” Humans appear to notice this by interacting with robots. For example, machine learning processes offer opportunities for humans to reflect on their own learning. Animated (digital) objects that act independently of humans and thus, arguably, have agency make us, in so many words, wonder. Observing the relations between software, bodies and non-organic matter places humans in a new position to understand how materiality is intrinsic to knowledge. The director of c:o/re, Stefan Böschen, noted that experimental research on robotics is often open-ended, aiming to stimulate innovation through a “what may be” interrogation. To this, Hugo Scurto shared that for him “it is a privilege to be able to do research in a messy way”.

Elena Tosi Brandi on design of robots inspired by the animal world


The consideration of the epistemic qualities of material properties urges a reconsideration of Western modern philosophy, which we shall dive into in tomorrow’s workshop, Enlightenment Now, hosted by Steve Fuller and Frederik Stjernfelt. As Steve Fuller already remarked while listening to Elena Tosi Brandi’s work on design, it is possible “that machines and humans might both improve their autonomy through increased interaction. But this will depend on both machines and humans being able to learn in a sufficiently ‘free’ way, regardless of what that means.” If Hume woke Kant from his dogmatic slumber, it seems that robots can wake us up some more.