Date: Wednesday, 5 November, 2025
Location: Rosensäle of the Friedrich Schiller University Jena, Fürstengraben 27, 07743 Jena
13:30 – 13:35
Welcome
Speaker: Joachim Giesen
Affiliation: Friedrich Schiller University Jena
13:35 – 14:20
Talk 1: From Active Inference to Causal Inference?
Speaker: Wanja Wiese
Affiliation: Ruhr-University Bochum
Abstract: In this talk, I will first review core tenets of active inference, a modelling approach developed by theoretical neuroscientist Karl Friston and colleagues. I will then critically examine the extent to which an active inference agent engages in causal inference. To do so, I will outline how causal inference, in Judea Pearl's framework, differs from mere probabilistic inference. Finally, I will argue that, contrary to initial appearances, active inference agents may perform interventional, but not counterfactual, causal inference.
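To make the distinction between probabilistic and interventional inference concrete, here is a minimal, purely illustrative sketch (not the speaker's model): it contrasts the observational quantity P(Y | X = 1) with the interventional quantity P(Y | do(X = 1)) in a toy structural causal model with a confounder Z. All variable names and parameter values are hypothetical.

# Toy structural causal model: Z -> X, Z -> Y, X -> Y.
# Observational conditioning on X retains the confounding path through Z;
# intervening on X (do(X = 1)) cuts the Z -> X edge.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def sample(intervene_x=None):
    z = rng.binomial(1, 0.5, n)                      # confounder
    p_x = np.where(z == 1, 0.8, 0.2)                 # Z influences X
    x = rng.binomial(1, p_x) if intervene_x is None else np.full(n, intervene_x)
    p_y = 0.1 + 0.3 * x + 0.4 * z                    # both X and Z influence Y
    y = rng.binomial(1, p_y)
    return x, y, z

# Observational: condition on X = 1 (Z is not controlled for).
x, y, _ = sample()
p_y_given_x1 = y[x == 1].mean()

# Interventional: force X = 1 for everyone, severing the Z -> X dependence.
_, y_do, _ = sample(intervene_x=1)
p_y_do_x1 = y_do.mean()

print(f"P(Y=1 | X=1)     ~ {p_y_given_x1:.3f}")      # inflated by confounding (~0.72)
print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x1:.3f}")         # causal effect of setting X (~0.60)

The gap between the two estimates is exactly what separates mere probabilistic inference from causal inference in Pearl's sense; counterfactual queries would go one step further still, asking what would have happened to a specific individual under a different action.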
14:20 – 14:45
Coffee Break
14:45 – 15:30
Talk 2: From Big Brain Data to Collaboration: Where Mental Health Meets Computer Science
Speaker: Thomas Wolfers
Affiliation: Friedrich Schiller University Jena
Abstract: This talk explores how large-scale brain and behavioural data can deepen our understanding of the human mind. We will examine the challenges inherent in such data, ranging from domain shifts and noise to label uncertainty, and discuss how advances in computer science, particularly in machine learning, can help address them. The session aims to spark dialogue on how mental health research and computational sciences can join forces to build robust, interpretable, and clinically meaningful models. I look forward to connecting and exchanging ideas on how we can truly join our brains for collaborative innovation.
15:30 – 16:15
Talk 3: Multidimensional Musical Markov Chains
Speaker: Douglas Cunningham
Affiliation: Brandenburg Technical University Cottbus
Abstract: Musical expression can be found in nearly every culture throughout history, taking on an impressive variety of forms. The extensive range of instruments, the breadth of content, and the long history of music provide insight into both the systematic nature and the high degree of complexity of musical expression. Indeed, it has been argued that music – like natural languages – follows the rules of semiotics. Understanding how to computationally learn and generate music can provide crucial insights into machine creativity and pattern recognition, while potentially revealing new perspectives on human musical cognition.
Current approaches to music generation predominantly rely on deep learning architectures, which, while powerful, come with significant drawbacks. These methods require massive labelled datasets – which largely do not exist and often raise copyright concerns – and demand substantial computational resources both for training and inference. Moreover, their black-box nature can limit creative control and understanding of the generation process. Most importantly, despite their complexity, these models often struggle to capture the nuanced interplay between different musical parameters while maintaining coherent long-term structure, especially while still retaining some degree of novelty in the generated pieces. This limitation presents an opportunity to revisit and extend simpler, more constrained approaches.
In this talk, we examine several extensions to Markov chains for music learning and generation. This approach allows us to maintain algorithmic transparency and computational efficiency while pushing the boundaries of what these methods can achieve. Traditional Markov chain approaches to music generation face two key limitations: they struggle to handle multiple musical parameters simultaneously, and higher-order chains become exponentially memory-intensive. Here, we explore the promise of decomposing the problem into multiple first-order matrices and then combining their predictions, as well as the efficacy of using discounted higher-order chains both within and between the musical parameters. This approach dramatically reduces memory requirements while potentially capturing more sophisticated musical relationships through the interaction between different parameter matrices.
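As a rough, purely hypothetical illustration of the parameter-decomposition idea (not the speakers' implementation, and without the discounted higher-order chains), the following Python toy fits one first-order transition model per musical parameter, here pitch and duration, and combines their per-parameter predictions multiplicatively under a conditional-independence assumption.

# One first-order Markov model per parameter (pitch, duration); the next
# event is sampled from the product of the two per-parameter distributions.
from collections import defaultdict
import random

def fit_first_order(seq):
    """Count-based first-order transition probabilities P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def combined_next(pitch_model, dur_model, prev_pitch, prev_dur, rng):
    """Sample the next (pitch, duration) pair from the product of the two
    per-parameter distributions (conditional-independence assumption)."""
    p_dist = pitch_model.get(prev_pitch, {})
    d_dist = dur_model.get(prev_dur, {})
    events, weights = [], []
    for p, wp in p_dist.items():
        for d, wd in d_dist.items():
            events.append((p, d))
            weights.append(wp * wd)
    return rng.choices(events, weights=weights, k=1)[0]

# Toy training data: parallel pitch and duration sequences.
pitches   = ["C4", "E4", "G4", "E4", "C4", "G4", "E4", "C4"]
durations = [1.0, 0.5, 0.5, 1.0, 0.5, 0.5, 1.0, 1.0]

pitch_model = fit_first_order(pitches)
dur_model = fit_first_order(durations)

rng = random.Random(0)
event = (pitches[0], durations[0])
generated = [event]
for _ in range(8):
    event = combined_next(pitch_model, dur_model, event[0], event[1], rng)
    generated.append(event)
print(generated)

Storing one small transition matrix per parameter, rather than a joint matrix over all parameter combinations, is what keeps the memory footprint low; the interplay between parameters then arises from how the per-parameter predictions are combined.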