About
This website contains the schedule and information about the DSAI Seminar, a weekly seminar series covering a broad range of data science and artificial intelligence topics.
The seminar is hosted by the Data Science and Artificial Intelligence Division at the Computer Science and Engineering Department at Chalmers University of Technology and the University of Gothenburg.
Usually, the seminar takes place every Monday from 14:00 to 15:00 (GMT+1) in Room Analysen, Edit Building, and online via Zoom (link sent to the mailing list).
Current Organizers: Richard Beckmann, Johann Flemming Gloy
Contact: ricbec@chalmers.se, gloy@chalmers.se
Schedule
| Date | Speaker | Talk |
|---|---|---|
| 23 March 2026 | Josef Urban |
130k Lines of Formal Topology in Two Weeks: Simple and Cheap Autoformalization for Everyone? (Joint seminar with the Division of Formal Methods; recording available here)
This is a brief description of a project that has already autoformalized a large portion of the general topology from the Munkres textbook (241 pages in total, across 7 chapters and 39 sections). The project has been running since November 21, 2025 and, as of January 4, 2026, has produced 160k lines of formalized topology. Most of it (about 130k lines) was done in two weeks, from December 22 to January 4, for an LLM subscription cost of about $100. This includes a 3k-line proof of Urysohn's lemma, a 2k-line proof of Urysohn's metrization theorem, a more than 10k-line proof of the Tietze extension theorem, and many more (in total over 1.5k lemmas/theorems). The approach is quite simple and cheap: build a long-running feedback loop between an LLM and a reasonably fast proof checker equipped with a core foundational library. The LLM is currently instantiated as ChatGPT (mostly 5.2) or Claude Sonnet (4.5), run through the respective Codex or Claude Code command-line interfaces. The proof checker is Chad Brown's higher-order set theory system Megalodon, and the core library is Brown's formalization of basic set theory and surreal numbers (including the reals, etc.). The rest is prompt engineering and a few technical choices, which we describe here. Given the fast progress, low cost, virtually unknown ITP/library, and a simple setup available to everyone, we believe that (auto)formalization may become quite easy and ubiquitous in 2026, regardless of which proof assistant is used.
|
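The feedback loop described in the abstract can be sketched as a simple driver program. This is an illustrative sketch only: `ask_llm` and `check_proof` are hypothetical stand-ins for the Codex/Claude Code CLI and the Megalodon checker, not the project's actual interface.

```python
# Hypothetical sketch of an LLM/proof-checker feedback loop: propose a proof,
# run the checker, and feed the checker's error messages back into the prompt.
def autoformalize(statement, ask_llm, check_proof, max_rounds=10):
    """Return a checker-accepted proof of `statement`, or None on failure."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = ask_llm(statement, feedback)  # prompt includes last errors
        ok, errors = check_proof(candidate)       # fast mechanical verification
        if ok:
            return candidate                      # checker accepted the proof
        feedback = errors                         # retry with error feedback
    return None                                   # give up after max_rounds
```

The key design point is that the checker, not the LLM, is the source of truth: candidate proofs are only kept once they pass mechanical verification.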
| 30 March 2026 | Daniel Bisig (Zurich University of the Arts, Chalmers University of Technology, Coventry University, Mainz University of Applied Sciences, Instituto Stocos) |
Artist-Guided Development of Interactive Generative Systems for Contemporary Dance
Contemporary dance offers a demanding yet revealing testbed for the design of interactive generative systems. It is a creative practice that treats the human body as a primary medium of expression, values experimentation over codified style, and integrates influences beyond Western dance traditions. From a human–computer interaction and creative computing perspective, it poses several challenges: embodied and tacit knowledge resists formalisation, somatic awareness is difficult to externalise and quantify through sensing technologies, and highly individual creative approaches defy generalisation and standardisation.
This talk presents an artist-guided approach to developing interactive generative systems in which expert dancers actively contribute to system design, data representation, and evaluation. The approach reframes development as a co-creative process—one where dancers’ embodied knowledge informs both system design and interaction strategies, and where generative algorithms act as adaptive partners rather than fixed tools.
Two case studies illustrate this methodology. Pemida (Personalised Musical Instrument for Dancers) employs machine learning to model individual improvisation strategies to music, enabling dancers to perform with highly personalised instruments responsive to full-body motion. Expressive Aliens applies simulation-based generative methods to map dancers’ distinct movement vocabularies to behavioural patterns for robotic lights, integrating dynamic light as a choreographic medium.
Together, these projects demonstrate how artist-guided co-design can ground the development of generative systems in real-world creative practice, enhance their artistic validity and usability in performance contexts, and promote long-term relevance within dancers’ evolving repertoires. They highlight the potential of embodied co-creation as a productive framework for research in human–AI interaction and creative computing.
|
| 11 May 2026 | Daniel Bisig (Zurich University of the Arts, Chalmers University of Technology, Coventry University, Mainz University of Applied Sciences, Instituto Stocos) |
|
| 4 May 2026 | Guy Axelrod (Chalmers University of Technology, University of Gothenburg) |
Twitch: Learning Abstractions for Equational Theorem Proving
Several successful strategies in automated reasoning rely on human-supplied guidance about which term or clause shapes are interesting. In this paper we aim to discover interesting term shapes automatically. Specifically, we discover abstractions: term patterns that occur over and over again in relevant proofs. We present our tool Twitch which discovers abstractions with the help of Stitch, a tool originally developed for discovering reusable library functions in program synthesis tasks. Twitch can produce abstractions in two ways: (1) from a partial, failed proof of a conjecture; (2) from successful proofs of other theorems in the same domain. We have also extended Twee, an equational theorem prover, to use these abstractions. We evaluate Twitch on a set of unit equality (UEQ) problems from TPTP, and show that it can help prove hard problems as well as yield significant speed-ups on many other problems.
|
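The core idea of the abstract, finding term patterns that recur across proofs, can be illustrated with a toy frequency count over subterms. This is only a didactic sketch: the actual Twitch/Stitch machinery uses a much more sophisticated compression-based search, and the term representation below (nested tuples) is an assumption for illustration.

```python
# Toy illustration of abstraction discovery: collect compound subterms that
# recur across proof terms and rank them by frequency.
from collections import Counter

def subterms(term):
    """Yield every subterm of a term given as nested tuples, e.g. ('f', ('g', 'x'))."""
    yield term
    if isinstance(term, tuple):
        for arg in term[1:]:          # term[0] is the function symbol
            yield from subterms(arg)

def frequent_abstractions(proof_terms, min_count=2):
    """Return compound subterms appearing at least `min_count` times, most frequent first."""
    counts = Counter()
    for t in proof_terms:
        counts.update(s for s in subterms(t) if isinstance(s, tuple))
    return [t for t, c in counts.most_common() if c >= min_count]
```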
| 8 April 2026 | Julian Togelius (New York University) |
Game generation for cognitive science and open-ended learning (and fun)
How can you create a system that can design novel, good games? And why would you want to do that? To answer the latter question first, we might want to understand how humans create, or model their decision-making processes, but we may also want to create new testbeds for AI development, or ideation tools for designers. To answer the first question, I will describe a series of systems developed by my teams at the IT University of Copenhagen and New York University. These systems generally build on evolutionary algorithms and use simulated game-playing as part of an evaluation function. Accurately estimating game quality algorithmically is hard, but inspiration can be taken from game design theory, developmental psychology, and other cognitive sciences. Estimating how well a human would play or learn to play a game, a necessary precondition in many theories of game quality, is itself non-trivial, but reinforcement learning, imitation learning, and good old planning provide useful tools.
Julian Togelius is a Professor of Computer Science at the New York University (NYU) Tandon School of Engineering, where he directs the Game Innovation Lab. He is also an IEEE Fellow and the co-founder of the AI-driven game-testing startup modl.ai. A world-renowned expert at the intersection of games and AI, his research focuses on procedural content generation, player modeling, and using games as testbeds for artificial intelligence. He is the co-author of the widely used textbook *Artificial Intelligence and Games*, and his work explores how AI can make games more fun, easier to design, and highly adaptive.
|
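The evolutionary setup the abstract mentions, evolving game descriptions and scoring them via simulated play, can be sketched in a few lines. This is a minimal generic sketch, not any of the speaker's actual systems: the genome, the truncation-selection scheme, and the `fitness` function (standing in for simulated game-playing plus a design-theory-inspired quality estimate) are all assumptions.

```python
# Minimal evolutionary loop: game candidates are parameter vectors, and each
# generation keeps the best half and refills the population with mutated copies.
import random

def evolve_games(fitness, genome_len=8, pop_size=20, generations=30, seed=0):
    """Evolve parameter vectors to maximize `fitness` (e.g. simulated playthrough quality)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = [
            [g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))    # mutated parent copies
        ]
        pop = parents + children
    return max(pop, key=fitness)
```

In a real game-generation system, `fitness` is the expensive part: it runs one or more simulated agents through the candidate game and aggregates their outcomes into a quality score.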
| 13 April 2026 | Devdatt Dubhashi (Chalmers University of Technology, University of Gothenburg) |
AI for Math and Open Science Workshop
|
| 20 April 2026 | TBD |
|
| 27 April 2026 | Xuechen Liu (Chalmers University of Technology, University of Gothenburg) |
Grounding Machine Creativity in Game Design Knowledge Representations: Empirical Probing of LLM-Based Executable Synthesis of Goal Playable Patterns under Structural Constraints
Creatively translating complex gameplay ideas into executable artifacts (e.g., games as Unity projects and code) remains a central challenge in computational game creativity. Gameplay design patterns provide a structured representation for describing gameplay phenomena, enabling designers to decompose high-level ideas into entities, constraints, and rule-driven dynamics. Among them, goal patterns formalize common player-objective relationships. Goal Playable Concepts (GPCs) operationalize these abstractions as playable Unity engine implementations, supporting experiential exploration and compositional gameplay design. We frame scalable playable-pattern realization as a problem of constrained executable creative synthesis: generated artifacts must satisfy Unity's syntactic and architectural requirements while preserving the semantic gameplay meanings encoded in goal patterns, and this dual constraint limits scalability. We therefore investigate whether contemporary large language models (LLMs) can perform such synthesis under engine-level structural constraints and generate Unity code (i.e., playable games) structured and conditioned by goal playable patterns. Using 26 goal pattern instantiations, we compare a direct generation baseline (natural language -> C# -> Unity) with pipelines conditioned on a human-authored, Unity-specific intermediate representation (IR), across three IR configurations and two open-source models (DeepSeek-Coder-V2-Lite-Instruct and Qwen2.5-Coder-7B-Instruct). Compilation success is evaluated via automated Unity replay. We identify grounding and hygiene failure modes, with structural and project-level grounding emerging as the primary bottlenecks.
https://arxiv.org/abs/2603.07101 |
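The evaluation described above, comparing pipeline configurations by how often their generated artifacts compile, has a simple skeleton. This is a hedged sketch of that kind of comparison, not the study's code: `generate` and `compiles_in_unity` are hypothetical placeholders for the LLM call and the automated Unity replay check.

```python
# Sketch of a pipeline comparison: for each configuration, generate an artifact
# per goal-pattern instantiation and tally the compilation success rate.
def success_rates(patterns, pipelines, generate, compiles_in_unity):
    """Map each pipeline name to its fraction of successfully compiling artifacts."""
    rates = {}
    for name, config in pipelines.items():
        ok = sum(compiles_in_unity(generate(p, config)) for p in patterns)
        rates[name] = ok / len(patterns)
    return rates
```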
| 25 May 2026 | Matti Karppa (Chalmers University of Technology, University of Gothenburg) |
Huffman-Bucket Sketch: A Simple O(m) Algorithm for Cardinality Estimation
|
| 9 Feb 2026 | Filip Kronström | |
| 23 Feb 2026 | Matti Karppa | Engineering Compressed Matrix Multiplication using the Fast Walsh-Hadamard Transform |
| 16 Mar 2026 | Kim Jörgensen |
Introduction to Generative RecSys and Semantic IDs
This talk explores the emerging field of generative recommender systems and their foundation in large language models. We'll examine what foundation models are, when they offer advantages for recommendation tasks, and when traditional approaches may be more suitable. A key focus will be on Semantic IDs—a novel approach to representing items in ways that LLMs can understand and generate. I'll demonstrate how to leverage Semantic IDs to build LLM-based recommender systems, covering practical aspects including:
- Implementing models for Semantic ID generation,
- Fine-tuning large language models for recommendation tasks,
- Optimization strategies for efficient training,
- Evaluation methodologies for generative recommendation systems.
The session will provide both conceptual foundations and practical insights for researchers and practitioners interested in applying generative AI to recommendation problems.
|
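The Semantic ID idea in the abstract, representing an item as a short sequence of discrete codes that an LLM can read and generate, can be illustrated with residual quantization. This is a toy sketch under simplifying assumptions: real systems typically learn the codebooks (e.g. with an RQ-VAE over item embeddings), whereas here the codebooks are given.

```python
# Toy residual quantization: turn a dense item embedding into a tuple of
# per-level code indices, quantizing the residual at each level. The resulting
# code tuple is the item's "Semantic ID", usable as tokens by an LLM.
def quantize(embedding, codebooks):
    """Return one code index per codebook level for `embedding`."""
    residual = list(embedding)
    codes = []
    for codebook in codebooks:        # one codebook per quantization level
        idx = min(                    # nearest centroid by squared distance
            range(len(codebook)),
            key=lambda i: sum((r - c) ** 2 for r, c in zip(residual, codebook[i])),
        )
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, codebook[idx])]
    return tuple(codes)
```

Because similar embeddings share code prefixes, the LLM can generalize across related items instead of memorizing one opaque ID per item.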
Past Talks
| Date | Speaker | Talk |
|---|---|---|