
The International Workshop on Active Inference (IWAI) was founded by VERSES’ own Tim Verbelen.
We spoke with Tim, Director of the Intelligent Systems Lab at VERSES, about the international workshop he launched five years ago to advance the science of active inference.
In our conversation, Tim reflected on the origins of the event, the challenges of growing a new field, and the future of intelligence shaped by theory and computation.
What sparked your interest in Active Inference initially?
I was a postdoc at Ghent University when the neural network hype was getting started. We were using neural networks to predict the future and capture the world in a latent state. It worked, but we were putting all this effort into training the neural net to predict the future, then taking that vector and doing another round of training with a reinforcement learning algorithm. I was trying to figure out what more we could do.
That's when I stumbled upon a talk by Karl Friston where he was explaining minimizing prediction error, or minimizing free energy. That was the epiphany. We were doing the right thing by minimizing prediction error, but we were still thinking about how to link this to the reward signal. You don't need that. You just need the free energy as the thing you minimize. That's all there is to it.
What motivated you to start a workshop like IWAI?
We started working on active inference and we had some cool results, but we didn't really have good venues to submit our work to. When you start with active inference and you don't have a reward, the reinforcement learning community is like, “What?”
In 2020, the machine learning conference ECML was going to be hosted in Ghent by a professor in the same building, so I wrote up a workshop proposal. COVID hit that year, so the workshop went virtual for two years. The first onsite event, in 2022, was when I met Mahault, Riddhi, Magnus, and others in person. We were in the tiniest room of the venue for one day. We had great fun.
After the event, the core committee of organizers were like, maybe if we organize it ourselves we can have more time and room for discussion. And everybody said, “Yeah, yeah, that's a good idea, we should do that.” Then there was this awkward silence. So I said, “Well, I might try to organize it again next year.” Everybody immediately said, “Yeah. Good idea. Let's do it again.”
The 2023 edition in Ghent was the first standalone event with a whole bunch of keynotes and 50-60 people over two to three days. That was the first real instantiation of IWAI.
What have been the biggest challenges in advancing Active Inference as a field?
I think a big challenge has been that the success of deep learning sucked a lot of air out of the room. Now it's reaching a plateau and people are looking for other stuff.
The second thing is the high entry bar. The early papers from Karl are very hard to digest. A lot of the people I met in the early stages looked at it, but it seemed too complex, so they didn't go further with it.
That has improved a lot with the active inference textbook that came out. Sanjeev (Namjoshi) also has a textbook coming out that makes it even more accessible. Conor Heins built the pymdp library. You see now that it is getting more and more traction.
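For readers who want a hands-on feel for what that tooling looks like, here is a minimal sketch of the perceive-plan-act loop using pymdp. The generative model below is filled with random placeholder matrices, and exact call signatures can shift between library versions, so treat this as an illustration rather than a tutorial.

```python
# A minimal sketch of a discrete active inference loop with pymdp.
# The generative model here is random and purely illustrative.
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

num_obs = [3]        # one observation modality with 3 possible outcomes
num_states = [3]     # one hidden-state factor with 3 states
num_controls = [3]   # 3 possible actions on that factor

# Random likelihood (A) and transition (B) arrays stand in for a real model.
A = utils.random_A_matrix(num_obs, num_states)
B = utils.random_B_matrix(num_states, num_controls)

agent = Agent(A=A, B=B)

obs = [0]  # index of the current observation
for t in range(5):
    qs = agent.infer_states(obs)        # update beliefs over hidden states
    q_pi, efe = agent.infer_policies()  # score policies by expected free energy
    action = agent.sample_action()      # act (balancing pragmatic and epistemic value)
    # In a real setup, the environment would return the next observation here;
    # a random one is used just to keep the sketch self-contained.
    obs = [np.random.randint(num_obs[0])]
```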
What would a world where natural intelligence supercharged with the power of computing look like?
You'll get agents that are not just trained on a ton of data and then rehearsing the patterns they saw. You'll get agents that actively explore their world in order to be in better agreement with it and to learn about it. So you'll get systems that are genuinely curious and genuinely epistemic about figuring stuff out to reduce their uncertainty.
That's an entirely different approach from the way current AI systems are trained and deployed. You feed them a lot of data, then you lock in the weights, and what gets deployed is more like a reflexive machine: it takes some input and responds with what, according to the training data, would be the most likely output.
An active inference agent would instead be driven by “what do I expect the world to be, and how can I sample the world actively in order to update my model all the time?” So even if the environment is changing, the agent will change with it, and that's an entirely different paradigm. The agent would search for ways to reduce uncertainty in order to accomplish its goal.
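To make that contrast concrete, here is a small, self-contained toy example, written for this article rather than taken from any VERSES system. The agent scores each action by expected free energy, a combination of how well predicted observations match its preferences and how much information it expects to gain, then acts and updates its beliefs from what it actually observes.

```python
# Toy sketch of the loop described above: predict, act to reduce uncertainty
# while pursuing a preferred observation, then update beliefs. Numbers are illustrative.
import numpy as np

A = np.array([[0.9, 0.1],          # p(o | s): how hidden states generate observations
              [0.1, 0.9]])
B = np.array([[[1.0, 0.0],         # p(s' | s, a=0): action 0 keeps the state
               [0.0, 1.0]],
              [[0.0, 1.0],         # p(s' | s, a=1): action 1 flips the state
               [1.0, 0.0]]])
log_C = np.log([0.8, 0.2])         # preferences: the agent "expects" observation 0

qs = np.array([0.5, 0.5])          # current belief over hidden states
true_state = 1
eps = 1e-16

def expected_free_energy(action, qs):
    qs_next = B[action] @ qs                              # predicted next-state belief
    qo = A @ qs_next                                      # predicted observations
    pragmatic = qo @ log_C                                # match between predictions and preferences
    post = (A * qs_next) / np.maximum(qo[:, None], eps)   # q(s' | o) for each possible o
    info_gain = 0.0
    for o in range(2):                                    # expected information gain about states
        info_gain += qo[o] * np.sum(post[o] * np.log(np.maximum(post[o], eps) / np.maximum(qs_next, eps)))
    return -(pragmatic + info_gain)                       # lower is better

for t in range(5):
    action = int(np.argmin([expected_free_energy(a, qs) for a in (0, 1)]))
    true_state = int(np.argmax(B[action][:, true_state]))   # environment transitions
    obs = np.random.choice(2, p=A[:, true_state])           # environment emits an observation
    prior = B[action] @ qs
    qs = A[obs] * prior
    qs = qs / qs.sum()                                      # Bayesian belief update
    print(t, action, obs, np.round(qs, 2))
```

Because the beliefs are re-estimated after every observation, this kind of agent keeps tracking the environment even as it drifts, which is the behavior Tim contrasts with a fixed, reflexive input-output mapping.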