Karl's Corner | August 2025

Transcript

Dan: Welcome to another installment of Karl's Corner. We're here with Karl Friston. And Karl, thank you so much for being here with us.

Karl: Well, it's a pleasure to be here again.

Dan: So, Karl, my first question for you is this: you've spent years studying biological intelligence. When you see artificial intelligence systems fail, what is fundamentally wrong with their approach?

Karl: I think a failure to comply with those natural laws and principles that underwrite the way that we work, both at an evolutionary scale and day-to-day in our daily lives. And those principles can either be read in terms of biology or in terms of physics. So if you go back to your schoolboy days of physics, you will have come across things like Newton's laws, or examples of something called the principle of least action. This basically says that the way the world works is that it finds the path of least effort, the most efficient path, the path of least action, where action is energy times time. So if you now want to replicate things like intelligence or ecosystems or any complex system that has some adaptive, intelligent aspect, then you need to apply the principles of least action or maximum efficiency, also known as minimum redundancy. I think that artificial intelligence and machine learning research has lost sight of that. I think it's understandable, because if you take an engineering approach, as opposed to the approach that a physicist would take, it's very easy to wander away from the underlying first principles.
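For readers who want the textbook version, here is the standard statement of the principle of least action; this is background context rather than something spelled out in the interview. The action is the time integral of the Lagrangian, and the realised trajectory is the one that makes it stationary, which is why action carries units of energy multiplied by time.

```latex
% Textbook statement of the principle of least action (background, not a quote):
% the action S is the time integral of the Lagrangian L = T - V, and the path
% actually taken, q(t), is the one that makes S stationary.
\[
  S[q] \;=\; \int_{t_0}^{t_1} L\bigl(q(t), \dot{q}(t)\bigr)\, dt,
  \qquad \delta S = 0 .
\]
% Since L has units of energy and dt has units of time, S has units of
% energy multiplied by time, which is the sense of "action" used above.
```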

Dan: When you look at real-world operations that require something more than pattern matching or massive data sets, how does active inference get right what traditional AI gets wrong? How does it succeed in unpredictable environments?

Karl: Well, I think the answer, as usual with your questions, is in the way that you ask the question. It's the unpredictability. I mean, just think about what are the problems that people contend with. Say, take the markets. Take finance or fintech. It's the confidence expressed by the market. It's the unpredictability of Trump, for example. All of these things speak to the importance of being able to quantify your confidence, or its complement, uncertainty.

So in order to be efficient, you have to be able to account for the uncertainty, what you don't know. Which means that you need to be able to quantify, represent, encode, and store your uncertainty as part of your sense-making.
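As a loose illustration of what "quantify, represent, encode, and store your uncertainty" can mean in practice, here is a minimal sketch (my own, not from the interview and not VERSES code) that scores the uncertainty of a discrete belief distribution by its Shannon entropy:

```python
import numpy as np

def entropy(belief):
    """Shannon entropy (in nats) of a discrete belief distribution."""
    belief = np.asarray(belief, dtype=float)
    belief = belief / belief.sum()        # normalise, just in case
    nonzero = belief[belief > 0]          # treat 0 * log(0) as 0
    return float(-(nonzero * np.log(nonzero)).sum())

# A confident belief carries little uncertainty...
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.17 nats
# ...whereas a flat belief is maximally uncertain.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.39 nats
```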

Dan: We've talked about the idea of active inference versus the static or passive inference of a large language model. That is, looking at old patterns and correlations in data sets versus the cause and effect of actively moving into the future. And you suggested the term "a forward model."

Could you explain a bit the difference between looking backward and looking forward, and what a forward model is?

Karl: Yeah, I guess that's a really important distinction. So if we just use large language models to exemplify the fundamental difference between generative AI, in the sense of machine learning, and the kind of generative, generalized artificial intelligence that we aspire to, then we're talking about the difference between something that has agency, something that has authentic agency, that can act upon the world. Now, to be an agent is to be able to select among different acts, different policies, different paths into the future that will secure the kind of content or outcomes or information that provide the most evidence for you as a model of that content. So you need to be able to act in a way that determines the kind of data or content you're going to be able to use to resolve your uncertainty. So this has a fundamentally future-pointing aspect. Agency necessarily implies that your generative model has to include the consequences of your action, because if you don't have that, you can't evaluate what to do next in the service of gathering the right kind of content or information that you need in order to resolve your uncertainty.
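To make the future-pointing idea a little more concrete, here is a deliberately minimal, hypothetical sketch (again my own, not VERSES code) of an agent scoring one-step policies by an expected-free-energy-style criterion: a risk term (divergence of predicted outcomes from preferred outcomes) plus an ambiguity term (expected uncertainty of the likelihood mapping), choosing the policy with the lowest score.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

def expected_free_energy(qs, A, prefs):
    """
    Toy expected-free-energy score for one policy (one-step look-ahead):
      risk      = KL between predicted outcomes and preferred outcomes
      ambiguity = expected entropy of the likelihood mapping
    qs    : predicted state distribution under the policy
    A     : likelihood matrix P(outcome | state), columns indexed by state
    prefs : preferred outcome distribution
    """
    qo = A @ qs                                        # predicted outcomes
    risk = kl(qo, prefs)
    ambiguity = float(qs @ (-(A * np.log(A + 1e-16)).sum(axis=0)))
    return risk + ambiguity

# Two candidate policies lead to different predicted states.
A = np.array([[0.9, 0.1],                              # fairly informative P(o | s)
              [0.1, 0.9]])
prefs = np.array([0.8, 0.2])                           # outcomes the agent prefers

policies = {"stay": np.array([0.5, 0.5]),              # uncertain about the state
            "move": np.array([0.9, 0.1])}              # confident, preferred state
scores = {name: expected_free_energy(qs, A, prefs) for name, qs in policies.items()}
print(scores)
print(min(scores, key=scores.get))                     # the agent selects the lowest score
```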

So at the moment, things like transformer architectures and most large language models, all large language models, simply do not have this future-pointing aspect. They can certainly predict what's going to happen next, but this is not under their control; this is not an expression of agency. To be an agent, the large language model would have to choose who to listen to. And in fact, strictly speaking, to be an agent, a large language model would have to actually issue the prompts.

Dan: Obviously, you've worked in a lot of capacities with a lot of different organizations and companies. How is this collaboration with VERSES different from what you've been doing with other groups?

Karl: I think it's different because it was a convergence: a convergence of academia and industry, and a convergence of minds, literally. So, you know, I got into this game through being exposed to the early visions of VERSES, you know, encompassing things like the spatial web and the Spatial Web Foundation. So, you know, a very much for-the-common-good approach that was quintessentially biomimetic, taking a sort of natural-intelligence, ecosystem perspective on things, which was very consilient with the application of things like active inference and the free energy principle to direct interactions amongst agents that we were pursuing, and indeed are still pursuing, in cognitive science and/or cognitive neuroscience and, you know, computational psychiatry, for example.

So that was quite unique; you know, to actually find that the same conclusions and the same visions and the same commitments have arisen independently. Sometimes people refer to this as convergent evolution, when these minds meet. So that's a unique aspect of my involvement with VERSES. A more banal answer is that all my students grew up and left me, and the majority of them got paid by VERSES, who said, "Come, come and accompany us here," and did so with great persuasion on the one hand, but also with a unique offering. And, you know, it's not the first time it's happened. I mean, Google DeepMind started exactly this way.

Dan: And do you see, in the work that's happened even just in the last year, a convergence of these technologies coming together to produce, you know, this biological intelligence, you know, à la AGI of some sort? That we're headed towards that path? Do you see that convergence coming together?

Karl: In the past few, I would say, weeks, but certainly months, a lot of these strands have come to fruition to provide proofs of concept and demonstrations which, I won't say knock your socks off, because that's a bit trivial, but it's the same kind of, "Oh, did they really do that?" It's like when you cook your own meal: you can never really enjoy the consumption. But I was able to just stand back and look at the products of all this cooking for the past two years and think, "Wow, that's a remarkable piece of work." So yes, I think there have been denouements to these programs that people can now showcase and be quite proud of. The next step, of course, is to get somebody to see them and to say, "Yes, well, if you did that there, imagine what you could do in this domain, or with this money, or in this kind of application."

Dan: And what was most exciting about this last month?

Karl: There were a number of things, and I won't pick one, which just reiterates how impressed I was with the simulations and the performance on these benchmarks, both sort of machine learning benchmarks but also sort of physical intelligence benchmarks. What did intrigue me was a dialogue with François Chollet, who is the engineer behind what's called the ARC Challenge. It was, again, a sort of convergence, a meeting of minds, but in this particular instance between VERSES and a subset of the machine learning community, the artificial intelligence community, earnestly trying to find the next big step, and both implicitly convinced that it's going to be along this sort of system-two style of reasoning, a more biomimetic kind of intelligence. That is, if you like, one expression of active inference when properly deployed. And, you know, both sides of the conversation, VERSES and, I repeat, the engineers of the ARC challenges, were just trying to suss each other out as to what their respective commitments were. So, not exciting from the point of view of company development, but intellectually quite arousing, in the sense that, you know, there are people you can play with and impress and be impressed by that could potentially, in the future, make a real difference.

Dan: Karl, you get asked lots of questions all the time. Is there one question that you wish you'd been asked that no one has ever asked?

Karl: That's the third time you've done this, that one. I have to think about that. No, I don't think so. To be honest, I mean, the whole point of being asked questions is to sate your curiosity and your intrinsic motivation. As the receiver, I just enjoy being asked questions. I have to say there aren't many questions that I haven't been asked. When you get old, after a few years, especially when you get known for being an expert in this or that, it's very rare that I get asked a question I haven't been asked several times before. Apart from the two that you've just asked me, I'm a bit stumped by that.

Dan: Karl, thank you very much for taking the time to speak with us today. It's always a treat and I look forward to the next time.

Karl: Yeah, thank you very much.