We Are Not Machines! with Johannes Jaeger

Join us as Fotis and Yogi discuss Agency & Science beyond Computationalism

#LoveAndPhilosophy #BeyondTheAgeOfMachines #YogiJaeger #PhilosophyOfScience #AIandAgency #Metamodernism #ScienceAndPhilosophy #ToughLove #RelevanceRealization #ProcessWorldview #LivingSystems #ArtificialIntelligence #FutureOfScience #Agency

#paradox of #love and #agency

More: https://lovephilosophy.substack.com/

Yogi, a.k.a. Johannes Jaeger, likes to fashion himself as a natural philosopher.

After his directorship of the Konrad Lorenz Institute (KLI) in Vienna, he left academia to pursue intellectual production independently. He is part of a science-art collective in Vienna called The Zone. His focus has been primarily on his book, Beyond the Age of Machines, which he has been publishing incrementally in digital form.

https://www.johannesjaeger.eu/

The Zone

Fotis Tsiroukis is a cross-disciplinary researcher interested in the intersection of the humanities, science, and new media. Also a cyborg...

Fotis on LinkedIn

https://www.sts.sot.tum.de/en/sts/people/researchers/fotis-tsiroukis/

https://opensciencestudies.eu/

Give in Support of Love & Philosophy.

Summary:

In this episode of the Love & Philosophy Podcast, host Fotis engages in a deep and thought-provoking conversation with Johannes "Yogi" Jaeger, a freelance scholar and biologist-turned-philosopher. Yogi critiques the dominant "machine worldview" that has shaped modernity, arguing that it has led human civilization to a dangerous "cliff edge". He advocates for a shift towards a process-oriented, relational metaphysics that emphasizes the interconnectedness of living systems and the limitations of computational models. Yogi also discusses the pitfalls of AI, the dangers of technological hubris, and the need for a new kind of science that reconnects us with reality. The conversation touches on themes of agency, the limitations of a computationalist worldview, and the importance of tough love in guiding humanity towards a more sustainable and meaningful future.

Timestamps with Thematic Sections:

1. [00:00:00] Snippets

2. [00:02:10] Introduction (by Fotis)

3. [00:10:00] Yogi as a Natural Philosopher

4. [00:13:00] Critique of Modern Science

5. [00:14:30] Yogi's Journey of Disillusionment: From Molecular Biologist to Freelance Theorist

6. [00:20:00] The Problem with the Academic System

7. [00:25:00] The Need for a New Metaphysics

8. [00:28:00] Getting Back in Touch with Reality

9. [00:32:00] Postmodernism & Metamodernism

10. [00:36:00] The Danger of Technological Hubris

11. [00:40:00] Complexity Science and the Pitfalls of the Computationalist Worldview

12. [00:45:00] The Illusion of Total Control

13. [00:49:00] The Misuse of AI

14. [00:54:00] Prepping for the Collapse of this Civilization

15. [00:57:00] AI "Agents" aren't Real Agents

16. [01:05:00] The Illusion of AI Sentience

17. [01:10:00] The Free Energy Principle and Reductionism

18. [01:20:00] The Importance of Relevance Realization

19. [01:25:00] The Role of Relationality and Connection

20. [01:30:00] Tough Love for Humanity

21. [01:35:00] Closing Thoughts

Resources:

Konrad Lorenz Institute (KLI): https://www.kli.ac.at/

Yogi's Theory Paper on Dynamical Systems: https://link.springer.com/chapter/10.1007/978-1-4614-3567-9_5

Metamodernism Primer: https://www.brendangrahamdempsey.com/post/metamodernisms-a-cheat-sheet

Beyond the Age of Machines: https://www.expandingpossibilities.org/an-emerging-book.html

Relevance Realization: https://pdfs.semanticscholar.org/f475/aa75a897893a352730011cc4524c4d520b99.pdf

Santa Fe Institute: https://www.santafe.edu/

Referenced Episodes:

Karl Friston and the Free Energy Principle: https://loveandphilosophy.com/beyond-dichotomy-podcast/karl-friston

Scales and Science Fiction with Michael Levin: https://www.youtube.com/watch?v=n15xS4YcyG0

TRANSCRIPT:

[00:00:00]

Snippets:

humanity has an attitude problem

we become completely passive, brain-dead consumers, and we do that out of our own free will.

And my aim would be to come up with a kind of science for the time after we see that the world is not a machine.

Where humanity is right now, at this cliff edge, just about to go down, okay, is because of this thinking. So for all the good things that modernity has given us these last 400 years, it is time to leave it behind and go beyond it.

A machine will never, ever, ever want something. To want something, you have to be precarious, you have to be suffering, you have to die, you know? This is just not within the bounds, the design, of our computer systems. So what we call AI agents are not agents at all. So basically, we are all distracted by this, and we don't see the dangers that come out of the business model of the whole thing, where we give the agency that we [00:01:00] genuinely have away to machines that have none.

So does your ChatGPT get bored when you don't ask it any questions? Just ask yourself this. This is one thing. The other thing is, if you simulate a bike, can you ride it to work? No. It's not that hard to understand. Okay. So the world is not computation. Not at all. The world is based on our experience of it.

Nobody in their right mind wants to live forever. Okay?

It's as simple as that. And if you extend your life, it's never gonna be enough. If you live 300 years, your 300 years are gonna be too short. So again, underlying the technological craze that these people are pursuing is a completely wrong idea of the world and of your relationship to yourself. If you had a different relationship, you would realize that this makes no sense at all.

True love is, sometimes people just need a kick in the butt. If you, if you like them, you cannot spare them. And if you love them, you [00:02:00] have to really, uh, give it to them. Um, you know, like it is. Humanity needs some tough love.

Fotis: Hello, everybody. Welcome to the Love & Philosophy podcast. So, you might be wondering: what is this? What is this timbre? I bet you expected something that sounded a little bit different. Maybe somebody else. So, I will not leave you suspended in this mystery any longer. My name is Fotis, and I have been a listener of this podcast for some time now, a podcast with some of the world's most interesting voices, yet I thought there were some other ones out there that also needed to be heard. And so I reached out to Andrea to see whether in this mosaic of [00:03:00] voices there was room to add some more that I thought had something important to say to the world. And so today we have one such voice, a passionate and loud voice, trying to warn us, to caution us, that the world we have been accustomed to, the social structures, and most importantly, the way we gain knowledge about the world, and the way we see ourselves as living beings, as agents in it, that this world is under threat, maybe even under collapse.

And there is a light shining through the cracks that leads us to another. This is Johannes Jaeger, otherwise known as Yogi in his internet and social media presence. Yogi has had an interesting trajectory in his life. Initially [00:04:00] destined to be a quite classical, traditional biologist, working as a lab scientist, but something happened along the way, and after some years of disillusionment, you can see him now free floating as a freelance scholar, as a freelance academic on the internet, carving his own para-institutional, independent academic path, which has led him to deeper philosophical investigations around concepts of agency.

And trying to fight against the machine worldview that has dominated the zeitgeist, the intellectual and cultural zeitgeist of our times, as he sees the current situation. Now, in this more independent manner, he has been part of a local collective in Vienna called The Zone, an intersection between art and [00:05:00] science, and working on his big release, a book called Beyond the Age of Machines, which Yogi himself describes as an emerging book, which you can find online, still a work in progress as we speak. But it is being finished also because it is in itself a process, which is also the way Yogi generally likes to view things, to view the world. A process worldview, a process metaphysics, applied not only to philosophy but also to society and to science and scientific practice, as, uh, you will see in this episode.

As a last point, I want to just point out that Yogi is a complex system: a thinker with a lot of might and a lot of fire. He has [00:06:00] strong views, but also a very optimistic and constructive approach, which is not usual to see, and this is, uh, one of the main reasons why I invited him.

Because in this world, we're either left with a sort of irony, a postmodern irony that doesn't lead to much, that leads only to a negation of what we do not want to be, of what we consider cringe, or a sincerity that is a little bit too naive, a little bit too hopeful, a little bit too over-optimistic sometimes. And we tend to be suspicious of this kind of over-sincerity, given that we know that we are living in a world that is a bit too complex for simple visions of how to build society. Visions that try to go [00:07:00] both against modernity and its structures, but also against the postmodern mode of irony and critique, [00:08:00] [00:09:00] these are the reasons why I took the chance to ask Andrea to platform me and Yogi and make an episode and see how that goes. So I hope you enjoy this episode, and let me know if you liked it. So, without further ado, let's get right into it with our guest for today, Yogi Jaeger, or Johannes Jaeger.

All right, so, Yogi, if [00:10:00] you were to pick your own terms, how would you describe yourself and...

Yogi: I like to be a natural philosopher. Like, we used to have, before we had scientists and philosophers, you know, separate. And I think that's very important. We need to rethink how we do science. We'll talk about this, I guess. And philosophy is very important. And science, of course, will never do the job of philosophy.

I like this Dan Dennett quote that says, you know, there's either science that has taken its metaphysical baggage on board, or science that hasn't. There isn't science without any metaphysical baggage. And that's sort of the spirit in which I would like to do philosophically informed science. I'm still a scientist, uh, biologist by training originally.

Fotis: So reviving an old tradition, but basically bringing it to the current moment in a way.

Yogi: Yes, old ideas, but also a new [00:11:00] kind of philosophy, uh, naturalist philosophy that, uh, mutually informs science. So they mutually inform each other, basically. The idea would be that the latest science flows into the way we see the world, and, uh, the way we interpret the science flows back into science.

Um, that's a model that is not working at the moment and we should bring it back.

Fotis: And, and why, where is the source of this, um, tension, would you say? Where did we go wrong? Where, where did we not pick up the breadcrumbs of history? Um, and where did this split happen, between the philosophically informed theorizing and the scientific practice?

Yogi: Well, it happened somewhere after the scientific revolution.

The scientific revolution was still mechanical philosophy; that's what they called what they were doing. And it was very explicitly aimed at [00:12:00] overcoming the old ancient view of Aristotle. Um, I think nowadays we've moved away, like every decade we move away from this. And science becomes more and more technology driven.

I think that's one of the big factors. Uh, and also, the aims of science are no longer to understand the world, mainly in the eyes of the public or the politicians, but to produce applications, to cure diseases, to produce new technology. And, uh, I think this is a problem of education. Scientists no longer learn any philosophy of science during their education in most countries, unfortunately.

I teach a course for scientists on philosophy, a crash course, just one week long. And I think we are now at a turning point in history where it's really important to sort of rethink our basic metaphysical, philosophical assumptions. And as you know, I'm writing a book called Beyond the Age of Machines, and I think the foundation that we have to [00:13:00] urgently rethink is this modernist view of the world as a machine, of ourselves as machines.

We treat living beings as machines, and we urgently need to go beyond that, because this view of humanity lies at the very root of many of the other issues, the crises that we have today. These are only manifestations of this underlying attitude that we have. So basically, humanity has an attitude problem today.

And my aim would be to come up with a kind of science for the time after we see that the world is not a machine. Especially living beings are not machines.

Fotis: So what led you to this, um, to your counter-attitude? The one that you're trying to bring forth into the world, because you've also been a practicing scientist in the past. Uh, so I, I want to know about what led you to, [00:14:00] um, the current framing of yourself and your work.

Um, and also what, uh, led to the work that you do now with Beyond the Age of Machines, where you're also trying to, uh, upend what you see as, uh, being an impasse that has existed in scientific practice and a broader worldview. So I'm wondering about this trajectory of yours.

Yogi: So it started very conventionally, and that was very important.

I was in an ultra-reductionist lab as an undergraduate student doing my undergraduate thesis work, working on fruit fly genetics, and that triggered me. I was just completely alienated from the very beginning, because these people were, uh, explaining things and satisfied with explanations that were completely unsatisfactory to me.

And so I started, uh, reading around, and I found these two authors that were very crucial: Stuart [00:15:00] Kauffman and Brian Goodwin. Stuart at the time was predicting the stock market or something. So I found Brian, um, in this hippie college in England called Schumacher College, which unfortunately just closed a few months ago.

And, uh, I applied for a master's there and this was sort of the key turning point where I got, uh, on my way, or you could say brainwashed or indoctrinated early on in my career, into thinking more organically, more holistically, and thinking about the world in a different way. And so this was the beginning of sort of systems biology at the time.

And I was able to, uh, apply what I learned during that master's. It was a master's, it was called holistic science, by the way. And nobody knew what that was, neither did I, actually. I was able to take those approaches back into a more or less conventional science career. I did a PhD in genetics at Stony Brook University and later on a postdoc at Cambridge, at the Museum of Zoology.

And I had my own lab in Barcelona at the Center for Genomic Regulation, where we [00:16:00] studied the evolution of gene regulatory networks involved in the development of different fly species. So it was a pretty conventional path. At the same time, I was always doing these theoretical kinds of investigations on the side, where I was interested in, uh, how I was doing my scientific work, how I was modeling organisms, and what are the limitations that we hit every time we try to model an organic system. And this was sort of a sideline, and there's a funny little anecdote from Barcelona. They asked every year, what is your most important paper?

So I presented them a theory paper that I had published in the Journal of Experimental Biology, Zoology I should say. And they said, this is a joke. We want a Nature, Science paper, whatever. And I said, no, this is probably my most important paper this year. Um, it is a theory. What was it about? It was about, um, dynamical systems theory and attractors and development.

So it was, it had [00:17:00] mathematical concepts in it, but it was a conceptual paper. So the idea that we do theory in biology Not by modeling, but by conceptual thinking philosophy, you know, we apply philosophical methods to biological research was totally alien to, to people. And they said, this is a joke. You can't submit this as your most important paper.

And I said, this is my most important paper for this year. And so then I realized just how big of a gap there is between my kind of thinking and the kind of thinking that was happening in most of the labs.

Fotis: That's, that's interesting, and especially what you said about being alienated very early on. Um, so, was it the same kind of, uh, culture clash, uh, we could say, that you sensed very early on in your career as a biologist in the, uh, in laboratory environments, in how things are done?

Was it the same that you sensed when you tried [00:18:00] to, uh, show to the world what your own most interesting contribution is?

Yogi: Well, I started to reflect pretty early on, because it became clear to me pretty early on that people didn't, I explained pretty well what I wanted to do, but people didn't understand the question.

And I think this is the frame shift. We will talk about shifting frames afterwards, I think, as well. But this is very important: shift your frame towards the questions you're asking, and a lot of things fall into place. And this was very important. I realized that the questions I was asking were completely different, and sometimes maybe not as scientific as the questions that people around me were asking, because they were not directly applicable.

Right. They were more questioning what we were doing, and that leads indirectly to new questions and then new experiments, of course. So it has an empirical impact, but I think we have a tendency to be sort of ultra-empirical today and say, if your idea, if your thinking doesn't [00:19:00] impact the way I do experiments immediately, then it's not worth anything, you know. And this is a disease, especially in biology.

And I think we need to realize that the questions we ask, the framing we have for our research, are extremely important, and they shape what kind of experiments we do just as much as a sort of immediately testable hypothesis would. So this, this was very important, and this led me then afterwards to take up a position as the director of the Institute for the Philosophy of Biology in 2015.

It didn't last too long because of problems at the Institute, but basically that was my hard turn towards taking an explicitly philosophical approach. And after that, I dropped out of the normal academic career, because there is no space for this kind of thinking or biology in a traditional academic career.

So I'm trying to freelance at the moment, selling courses, uh, taking up fellowships here and there. And at [00:20:00] the moment I have a competitive grant so I can do a hundred percent research, but basically I cannot do this kind of research inside the conventional academic structures.

Fotis: That is a, that is very interesting.

And I think it really speaks to a certain kind of broader sentiment that I see around, um, whether it is coming from scientists who are more philosophically minded, let's say, uh, or from people in the humanities too, because there are a lot of people who are feeling a certain kind of suffocation in the university environment, um, but still no viable alternatives to produce the same kind of work, or to have the means to do so. So it's very, it will be very interesting to know what your experience is of that, of being a freelance scholar. [00:21:00] And, as you said, you have been through a period of disillusionment, in a way, uh, that led you to where you are now, but I'm, I would be interested if there was a certain kind of moment that really solidified the conviction that you have to go rogue and carve your own way in a more independent manner.

Yogi: So I would say it was an evolution, a gradual evolution mainly, but there was this key moment when I, uh, realized that being outside the system was an advantage. And that happened quite a while after I quit my job as a director of this institute, because afterwards, you try to get another academic job. And what I did realize is that...

Fotis: The institute, was it, uh, the Konrad Lorenz Institute in, uh, Vienna?

Yogi: It was the Konrad Lorenz Institute, in Vienna, or close to Vienna. And [00:22:00] I, uh, realized afterwards that I had so much more time to do theoretical research when I'm on my own, even struggling to find an income, that it was a better position to be free floating, although, you know, the income part of it is still a big problem, than if you're in an academic position as they come nowadays, where you spend at most maybe 10 to 15 percent of your time actually doing the thinking and researching.

And, uh, if you're a theoretical researcher, this is a big opportunity. And then I realized that in the US, this idea of an independent academic is already much more established. But in Europe, there is really no such thing. And it's starting up now. Because, I mean, what you asked before, we have created an academic system that is the ultimate machine.

This is the irony. So basically, we are now putting inhuman pressure on young researchers to produce amounts of, you know, publications [00:23:00] at the highest quality. I mean, these are just impossible demands, right? By the time you actually apply for a job, everything has to have gone right in your life to, to even have a chance.

So this causes two problems. First of all, the top echelons of science are full of opportunists who never did anything risky, and they're terrible people. I mean, I'll say it right away, and they hire other terrible people. So there is a sort of an oligarchy forming here within science. So this is one of the big problems.

The other problem, of course, is that you don't have the time to reflect or to do something crazy, which is what you should do when you do basic research, which means we're exploring the unknown, the unknown unknowns basically of the world. And that takes a lot of time. It takes a lot of, uh, your capacity to fail, which we don't have anymore, because you immediately, um, get disqualified from, from that crazy race.

So we have an academic system that's completely unqualified to do what it's supposed to do, and that is exploring the unknown unknowns of this world and giving us a [00:24:00] better grip on reality. So my book, uh, Beyond the Age of Machines is sort of a, you know, a manifesto for a science that rediscovers this purpose and, uh, sees, uh, research as serious play. So it has to be playful, it has to be able to fail, it has to have this freedom, it has to have a certain leisure as well.

You know, you have your best thoughts in the shower or walking the dog for a reason, because you're not entirely focused. If you're always focused, always left-brained about your research, you will not see what's left and right of your path. And this is how many of the scientists I know are trapped nowadays, and, and also, uh, really unhappy in their, in their positions.

So this is why you have to work outside the system. Um, and this all goes together. Both the freelance work and the kind of philosophy I want to bring into science, and the kind of biology I want to do, they all follow the same philosophy. So this is lived philosophy. It's not academic philosophy. [00:25:00] It is a philosophy of doing research, a philosophy of practice, even if I'm only doing theoretical research since 2016, when my lab shut down in Barcelona.

Fotis: It's interesting you bring in this aspect of leisure. Uh, this reminds me of otium, which used to describe the kind of, uh, experience of work practices of medieval monks, mostly, uh, in abbeys, uh, having this environment, having this niche, uh, that is designed in a way so as to produce, so as to have more mental capacity to produce deeply focused work.

Uh, and in many ways, um, the academic environment is kind of a scaling up or an extension of that. We could see it as that, in history, this more monastic, uh, uh, mode that is, um, [00:26:00] contrasted to negotium in Latin, which is basically this aspect of having to make do, having to always be on the run, having to always be on the treadmill, uh, to sustain yourself, your livelihood.

Um, and it seems that, um, there's a growing awareness that, even if we thought that the academic environment is the one that brings the otium, it is actually too saturated by the negotium. And, um, and I'm wondering about, uh, your own vision of designing, uh, environments, institutionally or, um, uh, in terms of incentives, in terms of maybe even the design philosophy and worldview that needs to be there in the first place to be able to make that kind of frame shift, let's say. Um, so what needs to [00:27:00] be there as a fundamental, uh, cornerstone for this to even happen? Because you've mentioned that we need a different kind of metaphysics, uh, that there is already a certain hegemonic metaphysics in place that you're trying to, um, uh, fight against in a way. So, I'm wondering first, um, if there's a way to articulate the problem area, to, uh, more clearly say what you're, uh, what you're up against in this fight, let's say. Um, what would be these things that you're, uh, going against, if you can list them, even, um, and pick up from there?

Yogi: I think, so one problem of science today is that we are after prediction and control exclusively. That's [00:28:00] something that comes out of scientific insight, which is nice, but I think the main point of basic science should be insight itself, and we have to refocus ourselves, uh, towards that, um, explicitly as an aim.

This is what basic science, government-paid basic research, is supposed to do, because it's super important in a time where humanity is really losing its grip on reality. So we are in a time of history where we are collectively going insane, in the sense that we are losing our touch with reality at the moment. So there should be a dedicated part of society that keeps us in touch with reality, and that is what basic science is supposed to do. Now, it cannot do this in such an ultra-competitive environment. I've, we've written a blog post about the cult of productivity and how it destroys science. It's not wokeness, it's capitalist market thinking applied to a basic science institution that destroys it, mainly. And also, um, we [00:29:00] have, uh, the problem that we're so result-oriented.

What is the point of the outcome of science, when scientific research is all about the process of investigating the world and reporting that process and teaching it to other investigators? It's not a lot of people who need to do this in society, but we have to have some people that do this. So basically, what I'm asking for here, as you see, is just that we rediscover the old academy of the Greeks, with, of course, more diversity today than just these old guys that were discussing at the time.

But they had a protected courtyard to discuss in, to have the leisure that we mentioned before, the freedom to have crazy thoughts. And this is no longer what the academic environment provides. So when I teach courses for young researchers, I give them two evolutionary metaphors. You can either try and adapt to this crazy environment, you know, and it will just burn you up.

Or, and this is the other evolutionary metaphor, you [00:30:00] can build your own niche. And it's more risky, it's more work, but once you've established that niche, you are in a much better position to actually have an impact, to be seen, and to find satisfaction in your own job. So, how we survive in this crazy world is to create our own niche.

This is what I'm trying to do for myself. I'm trying to help young researchers do this. Because if you just go along, we can talk about what happens to those people who are adapted to the system. They learn to game the system. And I have several examples of that. Um, and then we can only understand their research output as a sort of, uh, an expression of that gaming of the system, of increasing amounts of self-promotion and also, uh, yes, we have to call it bullshit, that's being produced to get attention instead of, uh, a leisurely, philosophical, serious activity, deep theorizing instead of shallow theorizing, that gets us closer to the truth.

[00:31:00] And we can discuss that later on, but I mean, there is no such thing as a big truth with a big T. But this idea of perspectivism, which I find very convincing as a, as a philosophy of science for the 21st century, says two things. It says, yes, we always see the world from a particular point of view, but we are in the world.

So this is not some kind of relativism, you know, where everything we come up with is just some sort of power game, a discourse, a social discourse. It is us being in dialogue with the world and learning from our encounters with the world. The reality out there is that which we cannot control in our experience. And that is, you know, it's not under our control, but it's only accessible through our own idiosyncratic, um, concepts.

And, and so this is the kind of philosophy that I envisage: a compromise between a sort of postmodern relativism, where [00:32:00] anything goes, which is not useful, and this sort of old-fashioned idea that there is a scientific method, that we apply it like an algorithm and it gets us to the truth. The old modernist worldview is what gives us the machine view, of course. This is, again, everywhere you look: this machine view is at the bottom of what we're doing and is in the way of our own progress and sustainable survival into the future.

Fotis: So, um, just to go back to a previous point.

Um, I mean, there are, of course, uh, players in the game who will try to game, uh, the system. But, um, wouldn't you say there's some kind of value, uh, to, um, the current zeitgeist? Would there be a certain kind of virtue to a more, uh, product-oriented way of doing, uh, [00:33:00] science, especially given that there are increasing demands, uh, when it comes to, um, scaling up processes of medical equipment production? Uh, we saw that during COVID, for example. But, um, especially given the increasing scale of the world, uh, I'm wondering, I'm trying to see, uh, a dialectical move here.

Um, what would be something that is good, let's say, about the world that you're leaving behind? Um, yeah. So that we can maybe move forward in a more, um, uh, uh, how would I call this? Honorable fashion, like, uh, honoring, uh, the, the past that led then to this point, to even be able to think of alternatives. I'm not sure if it makes sense. Uh, yeah, no, you have to, you have to get to the other side, basically.

Yogi: I think it's very important to acknowledge that the past 400 years since the scientific revolution have been a time of unprecedented and incredibly fast technological progress.

Um, many societal innovations, you know, the democracy we're losing right now. But it has turned on us. So this tale of never-ending progress is coming to an end right now, because we're finally hitting the planetary limits, uh, we're destroying the planet, we're, uh, depleting its resources. So it's time for a rethink of this whole, um, philosophy that worked so far, but is no longer working.

So it's not, the idea is not to completely turn against it, but to absorb the good things about past philosophies, both, um, traditional, modern, [00:35:00] and, and postmodern. And, uh, the philosophy of science that I suggest, I would call it metamodern, um, which is what comes after postmodernism. So basically, the task we have nowadays is to take the criticisms of the last few decades seriously.

Modernism is no longer a good model to run the world. We cannot control and predict the world. There is no single truth. There's no simple algorithmic application of the scientific method. We know that now, but then just deconstructing it doesn't get us anywhere. The idea is to reconstruct after deconstruction, taking everything on board that was good about the previous, um, uh, ideas and episteme, sort of the, the milieu of ideas of a, of a specific time.

Okay. And so if we take those on board and we build on top of that, we get this kind of, uh, more process-oriented, perspectivist philosophy of science that [00:36:00] says there is no, you know, decontextualized universal truth, but there is access that humanity has to a reality that goes beyond us. If you have this sort of attitude, you have another frame shift.

So frame shifts are really important for humans. And the frame shift is towards our limitations. So it's good to look at the progress we've made, and it's good to be optimistic about the future if you don't lose touch with reality. So these ideas that technology is going to save us, that technological progress is going to go on and get us out of this crisis, are completely, they're not just an illusion, they're dangerous ideas, because they're pushing acceleration forward into the abyss, basically. They're really dangerous at this point of human history. So we need to take a step back and rethink these ideas and decelerate. Now, this is not possible. Again, we've created oligarchies in our political systems, in our scientific system, and such oligarchies are really hard to, um, [00:37:00] fight or change.

So I no longer believe that we can fight this system. It's, it's a juggernaut. It's, it's a huge, uh, thing way beyond us, but it will, it will crash. So the idea here is that once the crisis comes and really hits, it should have hit everyone already, but, you know, it still hasn't sunk in, but once it sinks in, we should be ready with these new ideas and build something that's better, more sustainable, and, uh, has a better grip.

This is important: has a better grip on reality. We need a better grip on reality again. And it's astonishing to me how not only politicians seem to think that they can just claim stuff and believe it's true, but this is now even happening within science, and, uh, it has always happened, uh, in certain branches of philosophy, unfortunately, but it is also more pronounced right now.

So this is a tendency where we deliberately ignore reality because we do not want to face the problems that we have right now. [00:38:00]

Fotis: Okay, so I see a certain kind of, um, meta-narrative, uh, uh, here, um, of, of a big historical, uh, scale: that we, civilizationally, uh, we've been through the modernist, um, era of Enlightenment values that have, uh, influenced everything.

And what I see you associating this with is also these, um, philosophies of, uh, mechanical materialism, uh, coming from Descartes, uh, on, um, uh, natural philosophy, and, um, basically a way of viewing systems. Um, uh, but I'm wondering more what this shift towards [00:39:00] the postmodern actually means in this case. Like, uh, if, um, mechanistic materialism, industrial processes in society, if, uh, this kind of techno-optimism, or like a heightened, uh, almost hubristic kind of sense that we can reconfigure all processes and conform nature to our own will, um, if this is the mode, if this is the code of a thing that has influenced both society and science, then I would be much more interested in actually understanding how this shift happens in science. What would the postmodern code be in that sense? Yeah. Because it's not clear to me; it is based on a kind of criticism, on a kind of pessimism. But is there something that you can point to, maybe even in science, like a kind of, uh, a postmodern shift, uh, a framework, uh, within science? Would that be [00:40:00] complexity theory, for example?

Um, what would it be?

Yogi: So, complexity theory is an interesting, um, example. It is mainly practiced in a very traditional reductionist sense, which is ironic, but people don't realize that. So, um, take the Santa Fe Institute, which used to pioneer complexity theory in the past. They're completely reactionary right now, so they've subscribed 100 percent to a purely computationalist view of the universe.

This is the latest instantiation of the machine view of the world. So, of a mechanistic view, where we imagine that all of physics is computation. And they've, uh, adopted this map of the territory as the territory, okay? So, this is a given, suddenly. So, it's a very recent idea, it's not older than about three or four decades, that all of physics is sort of computation, right?

And so, this is an, like, absolutely [00:41:00] reductionist view because it reduces, it doesn't reduce, uh, everything to a single molecule or, you know, a molecular mechanism, but it reduces everything to a single kind of explanation, computation. You know, we see this all over the place, and this is another kind of reductionism.

Um, so this is very dangerous, because there's nothing, um, uh, new about this kind of complexity theory. It's just in the old tradition of the scientific revolution, of viewing the world as a machine, as, as something to be controlled and predicted. Um, the basic philosophical insight from complexity theory is the, is the simple insight that every intervention in a complex system will have unexpected and, most of the time, um, unfavorable side effects, okay?

So if we had taken that philosophically on board, we would have the precautionary principle applied much more to our society. We would move much more slowly. We wouldn't move fast and break things. This is a completely, uh, you know, an old mechanistic view of the world, and a stupid idea in a complex world.

It's really stupid and dangerous. I can only repeat that. So the postmodern criticism of this, uh, would be, uh, this interpretation of, you know, taking the side effects seriously. Okay, saying, okay, we can't do anything, because everything will have side effects. Now the, the metamodern second step would then be to say, okay, so how do we behave in a world that, that works like that?

One of the insights is: if we make interventions in systems that are potentially disastrous, existential, we have to proceed according to the precautionary principle. If we release GMO organisms into the wild, we have no idea what that's going to do; we shouldn't do that. If we want to do geoengineering for the climate, that's going to be terrible in ways that we cannot predict.

And we know that it's going to be terrible in ways we cannot [00:43:00] predict. That's the funny thing. So, but taking this on board, the metamodern, um, step is then to rebuild the science that can deal with this sort of environment, which can say, okay, we are not in control. We need to proceed carefully. We need to participate in the evolution of our surroundings and not outrun them, you know?

Mm hmm. We cannot move fast and break things. We need to go slowly and carefully forward if we want to have a sustainable future. So this would be a sort of a succession of steps. Um, but basically, I mean, computationalism is maybe a nice topic that we can discuss a bit, because it's not just the latest instantiation of, uh, the mechanistic view of the world, it's also its ultimate expression. Um, if everything is computable, so if you have, uh, uh, the Laplace demon, you know, this being that can measure the state of the whole universe at this point in time, and then, [00:44:00] um, you know, reconstruct the past and predict the future perfectly.

That is possible in a computational universe, because the demon is outside the universe looking in. The demon has unlimited computational power, okay? So in a universe that's completely computational, we are back in the clockwork view of the universe. There's no freedom; there is determinism. The only thing that saves us is that, as limited little beings in that universe, we can't see what's happening in the future. We're limited enough to have the illusion of free will.

We're limited enough to have the illusion of free will This is what people how people interpret that we have the illusion of an open ended future. But in reality, everything is determined. And if we had something like the Laplacian demon, it could still predict and control the whole world. Now, this gives us a sort of a kind of a hubris, right?

So we then get ideas of being able to manipulate complex systems. So [00:45:00] including living systems, but also suggestions of engineering the weather, of engineering the climate, whatever. These are incredibly foolish ideas because we know that when we do that, there will be unintended consequences. But obviously we can't tell what those consequences are going to be.

They're almost surely going to be bad. So when are we going to take this message on board that we have to behave? We have to have a completely different attitude towards nature. We have to behave in a different way. Um, in order to get a different kind of science. So first we have to change this kind of attitude and then we wouldn't glorify people who promise us that we can, uh, you know, manipulate the weather or whatever, um, and, and engineer everything, I think is what Mike Levin calls it.

These are incredibly, um, foolish and dangerous ideas.

Fotis: So, [00:46:00] the way I take this, um, I can definitely see that, uh, the afterthought, let's say, of, uh, unintended consequences, maybe unattended consequences most of the time, uh, would be a more appropriate term, um, is tied to a certain kind of impatience, uh, or a certain kind of, um, uh, non-acknowledgement of some limitations, uh, that you've mentioned, that we're hitting, uh, against some barriers, uh, when it comes to growth, when it comes to economic growth, when it comes to scaling of production, when it comes to, um, all kinds of, like, viewing systems as these purely open-ended things that don't have any closed loops. Um, [00:47:00] um, so there is this, I can see this aspect of, uh, like an inability to see the wall and trying to pass through it. Um, but also, in a way, uh, wouldn't you say that this whole process of hubris, of Promethean hubris, has also been incredibly constructive? Um, and part of the set of, uh, unattended consequences are also things that are not expected to be good, but end up being so.

Would you have an example of that ready, of a certain kind of process in the end that actually, uh, has led to some goods in the world? Uh, because we already know of the ones that are incredibly bad. We have all been through these years of, like, trying to regulate nuclear, uh, um, uh, apocalypse through various means.

Uh, um, but wouldn't you say, though, that as part of that, there are things that are, might be good, and maybe we have to distinguish them, uh, from the things where we might anticipate some consequences that are, uh, purely catastrophic? Um...

Yogi: So, at this point in time, there's a book by Rupert Read that I recommend, it's called This Civilization Is Finished, and the title says it all.

There is no way that I see that this civilization, this instance of our civilization, is going to survive the next few, maybe the next century, let's say, to not put too fine a point on it. And the reason for that is that the balance has shifted. So take, as a good example, take AI. You know, AI could be an incredibly useful technology for diagnosis in medicine, for, uh, managing complex systems to a certain extent, right?

To better manage the use of [00:49:00] resources, supply chains, things like that. But again, there lurks this idea, for example, that we could now have an AI controlling our economies. That is misunderstanding the nature of an economy. An economy is fundamentally unpredictable because it's full of agents, right? And these agents behave in an unexpected way.

And so, the problem, for example, is when you predict the behavior of the economy, the financial markets, whatever, of course, they immediately change behavior, because they now have new information about the future. So the idea of controlling a state economy by AI, yes, it could be useful if you do specific, um, optimizations where optimization is useful, but this idea of total control is, is completely, uh, illusory.

And now let's look at the application of AI, and this is so typical. So we have lots of really interesting applications that we could use. We could understand whale song and talk to the whales. That would be fantastic. But [00:50:00] there are two things to say here. First of all, uh, the business model behind current AI is so incredibly stupid and dangerous that it is gonna have such a massively destructive effect on our societies that this is gonna outweigh the positive effects a thousandfold at least, you know?

So look, we are creating AI not, not to do good things, not to increase, to augment our own intelligence. It should be IA, intelligence augmentation, of human intelligence. And we can come back to why. These machines that we create, they're not agents, they have no agency, they have no volition, they have no sentiment, they're not sentient, they don't think, they don't understand anything.

They're just algorithms, sophisticated algorithms that help us detect high level correlations and make predictions about certain things that we're very bad at predicting as human beings. So there is a useful, um, use case for this technology, but first of all, [00:51:00] the kind of business it's supposed to be right now is completely overblown.

I mean, there's no, there's no application that I see that is useful enough to justify the amount of money that's being poured in. Its energy use, uh, which, uh, the Trump administration just banned any research on, uh, and any mention of, is horrendous. And it's going to deplete water and energy resources all over the world, for what? For making fake movies of fake people, producing fake news, confusing ourselves.

For making fake movies of fake people, producing fake news, confusing ourselves. and leading to widespread social disruption that there is this really stupid idea around that's the creative disruption, right? Schumpeter's idea that we need innovation, we need to disrupt. The kind of disruption we're creating, um, in this capitalist society that we have right now, this absolutely overly, uh, applied, You know, when, when market thinking is applied to, to, to areas of society where it really shouldn't like education, like research, like health, then you get [00:52:00] AI just as a burning accelerator of that whole process.

So basically, it's going to accelerate the downfall of this civilization massively, for many, many reasons. And I see absolutely no case to be made that this is going to have a net positive effect. And the same, um, absolutely, look, we've had, um, nuclear power and nuclear weapons for almost 100 years now. And the three major powers that have, uh, nuclear weapons at this point in history are either ruled by complete idiots or by fascist governments, okay?

So how did we get ourselves into this, uh, situation? Think about, uh, genome engineering, CRISPR-Cas9, technologies like that. Super dangerous, absolutely, horrifically dangerous in the hands of a species that is, and here's the keyword, what you asked me before, what's wrong with us: immature. We're immature.

We're not ready to wield this kind of power. It's absolutely [00:53:00] horrific to see what kind of, um, breakdowns we have in our democratic societies, what kind of people, um, this, uh, absolutely narcissistic, horrible society we've set up over the last 40 years is putting in charge, even in science, everywhere.

And, and this is sort of the perfect storm that we're heading into. And it all comes down, if you think about it again, it comes down to the fact that we think we can control the world, that the world is a machine that we have to manipulate. Everything is connected to that idea. We need to get over that idea, and we need to wait for the crash.

This society is not stoppable. I don't think it's realistic to think that we're going to do anything about the coming crisis before it's too late. But the absolutely horrific and global breakdown that is going to come is unprecedented. It will make World War Two look like a little picnic. Once that has happened, we need to rebuild. We need to prepare for that. So it is [00:54:00] not a time to be optimistic in the sense of thinking that we can get out of that.

We need to prepare for that. So it is [00:54:00] not a time to be optimistic in the sense of thinking that we can get out of that. This is the most realistic path forward. We have to prepare for that. We have to get through that. And then we have to rebuild. From the experience we've made through that. And we have to rebuild a wiser society that is definitely not based on this universal cult of productivity this amazingly stupid rush to go somewhere when we have no idea where we're going.

To rush blindly through the woods and think we're not going to bump into a tree.

Fotis: Yeah, it sounds like, um, there needs to be some prepping that is not just, uh, building, like, fallout bunkers with a bunch of canned food in the basement.

Yogi: The libertarians are, you know, the first thing you should know about the apocalypse is the libertarians are the first ones to go, because they have no friends. They're not gonna survive very long, so you should make friends instead of bunkering up and buying guns.

Fotis: Um, well, yeah, I'd say there's a lot of solidarity [00:55:00] in the libertarian community. That's how it, uh, feeds on itself, uh, anyway, but let's not get into that. Uh, I'm very interested in what you said about, um, the, the, the centrality of, and, um, treating, um, basically artificial intelligence, or any kind of, uh, system that mimics, basically, uh, cognitive processes, as something that is prosthetic, in a way, as an augmentation. Uh, because certainly, from what I've observed the past few years, there's this discourse of treating, um, a computer, a device even, uh, or be it the robot, be it any kind of, um, artificial gizmo, as this individual, as this [00:56:00] self-powered kind of thing, uh, and we use these notions of agency, we ascribe it, um, to the conversational, um, LLM interfaces that we interact with.

Um, so it seems like, um, treating it as an augmentation really shifts the frame that it is in. It is not something different, it is not something alien. It's actually part of us, part of my own interaction, my everyday interaction, uh, with information systems. Um, uh, so I'm wondering about how we can more clearly demarcate, uh, agency, what is an agent and what is not, and what is really kind of more of a [00:57:00] prosthesis, rather than something that is, uh, kind of, um, going against us, or trying to, uh, uh, eat our pie.

Uh, and, um, go off and become its own, um, Terminator kind of entity, which is also part of the imaginary, uh, when it comes to artificial intelligence. So I'm wondering about how we can sift this imaginary through prosthesis, uh, and, uh, what this means about our own interaction with the world, about us as cognitive, embodied agents, our capacities as living bodies, uh, if these things are prostheses to us. The idea that this provokes is a kind of cyborgism. So first of all, I think, like, uh, what is the ultimate basis of agency? Uh, and then how is agency [00:58:00] reconfigured in the current environment that we are in now? I guess that would be my question.

Yogi: So there are two very simple things that you need to understand here. One is that you're mistaking the map for the territory if you think that agency in living beings has something to do with computation, because you think the world is all about computation. Okay, this is the first mistake.

And the second is, you have to realize, in a positive way, that true agency, what living beings have, what a bacterium has, what a, um, human being has, so basic agency, that, that's, uh, sort of common to all living things, is not based on computation, cannot happen in a simulated environment, but requires self-manufacture.

Okay. So the trick is very simple. Living beings are distinguished from non-living beings by this capacity, it's called [00:59:00] autopoiesis, to make themselves. And that includes two steps: you are able to produce all the parts you need for your continued existence, and you are able to assemble them in a way that allows you to continue existing.

Okay. So you've evolved somehow to do this, but a computer is not like that, okay? So this is maybe best explained with the difference between a feedback system, a cybernetic system, and an autopoietic system, okay? So feedback happens between processes. So you consist of a bunch of processes, let's say, okay, as an organism.

You're just a bundle of processes, and they interact. So there are two ways in which processes can interact. One of them is feedback. There are two existing processes that are independent of each other, but they feed back on each other and alter each other's behavior. This is how a thermostat works. This is how a heat seeking missile works, [01:00:00] uh, finds its target, and so on and so forth.

So this is a cybernetic system, yeah? And long ago, uh, 50 years ago, people believed that this is the key, um, to, to explain living systems. Some people, you know, like the free energy principle and other stuff like that, still believe that, and that's one of their basic mistakes, okay? So, there are independent processes that feed back on each other; they alter each other's behavior.

But what happens in a living organism is that these processes build on each other, okay? So they literally don't exist if the other processes around them don't exist either. They have to exist together, to co-construct each other. And that gives you behavior that's fundamentally unpredictable, and not computable in the sense that it's not predictable.
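(The cybernetic half of this distinction is easy to write down, which is exactly the point being made. A minimal thermostat sketch in Python, with purely illustrative numbers: two pre-existing processes, a room and a controller, coupled by feedback. Note that the autopoietic case has no analogous program, because there the coupled processes would have to produce each other's existence, not merely adjust each other's behavior.)

```python
# A thermostat: two independent processes (room, controller) that feed back
# on each other's behavior. Neither process manufactures the other; both
# exist whether or not the loop is closed. That is feedback, not autopoiesis.

SETPOINT = 21.0        # target temperature in C
room_temp = 15.0       # state of process 1: the room
heater_on = False      # state of process 2: the controller

for minute in range(60):
    # the controller reads the room and alters its own behavior...
    heater_on = room_temp < SETPOINT
    # ...and the room is altered by the controller (heating) plus heat leakage
    room_temp += (0.5 if heater_on else 0.0) - 0.1 * (room_temp - 10.0) / 10.0

print(f"after an hour: {room_temp:.1f} C")  # hovers near the setpoint
```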

Um, but in the book, I argue one step further. You cannot even capture this kind of behavior 100 percent with a formal model. Whatever model you make of an organism, eventually, maybe very rarely in the case of the bacterium, every few million years [01:01:00] only, a bacterium does something unexpected. Okay. It must have done that.

It sounds absurd that a bacterium would do something unexpected. Most of its behavior is programmed into it by, uh, genetics and adaptive evolution. But a bacterium-like cell must have done something, uh, unexpected at some point. Otherwise, we wouldn't exist. We came out of that, um, through evolution. Okay.

So how can we emerge from a single cell like that? Only if that single cell can sometimes, maybe rarely, but sometimes, do something that is really not in the space of possibilities a moment before. Okay, so this is a completely different kind of emergence than the kind of emergence you see in computational systems, you know, the Game of Life.

Yeah. Yeah, or computer systems. Emergence there just means something you have to calculate step by step, but this kind of emergence happens within the rules of the computer system, okay? So this idea that a modern large language model AI would suddenly create [01:02:00] some sentience.

It plays within the rules. It's algorithmic in the end. And even if the rules are defined very widely and very indirectly, it still follows those rules. It can't get out of its hardware architecture, of its programming environment. It is bounded by these things. Organisms are not like that. They build, they're basically like computers that build themselves, their hardware, their programming environment as they go along.

There is no such thing as a separation between hardware and software, or between a programming environment and the program that's written in it. These are all completely integrated. And this is what we mean by embodiment, okay? So living beings are embodied very differently. This happens according to the laws of thermodynamics, far-from-equilibrium thermodynamics, in the real world. And if you're not constructing yourself like this, then you are not an agent, you are [01:03:00] not thinking, you don't have cognition, you have no consciousness. And, and this is the claim, a lot of people always ask me, how can you say that? You will never, ever, ever get sentience, thinking, understanding in systems like that. If you are in an algorithmic framework, like all of AI is right now, these will never, ever, ever become artificial general intelligence.

Such a machine will never, ever, ever want something. To want something, you have to be precarious, you have to be suffering, you have to die, you know? So this is just not within the bounds of the design of our computer systems. So what we call AI agents are not agents at all.

Yogi: So what is happening right now is that we have a design of AI and a business model that wants to make AI more and more lifelike, because they want AGI; they want to pretend that they have created [01:04:00] AGI. They need that for the money, because otherwise there is no application for the bullshit they're trying to sell us. So basically, we are all distracted by this, and we don't see the dangers that come out of the business model of the whole thing, where we give the agency that we genuinely have away to machines that have none.

Yogi: We produce art, we produce movies, whatever, with AI now. We become completely passive, brain-dead consumers, and we do that of our own free will.

Yogi: So this is what we need to realize, and you get a completely different ethic about technology in general from this view. If you realize this, then Mike Levin's idea of hybrid agencies is completely nonsensical, okay? There is no such thing. There is agency in living beings, and there is behavior that's programmed but sort of complicated, so we think it's agency. I call it [01:05:00] algorithmic mimicry. So AI should be called algorithmic mimicry in machines. And then we convince ourselves, even engineers working in this field convince themselves, that there is a there there, that there is a person, there is something, there is sentience, because it's very convincing.

It's all imitation. That's why Turing called his test the imitation game. The test doesn't show you anything about the machine's inner life; it just shows you that the algorithm is good at fooling you. It doesn't show you that the algorithm has developed any sort of intentions or consciousness or anything like that.

So if we understand these very basic points, and they're not hard to understand, we just have to look at the design parameters of our technology to realize what its limitations are. And we can say with confidence: this is not going to happen. This is not the same thing as a living being.

But for that, you have to understand how important self-manufacture and embodiment are. There are good theories out there that [01:06:00] describe self-manufacture; I recommend Robert Rosen and Jannie Hofmeyr's work on that. There is less good work on embodiment. I'm trying to write a paper with Jannie Hofmeyr about this at the moment.

There is a lot of good work on embodied cognition, but there is no specific, formalized theory of how the particular embodiment of a living cell differs and what that means for agency. There is a lot of nonsense in this field of agency, unfortunately.

And almost all of it comes from the failure to recognize that in order to understand the behavior of a living system, you need to know how it's organized, how it manufactures itself. So if you ignore this, if somebody comes up with a theory of life, and here we come to Mike Levin's theories, maybe, and to the free energy principle, these two completely ignore that there is a difference in the organization of a living system and a non-living system.

They believe that some form of [01:07:00] computation is responsible for agency. Levin is ultra-reductionist: in his experimental work in development, he tries to reduce everything to computation done by electric fields. So the idea there is that if you have something that computes, it's thinking, right?

That's why he believes things like, he has a paper where he claims that sorting algorithms think. He now has a paper where these minds come out of a Platonic realm; I think he's gone a bit overboard with this whole thing. But this is interesting. His justification for not accepting that organisms are organized in a different way than non-living systems is, he says, this is an assumption I don't need.

Okay, so that's fine: you don't need that assumption. Then, okay.

Fotis: Is it explicit? Is it explicitly stated?

Yogi: Yeah, so he says that's an assumption I wouldn't make, because it's metaphysical; I'm an empiricist, so I'm not making this assumption. [01:08:00] Okay, granted, you do that, and then you take that forward.

And you get completely absurd conclusions from it. Then you should be rethinking your assumptions. You shouldn't be going out there self-promoting, saying, oh look, I found that sorting algorithms can think. That is absurd. If you get an absurd conclusion like that, you should be going back, taking the time, and rethinking your basic assumptions. But they're not doing that.

The same happens with the free energy principle. They have now largely abandoned the claim that it explains life. It's always a shifting target, right? First they claimed it's a theory of cognition, then a theory of life, and then they were debunked on that. Now, I think, they're trying to solve physics with it.

That's always another sign of something dodgy going on. So basically, the claim there was that all of life is about inference, and inference is just computation, okay? It's the same thing. So again, [01:09:00] basic mistake: map and territory. Karl Friston gets very upset when he's criticized for mistaking the map for the territory.

Oh, it's just a model, it's just a tool. Okay, fine. But it hits a sore spot, because he does say that it's just a tool, yet at the same time he makes claims in the written literature that it somehow explains what makes life different from non-life, okay? And when you go there, you have a problem, because you've basically misunderstood what the difference between non-life and life is, which is the self-manufacturing organization.

You don't have that in your theory, and the secret of life is not in computation or inference about the environment. These are basic mistakes that make these approaches fundamentally limited. Now, there's a difference between Friston and Levin. Levin is a chameleon, okay? You know Groucho Marx: I have principles, but if you [01:10:00] don't like them, I have others.

So if you go and criticize him, he will just change his stance. This is very pervasive. Friston, on the other hand, is a different kind of modernist: formal thinking, modernism in its heyday, you know. Basically, he has a hammer, his model, and so everything looks like a nail.

So the free energy principle ends up modeling everything, you know. It fits everything, so it's not very useful. It would only be useful if it told us something interesting about all the things it models. I've read a lot of that literature, which is very frustrating, and I haven't found that yet.

So basically, these are massively overhyped theories of the living realm that make a basic mistake: they mistake the map for the territory, both of them, and they don't realize that the key to understanding life is how it is organized. It's very simple; it's not rocket science. [01:11:00] And that organization has massively deep consequences, not just for our understanding of the world, but for how we treat ourselves, our technology, and our interactions with technology. So then you get a cyborg of a very different kind. You don't get a cyborg where agency lies in the machine rather than in you; the cyborg enhances your abilities, which is the original idea of a cyborg. And then you can do something like the Ship of Theseus: you can say, oh, what if you replace everything with mechanical parts?

But it doesn't work, because I think the substrate of agency, cognition, and consciousness is absolutely not unimportant, okay? I believe that you can get agency, cognition, and consciousness on different substrates, but you need very peculiar substrates to get them: substrates that allow you to have the kind of [01:12:00] organization that living systems have. And our current architecture of computation, the von Neumann architecture: everything, all the AI we have, except for some neuromorphic chips, runs on such an architecture, and the neuromorphic chips are really hard to get to do something reproducible. So all of that computation happens on such an architecture.

Fotis: And the neuromorphic chips? What would be an example?

Yogi: They have real-valued parameters. And basically, if your calculation depends on the precise value of such a parameter, it's not reproducible, ever; you can't do it twice. So I think this architecture of maximally separating software from hardware exists because you want the universal machine.

If you buy a computer, if you buy a phone, you want all kinds of apps to run on it, not just one app. So this design is in absolute opposition to the architecture, the organization, of living beings. And it doesn't [01:13:00] allow agency, cognition, or consciousness. Sorry, that was a long rant.
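[Editor's sketch of the reproducibility point, under loud assumptions: if a computation depends on the precise value of a real-valued parameter, an imperceptible perturbation, standing in here for analog noise in a neuromorphic device, sends the result somewhere completely different. The chaotic logistic map below is a toy, not a model of any actual chip.]

```python
# Toy illustration of sensitivity to real-valued parameters: two runs
# of the chaotic logistic map whose parameter r differs by 1e-12 (a
# stand-in for analog noise) end up far apart, so the "same" analog
# computation is effectively never reproducible.

def logistic_trajectory(r, x0=0.5, steps=60):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)   # logistic map update
    return x

if __name__ == "__main__":
    r = 3.99                    # parameter in the chaotic regime
    noise = 1e-12               # imperceptible perturbation of r
    a = logistic_trajectory(r)
    b = logistic_trajectory(r + noise)
    print(f"run A: {a:.6f}")
    print(f"run B: {b:.6f}")
    print(f"difference after 60 steps: {abs(a - b):.6f}")
```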

Fotis: That's interesting, because it puts out the worldview that is the antithesis, I would say. And to that I have a few questions, in fact a few clarifications, and maybe I'll play the devil's advocate here. Isn't there something to this attitude of: let's not make metaphysical assumptions about the system; let's look at the system and its behavior.

Let's see what it does, what's important, and not ask, oh, what is cognition, what is intelligence, how many higher orders of emergence do you need for that. Let's just see them in the system. This, which we could say is in itself a kind of metaphysical view, shows up especially in [01:14:00] modern systems theory.

There have been a few turns, three or four now, but basically there is this attitude of seeing the functional roles of systems rather than demarcating them based on their substrate. You see the functional role first, rather than the possibilities, conditions, and constraints of the organization of the substrate itself.

So you can look at a bacterium, or at your fridge, or at the traffic in the streets, and say: we're not going to make any distinction between these systems; let's treat them all the same and see what properties come out. It seems like both Levin and Friston, and especially Friston, have this as an axiom.

But then again, [01:15:00] aren't there theories like Pamela Lyon's or Fred Keijzer's that say: let's not make this move; let's just see what happens in minimal organisms; let's not make any assumptions about where cognition arises, and just look at the organization. I think these intellectuals made a similar kind of move in trying to see how processes of cognition and life are more integrated.

Maybe there's something that relates even to non-living systems that might need to be integrated into that picture. Maybe we don't have to talk about cognition as coming prepackaged with life; maybe this process is there already. I don't know if it has to be based on inference; maybe it has to be based on a certain kind of

preference, [01:16:00] the way a bacterium prefers to go up a glucose gradient. Maybe the tornado also prefers to minimize something, to enter a kind of attractor point where it feels better. So I'm wondering whether this is a strategy we have to go against altogether, throwing the baby out with the bathwater, or whether it is useful.
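[Editor's sketch: a toy Python rendering of the "preference" framing Fotis gestures at, with a simulated bacterium climbing a glucose gradient. Whether such a loop counts as genuine preference or merely redescribes dynamics is exactly what the two worldviews dispute; the field, names, and step rule are all illustrative assumptions.]

```python
# Caricature of bacterial chemotaxis as "preference": the agent samples
# the glucose field around itself and steps toward higher concentration.
# Whether this loop constitutes genuine preference is what is disputed.

def glucose(x, y):
    """Toy concentration field peaking at (5, 5)."""
    return -((x - 5.0) ** 2 + (y - 5.0) ** 2)

def climb(x, y, step=0.1, iterations=200):
    for _ in range(iterations):
        # Sample the four neighboring points and move to the best one.
        candidates = [(x + step, y), (x - step, y),
                      (x, y + step), (x, y - step)]
        x, y = max(candidates, key=lambda p: glucose(*p))
    return x, y

if __name__ == "__main__":
    print(climb(0.0, 0.0))   # drifts toward the glucose peak at (5, 5)
```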

And also, I think I would disagree with calling it modernist; I'd say it's very metamodernist to make this flattening. This is also what Latour does in actor-network theory, isn't it? We have agents, devices, [01:17:00] people in the network, and they're all agents. We are flattening their agency and seeing them all together.

And I would say this is a very postmodernist framework. So it's...

Yogi: Postmodern, it's postmodern, exactly. It also shows up in the right-wing Silicon Valley AI tech community now. Because their algorithms behave like humans when they speak, they claim that our brain is the same; there was even a book called The Mind Is Flat, or something like that. So the logical next move is to trivialize human experience and agency.

The brain is flat or something like that. So this is logical next move is to, to trivialize human, uh, experience and agency. There's a nice book by Evan Thompson, Adam Frank, and Marcelo Gleiser about that called The Blind Spot. Um, so it's trivializing experience, the experience that we have, and the experience that is the ultimate.

So here's the thing. People accuse me of building my own metaphysics that I like, and that is exactly what I'm doing. Here it is, on the table. Yes, I've built myself a metaphysics that I [01:18:00] like. Why would I build a metaphysics that I don't like? The counter-claim is that the machine view of the world, mechanicism, computationalism, has fewer metaphysical assumptions, or that its metaphysical assumptions are somehow scientifically supported by evidence.

None of this is true; that is what my book Beyond the Age of Machines is about. The metaphysical baggage of this machine worldview is heavy, and it's influenced by completely random historical events, like the invention of big mechanical clocks. That was the high tech of the day, and Descartes literally thought the universe had to work like the best human technology of the time, because the universe was as good as it gets.

And the same happens again with this completely absurd idea that we live in a simulation. It's just: oh look, we have pretty convincing computer simulations, they look good, so maybe we live in a simulation. Okay, about Nick Bostrom, I will use this word: this is [01:19:00] completely stupid. Bostrom is not a proper philosopher.

Everything he's ever said is completely idiotic, and he's hugely popular, because people follow wishful thinking. They want this because it makes the world cozy for them. These are people who want to control and predict the world. It's almost religious. When I discuss with these people and upset them, and I do upset them, they respond with religious fervor, because I am actually going against their metaphysical, almost religious, need to believe that the world is like that.

Otherwise, the world is scary: open, not controllable, and you have to behave in a completely different way, which is what we have to do. So I am not even countering it: if somebody accuses me of building my own metaphysics, yeah, that's exactly the point of my project. I built myself a metaphysics that's consistent with everything that science knows; this approach is called naturalistic, as I told you at the very beginning. And my view, [01:20:00] as I lay out in the book, is more consistent than the machine view. It allows us freedom, free will. It allows us emotions, agency. It allows us to be better than our technology, more special.

And it allows us to interact with that technology in a much more ethical, much more sustainable way. So what's not to like about that? I don't understand. And then somebody comes along: yeah, but this is all just wishful thinking, the world is unfeeling. No. The world that you live in is full of values because of your interaction with the world, because of you, okay?

And because of the way we've organized society, and because of the way we interact with technology, these values all matter. But what is important and what is not, what is relevant, we haven't actually mentioned this word yet: relevance realization. That is the central point. You realize what's relevant for you, and a machine cannot do that.

Something that only does computation, or only inference, cannot do that. [01:21:00] Not in principle, not ever. You can only realize relevance when you are a self-manufacturing organism, because if you don't build yourself all the time, if you don't have to invest work to build yourself, nothing is relevant or irrelevant to you.

Yogi: So, does your ChatGPT get bored when you don't ask it any questions? Just ask yourself this. That's one thing. The other thing is: if you simulate a bike, can you ride it to work? No. It's not that hard to understand. The world is not computation, not at all. The world is based on our experience of it.

Yogi: Everything we know about the world comes from our experience. This is the important piece of metaphysics that we have to remember, and this is why I keep repeating it. To think that everything we know about the world is based on the laws of physics or computation is mistaking the map for the territory. The laws of physics are a model of the [01:22:00] world.

Computation is a model not even of the world, but of a particular human activity: calculating with pen and paper, okay? That's what the theory of computation was about. To believe that it explains the whole world, when it only describes how we explain the world, is the ultimate case of mistaking the map for the territory.

I don't understand why people don't see this. This is why I sometimes get a little grumpy about it online: how can you not see this? It is not difficult to see that the worldview of computationalism, of mechanicism, is historically very peculiar and, in the end, logically inconsistent.

The perspectivism of the kind I'm proposing here is not inconsistent, and it opens things up. So treat computationalism as just another perspective. Is it useful to use computational models to understand the world? Yes, it is, if you treat it as a tool, as a perspective to gain [01:23:00] knowledge.

That's fine. If you treat it as a worldview, it becomes really pernicious. The same goes for reductionism. Reductionism is really useful as a method in science; otherwise we could just go hug a tree. We have to analyze things and reduce them. But if this becomes your worldview, it becomes pernicious, because it is dangerous, hubristic, and foolish. I say it again: it's foolish to think that way. And this is what got us into this spot where humanity is right now, at this cliff edge, just about to go down. It is because of this thinking. So for all the good things that modernity has given us over the last 400 years, it is time to leave it behind and go beyond it, not to go back.

This is the other reaction we see: reactionaries. People now want to just turn back the clock and say, oh, it should be like in the good old times. This is no longer possible. A complex world is not reversible. We cannot turn the clock back. We cannot go back to [01:24:00] nature. We cannot go back to the good old times.

We have to move forward in a positive way, and you can only do that if you have a proper philosophy and attitude towards the world and towards yourself, which is what I'm trying to build here. Then there are the people who come up with these immensely popular stories of how technology will save us, and this is why computationalism is so popular.

It tells us we can control everything and get ourselves out of this situation we've put ourselves in just by developing more technology. This is what people want to hear. It is, and I've written a blog post about this, a recycled Catholic salvation myth: there is a better world waiting for us in the cloud, in the upload, better than this miserable world.

And it is Catholic salvation, okay? And some people...

Fotis: It's soteriology.

Yogi: Joscha Bach even uses the [01:25:00] philosophy of the Catholic Church to talk about it. It's ridiculous. And it's just that: a salvation myth. And it's not a good myth. We need better myths.

Because it is trying to turn the wheel back. It is a traditional myth in a metamodern world; it doesn't work. The complexity of the world is so much bigger now that it won't work. We have to get with the times. This is what I'm trying to do in the book, and this is why I'm so adamant, sometimes passionate, and sometimes really grumpy when I discuss these things online with other people.

Fotis: Yeah, right. That puts it very succinctly, I'd say, and from my experience it also fits the bill. Cards on the table: I used to be a transhumanist, so I get the allure, and I can tell you why. These myths deal with salvation, and [01:26:00] salvation, especially when you face the reality of pain, is something that gives hope.

And I think this is very central. So from my end, I would be a little more compassionate, because I understand the pain that may lead to such a view and such actions. But there seems to be a thread running through all of this, and I want to take that thread and start bringing things together as we get closer to the end of this episode, I guess.

Which could go on forever, because these issues are incredibly interesting. But there is this aspect of concern, this aspect that it's not [01:27:00] merely about computation: it is about relevance. And relevance is tied to a certain kind of normativity that is irreducible. It seems that even the most simple of organisms, of living systems, are normative par excellence, in the sense we've talked about.

They reconstruct themselves and they move towards certain kinds of things; they have some form of preferences, even the simplest of them. And maybe this also ties to the question of love, which is the theme of this podcast, and which is the normative thing par excellence.

There seems to be a capacity for love, a capacity for agents to be attracted to certain kinds of [01:28:00] things, that falls out of the world pictures you've described. So what I'm wondering is, because there are all these other issues too: if this drops out of the picture, maybe we're also trivializing our own selves.

Maybe we don't love ourselves enough as humans, in our very limited capacity, in our very fallible and often dreadful existence that we're trying to escape. So I see a reformed humanism, one that reintegrates an appreciation of our love for being human, even as we try to decenter ourselves.

And the same, I think, goes for living systems in general. So I'm wondering how that resonates with you: where is the place of love, and maybe even, [01:29:00] can we love our enemy a little bit? Could you find some kind of understanding or middle ground with what you're going against, in the end?

Yogi: Ah, okay. So I want to push back a little here, because I think true love is sometimes a kick in the butt. If you like people, you cannot spare them, and if you love them, you have to really give it to them straight, you know? And I think that's an important aspect. Humanity needs some tough love.

Yogi: The aggravating thing about this whole situation is that humanity has so much potential. The most intelligent of us are so amazing, and I have had the privilege of meeting many of them. I come from a family of carpenters in the Swiss mountains; my dad was the first to go to university. These ancestors of mine [01:30:00] never had the opportunity to meet so many excellent human beings as I could meet in my lifetime.

So I'm really grateful for that. At the same time, look at what is happening right now in this society, at the kind of people we've managed to let get to the top of it. Here I have to say something really harsh: we should not reach out. In metamodern circles it's always, oh, you have to have this dialogos with everybody.

No. These are either fundamentally evil people, as I believe some of them are, or fundamentally deluded, and they need to have our love absolutely refused. They need to have that love taken from them, because it is a critical time in human history and we need to use our energies well. I've [01:31:00] been puzzling over the love theme of the podcast, and I take Andrea's interpretation of it as relying very heavily on connection.

For her, it's a sort of connection, right? And this worldview I am proposing is fundamentally based on relations. Gregory Bateson was the most prominent proponent of this: relations come before everything else, before objects, before everything. So the relations that we need to have with ourselves, with nature, are very different.

Love living beings, not just humans. This is the problem I have with humanism. We can't call it transhumanism either, because that's taken. But sort of a better humanism,

Fotis: A better relationship with the living world.

Yogi: Yeah, an appreciation of the living world, which we should take again from [01:32:00] very traditional, old animistic cultures.

We need to regain that; that's very important. Another topic is that we treat relationships, our human relationships and our relationships with nature, as things that we possess. We have to treat them as processes that evolve and develop. This is also a personal relationship style that I practice.

I haven't thought enough about the theme of love in this context, to be honest, as you can tell from my not being very coherent about it. But I see it mostly as the praise of connection: the focus on connection and on the quality of those connections, away from outcomes, away from possession and control, towards participation in a relational process.

And personally, this also has a very nice [01:33:00] consequence: I'm not afraid of dying if I know that I'm going to be part of the world, and that the parts of me are going to contribute in many ways, at different levels, to the world in the future. This allows us to let go. I mean, one of the most striking manifestations of the machine view, and of how bad it is, is this craze for longevity and eternal life. This is literally insane, okay? Wanting to live forever is a form of mental disorder.

Nobody in their right mind wants to live forever. Okay?

It's as simple as that. And if you extend your life, it's never going to be enough; if you live 300 years, your 300 years are going to be too short. So again, underlying the technological craze these people pursue is a completely wrong idea of the world and of one's relationship to oneself. If you had a different relationship, you would realize that this makes no sense at all.

Yogi: And you're [01:34:00] taking yourself way too seriously. So I think these kinds of people do not deserve our love at the moment. Or let's put it the other way: they need tough love. They need a kick in the butt, okay? Because we are all part of humanity. Unfortunately, sometimes I think I really don't want to live on the same planet with these people, but you have to.

So this is another principle of metamodernism: you just have to accept that they're there, and you have to find a system where they are also in it, and happy, and maybe different than they are now. Because one thing I can tell you: we can either have rich oligarchs or we can have a future. To then go and say, oh, we just have to talk to these people and everything will get better, is just as naive as Yuval Harari saying, oh, let's do mindfulness and the future will be bright.

We need to know where we invest our energies. We need to know which relationships to build. And we need to have some tough relationships, [01:35:00] too; it's not all just going to be resolved by being nice to each other.

Fotis: We need to better fine-tune the relevance.

Yogi: Exactly.

Fotis: Yeah. So that was a wonderful conversation, Yogi.

We definitely need a kick in the butt, I guess, some tough love. And the LLM that you're using is not an agent, do not forget. So thank you for a very warm-blooded conversation, and for saying things that needed to be said.

Yogi: Thank you. And yes, it's my privilege to be an agent with emotions.

I am not a machine.

Fotis: We're not machines. There you go. Cool. I'm stopping the [01:36:00] recording.
