Scaling Autonomous Self Actualization
Timo is working on ways of applying artificial intelligence in corporate learning and development, focusing on scaling self-actualization in his Ipseity project. In this conversation, he shares his personal catalyst for change triggered by a rough patch in his life and how discovering Jordan Peterson's lectures on psychology opened new avenues of self-reflection and personal development for him. The conversation delves into Timo's philosophical and psychological insights, his project on developing a chatbot to aid in personal development and self-transformation, and the potential of using technology to navigate complex personal growth. The discussion also touches on contemporary issues in cognitive science, the challenges of interpreting Jordan Peterson within academic circles, and the broader implications of tech-assisted self-help methodologies.
#chatbots #artificialintelligence #jordanpeterson #andreahiott #timoschuler
00:00 Introduction to Love and Philosophy
00:26 Journey to a PhD: From Swiss Post to AI in Learning
01:41 Exploring Self-Actualization and Personal Development
03:33 The Impact of Jordan Peterson on Personal Growth
13:12 Navigating Corporate and Environmental Realities
23:26 Embracing Complexity: From Psychological Entropy to Ipseity
29:06 Exploring Personal Growth and Anxiety
29:29 The Role of Love and Consciousness in Personal Development
30:57 Navigating Life's Complexity with Maps of Meaning
35:01 The Ship of Theseus: Understanding Self Through Change
37:05 Developing the Grateful Chatbot: A Technological Companion for Growth
41:50 The Future of AI in Personal Development and Social Sense-Making
50:24 Reflecting on the Journey and Future Aspirations
Ipseity Project: https://ipseit.li
Timo on LinkedIn: https://LinkedIn.com/in/timoschuler
Grateful Chatbots paper: https://ieeexplore.ieee.org/document/10122089
Jacques Monod: https://en.wikipedia.org/wiki/Jacques_Monod
Ecological Memory: towards assessing intelligence as navigability: https://www.researchgate.net/publication/361114681_Ecological_Memory_towards_assessing_intelligence_as_navigability
Love and Philosophy Beyond Dichotomy started as research conversations across disciplines. There was so much I wanted to explore but I was being told I shouldn't explore beyond certain bounds because it didn't fit into this or that discipline, or because this or that idea was too wild or too uncomfortable or too popular or unpopular, but because I study and work in so many, those barriers just no longer made sense. The same felt true relative to passions and love.
So I decided to open myself to all of it beyond traditional distinctions, towards learning and development, so long as love and health were the intentions behind those doing the work I was exploring. This podcast is where those voices gather together in one space as I try and notice the patterns that connect.
It's part of my life work and research, but it's also something I hope to share with you and to invite you to share your perspective and position. Thank you for being here.
If you want to go deeper and build philosophy together, please sign up for the Substack at https://communityphilosophy.substack.com/
Transcript:
Timo Scaling Self Actualization
[00:00:00]
Andrea Hiott: Hey, Timo.
Timo Schuler: Hey, Andrea.
Andrea Hiott: Welcome to Love and Philosophy. We're just going to have a little chat here about your work and research.
Timo Schuler: Gladly. I'm all ears. I'm here.
Andrea Hiott: Yeah. Where are you?
Timo Schuler: I'm actually based in Switzerland, in Lausanne, the French part of, uh, Switzerland.
Andrea Hiott: You're working on a PhD, right? That's part of what we're
Timo Schuler: I'm working on a PhD. It's been already four years since I embarked on this intellectual journey, uh, with highs and lows, let's say.
Started pretty much randomly. I applied for a position at the Swiss Post, uh, the national post company of Switzerland, and they were interested in hiring me within their corporate learning and development department. And they saw that I was, let's say, intellectually inclined and said, hey, why not start a PhD kind of in synergy [00:01:00] with us?
And I received the challenge of basically seeing what would be ways to apply artificial intelligence within corporate learning and development. So people development, uh, learning, all that huge space. I like to borrow the terms from design science research, where you have a problem space and you have a solution space and you try to bridge the two.
Uh, that's, kind of what was suggested to me quite early in the process of doing the PhD. And within those two big fields, uh, that I arbitrarily tried to categorize within cognitive science, I tried to find my niche and what I was most interested in. What I'm really interested in is how can somebody embark on a personal development journey by themself?
Andrea Hiott: I do want to say there's this notion of, scaling self actualization, which is kind of what got us to decide to just have this little conversation. So. These are really interesting [00:02:00] terms, scaling self actualization.
So I just wanted to mention that before, before you say, but yeah, I would like to hear what happened before you started the PhD? What were you studying? What were you thinking about? Does this come only from a sort of academic or business sense? Is there something personal involved in this too?
Cause you know you're working with technology. You're working with a possible political or business model. You're working with therapeutic, psychological issues. So I wonder, in your own life, what threads brought you into that?
Timo Schuler: Uh, very good question. That's, it's actually a kind of, uh, all those threads that you mentioned, uh, kind of joined together, uh, to give me a sense of purpose.
Uh, it was, well, a few years back, actually just before I was applying for that position at the Swiss Post, where I was kind of at a low point in my life, had recently broken up. And I felt a bit purposeless. And somehow, all the stars [00:03:00] aligned to kind of make that one North Star that I was seeking light up.
And I was trying to pursue that. What's that North Star? What do you mean by that? It's that scaling of, uh, autonomous self-actualization; that's the best I can do at the moment to phrase it. Somehow it goes in that direction that, well, I see the potential to apply technology for one's self-owned, kind of one's own, well-being.
So that's what I was pursuing, because, well, at the time I was really going down the rabbit hole in psychology. Only about a year before that, I had discovered the work of Jordan Peterson, completely by happenstance. I was browsing on 9gag looking at memes, and suddenly this guy with the Kermit the Frog voice appeared, uh, all over the place, with a meme from his [00:04:00] interview with Cathy Newman.
And, uh, from there I discovered all his lectures that were posted on his YouTube channel. And that just opened up a bit the realm of psychology, something I was always, uh, interested in, but kind of from a distance, because my background, both academically and professionally, is more business administration.
I did a bachelor's and a master's degree in business administration. I was working for a few years in a large multinational company based in Switzerland, a kind of worldwide famous company. And I was always approaching stuff more from the economic or business point of view, systemic, more large scale, and something was missing. And that's where I kind of discovered the whole realm of psychology, and that took me down this path.
Andrea Hiott: So it was Jordan Peterson, really, that started it? That was when you first started with all the psychology?
Timo Schuler: Yeah, he's my gateway drug into cognitive sciences.
Andrea Hiott: Yeah, this is really interesting. This is very, [00:05:00] because I mean, when I was studying philosophy, I studied neuroscience and philosophy. And so basically when I was in these cognitive science departments, you couldn't mention the name Jordan Peterson.
Because some people just thought like, uh, it was too, it wasn't real psychology or something. I mean, he's a psychologist, he has papers and, but there was something, I think it was around that same time when you discovered him, there was a crazy hype about him. There was a lot of controversy.
So, yeah, I struggle with that a little bit. You sent me some things from him and I hadn't listened to him. Someone that I was dating actually once gave me one of his books, Maps of Meaning, and that was a long time ago, but I was glad that you sent me these podcasts and that I listened to him.
I listened to the one with him and Karl Friston and Vervaeke, but in any case, it was very interesting, very relevant. But I wonder, like, what was it about him that really drew you in? And just, out of curiosity and out of trying to dispel some of this weird dichotomous stuff where you can't be interested in [00:06:00] someone like Jordan Peterson and also study some kind of very heavy academic philosophy.
I don't like that and I want to get past it. So for me, it's interesting to think what, this actually, this sounds like it really did something very good for you. So I just wonder, do you remember what that was? Was it like waking you up in a way or something to some other ideas or?
Timo Schuler: That's a very good question. Uh, yes. It kind of woke me up. It activated me; it confronted me with loads of stuff that I hadn't really been confronted with, or had kind of willfully ignored in my life. So more personal issues, I would say in kind of relationships, and also the beliefs I had, in the sense that, uh, during my late teens and twenties I was rather, let's say, progressive, liberal in the American sense of the term. I was also liberal in the English sense of the term, classical liberalism. And I was trying to figure out stuff, and [00:07:00] I had quite strong, kind of, I had weak beliefs but strongly held, and he kind of opened that up, he put a lot of doubt in my head. And what brought me to take him seriously was that interview with Cathy Newman, just the way he conducted himself, how well spoken he was, and how he received all the attacks from her, because that was her job, and he really calmly just stayed at the level of reason and logic and citing, and really that intellectual integrity kind of spoke to me. And from that, I discovered all the other material that he had, which was way deeper. And the more I went into it, the more I questioned myself, and it was just more about, like, okay, yeah, let's take personal responsibility.
And it allowed me to reconceive loads of stuff in my life and gave me more, I'd say, purpose, more realism. It was more practical; [00:08:00] it allowed me to see my own situation more pragmatically, like, okay, I have to take ownership of my life, and how to do that. And it's interesting that you mentioned the dichotomy, because, well, since then he's become wildly famous.
He's a household name, and there is this dichotomy about him, the people who hate him and the people who love him. And that's not helpful.
Andrea Hiott: Exactly. That's not helpful.
Timo Schuler: The thing is that we have to put him in context: in which context does he speak about which topic? And based on this, well, you can take stuff from it.
And what kind of baffled me was how simplistic and superficial the conversation about him is, whereas all his material is available on the internet, which is, I'd say, something new, a first in our history, let's say, that so much material of such [00:09:00] quality is freely available at that scale, so that everybody can just go back to whatever the source material is and check for themselves.
But that's very demanding in terms of time, in terms of cognitive resources. So a lot of people just stay at that level of the dichotomy because, well, it's a simplification of your worldviews.
Andrea Hiott: Yeah, this is interesting. I didn't know we were going to get into Jordan Peterson, but actually I think this leads to a lot of the other things, it's kind of almost like when something is too popular or too many people who aren't in, in the know, like studying philosophy, like it, then it must be wrong, which seems almost like the opposite of what the point of psychology and philosophy should be, should probably be access and healing and connection and so forth.
But. There's also there can also be this peril, which I think all these guys have probably felt in a way of once you're so famous and so public, you can start to take yourself a little too seriously, or it's really easy to get lost, and then you have all these [00:10:00] people, listening and looking at you and so on, but what struck me when you were talking, you said something about the Cathy Newman, and I can't let that go, because I don't remember exactly what that was, but I feel like it was something about women, wasn't it?
Timo Schuler: Indeed.
Andrea Hiott: That conversation?
Timo Schuler: It was a whole conversation on a, let's say, I won't say controversial, but very contentious topic, in the sense that it was mostly about the gender pay gap in the UK. And he was interviewed as a social psychologist, in the sense that he was bringing a lot of sociology studies about large-scale effects of personality and how this plays into the wage gap, and so on. And, yeah, that was what it was about, and what I liked went beyond the topic itself,
which actually merits its own deep dive into really trying [00:11:00] to understand what's the reality, what should be said, what should not be said, what's relevant, what's not relevant. It was more the confrontation between two people: she was really in attack mode and trying to corner him, and he was calmly dismantling all her arguments and responding with lots of facts and good arguments and reasoning, trying to put things in perspective.
While staying calm, and also witty, and also a bit humorous.
Andrea Hiott: It was his presence, right? His presence is sort of palpable on film, his calm demeanor, and that he's really taking in what she says and able to respond. And I think, uh, we won't get into this gender thing, because that's just too much. I could start to feel, uh, what is he talking about with women? It really is hard to kind of open the space and let everyone talk.
But I think what you're saying, what I want to go towards now, is that it was a time of your life when you didn't feel like you had much control [00:12:00] and you didn't feel like you were, maybe, being present in that sense, and you saw this person who was. And you wanted to know how he got that, like, what is that?
And how do you get more of it? I feel like a lot of people, especially maybe men, I don't know if that's good to say, but that was a time when I think a lot of people felt very powerless. And also, when we start to think about self-actualization, that is like a demonstration of it. It could be seen as a demonstration of it, that someone is so poised and calm and present, uh, in that way.
Does that resonate at all?
Timo Schuler: Well, it's definitely something that I wanted therefore to emulate, kind of, it was something I was looking up to. I was just discovering him and from that I also discovered kind of a lot of other virtues that I kind of dismissed earlier in my life. Uh, willingly or unconsciously I don't know.
And like, yeah, it's certainly appealed kind of virtue. [00:13:00] Well, more like, let's say, yeah, personal responsibility. I was mostly looking at stuff from a systemic point of view and organizational point of view. I was working within a large multinational, uh, and fast moving goods industry. And a lot of my thinking went into, well, let's say environmental issues.
And I was just kind of trying to reconcile between, okay, what an organization was doing, the production, the logistics, kind of the scale of just like the human activities, what's, what impacts that has on the planet and kind of on ecosystems. Let's not talk about the planet, but and trying to reconcile with also the motivation of people working in those larger companies, because I was within the learning analytics department.
It was mostly about governance, let's say, and there it was about, like, a large-scale population within a company, about 300,000 people worldwide, and trying to do strategic planning. And that's where, like, okay, you try [00:14:00] to go into, okay, we have a position, it's called medical delegate. Why do they have a turnover of, kind of, a two-year tenure and a high turnover rate?
And then you go into those like, it's a bit of a segue, but you go into, okay, the profession of medical delegates, there is a strong regulation in some markets, less than others. And that's where the reality is. Okay. Okay. Those people, they were working for two years within that company. Building their portfolio of kind of network, their portfolio, their skills, their experience.
And then they moved into another industry that was similar, but where it was less regulated or there is more financial incentive. And therefore they moved out of that because of that. So that brought back to the psychological reality. And I was just trying to make sense how the world works and it was way too overwhelming.
And therefore I stayed at the level of, I wouldn't say victimization personally, but more like, oh, everything [00:15:00] is doomed. Um, we're screwing up the planet. We are producing way too much. Everything, there is an inertia, everything is doomed, and I was like, there's nothing that can be done because the powers in play are too strong.
And it removed a sense of agency from my life. And unconsciously, I was therefore suffering from that throughout the years. And suddenly I was at the point of my life where I kind of had to reconceive stuff.
Andrea Hiott: That's really interesting that you bring in the ecological because I think that happens a lot when we get involved in environmental issues and we really start to see what's going on and what it's really like that it can just become, it's easier somehow to just get, be overwhelmed and feel like there's nothing we can do.
And it's interesting for me to think about that kind of a feeling and the environment and the ecological connection. And then also this business corporate environment that you're describing where it's almost like we have to play roles there. You develop a kind of role. And then you play it in that environment.
It's [00:16:00] very different from this authenticity that I feel like you noticed maybe in Peterson or something, where even if you hate him, you feel like it's authentic or something. So I don't know. It's interesting to think about you being in this corporate environment, but also obviously you had a connection to the environment that starts usually with a very personal, emotional kind of connection.
Were you also kind of playing a role in terms of the business side of things? I'm trying to get at the ideas that I think are motivating a lot of your work now, which is this idea of trying to be better, not only as the scale of the individual, but within the societies that we're part of and within the ecologies and the ecosystems that we're part of.
I guess you weren't feeling that connection at the time, or I don't know.
Timo Schuler: I wasn't. Or, I wouldn't say I wasn't; I would say it didn't reconcile with other aspects of my perception, my understanding. Because, just to [00:17:00] bring some nuance and balance to what I just said, at the same time, there are also so many amazingly competent, but mostly also authentic and motivated and good, people working in that large corporate company.
And also just thinking back on what that specific company brought to the world, to different people. It's kind of difficult to reconcile stuff, and especially then, more in my private life, in discussions, suddenly just evoking the name of the company in conversation with people, it was immediately that dichotomy of, like, oh, I love the products, or, oh, they're evil incarnate.
And kind of that's, it was a struggle for me to kind of, to not feel a lot of cognitive dissonance in that sense. And therefore I, at some point I was like seeing that, okay, I was operating in two different realms. And there was no connection in between. And that's where kind of personal [00:18:00] crisis without going into the word trauma led me to kind of better understand myself.
So I did a lot of work on myself with, uh, with therapists and that led to a better understanding of myself. And that opened up just the fact that, okay, I was completely disconnected from my emotions. Let's not exaggerate, but let's go with that metaphor of, I was, I had that dichotomy of at the same time, I'm some, I'm somebody very sensitive, very emotional.
And at the same time, I'm very, let's say, logical and intellectual and kind of, yeah, pragmatic and rational. And there was a dichotomy there. And through the work that was initiated in my personal life and in my intellectual explorations, it slowly built those bridges.
And then throughout the years, I was struggling with just reconciling those things, but it led to a [00:19:00] much more granular worldview. And that's where I spotted all those, it's a bit easy to say that, especially considering the title of your podcast, but those dichotomies, those dualities, that were split, that were isolated, that were not connected.
And that led me to open up the complexity, which actually I'd say I'm still trying to recover from, because at the same time it was burning off a lot of, uh, dead wood, to use a Petersonian metaphor. It was, yeah, completely changing me. And now I'm still trying to build up my new self.
And that led me to those intellectual inquiries, where I'm really fascinated by, uh, the capability of self-transformation. And from that, I [00:20:00] went into the whole ecosystem of those brilliant scientists that are freely available, kind of, that you have a direct connection with through podcasts.
And kind of, that's how I,
Andrea Hiott: What you're saying resonates a lot with me, there's all these different landscapes that we live within or scales at which we live. As you were saying, there's always this kind of dissonance between what the corporation stands for in terms of, even in other people's eyes, what you're trying to do, being an individual or being just associated with that.
I think it's very similar to what we're talking about with Peterson too, where that's probably a lot of the resistance people feel: they don't want to be lumped together with that, uh, energy or that group of people without even understanding what that group of people is. So there's always these kind of strange scales that we're trying to somehow make sense of, put together. I feel like that [00:21:00] we don't realize that it's okay to have many different kinds of landscapes that you're involved in, and to maybe even be a little bit different in each of those landscapes, but to still be able to kind of look at all of those and reconcile them; it can be very hard.
And also, when you go deep into the psychology and you start to have your own feelings and emotions that are very strong, uh, that can be kind of hard too, because when you go back into those other environments, I don't know if you ever feel it, but it can feel like. Oh, how do I be that person in this environment?
I guess you had therapy yourself, and that can kind of help bring all these different worlds and skills back together again. But when you're thinking about designing some technology that can help people do this, do you see that as one of the goals of self-actualization? Is it trying to bring together all these different scales and levels and selves, in a way, even our individual self, our corporate self, our ecological self? Is it a way of helping us understand those as separate, but [00:22:00] also part of maybe one process? Do you think about that much?
Timo Schuler: All the time. You put your finger on exactly the essence of the questions I'm wrestling with. I mean, from what you just said, it's about, well, identity, between, let's say, your personal identity and social identities, because at the time my master's thesis was about the mentor-mentee relationship within a program, let's say a startup context, through the lens of social identity theory. And that, in conjunction with the discovery of Peterson's work and lots of other work and so on, going into the realm of psychometrics, with the Big Five personality traits and all sorts of other psychometric dimensions, tools, measurements, whatever, really helped me to reconceive the world more from a [00:23:00] subjective, in the sense of a human, dimension, rather than from a system or ecological point of view. About the technology: I really see the power of technology to help with that self-transformation.
Um, so I'm going to try to open up some topics a bit, without too much, so that we can still somehow close them together. All this, I would say, comes down to one word that I really tried to build as a brand, which is ipseity. It has Latin roots, and I think it's a concept used mostly in philosophy.
Uh, I cannot claim that I fully understand it yet, but it's basically what I've tried to brand my whole venture on. And I built a website.
Andrea Hiott: So define it for us. From your point of view.
Timo Schuler: There's an official definition, but my point of view is, like, the quality of you [00:24:00] being you. So it's what makes a person unique. And that linked to terms like circumambulation, in the Jungian and also religious sense of circling around the sacred object, the concept of self-actualization, where it's about self-transformation, self-transcendence, the Jungian term of individuation. So it's all the question: okay, what does make me? And asking that question and going a bit more in that direction, you realize, okay, what is me, and how is it not changing constantly? And that's where you open up also the dimension of biology and neuroscience. And you realize that, okay, your cells are completely changing and transforming on a daily basis, or on whatever scale, I don't know the scales, but, yeah, it's constantly changing. Everything is dynamic, right? Exactly. And so that's the question: okay, but still, there is a sense of selfhood. And having [00:25:00] kind of studied that in my master's thesis from a social identity lens, where it's highly dynamic, and trying to just understand that, and at the same time, okay, what's still that I, that sense of myself, observing myself being different at different times, in different spaces.
It's highly complex, and the more you try to understand it, the more you see the complexity, and it's just overwhelming. That's where I also discovered, well, something I discovered when I was 18 through the work of Jacques Monod, one of the founders of modern biology: the concept of entropy. It's not him who coined it, but through him I discovered it, and through the work of Jordan Peterson, uh, a very specific blog post, I discovered also the concept of psychological entropy.
And it was just something that really spoke to me, in the sense that, okay, you can conceive of psychological entropy as a multiplication of possibilities. And since you have a multiplication of possibilities, you don't know [00:26:00] which to choose, and therefore there's no clear path forward, to come back to waymaking. And therefore you are more stuck.
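As a rough editorial illustration of that "multiplication of possibilities" (a gloss, not Timo's own formalism): if the moment of choice is treated as a probability distribution over candidate actions a_1, ..., a_n, the uncertainty can be written in the standard Shannon form,

\[ H(A) = -\sum_{i=1}^{n} p(a_i)\,\log p(a_i), \]

which reaches its maximum, \( \log n \), when all n options look equally plausible. More live possibilities with no clear favorite means higher entropy, which maps onto the "no clear path forward, therefore stuck" experience described here; the 2012 Peterson paper Andrea brings up later treats psychological entropy as uncertainty in roughly this sense.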
And having suffered from crippling anxiety, it was always something that I never could properly define. But suddenly, through loads of stuff happening on the internet, I came across that phrase, crippling anxiety.
Because English is my second language, I had been conceiving of crippling as handicapped, whereas actually crippling just means it blocks you, it freezes you. And just that link between, okay, psychological entropy and, let's say, freezing, crippling anxiety, uncertainty, not knowing where to go: that spoke to me in depth.
Andrea Hiott: Let's stop there for a second. Yeah, please, because this is really fascinating and I don't want to go over all that too fast. What I hear you saying is it's almost like there are so many selves, so many options, so many possibilities; the more you [00:27:00] become aware and the more you learn, it seems to kind of computationally explode, and there is a kind of feeling of entropy in that, there is an entropy, like you feel like you're always trying to minimize, to talk about minimizing surprise or something.
You're always trying to kind of minimize it enough to survive, but also that can become this crippling anxiety: either you don't know which path to take, you don't know what is the through line that is yourself, as you were discussing, or even just, I think also what you're bringing out is, this becoming aware of yourself as a subject, and how that can almost be a crippling anxiety at first, until you learn how to deal with all these selves and possibilities.
It's very delicate, real psychological stuff, but something I really wanted to bring out was what you were talking about: what is it that's the continuity of self? Of course, this is a huge problem in philosophy and psychology and neuroscience. You can look at it so many different ways, but how I've come to think of it lately, and something I'd [00:28:00] like to talk to you about as we get into this idea of entropy and kind of where I see you're going as we talk about what you're doing,
is that it's a pattern, uh, that's similar. Not necessarily that your body's staying the same; as you said, all the cells are changing biologically, everything's changing, but there are kind of patterns that are similar and that overlap, even the me that's in the corporate environment, the me that's with my friends.
There are still always patterns that are similar, even if there's a lot of divergence. And as you know, with this idea of waymaking, what you say is really resonant, because each individual, just because you're at a different spatio-temporal position, uh, that comes into the world, you're going to develop in a unique way.
So there is something unique. There's no question about it. There's going to be a way in which you're moving and making way that's unique to everything else around you, and there's some way in which sharing that becomes very meaningful [00:29:00]
Don't we? So yeah, I'll just like how did you start to deal with this crumpling anxiety and this idea of entropy? You which can seem like something we want to push away, but we kind of have to bring it closer first. Don't we? So that's kind of what therapy and what I think your technology can do. But I'm just wondering, yeah, maybe start there, like back there
Timo Schuler: For sure. Spot on, in the sense that, well, let's conceive of it as a journey within a landscape. At the same time, I was opening up the exploration of my ecology, of my environment, of my spaces, and that increased the entropy dramatically, because I was suddenly more aware of everything that I was not aware of before.
Before, I was not conscious enough. And maybe just putting a pin on [00:30:00] it, since this podcast is about love and philosophy: well, we men, very simply put, we are mostly activated, or we become conscious, because of women, or thanks to women, let's say, and that's kind of what happened to me.
So it kind of activated me, in the sense that it led me to try to be even more aware, but that drastically increased my entropy, my uncertainty, my anxiety. And while I was exploring the understanding of that complexity, okay, why is this happening to me, what's happening, at the same time I received, through the same work, the same people, the same intellectuals, the same spaces, also, let's say, the solutions, the tools.
And that's where I just came across this incredibly simple and elegant [00:31:00] framework that allowed me to make sense of things, to not completely disintegrate into a puddle of possibility without any sense of self or unity. It was Maps of Meaning from Jordan Peterson, to go back to him, which is really something very simple.
At any point in time and space, you are at point A, which is the unbearable present, and you want to move to point B, which is the ideal future. And you do that dynamically, constantly, infinitely. And along the way, in order to move to point B, you put in place a sequence of behaviors.
And that's just the simplest framework of how we operate psychologically, based on him. Then you complexify it a bit, in the sense that along the way, since you put in place sequences of behavior to move towards point B, [00:32:00] well, you encounter something that is salient in your affordance landscape.
And what you encounter, you very quickly conceive either as an obstacle, which opens up a whole cascade of psychophysiological consequences, or you perceive it as a tool, and the same thing happens. So that already allowed me to take that very simple model, complexify it a bit, and then conceive of it even more simply. It was a tool that helped me navigate this: oh, I want to move there, because at the moment it's unbearable.
Whether I want it or not, I always want to move to point B, in the sense that I could be in the most comfortable situation, lying down and whatever; at some point I'm going to be thirsty or hungry and want to move towards something, or just micro-move a muscle [00:33:00] because it's slightly unbearable.
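A minimal editorial sketch of that point-A-to-point-B loop, purely for illustration (the class, fields, and example below are hypothetical, not anything from the Ipseity project): an agent holds a current state and a desired state, carries a sequence of behaviors, and classifies whatever salient thing it encounters as either a tool (keep going) or an obstacle (revise the plan).

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class MapOfMeaning:
    """Toy model of the A-to-B framework described above (illustrative only)."""
    point_a: str                      # the "unbearable present"
    point_b: str                      # the "ideal future"
    plan: List[str] = field(default_factory=list)  # behaviors meant to move A toward B

    def encounter(self, thing: str, helps_goal: Callable[[str], bool]) -> str:
        """Classify something salient in the affordance landscape."""
        if helps_goal(thing):
            return f"'{thing}' is perceived as a tool: keep executing the plan."
        # An obstacle triggers replanning rather than paralysis: add a smaller step.
        self.plan.insert(0, f"find a way around '{thing}'")
        return f"'{thing}' is perceived as an obstacle: plan revised."


if __name__ == "__main__":
    journey = MapOfMeaning(
        point_a="anxious and stuck",
        point_b="a clear next step",
        plan=["write one paragraph", "ask for feedback"],
    )
    helps = lambda thing: "feedback" in thing   # hypothetical salience check
    print(journey.encounter("encouraging feedback", helps))
    print(journey.encounter("harsh anonymous comment", helps))
    print("current plan:", journey.plan)
```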
So, and that's where it comes in the word of dukkha, I think from Sanskrit about life is uncomfort. Yeah.
Andrea Hiott: Discomfort.
Timo Schuler: Yeah. Discomfort. Thank you. And so this framework just allowed me to navigate that opening up of the ambiguity, the uncertainty, of the psychological entropy that I had.
And that simple framework then allowed me to move into even more solution spaces, in the sense that, at another level of analysis, this is more philosophical, even I would say theological: the notion of chaos and order, and that you are the agent that traverses chaos and order, that within order there is a potential for chaos, and within chaos there's a potential for order, and just that infinite cycle of that dichotomy that we, I would say, intrinsically, naturally go [00:34:00] towards.
you can still move beyond that dichotomy while not being completely overwhelmed by the infinite complexity of reality.
Andrea Hiott: You've got a way to kind of step back and see yourself as moving in a landscape. This can be just so helpful as a way to conceptualize that there's space around you, because in those moments you can feel so trapped, as if there's no way you can get out of that discomfort, so it can just be so helpful to even imagine it.
Okay, I'm here now, there's a lot of space around. It reminds me of when I used to feel depressed, like when I remember being at the beach and feeling kind of sad and thinking, I think I even must have read this in some Buddhist text, but looking at the ocean and seeing like a little boat and thinking, okay, that's my problem now, and there's this huge ocean. And how helpful that was, to just be able to conceptualize this landscape in that way.
Which reminds [00:35:00] me of the Ship of Theseus. Do you know this?
Timo Schuler: Uh, I've heard it quite a lot recently. So that's another synchronicity. It's been over the past month, I think I heard it three times on three different podcasts, but I haven't looked into it yet.
Andrea Hiott: Oh, that's funny. You should look into it. But okay, I'll do a very brief version.
Timo Schuler: Please go ahead. Yeah.
Andrea Hiott: You remind me of it because there's, okay, it's a philosophical problem. Basically, there's a ship that starts. I'm going to generalize, but it's an old ship, right? We're back in the day when there's like wooden ships and it's going to travel around the world, but of course it starts at one point and it's going to come back to where it started, but along the way, of course, it has every single plank has to be replaced at some point because it's, the water is seeping in and so on every crew member at some point has to be replaced too, because it's a very long trip.
It takes forever. So by the time it gets back, every plank has been changed, every crew member; there's really nothing there that was originally there when it started. So how is it the same ship? I think that speaks to what you were saying about the self. How do we know it's the same self? But that's also kind of the answer I [00:36:00] have: that it's the same pattern, right?
The pattern is the same through that whole ship, and it's dynamically changing its very minute parts over time and space, but the pattern remains. And I think that's an interesting idea to juxtapose with what you were saying. You sent me this Peterson paper, and you reminded me of that too.
I think it's from like 2012, and he's talking about entropy as uncertainty emerging as a function of this conflict between all these different worlds, or I think he puts it in terms of affordances, right? You have all these possible behaviors, perceptual affordances, and that's where the uncertainty comes from, which is kind of what you were expressing too. And it sounds like it gave you a way to conceptualize all of this, and to start to reconcile these different scales of self, and also give you access to other people who had been through it, and to possible other trajectories whose regularities you could see fitting with yours in a way, so you felt less [00:37:00] alone.
And there are all these kinds of layers of how it can help. So, that said, I want to know: what's this, uh, grateful chatbot? I read a paper of yours and I'll link to that. But in that context, what's this grateful chatbot? Just maybe briefly, what are you trying to do? Because to me it sounds like you're trying to do something with this gift that you received from this access to technology and openness, in terms of Peterson and probably others.
Timo Schuler: Definitely. So, to link that back: from the very deep and very abstract and very philosophical, it helped me to conceptualize, it helped me to navigate, but I was still stuck with, okay, what should I do? Because despite suddenly constraining the possibilities to, let's say, my immediate possibilities, the immediate possibilities were still almost unending.
And there it was like, okay, I need to point at something, following his, uh, his suggestion. That's also where I suddenly stumbled upon an interview with Elon Musk where he [00:38:00] talks about, okay, first-principles analysis. That's why I went deep into really, let's say, the kernel of our psychological functioning, or waymaking, with that Maps of Meaning.
And that was like, okay, I need to do something about this. And that's where the sudden realization came: okay, I want to scale autonomous self-actualization. So let's start with scale. Scale is about, like, how can we, well, scale it up and scale it down, and scaling is mostly about technology, let's say.
Then comes autonomy: you want to do it by yourself. And then the self-actualization. Simply put, I conceptualize that every self-actualization, self-transformation, development, growing, learning, goes through conversation.
Whether it's a conversation with yourself, with other people, or with, let's say, an abstract third [00:39:00] party, whether it's an object, a landscape, a deity, whatever. The thing is that all those conversations have huge entry barriers for loads of people, whether it's financial; I mean, a personal coach is very expensive.
There is time and place. But that's just the financial piece. Then there are the cognitive limitations, the emotional limitations, the cultural limitations, all those barriers that make it quite a bit harder for people to go into that conversation about their own transformation and development. So that's where I see, okay, you need a mediating agent.
And that's where I was starting to discover technology. On the technology front, chatbots became very, uh, huge, and there was more and more buzz about it. That was back in, let's say, 2014, 15, 16, and so on. Even really at the height of the hype [00:40:00] before 2022, I would say in 2018, 19, there was really one wave of chatbots, of conversational agents.
Andrea Hiott: Then I was,
Timo Schuler: yeah, before, that's way before that and there I saw that, well, chatbots would be a great way because it's interactive, it's dynamic, it simulates a conversation, whereas it's not a text because that's also very cognitively demanding for lots of people.
It's not an audio conversation, because the technology was just not there for speech, speech-to-text, text-to-speech and so on; it wasn't really there technologically. So I was like, okay, let's go with no-code solutions, meaning that I don't have to code, because I don't have a technical background. So let's build a chatbot that would mediate.
And therefore I just explored and tried to figure out a way to design this so that it's still dynamic, without having to parametrize every single thing in the conversation, and how to take into consideration the complexity of a human. And therefore I tried to build a no-code solution, and I [00:41:00] based my first experiment on, uh, a gratitude intervention protocol based on Andrew Huberman, the Huberman Lab podcast's exploration of the literature around gratitude in neuroscience. He derived a very easy protocol that takes less than five minutes, completely free, very easy to do, but even there, there might be barriers to doing that by yourself.
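For a sense of what a scripted flow like that could look like, here is a minimal, hypothetical sketch; the actual experiment was built with a no-code tool, and these placeholder prompts are not the protocol from the paper linked above, just an illustration of a short turn-based gratitude session.

```python
# Hypothetical scripted gratitude chatbot; prompts are placeholders, not the
# published protocol.
GRATITUDE_SCRIPT = [
    "Take a breath. What is one thing, however small, you feel grateful for today?",
    "Who or what was involved in that moment?",
    "How did it feel in your body when it happened?",
    "If you could say thank you in one sentence, what would you say?",
]


def run_session() -> list[tuple[str, str]]:
    """Run one turn-based session in the terminal and return the transcript."""
    transcript = []
    print("Grateful bot: Hi, this takes about five minutes. Type 'quit' to stop.\n")
    for prompt in GRATITUDE_SCRIPT:
        answer = input(f"Grateful bot: {prompt}\nYou: ").strip()
        if answer.lower() == "quit":
            break
        transcript.append((prompt, answer))
    print("\nGrateful bot: Thank you. Same time tomorrow?")
    return transcript


if __name__ == "__main__":
    run_session()
```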
So that's what I wanted to do. I tried to build the technological infrastructure for this, but suddenly now there was a huge shift in technology with the advent of ChatGPT and large language models, and now it just opens up possibilities that were not possible before, at least at that scale, at that level.
And what I'm trying to do now is, instead of going from very precise stuff and trying to build from there, to take a more holistic approach. And technologically, what I aim to do over the coming year, and I [00:42:00] guess years, because it's quite an endeavor, is to build what's technically called, in the generative artificial intelligence industry, a RAG pipeline, or retrieval-augmented generation pipeline, where it's about enhancing the reasoning and, let's say, the processing capability of a large language model.
And now even large multimodal models from Google, OpenAI, whatever, all of those; the space is completely transforming itself constantly. How can you enhance those capabilities with more knowledge, more cognition, more structure, while being adaptive and personal and allowing for autonomy?
And that's where my simple solution, which actually opens up a huge complexity, is: well, let's build an ontology. So it's basically a representation of a reality, structured [00:43:00] in, let's say, a graph, a knowledge-graph type of technology, where it's just nodes and edges, concepts being linked together through language, let's say.
Pizza is cooked in an oven, an oven is heated by either gas or wood, and so on. So that's a very simple ontology. Let's build an ontology about self-development, self-actualization, personal development, human development. But at the same time, how can we ensure that this doesn't go into a combinatorial explosion, or go down a false route that is a dead end?
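To make the "nodes and edges linked through language" idea concrete, here is one minimal editorial way to write such an ontology as labeled triples, starting from the pizza example above plus a couple of hypothetical personal-development concepts; the specific relations are invented for illustration only.

```python
# A tiny knowledge graph as (subject, relation, object) triples; illustrative only.
ONTOLOGY = [
    ("pizza", "is_cooked_in", "oven"),
    ("oven", "is_heated_by", "gas"),
    ("oven", "is_heated_by", "wood"),
    # Hypothetical personal-development fragment in the same shape:
    ("gratitude_practice", "reduces", "psychological_entropy"),
    ("gratitude_practice", "is_a", "self_actualization_tool"),
]


def neighbors(node: str, triples=ONTOLOGY) -> list[tuple[str, str]]:
    """Everything directly linked to a concept, with the linking relation."""
    out = []
    for s, r, o in triples:
        if s == node:
            out.append((r, o))
        elif o == node:
            out.append((f"inverse_of_{r}", s))
    return out


print(neighbors("oven"))
# [('inverse_of_is_cooked_in', 'pizza'), ('is_heated_by', 'gas'), ('is_heated_by', 'wood')]
```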
And that's where I hope to build a second ontology that will kind of bound the knowledge of that large language model, in order to constrain it to the relevant ground and see things more saliently in the interaction with the coachee for personal development. So, very [00:44:00] simply put: an ontology that is about cybernetics, which is about helping the machine navigate through the immense knowledge that we have, over the whole history of mankind and beyond, about human development, and an ontology which is about that knowledge of human development itself.
And the interaction between the large language model and those two ontologies would help to have a more productive, tailored conversation about development with the coachee. So that's what I want to put in place, technically, but also, let's say, conceptually, by just starting to build that. And that's what has actually been done by science, let's say, over the past hundreds of years.
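A rough sketch of how those two ontologies might sit around a language model in a retrieval-augmented setup, as this editor reads Timo's description; every name below is hypothetical, and call_llm is a stand-in for whatever model API would actually be used. The domain ontology supplies candidate knowledge, the cybernetics layer filters it against the coachee's current goal, and only the filtered context goes into the prompt.

```python
# Illustrative two-ontology RAG loop; call_llm is a placeholder, not a real API.
from typing import List, Tuple

Triple = Tuple[str, str, str]

DOMAIN_ONTOLOGY: List[Triple] = [
    ("journaling", "supports", "self_reflection"),
    ("self_reflection", "supports", "goal_clarity"),
    ("doomscrolling", "increases", "psychological_entropy"),
]

# "Cybernetics" layer: which relations count as progress toward which goal.
GOAL_RELEVANT_RELATIONS = {
    "goal_clarity": {"supports"},
}


def retrieve(goal: str, ontology: List[Triple]) -> List[Triple]:
    """Keep only triples whose relation is relevant to the coachee's goal."""
    allowed = GOAL_RELEVANT_RELATIONS.get(goal, set())
    return [t for t in ontology if t[1] in allowed]


def call_llm(prompt: str) -> str:
    """Stand-in for a real large-language-model call."""
    return f"[model response conditioned on a prompt of {len(prompt)} characters]"


def coach_turn(user_message: str, goal: str) -> str:
    context = retrieve(goal, DOMAIN_ONTOLOGY)
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in context)
    prompt = (
        f"Coachee goal: {goal}\n"
        f"Relevant knowledge:\n{facts}\n"
        f"Coachee says: {user_message}\n"
        "Respond as a supportive development coach."
    )
    return call_llm(prompt)


print(coach_turn("I don't know what to work on today.", goal="goal_clarity"))
```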
The thing is that now, instead of humans doing that, the machine can do it by itself. Hopefully. So, simply put, that's what I...
Andrea Hiott: That last part is kind of tricky for me, this machine can do it by itself. Because [00:45:00] as I understand it, you would have to train these algorithms, I mean, even if it's GPT, it's always trained, and the information is often coming from many different sources.
And that's why there's still this kind of entropy. Or, I mean, you don't really have the entropy in the machine, but you have it in the relationship. You still have the entropy, because there are always kind of differing views coming in through the training data. So I kind of wonder, first I should say, I love this idea, because to have something you interact with daily in a way that's not static, and that's kind of helping you actualize, uh, in the sense that you described, seems like
a no-brainer, definitely a good idea. But also dangerous, in this way that I was trying to describe, because if the training data is only coming from you, for your interaction, you're going to get caught up in loops, aren't you? No matter how smart the AI is. So I wonder if I'm just missing something, or like, have you thought about, uh, yeah:
Is it [00:46:00] trained from therapists? Like in real life, is it really, are you really disconnecting it completely from the human coach?
Timo Schuler: That's the, let's say, the problem space that I want to find a solution for. And those are the challenges, in the sense that, well, first of all, we have to go beyond the dichotomy of, is it good or bad?
It's neither, in the sense that, well, it's one solution to a very complex problem. And what I aim at is an artifact, and therefore it's an object. And therefore it should be conceived either as an obstacle or as a tool, to come back to the Petersonian Maps of Meaning. So, very pragmatically:
it's just one tool in the arsenal of a person to navigate their own, let's say, conceptual landscape. That's where I see it.
It's when the rubber hits the road, in those details, that the challenge actually is. But conceptually, well, I perceive large language [00:47:00] models, this new form of generative AI that we have available now, as both, uh, a knowledge engine and a reasoning engine. And without opening up all the complexity and technological complication behind them, I see them basically as artificial brains that were trained on everything available to the people that trained them, and through this, because people are using it, somehow it is sufficient for people to want to interact with that machine.
So I consider that they have been trained on, let's say, everything; it's incomplete, it's biased, who cares? What I want to do is use those capabilities to just crunch through every piece of knowledge that we have available about human development and build an ontology on top [00:48:00] of that. And then my own assumption is that another ontology, about cybernetics, since it has been well studied, putting the principles of cybernetics into, let's say, another large language model that would be specifically trained or fine-tuned or constrained by this, would allow a more targeted and relevant usage of that knowledge.
And we can go into loops, or we could go into opening up the possibilities. But that's where cybernetics should constrain it, because it's based on a goal to achieve, and therefore it should always find a way to either open up or close down, or open up and close, through the iteration. And conceptually and pragmatically, I see loads of solutions, but that's kind of where I'm at.
And that's what I'm pursuing as a challenge, let's say.
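One editorial way to picture that "open up or close down, iterate toward the goal" idea is the classic cybernetic control loop: measure the gap between where the coachee is and where they want to be, widen the search when progress stalls, narrow it when a promising direction appears, and stop when the gap is small enough. The thresholds and scoring below are invented purely for illustration.

```python
# Toy cybernetic loop: widen or narrow the option set based on distance to goal.
import random

random.seed(0)  # reproducible illustration


def distance_to_goal(state: float, goal: float) -> float:
    return abs(goal - state)


def cybernetic_loop(state: float, goal: float, max_steps: int = 20) -> float:
    options_width = 1.0  # how widely we sample candidate next steps
    for step in range(max_steps):
        gap = distance_to_goal(state, goal)
        if gap < 0.05:                 # close enough to the goal: stop iterating
            break
        # Sample a few candidate moves within the current "option width".
        candidates = [state + random.uniform(-options_width, options_width)
                      for _ in range(5)]
        best = min(candidates, key=lambda c: distance_to_goal(c, goal))
        if distance_to_goal(best, goal) < gap:
            state = best               # progress: narrow the search (close down)
            options_width *= 0.7
        else:
            options_width *= 1.5       # stuck: widen the search (open up)
        print(f"step {step}: state={state:.2f}, width={options_width:.2f}")
    return state


cybernetic_loop(state=0.0, goal=3.0)
```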
Andrea Hiott: That's interesting. So you have to find a way to build in, it goes back to what we were [00:49:00] discussing in terms of entropy, in a way: you have to find a way to build entropy into the system, but not to the point where it's going to crash the system. So, in terms of cybernetics, it's almost not like a loop; they call it a loop, but it's not really a loop.
It's more like a spiral. You come back, it's that pattern thing again. It's the same kind of pattern, you're coming back, uh, to the pattern, but you're observing it and it's changing each time. There's a little bit of entropy that comes in that you have to deal with, and in dealing with it,
you grow and change. It's that very delicate kind of spiraling that I think you could build into an AI. And I think, uh, one thing I do want to bring up is that you start with actually the group, right? You talk about how to build, like, resilient and adaptive public relationships and systems. So maybe that's a way too: if you're starting with a complex system, if you have a lot of input and a lot of different systems, human systems, working with it in that way, maybe that's already kind of at least a [00:50:00] solution in the short term, and a way to kind of work it out, so that when you get to the individual, you have a lot more entropy, good entropy.
Sometimes entropy sounds like such a bad word, but, uh, sometimes the uncertainty of one system is the certainty of another. So when you have this group of people, you do grow and help each other. So, I don't know, that's just a lot of kind of thoughts, but I guess before we go, what do you really want for this?
I think it goes back to the boat. I think you said you want it to be kind of like the captain's, uh, first mate or whatever, helping the person navigate through the waters or something. Do you still see it like that?
Timo Schuler: I see the, let's say, the social landscape as, uh, a fleet of boats navigating the ocean of transcendence, and that technology, that artifact, the tool, would be the first mate of a person's boat. And they are the captain of their own boat of self-actualization, to come back to the [00:51:00] metaphor of, uh, Scott Barry Kaufman; that technology would be the first mate.
So it's not in the driver's seat. It's just a help along the way. It's the first mate. And that's at the individual level, because I start from the individual, but then it translates into, well, the social landscape that we're in, and everybody's on their own ship. But those ships are navigating the same water.
And that's where you open up a whole other can of worms and complexity and challenges. And in the paper, I try to highlight where I would like to go next, which is public sense-making, social-landscape sense-making, rather than, let's say, an individual sense-making machine or tool or artifact.
But that's even more broad and future-oriented, and I'm definitely not there yet. I have my conceptions, my ideas, [00:52:00] practically and conceptually, but let's start with the individual.
Andrea Hiott: Well, it would certainly be helpful if we, if you or someone could help us, uh, design some sort of technology that could help us step back in the way we talked about that a lot of, uh, the sources of Peterson and others helped you step back and see yourself as part of a landscape.
I feel like, uh, in terms of the political and social environment, we kind of are stuck in these trajectories, and it's very hard to get everyone to collectively step back and see that there are many other trajectories, many other ways of being in this water, in this ocean, of even getting to the same place in the ocean. So I feel like you're heading toward a way to give us that kind of perspective.
So I wish you a lot of luck with it, and we'll have to talk about it again once you've proceeded further. But last question: if you think about yourself back in that moment when you were needing some help navigating your boat, how would this have helped you, [00:53:00] ideally? Like, what would you have done?
Would it have been a daily thing, where you're talking to the chatbot daily? Or, like, how would you imagine that, for the you back then?
Timo Schuler: Back then, obviously, I had the vision and, let's say, fantasy of being Iron Man, uh, Tony Stark, discussing with Friday or with Jarvis, just having somebody that I can converse with, that would help me to put things in perspective, to see where I'm at: oh, you're going in that direction.
Are you sure? Is this logical? Does it make sense? Hey, be careful, you need this. Be careful, you need that. Hey, you're doing a good job. So everything that, let's say, people would bring you, I hope a machine can too; you know, everything that you would hope to get from other people, you would also get from the machine, just in order to ensure the minimum to be able to function more properly as a human being, and also, [00:54:00] yeah, to feel more psychological wellbeing.
So I would see interacting with it daily. Just a companion that helps you ground yourself and helps you figure out stuff and helps you navigate, yeah, really that companion that would help you understand the point A you're in that is unbearable, what you should do to move to point B, to understand which point B you want to aim at, to make stuff irrelevant, even further irrelevant, that you think is relevant
but actually is not, or, on the contrary, all the things that you think are irrelevant but actually should be relevant, make them relevant, so that you can have a better path. So it's really a navigational aid. It's a GPS for your conceptual landscape. That's how I would phrase it.
Andrea Hiott: I actually like that a lot better, thinking of you as the captain of your ship, and you have a crew, but actually this isn't one of the crew members, it's actually a tool, or even maybe a smart ship, that helps [00:55:00] you in that way. So that kind of shifts it, and then it becomes a facilitator for your human relationships and your human life, rather than something where you're just waking up and spending your day with this AI machine.
I mean, that could also be the case, that you just get stuck in that relationship, but actually what you just described is more like a facilitator, almost like an emotional sort of GPS as well.
But, I think, is it right that the main facilitation is between you and your ecology, all the different scales of beings that you're interacting with, not just you, yourself, and the machine?
Timo Schuler: Indeed. Definitely. Yeah. That's, uh, yeah.
Andrea Hiott: Okay. Well, that's great. Well, I wish you lots of luck, and thanks for reaching out and sharing it.
It's really been interesting for me, even just to open up to Jordan Peterson. That was an important thing that I hadn't done yet. So thank you.
Timo Schuler: My pleasure. Uh, thank you. It's really helped me to better phrase my own, uh, stuff at the moment. So thank you for this. It's been a really [00:56:00] engaging conversation.
Uh, thank you a lot and looking forward to the next one.
Andrea Hiott: Yeah. Me too. Be well.
Timo Schuler: Thank you.