Is anything objective?

Is anything objective? Philosophers Inês Hipólito and Andrea Hiott discuss how Inês became a philosopher and, along the way, explore the computational theory of mind, critical thinking in the digital age, Wittgenstein, enaction, and how AI might contribute to our understanding of our environment. They move into a holistic perspective on technology, humanity, and nature, emphasizing the need for a philosophy and a technology that rekindle our ecological roots and foster sustainable, interconnected living.

Beginning with the impact of viewing the mind as a Turing machine, the conversation emphasizes how this perspective has shaped the development and perception of AI and LLMs as entities with human-like consciousness and autonomy. Addressing the philosophical and societal repercussions of such views, it warns against the ethical pitfalls and regulatory challenges that could arise from misinterpreting AI’s capabilities. The discussion underscores the importance of incorporating ethical considerations in AI development and advocates for an understanding of cognition that includes embodied interactions with the environment.

Highlighting the concept of 'biophilia deficiency,' the conversation stresses the need for humans and AI to reconnect with nature to ensure well-being and address the climate crisis. It culminates in a call for interdisciplinary efforts to create culturally sensitive, transparent, inclusive AI technologies that can foster a sustainable, interconnected future.

#philosophyofmind #biophilia #computational #artificialintelligence #criticalthinking #ineshipolito #andreahiott #wittgenstein

00:00 Exploring the Dangers of Computational Theory of Mind
02:16 A Warm Welcome and the Power of Philosophy
02:40 The Philosophical Journey Begins: From Portugal to Neuroscience
04:56 Diving Deep into Philosophy: Questions That Shape a Career
07:10 The Intersection of Eastern Philosophy and Cognitive Science
09:20 Enaction and Its Philosophical Roots
21:51 The Role of Philosophy in Science and the Importance of Critical Thinking
38:33 Embracing Change: The Dynamical Systems Approach
42:38 Exploring the Philosophical Foundations of Experience
43:40 Bridging Phenomenology, Ecological Psychology, and Wittgenstein
45:16 The Intersection of Technology, AI, and Cognitive Science
46:42 AI's Societal Impact and the Importance of a Philosophical Approach
49:30 Challenging the Computational Theory of Mind
53:42 The Role of Philosophy in Unraveling AI's Complexities
01:03:29 Reimagining AI Through Active Inference and Environmental Sensitivity
01:21:22 Concluding Thoughts on Objectivity, Diversity, and the Future of AI

…the dangers of A.I. are not what you think

website for Ines: https://ineshipolito.my.canva.site/

Google scholar: https://scholar.google.com/citations?...

Book mentioned from Paul van Geert, Rijksuniversiteit Groningen, The Netherlands, and Naomi de Ruiter, Rijksuniversiteit Groningen, The Netherlands: https://www.cambridge.org/core/books/...

If you want to go deeper and build philosophy together, please sign up for the Substack at https://communityphilosophy.substack....

About this podcast:

Beyond Dichotomy started as research conversations & has expanded beyond my own academic pursuits towards noticing the patterns that connect across traditional divides. When I started my studies, there was so much I wanted to explore that I was told I shouldn't explore because it didn't fit into this or that discipline, but having studied and worked in so many fields, those barriers no longer made sense. The same felt true relative to passions and love. So I decided to open myself to all of it beyond traditional distinctions, towards learning and development. This podcast is where those voices gather together in one space as I try and notice the patterns that connect. It's part of my life work and research, but it's also something I hope to share with you and to invite you to share your perspective and position. Thank you for being here.

The main goal is to have conversations across disciplines to understand how our approach to life and cognition might address some of the urgent divides we face today. By love and philosophy, I mean the people, passions, and ideas that move us, shape the trajectories of our lives, and co-create our wider social landscapes.

Partly due to my trajectory in philosophy, technology, & the cognitive sciences, I’m hoping to better observe binary distinctions in our academic & personal lives: What positive roles have these structures played? How might rethinking these structures & their parameters open new paths & ecological potentials?

TRANSCRIPT:

Is anything objective?

Inês Hipólito: [00:00:00] It's precisely on that notion that there are serious dangers. It's not just a theoretical exercise, that at some point in the 50s we had Turing machines and we tried to conceive and develop a theory in which we would see the mind through the lens of a Turing machine.

And then we have the modularity of the mind. And then we started having the computational theory of mind. So that's an exercise that you can do. You can advocate that that's a useful exercise because it gives you a toolkit. Wait a minute, this is actually quite useful because now I can explain all of these processes in the brain in terms of inputs, outputs and those kinds of things. Useful, right?

So that was the metaphor that we could use. But now, as things develop, see how dangers are coming with that.

It's so easy now, once you make the argument with your premises that the mind is computational in character, right? Then you can, by analogy, say that, well, large language models

[00:01:00] are computational, they are predictive models, therefore they're minded. It's very easy to take that argument of the mind as a computer and then make the argument that now computers such as LLMs are minded, right? Now they're conscious, now they are independent entities from our own doings.

This is where philosophy must come in, and that's the role of philosophy. Again, we go back to Wittgenstein because I don't want to take credit for it. It's to precisely eliminate confusions of thinking and confusions of the ways in which we are stacking beliefs after beliefs and building a theory, right? So sometimes you might have to reverse engineer, go back and test every single layer of that stacking that got us into a certain theory, and really use our best critical thinking to adjudicate whether or not this is more useful than just having a common language for a very, I don't want to say pre-scientific, but in a way that we can talk about it, as long as we do not fall for it being an ontology. And I feel [00:02:00] like the computational theory of mind is something that began as a metaphor, and at some point we kind of lost track of it a little bit and began using it as an ontology.

Andrea Hiott: Hi, Inês. It's so good to see you. I'm so glad we meet even though we're on opposite sides of the world this time.

Inês Hipólito: Absolutely. Likewise. I'm delighted to see you again and to be here. So thank you so much. Yes. Good morning to Australia. Yeah, exactly.

Andrea Hiott: So there's so much I want to talk to you about, but since it's already after midnight here, we're probably only going to have about an hour.

So I'm just going to jump right in. And the first thing I've always wondered, and I don't know, is how you got into philosophy and what it was that pulled you into this world, which has since become philosophy and science and neuroscience and many other things. But how did you, what was, do you remember first being interested in this?

What was it? [00:03:00]

Inês Hipólito: Yeah. So I was, um, I grew up in Portugal, and Portugal has this very peculiar, um, interesting feature, which is that in the education system in high school, um, you have two subjects that are mandatory. So one of them is Portuguese, obviously. And then the second one is philosophy. So regardless, you have to take philosophy.

Yes. That's very smart. It's mandatory. Absolutely. Wow. It's incredible. So you have a population that has been, um, that has been educated for three years in philosophy. Because, regardless of the area that you choose, economics, science, humanities, arts, you'll have to have philosophy for three years as a mandatory subject.

Fascinating. So

Andrea Hiott: everyone in Portugal is a philosopher. I didn't know that.

Inês Hipólito: At least everyone in Portugal has three years of philosophy, and the curriculum is not even that easy. It's quite, [00:04:00] it is quite exceptional.

It's, um, it's quite, um, interesting. So, um, so I have that. And then, um, before I had philosophy, I was always thinking that I was going to do psychology. Because I've been always very, very interested, growing up, I was reading all the books that were to do with, kind of like, diaries or kind of like personal experiences.

That's what was interesting me as I was growing up, to understand people's lives, um, people's psychological life. Um, so then I was like, yeah, I'm going to do psychology and I want to become a clinical psychologist. So that was the plan. So you wanted to figure people

Andrea Hiott: out.

Inês Hipólito: Yes, I suppose so. And that's a very practical way of helping people overcome their psychological challenges and navigate the world.

So I wanted to have a really hands-on approach, and psychology seemed like the way to go. And then, um, as I started progressing in, [00:05:00] in philosophy during those three mandatory years, I started coming up with certain questions to my, um, my philosophy teachers at the time. And I remember exactly what the question was that was really disturbing me and perturbing me. I was coming to them and I'd be talking about, I just wanted to talk about, um, how can we, if at all, have, um, an objective perception of the world.

So this was what was bothering me at the time. Of course, um, I didn't know, um, as much as I know now, but of course it was about subjectivity and objectivity of perception and all of that philosophy of mind stuff, isn't it? So, and then he told me, you know, I know that you've been making plans to go to psychology, but the questions that you come up with and that you're interested in, they're philosophical.

And I was like, oh, okay. So then, um, here we go. Change, change of, um, of path. And then I got into a philosophy, um, degree. [00:06:00] Um, and, um, yeah, and then I started progressing into philosophy of mind. It became very, very clear where I was very, very interested, the topics that I cared about and that really excited me.

So it was a perfect fit. Um, and then eventually I started realizing that, oh, I want to know more about what other sciences have to say about the mind. So then I started progressing into cognitive science and getting, um, training and skills in neuroscience as well.

Andrea Hiott: Has that question remained an important question for you?

Trying to understand how we can have something objective? Because, I mean, that's kind of at the heart of what science is trying to do in a way. Um, but then there's also the psychological aspect, which is so subjective and phenomenological, and you're dealing with both of it.

Inês Hipólito: Yeah, absolutely. Yes.

There's so much to do and there's so much that excites me. So it's interesting to see that that question, ever since high school, that was disturbing me, was keeping me up at night, is still the [00:07:00] one that I love. It was keeping you

Andrea Hiott: up at night, really? You were awake at night in high school thinking about how, how can there be objectivity?

Inês Hipólito: So to speak. That's great. So to speak. But I was reading so much at the time. And, and actually my beginning in philosophy, began with starting to read Buddhist philosophy. So I, I didn't begin with, um, I didn't begin with, um, with the Western philosophy. It was with the Buddhist philosophy that I, that I started.

And then as I started receiving training in Western philosophy, I started, um, having more conceptual toolkits within Western philosophy. And that's kind of like where I work and what I'm most, um, skilled and trained to do. And, as I usually say, I am still a student of Eastern philosophy. That's the best way to go

Andrea Hiott: through life,

Inês Hipólito: yeah.

Yeah. Yeah. So I suppose it's still with me, and that's a reason why I think that eventually I did my masters on Wittgenstein, because Wittgenstein and his philosophical, um. No wonder.

Andrea Hiott: Okay. You do talk about language a lot. Language games and so on. So, and [00:08:00] Wittgenstein. So now I, I get it.

Okay.

Inês Hipólito: Exactly. So then Wittgenstein was where I found, um, the most reasonable, to me, of course, to my perspective, to my worldview, the most reasonable way of understanding that peculiarity of subjectivity of standpoint, subjectivity of perceptions. Um, and I found, you know, that solace in Wittgenstein, which is very much pervasive in the philosophy work that I do presently.

Andrea Hiott: Yeah, was it Wittgenstein that led you into ideas of enactivism, enaction, I don't know which term you prefer, or the four E's, as some call it? Because, um, I remember when you came to the Berlin School of Mind and Brain, which was back, right before COVID hit, or right as COVID hit, um, It was right in the middle, yeah.

Yeah, um, that school and most of the departments were very analytic in their philosophy. And, um, I believe you were one of the first to start really actually teaching a class on enactivism. Though a lot of people, I think, were hungry for it at the time. And there's been a big [00:09:00] switch, you know, just in the past five years towards that becoming pretty mainstream, even in analytic circles.

But there was a time when it wasn't, and there's still a lot of debate. So I wondered, you know, were you studying all the analytic critical thinking, which is wonderful, too, which, it's one reason I think it's wonderful Portugal makes you study philosophy. We should all learn how to think critically. But, um, I wonder, when did you find enaction, enactivism, that world?

Inês Hipólito: It's quite interesting. Thank you for asking that. So enactivism has different, um, so to speak, approaches, and one of them, the earlier one, comes from Maturana and Varela, um, and was then developed further by Evan Thompson, and, um, then further on by these other brilliant minds that are in, um, the Basque Country, like Di Paolo.

Yeah, with the Eleanor

Andrea Hiott: Rosch book, Evan Thompson, Eleanor Rosch, and Varela, and then, yeah,

Inês Hipólito: Yeah, [00:10:00] precisely. So, and that branch of enactivism has very strong connections to Buddhist philosophy. So, there I found, as Varela did, right? Francisco Varela. Exactly. Yes, and Evan as well. And there I found, okay, so here I can continue to progress and engage Mm hmm.

in a place that is precisely at the intersection between where I started in philosophy, which was Buddhist philosophy, and where my toolkit is coming from now, where I've been trained, which is Western philosophy, and specifically analytic philosophy of mind. And then you have another branch, so to speak, of enactivism, which is the one mostly developed by Shaun Gallagher, and that's embodied enactivism.

And there you find another very interesting intersection that I was very keen to be collaborating, um, and engaging with, because then you find the intersection between the ideas of enactivism and phenomenology, which is another topic and another stream of, um, of work that I'm very [00:11:00] keen to study and engaged with closely in the past.

And indeed you have radical enactivism, which has got a basis in Wittgenstein. So I was quite happy. I was like, okay, so this really works for me. I really like these approaches. They have their idiosyncrasies, and it's quite important that we understand their idiosyncrasies, because their focus is slightly different, as Wittgenstein would have it.

Once you start with a certain kind of premise and you start, you know, building your systems of beliefs or reasons, then you will end up with slightly different, um, kinds of approaches, um, and toolkits. So they are idiosyncratic, um, these three different dimensions of enactivism that I just laid out, they are quite idiosyncratic.

Um, they all offer, um, in sort of more developed or less developed ways, um, toolkits that I find extremely useful to describe and [00:12:00] understand, and to inform scientific experiments as well, in what comes to mind and psychological life.

Andrea Hiott: It's interesting because I hadn't thought of it until you just said that, but in a way, the more Buddhist side, um, there's a kind of an objective feeling there that's a bit closer to maybe your original question, even though of course, you know, there's a lot of phenomenology in, um, in that book, The Embodied Mind, with, you know, Thompson and Rosch and Varela too.

And phenomenology is obviously like the subjective point of view and taking the individual perspective seriously. So, as you were talking, I was thinking actually a lot of your work seems to be kind of blurring traditional demarcations in a way, but towards more clarity, if that makes sense. I mean, I even think, like, when you

were talking about Wittgenstein, it made me think about one of your papers, um, in some of your writing, you talk about cognitive modeling, I think, or computational modeling as maybe a [00:13:00] language game. And I felt like when I was reading that or listening to that, that you were sort of trying to unstick us a little bit from taking our models or, you know, even the idea of objectivity and subjectivity a little too seriously.

You've also mentioned toolkit a lot, and I feel like in a way you're trying to kind of unstick these things and look at them as tools. Does that make sense at all?

Inês Hipólito: It does. It does make sense. Yes, very much indeed. So that paper that you're referring to, um, I really need to get back to it because it's still a preprint.

I'm working on it to come out in a collection on mental representation and the study of the mind and brain. So yes, that paper is one in which, um, one can think of computational modeling through the lens of Wittgenstein's concept of language games, right? So, for starters, you can think of a very pluralistic approach when it [00:14:00] comes to computational modeling of, um, cognitive phenomena, let me say it like that,

of whatever kind. And of course, we need to get more precise about what is the question that we're after, what is the phenomenon that we want to understand. So we need to nail that down. But for the sake of it, cognitive phenomena can be understood by virtue of us, as scientists, developing computational models, right?

Um, because sometimes, for certain phenomena that are so complex, complex in the sense of having high degrees of freedom, it's very difficult to predict, um, the system's future states or how the system is going to behave in the future, given what we know. Then we need to use this computational machinery to try to understand that, because you cannot set up an experiment to do that.

So in those particular cases, we need to define which toolkit is going to be the most useful to understand that particular phenomenon that we are after. And I find that, um, sometimes that gets a little bit lost, um, that idea that it's a toolkit, it's not ontology, right? Ontology is, um, an activity of understanding and of description that we engage with theoretically: given the information and the data that we have, we reasonably think about it and try to make sense of it.

And then we develop an ontology. But what I tend to reject is the idea that out of a model, you can get an ontology, right? And there are many reasons for that. One of them, I'll just leave it here and point out, for those who are acquainted with that fallacy, they'll know exactly what I'm referring to.

I'm referring to the map-territory fallacy. I believe that when we as scientists want to understand a cognitive phenomenon of high [00:16:00] complexity, we develop computational models that are incredibly useful. Otherwise, we would not even get a glimpse, and we've been making very good progress in understanding neurodynamics or even psychological phenomena through those kinds of models.

And they are very relevant and very important. The way to look at it, I think, is that we develop them as tools. We could, um, think about it as a metaphor, as some goggles that we put on. The first step of a computational model is, um, a bunch of assumptions and premises.

We need to do that because we don't know what's the state of affairs. That's what's on the other side, after you run the model, that you want to get. You want to get some answers, but you don't have them. So then the models are leveraged from assumptions that we make, that we think may be the case, right?

Um, so we start by doing that, and then we define what's the algorithm, what kind of data am I going to use. So all of [00:17:00] these are decisions that are made by the scientific community or the researcher that is developing this model. So in that sense, we have a model that is a sort of package: I've decided that I'm going to use this kind of algorithm, I'm going to use this kind of modeling strategy rather than all of these others that we have available, right?

That means that the model must be some sort of tool that we scientists in our scientific practices engage with in order to get to something that we didn't have before, which is the understanding of the phenomenon. So that we can get to a reasonable explanation and hopefully, ideally, we would get some predictive power.

Such that we not only get an explanation and understanding of the phenomenon, but we get to say, from that understanding, what the prediction would be of how people behave if they were in that particular situation. So predictive power would be, um, an ideal case, and that is not always the case. [00:18:00]

Now, what are the implications of that? The implications are that we can take many different routes. We can put on many different goggles to look at the phenomenon, and then we will achieve different kinds of epistemic virtues or advances or gains, right? And that looks to me like engaging in a language game, because we have a very specific kind of language,

an algorithm, a computational language, very specific kinds of assumptions that we make, um, that are used and that have been agreed upon within the scientific community that works within that language, right? And then, by virtue of engaging in that as a language game, we get to a certain kind of conclusion that allows us to derive some kind of inferences about what is likely the case.

Now, what this means is that, for me to understand or to study a specific phenomenon, I can use different language games. And what comes out of [00:19:00] that is that if I can use different language games, then these language games, or these computational models, cannot be isomorphic to the phenomenon itself, because then we would have the task of saying, okay, so which one is going to be, and of course what I'm, what I'm, um,

Andrea Hiott: outlining.

Yeah, the map would have to sort of be the territory in a

Inês Hipólito: way or something. Exactly. Right. So then the map would be the territory. But then once the map becomes the territory, the map doesn't have use anymore. It's not a map anymore. Because it is the territory.

Andrea Hiott: Right. And the territory is always changing, so, I mean, there's all these nested scales of this.

Inês Hipólito: Absolutely. So, when a model is isomorphic to the target, then it is not a model anymore. So that's the thought. But once we fail to grasp that fallacy, then scientific modeling, of whatever field it is, becomes about who is, um, on top of the best tool to explain the phenomenon, right?

Which is pretty much the state of affairs. But even there, we fail to grasp that there are different languages; they can be seen as different language games. Yeah, there's many

Andrea Hiott: different paths or routes to the same place. And I don't see why it would be different. It's not different. If you look at the history of science, um, we found similar ways to similar answers, and we found very divergent ways to similar answers.

So I think it's, um, it's really interesting though. Do you know C. T. Nguyen and his work on games? Have you ever heard of him, the philosopher? I don't

Inês Hipólito: think I have.

Andrea Hiott: But talk to him for the show. I think you would really like some of it. Just thinking about games, um, it's really different from you, but it might be, he might be somebody interesting to work with.

Um, anyway, I'll send it to you later. But, um, this, you know, you often talk about, um, theory as practice and, um, you know, these kinds [00:21:00] of things. And right now you're talking about models, and it sounds like you're talking about something like a dynamical systems model, with some algorithms or something.

And it sounds very kind of concrete, but I think what you're saying could also be applied. I mean, we can generalize on this show and kind of think wildly a bit, but can't that same idea sort of apply to theories like computationalism? Um, maybe even enactivism or functionalism? I feel like in the world of philosophy, sometimes it feels like one or the other has to be correct, instead of: what are we oriented toward trying to understand and solve, and which one of those paths helps us get there the best way?

Um, does that make sense? Can you kind of take what you were just saying and think of it in terms of those wider, more, you know, messier theories?

Inês Hipólito: Yes, absolutely. And here I'm completely going to borrow from Wittgenstein because my understanding of the relation between philosophy and science is very [00:22:00] Wittgensteinian in the sense that they are not in the same business.

They are in different businesses that can, um, and should, reinforce each other and complement each other. So, um, when we engage in scientific practices, our goals are in, um, advancing scientific explanation by virtue of making sense of patterns, by virtue of looking at the natural world and identifying patterns.

And then we can use that identification of patterns either to set up experiments, if that's possible, if the levels of complexity are low, and we can set up experiments, um, to understand those patterns. Or if there's a high level of complexity, then we can develop computational models precisely to help us make sense of those patterns.

And hopefully, if we do a great job, um, then in the end we will have something that we didn't have before. And what we [00:23:00] have that we didn't have before is epistemic gain. We have it in the form of an explanation. So I think that both philosophy and science are in the same game, the game of epistemic gain, but in different ways, and they're extremely complementary.

So we have in science, um, the goal of developing explanations and predictions. That's the scientific goal. And then we have in philosophy the same goal of epistemic gain, but the epistemic gain is slightly different. What we have in philosophy is a goal that, rather than being of or for scientific explanation, is for description.

So this is slightly different because, um, what happens is that, um, we are talking about completely different methodologies. Logically speaking, we have inductive reasoning for science and deductive reasoning for philosophy, right? Um, so for that reason alone, by virtue of thinking of it in terms of logic, science isn't in the game of advancing a description of how things are, which is why I was like, let's be very, very careful.

We don't get ontology as a commodity that comes from using or developing computational models. We get predictive power, ideally. And then with philosophy, what we need to look for and strive for is ontology, which is a description of how things are. That is potentially why, um, you were saying that, well, it sounds like we may have some theories that are more accurate or correct than others as we describe the phenomena in front of us.

Um, and I agree with that. I think that

Andrea Hiott: Yeah, they might be paths that help us better understand ourselves or our situation or the text or, [00:25:00] yeah, I don't know. I guess, yeah, the question is: do you think you have to choose between computationalism and, um, the four E's, for example? Is that like a choice?

Inês Hipólito: Yeah, I think you do. I think you do. And there are many reasons that you do, um, that you need to make a call; um, you can't hold both. It begins with the fact that if you hold both, you're holding a contradiction. So you can't, you don't want to do that. Um, and another reason is that, because, as I was saying, philosophy and science, although they share the main goal of epistemic gain, they have different goals within themselves, where philosophy is for description and science is for explanation. What happens is, once philosophy does a very good job, which, by the way, is not always the case.

If philosophy does a very good job in offering these descriptions, these conceptual analyses, these conceptual definitions, [00:26:00] right, these very strong theories that are not fallacious, if philosophy does that really good job, which is what philosophy should be doing, then it can do something that is really important, which is to inform the experimental settings, to inform the computational models, such that when someone has the hard task of developing a computational model to understand a certain cognitive phenomenon, they can rely on and use these descriptions that hopefully are as accurate as possible.

That's why we really need to choose. We really need to use our best critical thinking in philosophy. That's what philosophy is about: to use the best critical thinking such that we get to a description that is the one that most closely approximates the state of how things are. And once you get that, it is going to be really useful to inform computational modeling of a certain phenomenon that we would like to understand.

For example, in my case, in philosophy of mind or in cognitive science, it would be anything to do with the mind and brain.

Andrea Hiott: So that's, I understand what you're saying [00:27:00] and I agree that, um, a choice has to be made on an individual level in terms of What you're going to understand and the logic you're going to use in certain situations.

Um, and as a philosopher, of course I can see what you mean that a philosopher needs to choose, but I can also kind of zoom out and think, well, there's not only going to be one description that's always going to fit, in the same way that there's not going to be one map that's going to fit one territory, because the territory is always changing.

So it's hard for me to think that there could be one kind of description, as you said, because we're still describing the ontology. It's not actually the ontology; even if it's an ontological description, it's not the process, it's not the thing. That would be confusing the map with the territory.

So it seems to me that something like computationalism, or a lot of the things that I think have gone overboard and that I want to talk to you [00:28:00] about, in terms of the metaphor of the computer and so on, could at some point be seen as helping us understand something about the mind and cognition.

Same with any other ism. I don't know, but are you saying there should only be one that we then decide is the only one? Or is it more that every description takes its place, and then you have this very healthy critical thinking, as those things spar with and evolve each other?

Inês Hipólito: That's a very good question. So, let's say that once we are in the business of trying to make headway on a description of something in the world, let's say that to some extent there is one description that is more correct than the others, right? Let's say that there's something like that.[00:29:00]

There are conditions on whether or not we can attain that. And that goes back to the question that I was struggling with in high school, which is: can we have an objective perception, or perspective, or standpoint, of the world? And science reveals to us that although we could work with that as an ideal, that is not the case.

It is quite useful, as more recently in feminist epistemology, or standpoint theory, to have a diversity of perspectives, precisely because of the limits of our own perspectives and perceptions. So in that sense, I think that at the end of the day, there is a correct way of describing.

That's what the description is about, right? It wants to be, or it, it should be as accurate as possible. And other descriptions may be far off the target. [00:30:00] And I think that the best attitude for us to have in relation to that is to understand that we are sitting in our own perspective of what makes most sense.

So in a way we might be working without a safety net. So then what shall we do, right, when we need to adjudicate? If we understand, in a very Wittgensteinian sense, that I've got a perspective and so do all the people around me, then the best way to deal with that, or the best attitude I find, is to use our best critical thinking together with the humility to understand that this is our limited perspective in studying and trying to say something epistemically valuable about the state of affairs of a certain phenomenon.

In this case, the mind, for example, right? And in the same way that I've got this perspective, other people have their [00:31:00] own perspectives, and we shall respect that. I do think that the way out of it is to train ourselves as much as possible in critical thinking, in ways that we try to put aside ourselves and our prior biases and all of those kinds of things, and think: okay, given all the knowledge that I've been receiving and engagingly developing, what do I think is going to be the theory that best captures, by describing, a certain phenomenon, that we can then provide to science, in order for science to develop computational models or scientific experiments?

Andrea Hiott: I definitely think critical thinking is important. It's wonderful that you said Portugal has to have philosophy, and I guess that's where you learn critical thinking. It's such a skill, and it's way underrated, especially in many countries; it would help a lot with a lot of the [00:32:00] misinformation and issues we have, just if we were trained to think critically.

So I'm saying that because I think it's really, really important, and philosophers sometimes can just bring that to the table, and it's a huge, huge difference, right? Just to be able to think critically. I'm a little more mystical and esoteric, and I know this, so the logic side of philosophy was always harder for me, but for that reason it was probably the most beneficial to learn.

Okay, so that said, though, I find that, because it is a bit of a game in a way and you want to win, once you get into the logic and the analytic and the deep philosophy, at least in my experience, it does become competitive. It feels like a game, it feels like you really need to win, and it can be very easy to latch onto certain things, a kind of attachment, to go back to the Buddhist idea, and not really know that you are attached.

And then you just sort of end up arguing about logical things, trying to be right, instead of [00:33:00] the bigger picture of this description of how the world is. Have you seen that at all? Is that ever a problem in your world?

Inês Hipólito: Not a problem. I think the people that I, or the philosophers that I admire the most are those that have changed their mind.

And they came up and they said: I was wrong. Those are the ones that I trust. Because of precisely that notion of attachment. And Thomas Kuhn explains that within his account of scientific revolutions, right? Even when you are confronted with data that is challenging your own theory, your tendency is to reject it.

So then anomalies start coming up, and a revolution takes so long because those people that are so engaged, potentially all their lives, in a certain theory, when the theory starts producing anomalies, [00:34:00] we humans have that tendency to stay attached to those theories that we so much cared about. For some of us it would be our life's work.

Andrea Hiott: Yeah, I was about to say, to be fair, academia is also sort of structured that way, so that you almost have to do that to make your name and get a job.

It's very hard to, you know, open the space to everything.

Inês Hipólito: 100%. That was going to be exactly my second point, which is that academia, and science and philosophy, are not something encapsulated from the real world. And the real world is a world with a social-cultural setting and power dynamics.

So we need to situate it. That's why, when I refer to science and scientists' work, I refer to it as scientific practices. When we come to doing our research, we come with, and from within, not only our own individual perspectives, of course, but [00:35:00] also the social enculturation of the culture we are within, or that we've been enculturated with.

And we bring that in. I mean, the idea of an objective science where we do not bring any biases is great, but it's really just an ideal. More recently, in the last 10 or 20 years, I dare say, we've been able to overcome that really mainstream white men, or great men, theory.

We've been able to overcome that by bringing much more diversity into our social experience, into our scientific experiments. And what that reveals is that the topics of research slowly change into more diverse topics of research. The scientific conceptual toolkits also change.

We have fewer narratives and fewer metaphors that are based on a [00:36:00] male gaze. And we have different topics of research. All those

Andrea Hiott: paths were already there, you know, a hundred years ago. It's just that they weren't being made obvious. So yeah, it's interesting though, what you say.

I mean, it's like this idea of objectivity assumes there's a beginning and an end to things, right? And it also assumes a kind of static nature to whatever this is, this encounter we're having that we're trying to figure out. Because to be objective, even if we could find a way with our technology to somehow share our spatiotemporal position or viewpoint with one another and have a moment of objectivity,

who knows, even having objectivity where everyone's perspective is somehow illuminated, it would have already changed, you know. So it's a very weird kind of idea that we can have it, and yet at the same time it's incredibly important, because there is a kind of ongoing objectivity, in the sense that the world is the way it is in those spatiotemporal [00:37:00] moments, and we do want to try to figure it out.

That's what science is doing. So, I say that because it feels like holding the paradox a bit, which is also how it feels when I hear you talking about, the importance of finding a description that is right. And also the importance of, being able to change your mind. Somehow those don't need to be thought of as opposites, right?

That, that's kind of the space we're trying to open in a weird way, maybe even with the kind of philosophy you're doing. Um, but it's very nested, right? It's very hard to talk about. It's easier to go to beginnings, ends, either, or right, wrong. Yes.

Inês Hipólito: Yes, absolutely. Exactly. And those are the pitfalls that come with that rigidity of thinking.

So absolutely, you are 100 percent correct when you say that even if we were to find this objective channel between the two of us, as we are having this conversation, the world has already changed, right? So that's exactly it. That's why the people that I admire the most are those that are able to change their minds, because it [00:38:00] really shows that you are in it

for the sake of epistemic gain and not for your own sake of proving your own theory that it is right.

Andrea Hiott: I agree with you too. And it's such a strength to be able to see that the world has changed, or things have changed, and you see things differently and share that. I think it gives so much. It shows us that we can change our mind and still be authentic.

You know, I think that's

Inês Hipólito: And that is science. That is exactly what science should be about, right? It should be, because the world is

Andrea Hiott: changing. The science will also need to change.

Inês Hipólito: Yes. And I want to say two things about the notion of change. One of the computational frameworks that I prefer is precisely the one that takes the notion of change into high regard, where that is the main centerpiece: dynamical and complex systems theory. And I'll give you here the reason why I think that computational [00:39:00] framework has better tools to capture, to model, to gain epistemic virtue upon something that I want to study.

Because if we begin with the idea that things are changing constantly, let's study that change, let's study the patterns of change. I think then we are already headed in a good direction, right? But this comes by virtue of my philosophical work: it informs me towards this computational framework rather than other computational frameworks that take more rigid approaches, treating how things are in a very linear way, which is not how things are, and we know that from physics.

So in that sense, I am very much into computational modeling with dynamical and complex systems theory. And I know that you are in the Netherlands, which is quite a special place where it's quite mainstream to use this kind of approach, and it's really a great place to be and do that kind of work.

[00:40:00] And I will point to a book that I found really, really impressive. It's from 2022, by Naomi de Ruiter and Paul van Geert, who are both in Groningen. And it's incredible: everything changes, and you cannot bathe in the same river twice. From that, they build this whole account of how we should understand psychological life and how we can study it with complexity science.

It's extraordinary. You have to see that book.

Andrea Hiott: I didn't know that.

Inês Hipólito: It's excellent. It's absolutely excellent.

Andrea Hiott: No, but it's so true, because at least that sort of math and that way of thinking in dynamical systems and complexity theory, and of course we can also go into active inference and things like that, do start to give us ways to think outside of the linearity that I think a lot of us have assumed. [00:41:00] You've been talking about our orientation, or what I think of as a trajectory or a path that we're on, and it kind of sets us up with a string of affordances and ways of seeing the world. For many of us, much of that is just assumed linearity, or even assumed duality; it's built into our language, and it's hard to get out of it even when you know it's not real, because it's so built in. But something like complex systems and dynamical systems shows us: oh, we can think of it as nested, or fractal, or not necessarily ending or beginning, and depending on which node you are, the world's going to look different. So I think it is very, very important as a tool, and I know you work with that a lot. But I wonder how to bridge Wittgenstein and that. When did that all open up in your life?
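To make the contrast with linearity concrete, here is a minimal sketch of a nonlinear dynamical system, the logistic map. This is an illustration added to the transcript rather than anything the speakers present; the function name and parameter values are arbitrary choices. The point is only that a simple deterministic rule of change can make nearly identical starting points diverge, which is why studying the patterns of change matters more than inspecting static states:

```python
# The logistic map: x_{t+1} = r * x_t * (1 - x_t).
# A minimal nonlinear dynamical system (illustrative sketch;
# the parameter values here are arbitrary choices).

def logistic_map(x0, r=3.9, steps=50):
    """Iterate the map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two trajectories that start almost identically...
a = logistic_map(0.500000)
b = logistic_map(0.500001)

# ...end up far apart: the rule is simple and deterministic,
# but small differences in state are amplified at every step.
print(abs(a[-1] - b[-1]) > abs(a[0] - b[0]))
```

Tracking the whole trajectory, the pattern of change, is informative here in a way that any single snapshot of the system is not.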

Inês Hipólito: Ah, well. Wittgenstein informs my positionality as a philosopher with respect to my positionality as a scientist, [00:42:00] in my understanding of how philosophy relates to science. So that's one of the ways in which Wittgenstein comes into play, and where I find his philosophy very useful.

The other way is his Remarks on the Philosophy of Psychology: the ways in which we understand, or can understand, psychological life. And by psychological life I mean our experiential life. So I don't mean neural dynamics or cognitive processes of an information-processing kind.

I mean the experiential side. Wittgenstein kind of lays it out in that way, which then allows us to bridge with, for example, phenomenology, and say, well, okay, in order for us to understand experience, of what the world is like for us, we can look into the work of [00:43:00] Merleau-Ponty, for example, and then we can make these connections.

Or we can look into the more enactive way, which is this coupling with the environment and how the world becomes meaningful in a certain way. Or we can connect with ecological psychology: how the world invites us into exploration, by virtue of what is afforded to us, because it is meaningful.

And it is meaningful how? Well, it's not because it is objectively meaningful. It is meaningful in relation to my past experiences and my very situated interaction with the world right now. So who I am right now is what is going to determine how and what is going to be relevant for me.

Wittgenstein has all of that in his Remarks on the Philosophy of Psychology. So it's quite interesting that what he's saying is compatible. I find his remarks on philosophical psychology to be a really good standpoint which you can then use to explore or expand into one of the E's.

Andrea Hiott: Even the way he writes. [00:44:00] A lot of my favorite thinkers, the way they write kind of ends up being almost like a practice itself. And there is a kind of nestedness and complexity, almost a dynamical systems way, in which, at least with some of his writing, when you're reading it, it does sort of feel like that.

And actually it reminds me of this: there's a paper that a friend and I just discussed, I can't remember the name of it now, but they're kind of using Anscombe and Wittgenstein to link ecological psychology and enactivism. I know the paper you're talking about.

You know that paper? Because. Okay.

Inês Hipólito: Yes. It is by a colleague of mine, Miguel Segundo-Ortin. Yeah. Yeah. Okay.

Andrea Hiott: Very interesting. So I'm starting to see now how those kind of go together. So you work a lot with the ecological and with the environment.

I mean, we haven't said it yet, but enactivism is very much about, as you said, coupling; [00:45:00] it's very much embodied, and the body is very important, not just the brain, and the environment and this ongoing coupling, this feeling of dynamism and change, all the things we've been talking about are part of that philosophy, but I should say it literally now.

But then you also work with computational models and things like this too. I know you've written a lot about it; you even edited a book about this, the mind-technology problem. Is that what it's called? The Mind-Technology Problem, I think it is.

So I guess what I'm trying to get at is: where did these things start to clash? The technology, and the thoughts about cognition and mind, or the idea of an embodied enactive cognition?

Inês Hipólito: From my standpoint, I will say that there's a lot to say.

Just one [00:46:00] pathway: my most recent work has been to think about artificial intelligence within E-cognition, because of some of the reasons I've stated here, that I find it to be the most compelling way of describing cognitive slash psychological phenomena.

Therefore I navigate in that field and in those theories, rather than other theories that I don't find as compelling as descriptions of the cognitive-psychological phenomenon. So from that standpoint, thinking about technology, and artificial intelligence specifically, is where I'm working at the moment.

So, technology itself: the tools that we have been developing over time, throughout human civilization, can be seen as a form of niche construction. There's a bunch of work developed on that, and I find it very [00:47:00] compelling. Then artificial intelligence is going to be slightly different.

It's still a tool, but it's a tool that has much more impact and influence on our human communities, cultural practices, narratives, identities, belongings, and all of those kinds of things. So that's why it requires a specific, very clear, very nuanced understanding and description. That's what I'm working on as a philosopher: to describe a feedback loop. The question is, how does AI shape, and how is it shaped by, our doings, our cultural practices?

It can be a scientific practice, it can be a technoscientific practice, right? So the idea is...

Andrea Hiott: Yeah, let's stop on that for a minute, because that's a very important point. So the AI would be shaped by our cognition and shaping our [00:48:00] cognition, which is different from the AI being cognitive or having cognition.

Yeah.

Inês Hipólito: Yeah, exactly. So what I'm pushing back on, and there have been talks that I've been giving more recently on this, is the idea that AI is this separate entity that is completely independent from our doings. And I'm starting with a premise which is very E-cognition: the premise that AI is an output of, and emerges from, our technoscientific practices.

Because I've got several reasons to pursue the premise that I'm pursuing and reject the one in which AI is an independent entity. And one of the reasons that brings me to say that AI is an emergent property of our own technoscientific doings falls within the notion of accountability.

Because once we push it too far [00:49:00] and endorse AI as an entity that is independent of our own doings, and we fail to see that, it is going to be quite hard for us to develop regulation strategies, et cetera, because we are endorsing a pathway in which we are almost doomed, a panic culture around AI where it's really hard to get it under control.

Andrea Hiott: That's where we all think we are now, in a weird way. I know, exactly. I just want to say, I want you to keep going, but I also want to talk about how traditional AI is kind of built like that in a way. And what you're saying now relates a bit to this idea of the computer metaphor of the mind, I think, too.

I can see how that's maybe connected. You've also talked and written about that, and somehow I want to try to link these things, how we have come to think of the mind as a computer. It reminds me of the difference [00:50:00] between thinking of the body as a machine, in terms of trying to study it, and thinking of the mind as a machine.

I mean, there's all this nuance in here that I feel is nested all the way through, from the computer metaphor up through what you're describing now in terms of AI. And it's very crucial, actually, because it has to do with the kind of AI we're going to build and the kind of future we're going to have.

Inês Hipólito: Absolutely, you are 100 percent right. That's exactly one of the talks that I've been giving; it's precisely on the notion that there are serious dangers. It's not just a theoretical exercise, that at some point in the fifties we had Turing machines, and we tried to conceive and develop a theory in which we would see the mind through the lens of a Turing machine.

And then we have the modularity of mind, and then we started having the computational theory of mind. So that's an exercise that you can do, and you can advocate that it's a useful exercise, [00:51:00] because it gives you a toolkit. More like the language

Andrea Hiott: game thing that you described. That's exactly.

Inês Hipólito: Yeah. And then you have: oh, wait a minute, this is actually quite useful, because now I can explain all of these processes in the brain in terms of inputs, outputs, and processing going on in the black box, those kinds of things. Useful, right?

Well, it gives us a kind of system

Andrea Hiott: three, what I call system three. It gives us a space to interact so we can look at it, you know, the same way we might have an artwork or a, you know, I mean, it does the same thing. But the problem is we've taken it as the territory somehow.

Inês Hipólito: Absolutely. So then, okay, I can see why it would be a benefit. The benefit is that now we have a common language that we both understand, right? What does language do? It helps us communicate. Yeah, exactly. So that's why it's a language game. It could be useful, but now see, as things develop, see how dangers come with that.

The dangers that come with that, from my perspective, are that it's so easy now, once you [00:52:00] make the argument, with your premises, that the mind is computational in character, then you can, by analogy, say: well, large language models are computational, they are predictive models, therefore they are minded. It's very easy to take that argument of the mind as a computer and then make the argument that computers such as LLMs are minded, right? And quite easily, now they're conscious, now they are independent entities from our own doings. And these are real-world dangers.
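For readers who want the formal object behind the "mind as Turing machine" metaphor under discussion: a Turing machine is just a tape plus a lookup table of state transitions. The small sketch below is an illustration added to the transcript (the bit-flipping example and all names in it are invented for the sketch); it shows how the model reduces everything to input, output, and rule-following in between:

```python
# A minimal Turing machine: state + tape + lookup table.
# This example flips each bit of a binary string and halts.

def run_tm(tape, transitions, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    cells = list(tape)
    head = 0
    while state != "halt":
        if head == len(cells):          # extend the tape on demand
            cells.append(blank)
        symbol = cells[head]
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells).rstrip(blank)

# Transition table: (state, read) -> (next state, write, move).
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip))  # -> 0100
```

Whatever one thinks of the metaphor, this is all the formalism itself provides: symbol manipulation by table lookup, with no body and no environment to couple with.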

Andrea Hiott: Yeah. And it's happening. And I think we have to stop on it just for a minute, because it's so nuanced, right? You can correct me a bit; let me just be messy with it. It's like we got so used to thinking of minds as computational, as computers, that we now think the things we built, what I call system three, like language, these machines, these tools, are actually kind of like the language that we're using to share with each other.

But we now think that those [00:53:00] things that we built using these metaphors are somehow as good as us, or separate from us. It's a very strange, difficult thing to pull apart, what's happened now. And the danger, as you're kind of trying to say, is that we then start to trust them: we've programmed them in a certain way and trained them, but then we think they're actually trustworthy and creating this on their own, which they're not, really. But at the same time they can create new things which make it seem like they are. So it becomes very, very tricky. There's no transparency.

Inês Hipólito: Absolutely. See, this is where philosophy must come in; that's the role of philosophy. And again, we go back to Wittgenstein, because I don't want to take credit for it.

It's precisely to eliminate confusions of thinking, confusions in the ways in which we are stacking beliefs upon beliefs [00:54:00] and building a theory, right? So sometimes you might have to reverse-engineer, go back and test every single layer of that stacking that got us into a certain theory, and really use our best critical thinking to adjudicate whether it is useful beyond giving us a common language, a way, I don't want to say pre-scientific, in which we can talk about it, as long as we do not take it for an ontology. And I feel like the computational theory of mind began as a metaphor, and at some point we lost track of that a little bit and began using it as an ontology.

Now, by virtue of that very idiosyncratic way of thinking, you have a very easy way into thinking about AI as being quite similar to us, to the extent that you start advocating for AI as being independent. Now, other than the [00:55:00] philosophical implications, where I think that kind of analogy is a mistake, it also has, maybe more importantly, societal implications, where you start generating what is very clear in media outlets: a panic culture.

And that's quite useful for avoiding regulation, because then people are just scared and don't know where to begin. People are easily manipulated. Yeah, we've seen what happens in the past. Exactly. And what I've been working on is to shed light on the fact that AI is not an independent entity apart from our own human doings; it is completely dependent on our doings, and it is completely idiosyncratic to our sociocultural practices.

I'll give you a very real-world example: smart assistants. Smart assistants are typically women-presenting, [00:56:00] submissive, docile, here to assist, right? So this is a very idiosyncratic, sociocultural way of doing technoscience, of developing AI. Or you have other examples, like humanoid robots.

And we have developed that work with Kate Finkel, a social roboticist in Stockholm, and Mariette Lee, a brilliant feminist epistemology philosopher in Norway. In our paper on subverting gender norms in human-robot interaction, what we are looking at is precisely the fact that AI development is completely socioculturally situated: the problems that come up emerge not out of nowhere, but are idiosyncratic to the cultures that we engage, reinforce, or challenge.

Andrea Hiott: Um, yeah. It's almost like we're going back to things we've dealt with, you know, by somehow projecting them into machines and then [00:57:00] learning from the machines, from the kind of negative stuff that we have projected into them. It's a very weird cycle. Absolutely.

Inês Hipólito: And a concern that has been emerging more prominently recently, mostly because of large language model outputs, is the worry that some of the progress we may have made as a community, or within certain cultures and communities, might be being reverted, by virtue of having these systems that are going to enhance and reinforce certain stereotypes that we, as communities, have been making progress against.

So it's not only that it may hinder the progress; it may take us a little bit back.

Andrea Hiott: It's crazy how much misinformation or disinformation, or just lack of information, there is about it all, but you can actually think of it in a really simple way.

It's just like if we were all suddenly watching movies that were really detrimental to women. If that is being pumped into your world over and over and over, and that's what everyone's making and [00:58:00] watching, then somehow, over time, it changes things, right?

And it's no different with what we now call AI. It's the same kind of thing. It's just that people don't understand that that stuff, that language, or even action, whatever you're programming it with, feeding it, or giving it to train on, is doing that. People don't see that link, right? So

Inês Hipólito: That's why I conceptualize it in terms of this feedback loop, where AI is going to shape our sociocultural practices, right? So imagine the continuation of that reinforcement now impacting and influencing our cultural practices, our cultural identities, our cultural narratives. All the things that we do are going to be influenced

by the outputs that are AI-generated. Right. And I think that is an

Andrea Hiott: essential point that people really, really need to understand, because the reverse is also true. It's dangerous, and there's a lot of power in it [00:59:00] that could be manipulated and used in the wrong way, but the opposite is also true, right?

That we could also figure out ways to use it: it could be a good tool and generate positive things.

Inês Hipólito: Exactly. And once you formulate it like this, you have opportunities. But once you formulate it as something completely distinct, completely mind-independent, it's got its own thing going on and there's nothing you can do. There's no responsibility. You don't have opportunities.

So not only is it going to be negative, because it slows us down, because it generates this panic culture where people are like, oh, we don't know what to do with it, we are frozen here. Overwhelming, exactly. Not only is it detrimental, but we can't even think about opportunities; it's much harder to use it for opportunities. So you have to see it from the very early stages. This is [01:00:00] how AI content is going to impact and shape our cultural practices, narratives, and identities, in a very embodied way.

Um, then you also have the other way around, which is how our social cultural practices are the ones at the basis of the design and development of certain, um, technoscience tools rather than others, in a way that's so

Andrea Hiott: crucial. We need to say it again, because it's really that looping, that coupling, whereby our actions, our priorities, our values, and our awareness of the AI itself, um, and our ability to access it, and the transparency of it, but also just our actions and what we're reading and thinking and talking about, all of that creates what AI does for us as a tool, which then shapes again how the information comes back.

So it's this loop. It's not even a loop, it's more a spiral, right? Because you're never coming back to exactly the same place. I think it's also why dynamical systems is so important, because you can start to visualize what's happening. [01:01:00] And you're doing that as a philosopher, but that's such an important thing, because if people can find a way to understand that that's what's happening, it takes that fear and that power to a different level.

And it also increases our responsibility. I mean, we are responsible, even if we don't know we are.

Inês Hipólito: Exactly. So what the panic culture does, when you see AI as an independent entity, is that it takes power away from us. And that's quite useful for markets, and I'm sorry that I have to mention that, but, you know, this kind of power dynamics is the world in which AI is being developed, right?

So then it's going to take away our agency. It's going to take away that kind of thinking, which, by the way, comes from the computational theory of mind. So now we have come, I think, from my perspective, to a point where we understand the dangers. Okay, it might be useful to have this conceptual toolkit at some points in time, but if you continue to tell that narrative over and over and over again, you reinforce that computational theory of mind narrative.

And then you end up with this kind of reasoning [01:02:00] where either you have to reject the theory altogether, or you have to continue with it completely, which I don't think is going to be quite useful for us to move forward, because then we are missing out on understanding how there is a reinforcement loop, or spiral, happening in the ways in which the values,

the practices, the narratives that we hold as a community are at the basis of the certain kinds of AI systems that we are developing. And it can be in terms of the ideation, when you're thinking about it, because the view there is that we don't have infinite resources, energy, and time.

So then we have to allocate, like with grants in science, we have to allocate resources to a specific tool that we are going to develop. And the decisions made about which tools are going to be useful, which tools we should develop, they don't come out of nowhere. They are completely situated in the social cultural environment, as part of that mixed environment. Um, so then those decisions [01:03:00] about which tools are going to be the ones we want to invest in are going to come from that

culture, a culture that has certain values, certain identities.

Andrea Hiott: People understanding that that link is there is so important. I think that's incredible philosophical work that can be done just to make that apparent because it's in, in the fast pace that we live in now, it, it just gets kind of washed over, right?

Especially if you're not thinking deeply. So that's another good argument for critical thinking and philosophical work. Um, but just to kind of link it to your work with active inference too: I always think computation is a little bit like control. It's kind of built on it, or it's kind of exciting because it gives you a sense of control, or of how things work.

And I think that speaks a little bit to what you were just saying, in a way. Um, but now we've kind of reached this point where we're able to set sort of dominoes in motion, of programming that we then don't really understand anymore. And that gives the [01:04:00] feeling of power and fear that you were talking about even more precedence.

Um, so I know that there's some talk, with Verses and active inference AI and such, of designing AI in a different way, which would be more transparent, um, would probably be more ecological, if we can use that word, or environmental, in the sense that I guess it would be able to learn in real time versus being programmed and set on a kind of domino course into some area we don't understand.

I know this is too big a subject to say much, but I just wanted to at least bring it up a little bit and see if you have something to say. And then I want to get briefly to the idea of the, um, the Gaia paper, to kind of wrap up,

Inês Hipólito: on a positive note,

right. Um, yes. So the idea is that, so this is quite recent, we've been working on active inference as a framework to understand the brain situated in the [01:05:00] body, the body situated in an environment, um, with all that comes with interactions with the environment. And this is a very computational framing. What we are after, by virtue of using the active inference goggles, is to understand the pattern dynamics of that coupling.

And, um, by using this computational modeling, it is possible for us to look at natural world patterns. So what we can do is develop the best, um, or the most accurate descriptions and theories, philosophically, that we can, and use them to inform computational modeling. So that's what you were just

Andrea Hiott: describing about the loop or what I would call the spiral.

Exactly. That can actually become part of what a person understands AI to be, which right now it isn't, because of the control and the manipulation. And, um, right now it's easier... like a lot of our social media, I'm not saying [01:06:00] there's a bad person creating bad social media, but it's the way that it's gone with the computation,

the way, all the things we've talked about, that we've sort of assumed that the world works computationally. Um, now we have a lot of social media and stuff that's kind of built on those algorithms of getting our attention and control, and all that stuff we talked about. So I guess what you're saying is, if you can rethink and sort of disrupt, to put a link to the Gaia paper, disrupt, um, these ways that we're assuming AI works, then we can start to maybe orient what AI is towards a healthier place.

Inês Hipólito: Absolutely, because what you are then targeting is the very beginning of something. So what you're targeting is to understand that accountability and responsibility come from the very early stages, where you have an idea of a problem that needs a solution, right? So then what you do is propose a solution to the problem, even before you start designing and developing.

Um, [01:07:00] so what you do is look at AI as a life cycle. You break down the complexity of an AI system into its phases, right? And this is, you know, you could understand that in a very active inference, physics way: you have phase one, which is the problem coming up and the idea being formulated to address the problem.

And then you have the design, the development, implementation, monitoring, and you can have this phase space, um, development of a life cycle of an AI. Because then you can look into how AI is continuously impacting and changing the settings in which it is implemented. So you can look at it as a life cycle.

So then what happens is, at a very early stage, when you understand that this is a product of our own doings, right, rather than thinking it's something that is out there, so, um, at the very early stages you can ask fundamental questions as to who is this going to benefit, who is this for, what are the foreseen consequences of implementing this kind of [01:08:00] thing.

Is it going to be accessible to everyone? Is it sustainable? Is it culturally sensitive? And you ask these questions from the very beginning. What values

Andrea Hiott: does it promote?

Inês Hipólito: Yeah. Exactly. Exactly. And then you can go into... because people tend to focus a little bit more on algorithm biases and data biases.

And those are very important questions. But even before that, you have the first stage, or the first phase, in which you have the problem and the solution that we propose. So by virtue of looking at the problem itself, you can already identify the kind of culture we live in. Right?

So for example, I'm thinking of like girlfriend app, which is a terrible idea, right? So by virtue of looking at the problem that you're trying to solve, it's already very telling of the culture that we are living in. So

Andrea Hiott: focusing on that space. Just creating that space of getting people to step back, because I really think most of us, a lot of people who are using technology, are just so, um, stuck to it, you know. This goes back to the [01:09:00] Buddhist kind of cultivating awareness or mindfulness or something.

Philosophy does that too, critical thinking does that, where you can just stop for a moment and disengage from whatever it is, I mean, the thought or the technology or the tool, um, and ask kind of why you're using it and what it's leading you towards, you know, and start to look at this looping that's gone into it.

Who created it? What do they want it, want from me? Is this really the value that I want? Because right now we just assume if it's there and it has a hundred thousand followers, it must be right.

Inês Hipólito: Yeah, yeah, yeah. And so one issue, there are many issues, but one that I'm going to highlight that has become very prominent in the last year, is the ways in which students engage with large language models in order to be able to do their own, um, academic work, right?

Um, so I've been participating in sessions with students, and there it's precisely what you were [01:10:00] saying. It's precisely taking the step back and asking, what kind of epistemic experience do I want to have? Do I want to have an epistemic experience at all? Right. Because if I just want to have a diploma at the end of the day... because the thing is, we cannot really regulate, um, large language models and how students use them.

So this is a very specific case study, right? How students use and engage with it. The only thing that you can do is appeal to critical thinking. So in this world that we are treading and navigating at the moment, where, um, AI is absolutely pervasive, the best thing that you can do as a student is to become very skilled and very strong in critical thinking, such that you can precisely do what you are saying.

Take that step back and ask those important, meaningful questions. What kind of epistemic gain, epistemic experience do I want to have, if I want to have one at all? Or do I want to just have my diploma at the end of the three years, and that's it? But at least you [01:11:00] have clarity, right? At least you know what you're doing.

Andrea Hiott: And you can even start to, I mean, just to go back to the embodiment... I think a lot of us, or a lot of younger people, like the students, they're so involved in it, they don't even take a moment to see what makes their body react in a positive way or a negative way. Because quite often, with a lot of that stuff, you're getting a lot of negative feelings and a lot of negative stuff.

And when you think about what makes you feel positive, it's often something like being out in nature, hanging out with your friends without looking at your phones, or being in love, or these kinds of moments, right, or doing yoga or running or playing sports. All these kinds of things which I feel like, with the computational mindset and all that we've been talking about, and the way that AI has gone, um, and the way that it's now latched onto our attention, we don't take the moment to realize that that's not actually what we want, that it's not even actually where our most valuable moments in life [01:12:00] are. And just as you were saying with the students, if they can think of that, like, is this where I really want my life to go? Do I want to spend my life feeling like this?

Probably not. There's another way. It can be so amazing, and you talk about that a bit in the Gaia piece, the Frontiers piece, uh, because you talk about something called biophilic deficiency syndrome, I think. And just to way overgeneralize: it's also about all this nestedness, right, that I think with the computational mindset and this current AI mindset that we've been talking about, we've gotten away from understanding the nestedness of the living system, and that it's like living and dynamic and changing.

So maybe you can understand, unpack that just very quickly or, you know, the biophilic deficiency syndrome.

Inês Hipólito: Happy to. Um, so to connect, this bit is not in the paper, but to connect to what we've been discussing, I think that that [01:13:00] is another danger. The biophilic deficiency is another danger of holding

the computational theory of mind, right? Because of the ways in which it leaves out the value of our relations and closeness, in a very embodied way, with nature, which is actually well documented in scientific literature, right? That engaging and being in nature and the ocean has a very positive impact.

There are some countries There's plenty of

Andrea Hiott: research supporting that.

Inês Hipólito: Absolutely. There are countries where GPs are prescribing nature, right? Precisely because of that literature. And once you hold, or at least in the ways in which I have seen the cognitive science literature holding, computational theory explanations of the mind, or the experience of the world, or the brain, there's very little room, if any, for the [01:14:00] value of embodied interactions with nature.

So, you know, cognition comes down to information processing, and then the body is a vehicle for that, to gather more information for the brain to be able to do that computational processing of information. There's very little room for us, working within the space of that theory, to unpack the value of nature for our embodied health.

So that's another reason why I don't tend to go to the computational theory of mind: because it doesn't allow me to speak to that scientific research on the value of our embeddedness in nature quite directly affecting our embodied health. So, um, in the paper we talked about biophilia deficiency, for lack of a better concept.

Maybe in a different paper I will come up with a better concept. But the idea there is that, um, it seems quite clear from the literature that [01:15:00] humans and non-human animals tend to be continuous with nature. They're part of these ecosystems, this nestedness of ecosystems, um, in a way that is quite crucial for their well-being, right?

And then, um, what we do there is apply active inference to that particular understanding of the coupling between the human species and the ecosystem's sustainability, the environment, right? Or those questions, or the experience of the climate crisis. And if you depart from, um, an active inference perspective, and an enactivist perspective, in which they converge, then we would say that, well, a living system is in the business of whatever they do, the actions that they engage with.

Typically, of course, there are exceptions, and exceptions are what is really interesting to study in cognitive science. But typically, [01:16:00] however you go out into the wild and find living systems, like non-human animals or humans, behaving, there is one thing that is quite common to them all: they do not want to die.

And sometimes they might even put their individual security and safety on the line for the better of the group, right? For the species, right? And then here we can talk about Darwinian evolution and all of those kinds of things, and niche construction, um, et cetera, right?

So, we can engage with all of that literature to say that we engage with the environment in ways such that we want to stay here. We want to be alive. We want to change the world such that we make the world more suited for our own existence and continuation. And that crucial and vital work is done by engaging with the environment.

So if we find a species that is quite aware that the ways in which it is engaging with the environment are going to [01:17:00] be lethal to its own survival, that's quite strange. Yeah, right. That's strange. So if we're talking about our understanding of the climate crisis, or of the ways in which the human species behaves and engages with the environment of the planet Earth, that's the Gaia, um, it wants to be coupled with the environment for its own survival, or the survival of the species.

And they act upon the environment for their own survival and for the survival of the species. Now, if we take the overview perspective and we look at the species as acting in ways that are not conducive to the survival of the species, then that's weird. And that's what we call the biophilia deficiency syndrome.

It's the lack of connection with nature. And the disturbance there would be for us to find ways in which we reconnect with nature. Because what we've been doing, [01:18:00] it seems, given that it's weird that we engage in actions that are not for the survival of the species and, by extension, the planet, that's kind of weird.

Then we need to find solutions, which would require us to treat this as a condition, rather than just a state of things. As a condition. And potentially the disturbance that we need is to reconnect with nature and with each other.

Andrea Hiott: Yeah, I think that's really powerful, and there's a lot more I imagine you'll explore in all that, because, again, it's a sense of nestedness and scale. As you were talking, I was thinking of when people mess with the pheromones of ants and the ants end up just going around and around in circles.

Um, I feel like in some way we've kind of messed around with our toolkit, and we're directing ourselves around in a kind of unhealthy way. If you read that paper and zoom out a bit, and maybe the next few papers, when you're talking about [01:19:00] that, there might be a way in which that itself is kind of the disruption, if we can use it now as the opportunity and see that we've done that, and now maybe start to look at a different way of being in the world, or being as the world, you know. We could maybe even use our technology to understand ourselves as Gaia.

I mean, not to get too mystical, but as what is definitely the case: nested life in nested life, which is what our body is, which is, you know, what life is. So I do think that's a hopeful way of connecting all the other things you were talking about with the AI, that there's something really positive and exciting and kind of a whole new world in there, if we can just figure out this shift of mindset. Because I really do believe it is a kind of shift of mindset too, which is why the philosophy becomes so important,

Inês Hipólito: Yeah, absolutely.

Yeah. And the opportunity for that, I think, requires as a condition that we really [01:20:00] look at the AI and environment dynamics as social cultural environment dynamics, in which case we would need that shift of understanding, such that we can then use AI to help us improve ourselves as a human species that is sharing the planet Earth with all of the rest of the species in the natural world.

Andrea Hiott: And not this linear thing, but this ongoing kind of birthing of activity. Absolutely. It's beautiful. It's gorgeous. It's dynamic. It's alive. It is. And it's wonderful. And I think there's something about computation, I just have to say, like, I do understand why people go into that mind, that space, because the body is hard and life can be hard and you feel judged.

You know, there's all this stuff with the body that can be hard, a lot of it because of the technology that we've built. So it's all connected, right? We can build healthier ways of, um, observing, appreciating, being in our bodies, [01:21:00] too. That loop you described has gotten a bit unhealthy, so I can understand how people just want to think it's all computational and, like, go into the video game and download yourself and get away from the body.

But I think what we all know, really, is that life is not that, and we need a way out of it. So it's great that you're opening those paths. So, the last question. Of course you can say anything you want that you haven't been able to say, but I do want to think about that question about objectivity you brought up at the beginning.

And if that itself is a kind of practice, like not answering that question, but just letting that question kind of be there. Or if you've answered it, that you know what objectivity is and it doesn't exist.

Inês Hipólito: Um, yeah. So many years after my high school, many degrees after, and many interactions with, um, brilliant minds and talented people that I care so much about and who have taught me so much, um, yeah, I [01:22:00] tend to think that, um, we have limits to that objectivity, and that it is not possible. Which is why, um, our best hope and our best attitude is to, um, fully embody humility about our perspective and understand that our perspective is our perspective, um, and that the best way to check our perspective is by communicating our ideas with each other in a very genuine way, um, and checking whether they are reasonable, whether they make sense. Um, so it goes back to the Wittgenstein quote: by language, all confusion comes up, but by language we may also be able to eliminate, to dissolve, confusion.

And I think that's a little bit the sentiment: that we do have a limited perspective upon the world, a limited stance upon the world. We cannot [01:23:00] fully understand other people's perspectives, and that should give us the humility to communicate with each other, to improve together the description of the world.

Andrea Hiott: I think that's really good, and that sort of makes me think that the whole idea of subject and object becomes a bit of a language game, and not ever something that could have a reality in the process, or in the dynamism itself, um, which, you know, you've expressed in other ways too. But is there anything that we haven't talked about that you really want to make sure you say?

And of course, I can put a lot of stuff in the show notes, but is there anything on your mind, or that we didn't get said?

Inês Hipólito: No, I think we pretty much covered many of the aspects that I really wanted to talk about. Emphasizing, of course, because of the times that we are living in, where AI is pervasive in, um, our individual development, our shaping of our experience individually. And that pervasiveness and shaping is very [01:24:00] embodied, in the sense that the ways in which AI sits in and is impacting our worlds, our communities... the experience that one has depends very much on the body that we have within the social cultural dynamics and power dynamics that exist.

So people are not impacted in the same way. It's very intersectional. It depends very much on, um, the kinds of cultures that are behind the programming, the designing of AI systems. And people are impacted differently depending on where they sit in these power dynamics, depending on how they look and their social cultural background.

So we need to have that very much in, um, our awareness, rather than, um, I think, endorsing a computational view of AI. Um, because people are impacted differently. And the thing is that it's going to [01:25:00] reinforce these already existing asymmetries, where the people that are more fragile are going to be much more negatively impacted than the people that are sitting at higher levels of the hierarchies.

So I think that the future is to look at how we can develop a culturally sensitive AI, or culturally grounded AI, such that it is sensitive to the fact that there is diversity around the world and that Western culture is only 19 percent of the world. There is diversity, and we should include that.

Andrea Hiott: Yeah, very well said. And my hope for AI and technology is that we can build ways of helping each other understand that we are all inhabiting different spatio-temporal positions. Um, we have made our way differently up to this point, and that whole path is affecting, is creating, how we're experiencing the world.

I think it's just too hard right now for us to not assume that everyone's having the same phenomenological experience [01:26:00] as us. And even though it might be 80 percent shared, that 20 percent that is different is really different, according to how you've been treated, what's been said to you, where you were, where you lived, every road you've walked, every book you've read.

So I really hope that this kind of AI, maybe even active inference AI can help us find ways of, of understanding each other's positions more and. In that sense, I see a very beautiful future. I don't know about you, but

Inês Hipólito: Well, hopefully so. I think that the first step is to generate this awareness of stepping outside of, um, the usual perspective,

and taking that stance on diversity and inclusivity, for culturally sensitive ways in which we do science and in which we do technology and in which we do philosophy.

Andrea Hiott: Yeah, it's very important. And thank you for championing that and being a voice for that. I appreciate it. Appreciate the work you're doing.

And I can't wait to see all that is to come. I don't know how you do it, so many papers and so many [01:27:00] collaborations. It's, like, almost mind-blowing, but I hope you continue finding that reservoir of energy.

Inês Hipólito: yes, I hope so too. There's a lot to be done.

Andrea Hiott: There is, you just have to prioritize, I guess. And

Inês Hipólito: I'm always very happy about the collaborations. It's my favorite, because of what we've been discussing, because it's a check, right? We work together. And that's good.

Andrea Hiott: No, that's true. All right. Well, I hope you have a beautiful day there.

Inês Hipólito: Thank you so much. And you too. And thank you so much for having me. It was a great pleasure. It was fun participating. It was fun to talk to you. Beautiful discussion. Thank you.
