Adaptive Resilience with Maria Santacaterina

Remembering our strengths with Maria Santacaterina and Andrea Hiott

Navigating the Human-AI Relationship: Trust, Values, and Adaptive Resilience. In this episode, Andrea has a riveting conversation with Maria Santacaterina, CEO advisor and board director, about her book 'Adaptive Resilience.' The discussion explores the evolving dynamics between human intelligence and artificial intelligence, emphasizing the necessity for technology to complement, not replace, human capabilities. Topics include the critical role of ethics, the expansive potential of the human mind, and the journey of earning trust amid complex transformations. The conversation underscores the need to align technology with human values and environmental sustainability, while fostering communication and multidisciplinary approaches to leadership in various sectors. Tune in to rethink the human-AI relationship and envision a resilient, equitable future.

#resilience #artificialintelligence #mind #intelligence #potential

00:00 Introduction: The Limits of Computing Human Emotions

00:24 Reconceptualizing the Human-AI Relationship

01:12 Exploring Adaptive Resilience with Maria Santacaterina

02:55 The Intersection of Humanities and AI

05:46 The Distinction Between Human and Artificial Intelligence

06:47 The Role of Knowledge and Consciousness

10:20 The Complexity of Modeling Human Experience

14:01 The Evolution of Human Intelligence and Technology

14:58 Ethical Implications of AI in Society

26:15 The Importance of Human Values in Technology

45:08 Irreplaceable Value of Human Experience

46:42 The Constraints of Technology on Human Potential

48:50 The Dangers of Oversimplification and Surveillance

49:57 Global Implications of AI and the Need for Human Agency

53:53 The Importance of Nuance in Data and Decision Making

56:30 The Role of Communication in Technology and Society

01:08:32 Ethics, Governance, and the Future of AI

01:18:30 Hope and Vision for a Human-Centric Future

Buy the book: https://www.amazon.co.uk/dp/1119898188

LinkedIn post mentioned: https://www.linkedin.com/posts/marias...

Maria Santacaterina is a sought-after Global Strategic Leader & Board Executive Advisor. She is a renowned author and speaker who re-frames the Human-AI relationship. “Adaptive Resilience: How to Thrive in a Digital Era” (Wiley) offers a comprehensive strategy for business leaders seeking to re-invent the Enterprise and build a sustainable digital future, while responding and adapting to an accelerating rate of change, increasing complexity and rapid technological disruptions.

Santacaterina Consulting: https://www.santacaterinaconsulting.c...

For humanity: https://forhumanity.center/fellow/mar...

Join the Substack: https://lovephilosophy.substack.com/

Go deeper with Community Philosophy: https://communityphilosophy.substack....

About this podcast:

Beyond Dichotomy started as research conversations & has expanded beyond my own academic pursuits towards noticing the patterns that connect across traditional divides.

When I started my studies, there was so much I wanted to explore that I was told I shouldn't explore because it didn't fit into this or that discipline, but having studied and worked in so many fields, those barriers no longer made sense. The same felt true relative to passions and love.

So I decided to open myself to all of it beyond traditional distinctions, towards learning and development. This podcast is where those voices gather together in one space as I try and notice the patterns that connect.

It's part of my life work and research, but it's also something I hope to share with you and to invite you to share your perspective and position. Thank you for being here.

The main goal is to have conversations across disciplines to understand how our approach to life and cognition might address some of the urgent divides we face today.

By love and philosophy, I mean the people, passions, and ideas that move us, shape the trajectories of our lives, and co-create our wider social landscapes.

Partly due to my trajectory in philosophy, technology, & the cognitive sciences, I’m hoping to better observe binary distinctions in our academic & personal lives: What positive roles have these structures played? How might rethinking these structures & their parameters open new paths & ecological potentials?

www.andreahiott.com

#andreahiott #loveandphilosophy

TRANSCRIPT:

Adaptive Resiliency for Humans with A.I.

Maria Santacaterina: [00:00:00] You can't compute human emotions, in my view. You can't compute human intelligence, let alone human potential.

And I think that's really what adaptive resilience is about. It's about exploring and discovering and learning about our potential. And that's the expansive power of the mind, which I don't believe we can put into a computational environment. Maybe I'm wrong, but that's my feeling.

The human-AI relationship needs to be reconceptualized. And we need to put technology into the right place, into the right context, into the right dimension.

I don't think we can reduce ethics to some sort of finite set of variables or set of rules or algorithms. And if you talk to some of the people who are very much in the field, they will tell you: we do not have the mathematics to complete the system. And we need to recognize that.

But even if we were to stop at what we have today, and we were to use it in a better way, in a more human way, we could [00:01:00] perform magic and miracles, in the sense that we could start to solve some of the most intractable problems that we have in society. We could, but we're not doing it.

And that's very regrettable. It's a missed opportunity.

Andrea Hiott: Hello, everyone. Welcome back to Love and Philosophy: Beyond Dichotomy. This is a conversation with Maria Santacaterina, who is a CEO advisor and board director. It's about her wonderful book, Adaptive Resilience, which is about trying to find a way to understand the relationship between AI and humans that could be inclusive, equitable, sustainable, and transform organizations and businesses in ways that we haven't quite thought of yet but really must think of in order to move forward with the challenges we face. She wrote a LinkedIn post recently which I think summarizes it quite well, especially in terms of beyond dichotomy, because she asks: is AI good or bad? It's neither. But it also [00:02:00] isn't neutral. So this speaks very well to this idea I am constantly talking about, trying to hold the paradox, and in so doing develop a more nuanced understanding and discourse about what AI intelligence is, and realizing that our tools are not human. It can sound so simple, but it can also make people quite angry from one side or the other. But it's just a fact that artificial intelligence is not the same as human intelligence. And it matters that we understand this and look into the nuance and the difference of it, if we want to have a resilient future as humans in relation to our technology and to the planet and to one another.

So all this is part of Maria's book and it's what we talk about here today. And I look forward to hearing what you think about it. Hope you're doing well, wherever you're making your way.

Thank you for being here today.

Maria Santacaterina: Hi, thank you very much for inviting me. Delighted to be here.

Andrea Hiott: So I was telling you just a minute ago a little bit about how I found your [00:03:00] work. I was coming at it from resiliency in terms of environmental issues, of mobility and things like this, and philosophy and technology and neuroscience and, interestingly, trying to connect it to business. That's a lot of connections to make, but your book is really beautifully in the intersection of all of those things. So just to start, I wonder if there was a starting point for you, or if, from the beginning, it was all connected together.

Maria Santacaterina: So that's a really good question, Andrea. So where did I start? That's a good question.

Andrea Hiott: I don't really believe in beginnings or ends anyway, to be honest, but where can we start to understand it? Because there's so many paths and threads in there.

Maria Santacaterina: Well, I have a humanities background. So my academic career, if you like, let's start there, began in studying modern languages and literature and philosophy and politics, and identity and sociology courses in the process.

So as much as I learned about language, I also learned about the social constructs and really through literature learned about culture and different cultural influences and how that somehow is all intertwined [00:04:00] and you really can't separate humanity's evolution into distinct categories.

And so that's quite revelatory when you have that kind of training and you start looking at AI, but before we get to AI, I'll just interject that I also did a master's in international relations. And that is something that I think is going to become even more relevant, especially, in the trajectory that we're currently following.

Andrea Hiott: Yeah, just to set that up a little bit as you're telling the rest of your story: I'm very interested in how all these different paths and ways that you studied, this learning, this curiosity, informed the vision that you have in the book of Adaptive Resilience, because I think that's a big contribution of the book. Reading the book, you want to learn more and you want to try to see the world from different angles. So I suppose that was part of your mission.

Maria Santacaterina: That was my mission. My hope and my dream was to be able to reach an audience, inspire people, and say: look, there are many different possibilities that we can [00:05:00] imagine and that we can pursue and that we can create.

And I think that is the huge distinction between us and any mechanical means that we may invent, create, develop to help us, and that can contribute to our learning, certainly. But what they can't replace is our human learning. That's the distinction.

Andrea Hiott: What you push back to the front is this curiosity and knowledge and sensuality of exploring the world from many different angles. It's not that you're against technology and AI, of course.

Right. Yeah.

Maria Santacaterina: It has become my passion, and the more I learn about AI, the more fascinated I become by AI. But I try to make it clear in the book that AI is a thing. And we are not things; we are animate human beings. That means that we are complex and we continue to evolve every single day.

You can argue on the other side of the fence that AI is doing the same thing. At its absolute best, it can imitate, mimic, simulate, mirror, whichever term you would like to use, an element of [00:06:00] our lived experience. And I say lived, not living experience. And there's a big distinction in that too.

So anything that is pertaining to the past perhaps can be somewhat modeled, but it's only a fragment of what that might look like. But the model can't think, reason, feel, believe, imagine, inspire, motivate, all the things that we've been talking about just now in terms of human learning, the same way we do.

And in fact there's a bit of a myth that I think we need to bust, and that is to say that data and information are not the same thing as knowledge. For me, knowledge pertains to consciousness, human consciousness, and it pertains to wisdom. And in that sense, it is something that is spectacularly human. I don't think we can transfer knowledge through machinery. We can certainly help people to understand elements of the reality, but then we have to distinguish between [00:07:00] what is called data input that generates a data output, and what then becomes an outcome. And therein lies human intelligence, because we are the interpreters par excellence of our reality.

So we have the ability to explore through science, through the arts, through the humanities, all aspects of the world and of our living planet and of ourselves.

I am at heart a humanist. However, I very much appreciate science too, and also technology, so I went back to Aristotle. I'm in favor of his vision of virtues in the sense that virtues pertain to human values, and they pertain to the essence of our humanity. And so that's why I'm quite a big fan of that way of thinking. Because how else can we distinguish ourselves from inanimate objects?

If you think about it, this reduction of the human brain is something that came about midway through the 20th century, if I'm not mistaken, when computer science emerged as a field of study. And so this idea emerged that [00:08:00] the computational interpretation of the mind was going to be, you know, the foundation for what we have today as technology.

If we reduce it to that level, then we are really denying our human identity, our human dignity, and, I would argue, the essence of our humanity, and by that I also mean intelligence. And intelligence is complex. You used the word sensuality, and that's certainly a part of it. We talk about aesthetic value in philosophy and we talk about beauty in philosophy.

And even if you talk to the scientists or the mathematicians, they will also talk about that and they will also talk about intuition and instinct. So how does science come about? So think about it for a second. It doesn't come about just by accident. It comes about because somebody somewhere has started to think about something and has an intuition, an inkling, an instinct, some sort of a hint that something might be this way.

And then you create an experiment and you try to gather evidence and you try to empirically [00:09:00] test something, but it is an experiment. And so the result from that experiment is not the answer necessarily. It's not the truth. In fact, we don't have a definition of truth or human intelligence or human consciousness.

And through the ages, we have been trying to establish what that looks like. And so some of us have turned to religion, and there are many expressions of religion. Some of us have a sense of spirituality. Whatever it is, we are beings. That means that we are living our reality as we perceive it, as we experience it, as we interact with others, and also with our surroundings and the bigger picture of the environment and what's happening in the world around us in general.

So all of those dimensions, I would argue, are very difficult to, let's say, in inverted commas, 'model'. How can you model the complexities of an ever-changing reality in something that is actually quite static, and that is anyway defined by a computational [00:10:00] environment, a set of rules, a set of algorithms, and so forth?

And somebody's worldview, because don't forget, it's not that the machinery exists by itself. Somebody somewhere has come up with a set of rules and put in their own personal worldview, biases, if you would like to call it that. But you can't compute human emotions, in my view. You can't compute human intelligence, let alone human potential.

And I think that's really what adaptive resilience is about. It's about exploring and discovering and learning about our potential. And that's the expansive power of the mind, which I don't believe we can put into a computational environment. Maybe I'm wrong, but that's my feeling.

Andrea Hiott: As you were just talking, I understand what you mean, that we can't, you said we can't model it, but I'm not sure: do you mean we can't model it, or that the models aren't the territory? Because for me, the modeling is fine. That's what science is, and that's what we do.

We model things, we create diagrams, we create [00:11:00] visualizations, books, what I would call representations, by which I always mean external, not like in the brain; we represent things in the world as models.

And that's how we learn, because, like a book, right? We all come to the same book and we, in a way, begin to share perspective through that creation. And I think technologies can be like that too, and we can model things. But the problem I see, and maybe you can push back on me or explain to me where you're coming from, because I think we can model these things and that's not the problem, is that we've somehow begun to think the model is the territory, and that we don't understand we're just modeling. So the whole point of being, communicating, and finding ever more ways to be human and better humans, as you express in your book, all of that becomes collapsed in a way, because we've thought now the thing we built as the map has become the territory.

The model is the important thing, not the answer.

Maria Santacaterina: Exactly. That's what I mean. That's why I'm pushing back because I think, for a scientist or somebody who [00:12:00] is competent in the field or who understands what a model is and what it isn't, what it can and what it can't do, that's absolutely fine.

But where I see the problem is that we're even teaching kids now through an iPad. We're teaching kids mathematics through an iPad. Well, what kind of mathematical reasoning or logical reasoning is that going to develop? Not very much, I will argue. And so you take that into the workplace, or you take that into the broader society.

And there is this assumption that people understand what the technology is and isn't, what it can and can't do. And that is not the case.

You take it into the workplace or critical settings, and it's even worse. So, when you model something, you can model an element of it, a part of the reality. You can't model the whole.

We don't have a computer big enough to model the whole of our existence, because that's what we're really talking about. We're talking about human existence.

Andrea Hiott: We talk about digital twins and things as if we can do that now, right?

Maria Santacaterina: Yeah. And then add into the mix other living organisms [00:13:00] and our interactions and the complexities of our interdependencies and our interconnectedness and so forth and the interrelations and so forth.

The variables are infinite. And that's what I mean when I say we can't model the world as a whole. We can model elements of it. And this is why I say techne, as Aristotle put it, was a completion of nature. So where our imagination fails, or our mind has not yet expanded to explore those new realms of knowledge, we can use technology to help us get there. So we've gone to the moon. Why? Because we now have fantastic computers that can help us get there, and we can explore space further. To say that technology is then going to replace human intelligence, or substitute for human intelligence, is really an oxymoron, because this technology actually depends upon human intelligence, creativity, ingenuity, without which it would not be able to flourish, develop and so [00:14:00] forth.

And so our trajectories are intertwined. But the human-AI relationship needs to be reconceptualized, and we need to put technology into the right place, the right context, the right dimension. So for instance, if you use an algorithm to determine whether somebody receives care or not, that is questionable from an ethical and a moral perspective, because we are not able to model the whole of human existence, humanity's experiences and variables, and so forth, and every individual is unique.

So, how can you say, because of a few variables, such as age or situational, let's say, elements, how can you determine whether somebody is suitable for treatment or not? And deny their experience, and deny their knowledge of their own person, if you like? But that is actually what's happening.

And you take it to the extreme and you have insurance companies tweaking the algorithm in a certain direction. So the machine looks somewhere and not somewhere else. And that person doesn't receive the care or the [00:15:00] treatment. So there are good and bad uses of technology.

And it is about the uses. I'll take a step back. It's about the way you construct the technology, the way you use the technology, the form that it takes, the shape that it takes, if you will. And that is determined by us. Nobody else.

Andrea Hiott: I love that you say that, and I think a lot of people have used these words in different ways, and I think that's exactly why some people get turned on or off, where it seems like, okay, she's just against technology, or another person is like, oh, she's too much into technology. But I think this point that you make is something that's really happening now, just in everyday life: people who don't study technology, and don't need to, and do other things, it seems like, in my experience, they assume the AI is smarter, and they don't understand that the reason it seems smarter is that it's just got access to more human input.

So it's not that there's some kind of magical thing happening that gets confused with consciousness, I think, that the AI is somehow creating some [00:16:00] smarter being. It's more that you just put all the stuff into it, and then it's going to regurgitate that to you in different ways, but the responsibility is still there with what went into it, which is human. Somehow there's a kind of blind spot there: people just ask the AI and they think the answer is going to be right or smart. They don't understand that the ultimate responsibility is still based in us, not that we can just ask the machine and then do what the machine says, and we don't have the responsibility because it's somehow smarter.

Maria Santacaterina: Well, I think that's where the cookie crumbles. We've anthropomorphized artificial intelligence. It's artificial, but it's not intelligent. It's not a human, but it's made to look like a human: robots dancing around and doing backflips and so forth.

This idea that we can replicate ourselves through artificial means is nothing new. I think, in my research, as far as we've discovered so far, in ancient Egyptian civilization there was some [00:17:00] sort of a robot thing going on.

And perhaps even before; maybe we just haven't discovered where our imagination took us in the past. We're still learning in that sense. So, first of all, we don't have complete knowledge of everything. We know a lot, we know enough about a lot of things, but that is very different from saying we know everything. And I think that's partly why we're human, why we have human intelligence: we are on a quest for meaning, and that is a lifelong journey, a generational and intergenerational and millennia-long journey. It's nothing that's going to end tomorrow.

And if we knew everything, we probably wouldn't be around anymore.

Andrea Hiott: That's a big part of your writing too, that it's dynamic, it's a process. That's why you can't model everything, because as soon as you put the model out, everything has changed.

Maria Santacaterina: And you've only been able to take in certain elements. So whatever data is available to you, and then, what is the quality of that data? Where does it come from? So I think when you're in a scientific environment and when you're in the laboratory, that is fantastic, because you have very [00:18:00] competent people, highly trained. They understand what they're doing and they know how to interpret the model. But then you suddenly throw that out into the 'wild', as they're calling it, and ordinary people come into contact with this thing that looks like it's a human, because it speaks in natural language, but it actually doesn't speak and actually doesn't communicate.

And it's not really natural language. It's just a pattern-matching service from some sort of statistical database, which happens to put forward tokens, as they're calling them, that look like you and I are speaking, but it's not really happening. It's just some sort of a statistical inference combined with probabilistic analysis, and then comes an output which looks realistic, but it isn't.

So when you take all of that into account, we're compressing, condensing, repressing, oppressing human intelligence, in a kind of a way, because we're conditioning people. So there's a big difference [00:19:00] between acculturation and enculturation. Enculturation is the good stuff, the exploring stuff, the discovery stuff.

I'm learning because I'm a human being and I like to find out new things. The other side of it is acculturation: you're a human being, you must do what I tell you to do, in the way that I tell you to do it. And this is the machine talking. Well, there's a problem with that. I've oversimplified it, but so people go to work and they don't feel fulfilled. They are asked to carry out tasks or do jobs, but people don't want tasks and jobs. They don't want to be atomized and disintegrated into things, automatons. The assembly line has ruled for a long time. So if you go back a little bit further, pre-scientific revolution, there was an exchange of ideas across the world.

And even if we had less mobility, I know that's one of your interest points, somehow we managed to find a way to get the ideas to flow across the world, philosophy science, [00:20:00] humanities, whatever it might be. Post scientific revolution something changed, and then you get to the industrial revolution, and you have the agricultural revolution in between and then things changed dramatically, because all of a sudden, mechanical means were to replace human capabilities.

And that has just come to a crescendo now in the 21st century, where the new version of those mechanical means that replace the power of human capability has turned into this thing called AI. And it purports to replace the human mind, but it cannot, because we don't know how to define the human mind, and it's not something that one can say is localized within, let's say, the human skull. There's a brain, and some people will argue that the brain is the mind, and I will say differently: it's not. Look at literature, look at the existentialist literature of the 1960s, and other areas that you can look at. It is impossible to define the mind.

And the point [00:21:00] is, whether you go at the mind through science, or whether you go at the mind through, let's say, humanities types of subjects, such as literature, for example, or philosophy, and those kinds of areas, nobody can agree on a definition of the mind.

And that is to say, it is something that is bigger than us. We all have an individual brain and an individual mind, but there is something collective about it too. That's why we are so entwined. And that is the collective consciousness. That is the real power. That's where knowledge is stored, if you like, and is passed on through generations. And so some people argue it's culture, it's passed on through culture and telling stories; that's why narrative is so important for us.

And I don't think we can assign the art of creativity to a computer, by definition, because a computer computes, calculates, and art is something else. It is something that is much more chaotic, much more free, much [00:22:00] more natural and organic and animate. It comes from inside, deep inside of you.

Maybe it's a place called love. I don't know exactly how to define it. In a computer it's mathematical, it's calculated, it's computation; it's a whole different thing. And you can't compare the two. An inanimate object, an empty vessel, producing an output is very different from a human being with all of their essence, sentience, intelligence, consciousness, imagination, intuition, all of the things that we talked about, the senses, the sensuality that you mentioned. All of that is not definable. It's intangible; you can't locate it somewhere, you can't say it's a phenomenon, it's not physical in that sense.

You can't compare the two. It's very different.

Andrea Hiott: Let's hold on a second here, because you said we can't, we can never, define mind and cognition, and I agree with you, because it is dynamic and it's always changing and not located in an individual; it's a relational [00:23:00] process. I'm just speaking from my own kind of feeling. But at the same time, I think part of the problem is that we haven't really thought much about what mind and intelligence are outside of a computational way lately. Coming from philosophy and neuroscience, computational approaches have been wonderful in many ways, in the same way that the creation of computers has been wonderful in many ways, and they're all intertwined, starting with Turing and all of this. But we have started to think of, a lot of people have started to think of, the mind as the brain, and the brain as a computer.

And that's something that's become everyday, in a weird way. So in that sense, I almost think that we can't define mind and intelligence, but there's a kind of underground assumption, one that's not even thought about much anymore, that brains and minds are synonymous because of all this, and that they work something like a computer, and that's why we trust the computer, weirdly, even though we're putting all the input in, as we discussed. So for me, this is why I worked with phenomenology and embodied cognition and ecological notions, because there's something about all of that richness that is mind. And it might be part of this confusion that we now think AI is going to be human. There's a weird looping going on there. Or do you see...

Maria Santacaterina: Well, not unless you're going to become a toaster, no. If you're going to become a toaster, then, yep, it's going to be superhuman.

Andrea Hiott: That's what I want to be, a superhuman toaster.

Maria Santacaterina: That's Floridi's line. No, I think we human beings are a bit more than just a bunch of tasks to complete, if you like. So I think that's where the problem lies. It's that we are no longer comfortable, or some of us are no longer comfortable, thinking, reasoning, exploring, debating, critically thinking about something.

So in this world at the moment, if you say something that is contrary to mainstream dogma, I'll put it like that, [00:25:00] well, you are cast out of society; they call it cancel culture. If you say something that is not the norm of how things are spoken of, again, you're not listened to, you're not heard. And actually that's a huge mistake, because you can't have a stakeholder economy or a stakeholder business or any kind of organization where voices are heard if you don't hear and listen to people.

And it doesn't matter what you do. But people are complex; there's a messiness to it. There's a chaotic order to it, but there is nonetheless an order. And I think we've lost sight of the art of leadership. We've gone scientific about it, but you can't put people in boxes. You can't categorize people.

You can't limit their abilities. If you do, and you use algorithms to 'uberize', let's say, there's this term now that's being used, to 'uberize' the economy, if you do that, you will have no economy, because you won't be doing anything productive. [00:26:00] You're just going to be remixing and regurgitating useless objects nobody wants, and creating more waste.

So I've exaggerated, but the point is, that is what is happening at the moment. We are not creating anything of value, and to create something of value means that you have to have values. So what are our values? And that means you have to start thinking more critically about what you're doing.

So if you're running a business, you need a vision, you need a purpose, you need a strategy, you need a culture that is vibrant and that is multifaceted and that continues to evolve as your business develops and grows. After all, what is a business? It is a social entity. It's a bunch of people, a bunch of human intelligences, if you like, let's put it like this, that have social intelligence and emotional intelligence and cognitive intelligence, and their potential is infinite.

This is the point that is [00:27:00] always missed. Human potential is never ever valued sufficiently, but it's a necessary thing to do right here and now, because if we don't, then we have machinery dictating to us what the contours are of our world. So, I think we have a choice that we need to make. We like to have tools that can help us learn.

As you mentioned earlier, you can model a reality and you can learn something from it, but you are interacting with reality through a tool, and it's helping you to understand something; you're not taking that as the oracle of truth. You're using that as part of a learning process, but the learning process belongs to you, and you take ownership of that.

What we can't do is cede our agency and our autonomy, and that learning process that I'm trying to describe here, to a mechanical tool, an object that cannot understand the [00:28:00] human form of natural language. And that sounds like a big phrase, but I've had to put it like that. It's the human form of natural language, not the mechanical form, not the reductive mathematical formulae that are used.

Shannon's cryptography was very much about reducing to the absolute minimum the elements to be transmitted. That means that the nuance of language, the layers of meaning, the richness, the complexities of culture that go into my use of language or your use of language, and it is idiosyncratic.

Each of us perceives and feels and expresses language differently. That is lost. That's what they call informational entropy. But we can't recover it through machinery. And that's what we're trying to do, which is a bit of an oxymoron. So where can we use the machinery successfully, beneficially? And that also means profits, by the way, when I say beneficial, it means profits too, because the world works on a money system at the moment.

So we need that. But how can it be of [00:29:00] benefit to people and planet? How can it help us to enhance our surroundings, elevate human knowledge, rather than destroy the very capabilities that are essential for this technology to continue? So that's the paradox.

Andrea Hiott: So, there's so much in there, but to get to international relations: when you were studying that, to me, that sounds like organizations, businesses, political groups, large groups of people working in ways that we used to think of as almost like machines. I want to understand what that learning process was like, because I feel like it's informing a lot of this. And it's actually part of this shift, of how our models now are broken and we need to reevaluate them through the lenses that you present in the book.

Maria Santacaterina: International relations is really about human relations, but it's taken at the macro level, if you will. So we look at human history, we look at politics, we look at nation states and their behaviors, we look at, [00:30:00] wars and how they come about and also the mechanics of war.

One of my papers was actually on the mechanics of war, trying to understand that kind of strange logic that goes into wars and the destruction that comes of them. I was actually at Hopkins when we were in the midst of the Gulf War. So Maastricht happened, and the Gulf War happened.

So that was quite interesting. Those were two of the papers that I did. But everything that is at the macro level, in terms of human relations or international relations, if you take a big step back, whether it's arts, humanities, or sciences, it's still about people. We're still on this quest for knowledge; we're just looking at it through different disciplines. The scientific revolution made everything quite... it put everything on a trajectory of reduction: let's reduce everything to the simplest form, the atom or whatever. But this atomization of life, if you will, is problematic, because not everybody is prepared or well versed in how to understand science.

Okay. And so it's not necessarily [00:31:00] instinctive or intuitive for many people. And so that means that when you translate science into technology, and use technology to inform people's views of the world, it becomes quite reductive, and it becomes quite dangerous in that sense. International relations is much more expansive, because it was a global view of the world.

And so, yes, I do have a bias towards a global view of the world. I couldn't put everything into the book that I've learned along the way, but I tried to pick some indicative examples. But fundamentally, when you take away the politics and you take away the constructs, and you strip everything back, we are all people, and people have this need to survive.

It's about self-preservation, but I don't mean that in a selfish way, in the way that homo economicus is depicted. We're not rational, self-interested agents; there are a few people who behave like that, but that's not everybody. Most people would like to have a nice life, and to have some quality [00:32:00] experiences in their lives, to put it this way.

And they can be very simple things. And I think you mentioned earlier about COVID. COVID was a huge wake-up call, and the hope was that we would all wake up to reality and say: we need to do this differently. We need to grow the economy differently. We need to grow, and we need profits, and we need capital growing, because otherwise there's nothing to finance public services or healthcare or whatever it might be.

So we need to grow. But the point is how. Which is where I begin: vision.

Vision in human terms is a very complex system; it is sensory and all of those things, and it connects our whole being to our environment and to others within our surroundings. Computer vision is different. Computer vision pattern-matches: a cat looks like a cat, but to learn that a cat looks like a cat, it takes how many examples?

Whereas human beings get it instantaneously. And we know that if a cat is a cat, it's a cat. Yeah.

Andrea Hiott: Through our [00:33:00] development, which is completely... like, I think you are showing that learning is different between a human and what we think of as learning in a machine. And also that learning is what we need right now, a cognitive shift as humanity. But this international relations, the reason I keep bringing it up and thinking about it in this group dynamics, is because I feel like there's a connection here. You show this historical way that things have moved socially, and it's almost as if we've become more internationally related and connected, also locally. There's been this, I don't want to speak linearly, but it does feel like it could be nested, a nested way that this happened too. But we've become more, obviously more aware of one another, more able to connect.

There's more information if we want to look for it. There's more international relations, so to speak. And that expansion feels overwhelming, I think, for humans, for all of us, whether we're in a company, or teaching a fifth-grade class, or working at the mall, all [00:34:00] these jobs now. It feels so overwhelming that we do look to technology to help us, because we're all humans still, that's the point. But being human is also being fragile and vulnerable, and in the same way we can't model everything, we also can't be everything that we see around us. But we feel like we are supposed to be, or have to be, whether we're a corporate leader, a CEO, a board member, a teacher, whatever we are. There's that pressure now.

And I'm trying to relate these, because I feel like that expansion of relation, which is wonderful, the acceleration, is also overwhelming, and I can understand why we have gone to technology to deal with it, and why there is value in that. You said we're not creating anything of value, and I'm not sure.

I think we are creating things of value, but we're not using them in a way that's valuable, or something like that.

Maria Santacaterina: Yes, exactly. It's not, I say in the book, I think, beneficial ends. It's very difficult to articulate very quickly the complexity of the subject, but what I mean when I say we're not creating [00:35:00] anything of value is that everything's about market price.

I'm thinking more in business terms here: it's determined by the market price rather than the intrinsic value of something. So a company has extrinsic value, which is measured by market price, arguably, but it also has, and this is what I'm arguing in the book, intrinsic value, and that is its people.

So if a company, if an organization, knows how to leverage the full potential of its people, it will beat any company hands down. And the overwhelming part that you mentioned is because we are being pressurized and forced to absorb all of this knowledge and all of this expansive opportunity that we now have through technology.

In a way that we don't have time, as humans, to absorb it at our human pace. And that is something that is worrisome. Not everybody has the capacity to think through that quickly and act quickly and absorb it and take it in their stride. Some people do feel overwhelmed, but the [00:36:00] solution is not a machine that tells you how to answer in a culturally appropriate manner.

The answer is you allow people to discover through their own human interactions, what the right thing to say is and do. That's how we evolve as people. So international relations actually does have a big part to play in this. At the macro level, we would like to see some sort of internationally agreed governance structure.

Which says: here is the limit, here is the red line, for example, in the military domain or the genetic engineering domain, or other problematic areas which are questionable from an ethical perspective, but also from a moral constraint and restraint perspective. That must be agreed internationally.

So international relations have a big part to play. On the micro level, it's really no different. If you go into an organization, whether it's a for-profit or not-for-profit business, it is made up of many different people, who now have, [00:37:00] through technology, access to many more different people around the world.

And so that's why AI is a global phenomenon, and you can't localize it and regionalize it the same way you might have done with other technological revolutions. So, I think one of the biggest changes was when we started to have steamships, right? And people started to travel, and then you started to get something called mass tourism, around about the 1870s, I think it was?

So that vortex of change was something that we managed to navigate. But now the pace of change in that vortex is in fact exponential. In that sense, yes, technology is moving so fast, it's very hard to stay on top of all the developments.

But, for instance, all of the studies I've seen with regard to alignment or misalignment, therein lies the danger. And here I go back to the point I was making earlier about acculturation and enculturation. Culture is not something that [00:38:00] grows on trees, the same way money doesn't grow on trees.

Culture takes time; it's a process, it's an evolution. And, to your point, the more access we have to many different types of realities, the more time we need to think things through and to be critical about our thinking and our reasoning processes, so that when we do absorb the new knowledge, we can absorb it with cognizance, with sentience.

Andrea Hiott: Yeah, the reflection. And the Huxley, you talk about this reflection, and consciously and deliberately. This is very fascinating, because in a way it seems like technology in early years freed us to have time to deliberate, to think, to create, to write books, to do philosophy, these kinds of things.

And when I say us, I'm speaking in a weird way, but in general, we could think of technology as opening up space for consciousness and deliberation. But to the points that you were just making, I was thinking, okay, but somehow we've gotten into this. So it's not just that technology is one part of society [00:39:00] anymore, if we look at the late 1800s to now; it's not that technology is only a tool, a thing that we use sometimes.

It's become so ubiquitous, and plays a very big part in the way it's sold to us by the companies who are making money from it. Not even one person or one bad person, but there's an inertia that has developed where you try to create technology such that it almost takes the attention away.

And takes the space of consciousness and deliberation away. We have this attention economy. And so there's a weird thing that's happened, where the technology itself is now being designed to fill the space of consciousness and deliberation, which is exactly the problem that we have with technology.

Maria Santacaterina: Exactly. It's a vicious cycle. It's a feedback loop that has been orchestrated by some detrimental ideological stances. Let me put it like this: I think the acronym that is used is TESCREAL. So you hear of things such as [00:40:00] transhumanism, and you hear of things such as effective altruism and superalignment and such things.

That is dangerous territory.

Andrea Hiott: But to be fair, that too is very complicated, because a lot of people come at that from many different paths, and it's not meant only the way that it's portrayed in different ways. So that too is very complicated, but I see what you mean. There's definitely a lot of movement towards something like overcoming what is human with what is technological. All that is messy too. But even if we set that aside, just that we no longer know how to be with ourselves in a room consciously and deliberately, right? Because of all of this, in a sense it's hard for a lot of people to be reflective and be with themselves, and the technology doesn't help; it feels like it fills that space.

Maria Santacaterina: Exactly. It's a bit like a drug in that sense. So, it is very complicated. None of this is simple. But let me just say this: I don't know of any technologist or [00:41:00] scientist who set out to create something harmful or something bad. That was never the intention, but therein lies my point. Human values and human intentions are complex, and they're very hard to distill and envelop in a computational environment. That's what I mean when I say you can't model everything.

And so when we talk about meaning, even meaning emerges over time. For example, I will read a book today, and I'll read it again tomorrow or the next day, and I will get something different, because my thinking is evolving. I'm absorbing something and I'm thinking about it, and I'm developing at the same time.

The art of writing, the physical act of writing, is in itself a formative experience. If all I do is press a button and the screen tells me something, that is not the same. So we have a situation now where, if I look at children today, their handwriting is underdeveloped. I think when I was seven or [00:42:00] eight, which is probably showing my age, I was doing Shakespearean italics, perfectly, beautifully crafted italics.

It was developing a part of my brain, because we had to do calligraphy. Now all of these things are gone. It was part of my education; I had to. Today maybe I would choose to, in the sense that it's a formative art. And if we don't have, let's say, cognitive development, and we don't have manual dexterity as part of our learning experiences, and I include work in that.

I don't say tasks or jobs as they are defined in AI. I say work, because work is an edifying experience. It is part of our human dignity, part of our identity, part of our self-worth. You can't take that away from a human being. You can't replace that with universal basic income, if we want to talk about it in economic terms. That is detrimental to society and to human progress.

And I would even say to human survival, if [00:43:00] you think about it.

Andrea Hiott: Absolutely, because it is, to your point too, about the difference between an embodied human intelligence and what we think of as an artificial intelligence. Because the body develops as you come into the world, and you develop your patterns, and that is your intelligence. It's all this embodied action: it's doing the calligraphy, for example, but it's having the conversations, it's reading the books. You're learning these patterns so as to make your way through all that you encounter. And if you're doing that somehow just relying on the technology to generate those patterns, there's not a survival, there's not a resiliency, that builds up through your development. What is it about that process that's so important? You just gave a great example: if you haven't read the books, or whatever, and you just get the answer, you're not really learning anything. But why does that matter? I know it matters, and you express it in the book in different ways, but...

Maria Santacaterina: It matters deeply and profoundly. We wouldn't be who we are if we [00:44:00] didn't write, create art, invent tools. In whichever way we express our humanity, our essence, we are able to move beyond the limitations of our immediate environment and surroundings. And what technology is doing, in the adverse sense, is somehow restricting, constraining, limiting our experiences, because if we rely upon the patterns that are generated, remixed, regurgitated, shall we say, by mechanical means, then we are not participating in an expansive universe.

Arguably, the universe is an expanding entity, for want of a better definition. It is infinite. We don't know, but we think it is, because we continually want to explore it, and through the ages we're trying to understand it. And so that's why we invent all these things, from religion all the way through to new philosophies.

But [00:45:00] fundamentally, I don't think our quest for meaning will ever be satisfied, or our thirst for knowledge or curiosity ever be quashed, unless we deny ourselves the opportunity of exercising human agency and human autonomy. And then you take that into the legal domain, and it's called civil liberties and human rights.

And it takes on a whole new dimension in this world, because we are now talking about protecting the human mind. So the exploitation of human data, the commodification of human beings, is plain for all to see. The problem that we have with technology, because of its technical limitations, is its application in critical settings, ranging from healthcare, education and employment to criminal justice.

So when we take it down to real everyday living and lived experiences, it is very [00:46:00] problematic, due to the manner in which it is constructed, the form that it takes, the uses that are made of it, and the competencies that are lacking. They are lacking mechanically, in terms of abilities to do something, and technically there are limitations, and so that's why we have this endless form-filling going on in absolutely every aspect of our lives.

But this oversimplification, this binary codification of a human being, somehow that's what's happening, is very reductive, and it's very dangerous for humanity's own survival, because we'll end up making the wrong decisions and our space for choice is being reduced all the time: from the surveillance economy, as it was called by Shoshana Zuboff, to quite pernicious, I would say, and restrictive practices in the workplace, where, for an Uber driver, they play games on the psyche, human vulnerabilities are exploited, to tell the driver to go to this corner or [00:47:00] that corner.

And then there's no customer there, because they want them to work longer hours. That is exploitative in the extreme. It doesn't cost the company much, but it costs the human being very much. So then you look at the knock-on effects: ill health and mental problems and so on and so forth, and we haven't got enough resources, and there's not enough money for healthcare, and all these sorts of things.

So, if you think about it as a whole, and here international relations come into play as well, I don't think we can resolve the so-called intractable problems at a national level any longer, or at a regional or local level, when we are dealing with something called artificial intelligence. Because this arms race that is going on, it's not just in the military sphere; it is in every sphere.

And I think that's new. And we don't quite know how to deal with it yet. We're trying to learn, we're grappling with all these different challenges. I don't think we fully understand them yet, [00:48:00] either. And so maybe, on the one hand, we have a simplistic dichotomy between the superhuman computer and us little beings that are so inefficient that we can't compute as fast.

But what is the superhuman computer computing, exactly? I asked myself that question. Is it relevant? Is it pertinent to me as an individual, or to society collectively? And where are we actually heading? And I think you've got to ask those questions, those difficult questions, and you've got to try and figure it out, to find your purpose and see how to use the technology in a good way.

So I'm an advocate of creating good technology and using it in a good way.

Andrea Hiott: Yeah, towards flourishing and transformation, in the good sense that you talk about.

Maria Santacaterina: Yes, transformation, going from one place which is maybe suboptimal to a much better place. Right. I hope.

Andrea Hiott: Not like Transformers. Not like transforming into a cat.

Maria Santacaterina: No, that's not my idea of technology.

Andrea Hiott: I agree with all of this, [00:49:00] and at the same time I know some people are hearing it a bit differently. And I guess what you say is so true, that we can't rely on artificial intelligence, even though we've made these wonderful tools, which can be helpful and there's a lot of value in them. When it comes to something like, you said healthcare, I think: so if I'm a doctor, or I work at a hospital, I, of course, need technology now.

There's a lot of value in that, in keeping track of a lot of data and finding patterns in the way that AI does, which is really just looking at tons of data in different layers and finding the patterns in it, which is all very important. There's nothing superhuman or wonderful coming out of that.

It's just regurgitating from a really big pool. So we can think of it like a water source. But if human activity were to stop, that water source would also be finite and would disappear; [00:50:00] it would be gone. So there's not some kind of thing feeding the AI that's not human.

Okay, but if I'm a doctor or nurse, whatever, I need this. But what you're pointing out is that if we just start thinking, okay, well, that finite source is being fed by something other than humanity, then I can just trust it, and not think of it in terms of human responsibility, or in terms of something that needs to constantly be questioned and improved, with checks and balances on that system.

And that system needs to evolve. If I just trust it, then I'm probably going to use some data that's been sourced from a particular pool that includes a very small percentage of what is actually the case, because, as you were saying, we can't model everything; those are just data from a very small pool. And so when I'm going to try to help someone and prescribe them some medicine, or say what's wrong with them, I could cause real harm if I'm not taking into [00:51:00] consideration all the new variables that are unique to their situation and who they are, which aren't going to be in that AI, right?

So it gets so nuanced, but it's so important, and it's in every little decision of our lives, that there's so much nuance to every human coming into that situation that the data just cannot represent, because no one's like you and no one's like me. We have our own development, and that has to be accounted for, and it can't be from this finite pool, right?

I'm trying, but what is that?

Maria Santacaterina: This is the point. So when you have aggregated data, it's exactly that: it's aggregated. So you lose the signal, if you like, of the individual. So if you take that in a healthcare context, which is apropos at the moment, you can identify trends.

For instance, you can identify which environmental factors might influence the development of cancer, for instance, or diabetes. That's one thing. It can give you useful background information [00:52:00] about a population, provided that you have good sources of data, qualitatively verified, not just quantitatively but qualitatively.

Maria Santacaterina: The point is, the way that we are collecting data is qualitatively deficient. It is only quantitatively efficient, arguably, I would say.

Andrea Hiott: It's metrics, but it's not the actual experience. For example, I might ask you a questionnaire about different things, about your unique experience. That's not in the data; that's not quantitative.

Maria Santacaterina: I'm only given a choice, a binary choice. It's black or it's white, but I'm not black or I'm not white. And when I say nuance, I also refer to a deeper level of meaning.

That detail is important when it's a life-and-death situation. So, for example, if I am a medical doctor and I have data to substantiate my giving somebody drug A or drug B, [00:53:00] just because I have data to substantiate my selection of drug A or drug B, if it is not relevant to the person's condition, or the data is wrong, then I will cause harm.

And it's happened already. Also, if you go one step back, to the development of drugs: it's really good that we got the vaccine so quickly for COVID, and it's really good that we can now jump through hula hoops, which we couldn't do before, now that we have the technology to help us. But you still have to have checks and balances, and you still have to look at the individual.

So I think we risk losing the art of communication, which is a human form of the use of language, not a mechanical form. So if I'm communicating with you, that means I am actively listening, hearing, understanding, trying to grasp the nuance of what you're saying, what you're driving at, [00:54:00] and how you're coming across, and so forth.

I'm thinking a ton of things whilst we are speaking, and it's all instantaneous. That is not computation. It's something more than computation, and you can't really exactly explain it. That's why we have such a hard time with what they call model interpretability or model explainability. So, whether I know how the machine works inside, and whether I know how the output is arrived at.

Those are two different things, but we can't explain it. So there's all this scaremongering going on about loss of control, and superhuman computers are going to take over the world, and all the rest of it. I don't believe that, because, first of all, they don't have intelligence. I think LeCun came out and said it: it's no more intelligent than a cat.

No disrespect to the cat. But we haven't figured out how human beings learn yet. There are many theories, of course. But look at the way an infant is born into the world: arguably, it doesn't know anything, but all of a sudden it starts walking, talking, absorbing information, like the most efficient computer that [00:55:00] we could ever imagine, if you want to put it in that frame.

Andrea Hiott: Or nothing like the most efficient computer that we can imagine.

Maria Santacaterina: That is magic, in a kind of way, because having a child suddenly learn multiple languages, understand the environment, ask simple questions, whatever it is.

Andrea Hiott: That is the kind of pre-reflection that then becomes something like deliberation, but it's what the body does. It's what life is doing; that is life, and that's not what a computer or a machine is doing. But we have confused those.

Maria Santacaterina: And so if we have these amazing tools alongside us, imagine what more we can do. So, you know, when I was a child, I would go to the library and maybe read books and that sort of thing, and then I would ask my teacher and so forth. But that human contact is what we risk losing, and that human connection, and, to your point earlier, that reflection, that moment to ask your own questions after you've read something, and then maybe go and clarify that with your teacher or your classmates.

We lost that during COVID, and it has hindered the development of many children. And you hear university students say it, you hear people in [00:56:00] higher education going through those formative years say it, you hear younger children say it. They were very disoriented. Call it a social experiment, not one that we want to repeat.

So we don't need to live our lives in some sort of synthetic multiverse environment. We need to live our lives whereby we can connect with others, our environment, our surroundings, in a natural, organic manner on the one hand, and at the same time, when we need to and how we choose to, use these amazing tools to help us carry out particular tasks.

There, I would say tasks, because the machine is good at doing some things but not other things. So if, for example, I'm a researcher and I would like to try to identify some trends, and I have been able to verify the source of the data, validate its quality and so forth, with particular, let's say, methodologies.

They're not foolproof, but there are some good methodologies out there. Then certainly it can help me advance my [00:57:00] thinking, to advance my research.

Maria Santacaterina: But what it can't do is substitute for my thinking and my reasoning and the hard work.

Andrea Hiott: And we don't want it to, do we? I think we've confused that too. I don't know about you, but what are the meaningful moments in your life?

Are they when you got an answer from AI? No, I'm sad to say no. The meaningful moments in our lives... you said the word connection just a little bit ago, and I'm really glad you brought that up, because I think the AI and the technology often substitute, or we use it to connect, but the real thing that matters is the connection with our world, with living.

The living world, living humans. And the technology is pretty meaningless if there's not something on the other side of it that we're imagining is living, that we're connecting with. Even if we're imagining that thing is living, what we've done is confused all the living that went into the algorithms making it with the thing itself, which is understandable. But what we want is the [00:58:00] living connection.

Those are the meaningful moments in life. That's the fun stuff.

Maria Santacaterina: Yeah, that's the fun stuff. We want that, right? Or new ideas, and going to new places, and meeting new people, whatever it might be, and learning new sports, or becoming an athlete, or becoming a great cook, whatever it might be.

Andrea Hiott: I'm feeling like you were challenged, and you went through that challenge, all that richness. Not just having it answered, being satisfied with an answer. Yeah, there are no final answers. That's part of it too, that journey.

Maria Santacaterina: Just, what is face value? But that's my point. We are being reduced to taking outputs, data outputs. They're not even answers; they are data outputs, at face value. And unless we understand how those data outputs are arrived at, then we risk making the wrong decisions. And that is a real danger, and it happens every single day.

I'll give you a silly example, well actually not so silly, but a young girl went into a store here in London and she [00:59:00] was then barred from going to any store. She was a case of facial recognition gone wrong, i.e. mistaken identity. She was accused of being a thief, and she was none of those things. If we have to prove our human identity to the machine, and not vice versa, we have a problem.

That's where I say we need to re-conceptualize technology and our use of technology, and how we construct it and where we deploy it and so forth. It needs to be thoughtful. We can use technology really well, even just the technology we have now, even if we were not to develop it any further, which of course we will, and we'll also always try to find better ways of making better technology.

That's a given. But even if we were to stop at what we have today, and we were to use it in a better way, in a more human way, we could perform magic and miracles, in the sense that we could start to solve some of the most intractable [01:00:00] problems that we have in society. We could, but we're not doing it.

And that's very regrettable. It's a missed opportunity. So the opportunities that we have are huge, but we're not using them, and that's really silly. So I go to conferences, I chair meetings, and I hear the technical teams, the senior levels, struggling to identify use cases for generative AI.

Because how long does it take to put in a prompt, and put it in the right way, to get some sort of an output that may or may not be right? It's a laborious process. So what we have at the moment is one gigantic human experiment. It's a laboratory. We are all guinea pigs. We're all being observed. We're all teaching the machines, but what are we gaining from it?

That's an existential question. A very important question. So I just think we need to be more realistic. We need to put things into perspective, be more pragmatic [01:01:00] about it if you wish. But I think also a little bit idealistic, in the sense that we need to recreate things, seeing value also from technology, or deriving it from technology, I should say.

But not substitute with technology. That's the point I'm trying to make, which is a nuanced argument, and I know that not everybody is going to be in agreement, but that's okay. We should debate the point too. And to be honest, I think a lot of people are. Even though we have this thing called ICT, Information Communication Technology, we have lots of information, we have lots of technology, and I always joke, the C is missing. The C has gone and taken a holiday somewhere.

The communication isn't there. We need the C. We

Andrea Hiott: need to bring the C back. Right. Where's the C? Well, that's this values-based approach that you take. One way to bring the C back is to get your book and read it, because that's what you're,

Maria Santacaterina: And to create a socio-technical system with the emphasis being on the social. Because if you start with the social need, or the human need, or the [01:02:00] environmental need, that's what I mean by social, the big S, life, in one word, life, you could create a technical system that facilitates aspects of life that we'd like to be facilitated. But you've got to identify what is desirable in technology and what is not desirable, what is acceptable and what is not acceptable.

That is something that can be decided at a local level, so within an organization. A company can decide where to deploy AI and where not to deploy it; that is more important, actually, in today's world. Because we need to understand that we are losing our sense of safety and security at a rate that we can't quite fathom.

And that's why there's all this scaremongering going on about loss of control. Because arguably machines can do many things. Do they always produce the right output? No. Hallucinations. But what is a hallucination? It's a pattern mismatch, to put it in a simple way, but it's a bit more than that.

It can be very misleading. And if people take [01:03:00] that output at face value, it's dangerous.

Andrea Hiott: Yeah, if you Google someone and the AI tells you that they did something, you believe it. But a lot of times now it's a hallucination. It's created some kind of book or paper that fits very well with the person, but they didn't actually make it.

And if you think of that kind of thing exponentially, what could happen with that? And yeah, for me it feels like you're saying we need a shift. We're at a tipping point; you say that. And it's very important that we take a minute and deliberate. What are we doing? What do we really want?

What are the values? This becomes incredibly crucial for all of us. And it's not that the machines are going to take over the world so much; for me, it's the same human problem: we're allowing things to be easily manipulated. And if we don't take a moment to realize that we can be manipulated, we're going to be manipulated.

By whom and what, with what sort of [01:04:00] intentions, could be many different things. But again, it's going back to the human potential for both harm and creation there.

Maria Santacaterina: Yeah, and deception is going to be a big theme. It's coming up now. They're talking about it in terms of antitrust as well, and competition and so forth.

On the one hand, there are some investigations going on into big tech companies, also in America; that's a bit of a blurred field, shall I say. But deception, or the manipulation of information by algorithmic means, is beyond our comprehension, because we don't know how the machine derives certain outputs in some cases. Or the idea that you can have superalignment, which means an AI trying to become a researcher of human values to educate another AI, another algorithm. To me, that's just not the right way to approach it, in my view. I don't think we can code ethics.

I don't think we can reduce ethics to some sort of finite set of variables or set of rules or algorithms, no matter what. [01:05:00] And if you talk to some of the people who are very much in the field, they will tell you: we do not have the mathematics to complete the system. So we have incomplete systems, and we need to recognize that.

And if we recognize that, then we can start to build systems that are useful in certain tasks and deploy them in certain ways so that we can control them. We are in command of the system. We try to align the system towards human values and towards human intentions, but also intervene at the right time. So you have complete monitoring governance: it's continuous monitoring of the model pre-inception, pre-conception if you like. You need to go through the hard thinking, then you need to ideate the model and design it and so forth.

But it's a continuous process, and you're continually checking to make sure that the model doesn't break, or that it doesn't bring in erroneous data, or whatever it might be. But then you've also got prompt injections, you've got malicious actors, you've [01:06:00] got a ton of things, a ton of variables on top of that you need to think about.

So it's not an easy process. So when I hear that the machines are going to be more productive, that they're going to increase productivity, or that they're going to increase value, I question that. Because value can only be increased if you have human input, without which you cannot increase value. And you have to start with values.

That's why I say we have to start with purpose. Sinek talks about "start with why," but I'm saying, let's start with the purpose of what is the reason why. So let's make it a bit more complex. We need to do something. And then once we understand that, and we are aware of that, then we embark upon our journey of discovery or exploration as to how we can create the tools that we need to help us achieve a certain task within a defined space. But it can't substitute for our intelligence, because if we allow it to, then we have real problems, as we've been discussing all along, I believe.

Andrea Hiott: No, I think it's a [01:07:00] really important message that we really need now, and that people resonate with if they can think about it for a second. It's not saying we don't need technology.

It's saying, let's think about how we can best use this technology to do these things, like connect. There are levels of this in our personal, human, everyday interactions that this is all connected to: how we've thought about our values, and how our technology and our interactions together, and the way we use the technology, are shaping our everyday lives and feelings and experience of life.

I guess, just to go towards the end, I wonder about, like, how you find your own poise, and how you stay curious and keep learning, even though you also are one of these people who has to manage so much stress and data and information and ideas and people coming at you.

Maria Santacaterina: Oh yeah, I mean in the good old days you would have an idea and then, you'd be asked to go away, in a business setting, for example, and okay, you have [01:08:00] an idea, substantiate it, come back and tell me how you're going to do that and why it's going to work.

And that was fine. Today you have data. So you sit at the boardroom table, you have an idea, but then there are the data sets. And so you're constantly fighting against this thing that is called data. But if the data is wrong, how can you prove it? It's very difficult. And a lot of people don't challenge it, actually.

And this is really where I'm coming from. So you need curiosity, always. I think that's just a part of us; that is adaptive resilience, in each of us. You need courage too. You need courage to say, I'm not sure that's right, or I don't know this, or I don't understand that, or whatever it might be.

You need courage. Courage is something that is being lost in organizations, because you have all of these pressures: stock market prices, deliveries of impossible goals and targets. And they're all tactical things. They're not really strategic, actually. And that's part of the problem.

And it's because there's a vision that's [01:09:00] lacking. So when I talk about purpose, I also mean vision. You can't think about the quarterly return. You need to think about, I don't know, the next 100 years. How is your company, how is your organization, whatever it might be, profit or not-for-profit, government or otherwise, it doesn't matter what it is, but how is your organization, meaning your group of people, your potential, your talent, your intelligences, to put it like that, human intelligence, how are you going to use it to make the world a better place, to make people's lives worth living, and give them a better quality of life, and give people hope and inspire people and motivate people?

Because when you do that, you get the best out of people. So that means leadership. That means true leadership, and you establish trust. If I'm a leader and I say, I know everything, I don't think I'm going to be very credible.

If I'm a leader and I say, well, you know what? I'm not sure about that, let me think about it. I think I start to earn trust, and it takes time. And then you take the company with you on that transformation journey. But it's not a binary [01:10:00] decision. It is complicated. As we said earlier, it's also complex. And so I think as business...

Andrea Hiott: Complex is a better word even.

Maria Santacaterina: But we need to understand what complexity is as opposed to complicatedness. And I think that's where the cookie crumbles, is that we have a lot of complicatedness going on. So you've got a lot of systems upon systems and applications upon applications, but nobody knows how to get out of the spaghetti junction, right?

It's a labyrinth and it's getting bigger and bigger. And people don't have sight of what is actually worthwhile, what is meaningful, what is purposeful, and that's where the value lies. So if we orient technology towards the human values, the meaningful, the purposeful, the useful in that sense.

And if it's aligned with our intentions, that's fantastic. That is the technology I would like to see in the world, where it is expansive in a good way, in a beneficial way, for all of us and the environment, because you can't separate the two. So you mentioned [01:11:00] earlier taking more of an ecological view and approach, which is actually a relational approach.

The relationship between natural systems and human systems. And that's what I'm interested in. And that's why I looked at it from science and the humanities, art, philosophy. I brought all of these things together to try to understand what was happening. Because I was writing my book during COVID, but also when ChatGPT came onto the scene.

And so trying to get your head around that whilst you're putting the finishing touches on was not an easy task, I will admit. But hopefully I was able to make some sense of it and put it into perspective, so that business leaders, particularly the audience I was aiming at, C-suite, boards and senior execs, can think a bit more broadly, a bit more deeply about this. Because we all contribute to a system. It's a much bigger system than our own organizations.

And if we think about it as the whole system, then we can start to take steps towards making the right choices, the right [01:12:00] decisions and so forth. We can't just work in isolation. So I spent my life breaking down silos in businesses.

So that means creating communication flows. I don't want to call them information flows, but communication, and instigating communication, which is necessary for change, right? And getting people to work together. That is so important. That is leadership. And you learn those skills by understanding human relations, by learning about human relations every day.

It's not a given. And I think that's where many people make a mistake. They think that you manage people like you manage things, objects. But you lead people, you inspire people, you guide people, you help people, you support people. In that sense, that's the role that technology should also be assuming if it is to help us.

And the tasks that we assign to it should be deliberate. They should be conscious choices, not just, oh, let's have this foundation model and let's use it in any setting that we wish, because it's going to be fine. It is not going to be fine, because it wasn't designed for that purpose. So I think it was Wiener who said: don't look at what the model could do or might do or would do, look at what it does.

And that's really what I'm trying to say about adaptive resilience in the technological world, or the infosphere, or the modern world. And it is: can we please think about how we are going to make ourselves more adaptive and more resilient in the process of creating this technology, towards some beneficial end that is desirable for everyone?

And I think that we need to move away from destruction. That is the bottom line.

Andrea Hiott: Yeah, to realize we're moving towards it.

Maria Santacaterina: Hope, which is the most powerful, for me, the most powerful emotion. And it's very much aligned with love too. But hope is something that never dies. And that's how you

Andrea Hiott: end with hope.

I was gonna go there. The book ends in a very hopeful place. It feels like something's cracking or opening, or maybe we're turning a [01:14:00] corner and there's some light coming in. There's a kind of refrain, even in this conversation a bit, where you talk about how there's so much more possible if only we shift our vision a bit.

If we can connect differently together as humans, and if we can raise up what humanity means with each other, which is what we all want. If we take a minute, that's what we want. We want to connect. We want to feel loved. We want to feel love.

And it's in all these sectors, right? It's not just at home or by yourself; it's in business, it's in everything you're doing with your life. So that's how I feel like you're cracking that open by the end of the book. How does that strike you? And this big word, love,

Maria Santacaterina: it's the quest for meaning, which for me equates to the quest for beauty, aesthetic value, all of those good things that make us human.

And I think that we need to embrace and own our responsibilities and our accountability towards each other, towards the world, the environment. If we start to become human again, we [01:15:00] can achieve so much more. So if I'm running a business, I always take a very wide, broad lens, explore avenues that people didn't even think of, and possibilities, and then you start to narrow it down.

But if you don't begin with a blank canvas, and if you don't explore all these amazing things that could be, then how do you know what it is that you really want to achieve, or where you actually really want to go? So it's about creating a direction rather than a destination. And the crafting of that is the most wonderful process.

And the most exciting and exhilarating and fulfilling process. Because when you get everybody on side and we're all pulling in the right direction, the same direction that we all want, which is what you've just said, that is the best feeling on earth. That opportunity. So we shouldn't let go of that.

Andrea Hiott: I always think about way-making and the paths that we make, and as we were talking about this, the way we develop all the things we open ourselves to in this deliberative way that [01:16:00] we've been discussing: what we read, what we study, the exciting things that turn us on, and we follow those paths.

And I feel like if you're a young person today, you want to cultivate that. Don't you? You don't want to just give it to the AI.

Maria Santacaterina: I don't think we should be afraid to fashion our own paths and craft our own paths in life. And I think we're in danger of losing sight of that opportunity. I think with this conditioning I referred to earlier, we all do STEM, nothing against STEM, but why do we have to separate everything? You don't, then, understand human language, the context of the evolution of science or mathematics, or philosophy or religion or whatever it might be.

We need a much more multidisciplinary approach to education, per se. So teach children about, I don't know, financial management and AI, coding and computing and things, from a very early age, so they understand it and they can put it into context. Don't bring them up only on some sort of connectivity, [01:17:00] or let's say artificial connectivity, which is different from human connection.

That's why I tried to use different terms. I'm very careful not to anthropomorphize, myself. It's difficult because everything is so intertwined, but connectivity through an artificial tool is one thing. That can be a good thing, can be part of the learning process, but it shouldn't substitute for the human connection, the real dimension, which is where we transfer knowledge.

So I think in the 80s there was a lot of talk about knowledge transfer, and everybody got excited that IT was going to solve it, or ICT was going to solve it. But in reality, we haven't solved it. As I said, lots of information has been generated, created, gathered, stored, whatever. There's lots of technology, and it is evolving very fast, and it's fantastic.

But we lost sight of the C, which is the human part, the communication part. And so, if we can get back to that, and we can re-imagine education, re-imagine the enterprise, re-imagine [01:18:00] government, its institutions, and the social constructs and contracts (actually, there's two sides of the coin there).

Then we can build a better future. I really do believe that, and we have to believe that, because if we don't have hope, then it's a very bleak and dark world that we'll be living in, and we don't want that. We want to see the light. So you go back to Plato and the Allegory of the Cave: we don't want to be dancing in the shadows.

We want to be outside enjoying the sunlight. Don't watch the cave walls, go out the door, look at the beautiful earth. That's the human mind. We have that ability and that's what I will always advocate for.

Andrea Hiott: Well, thank you for doing it, and thank you for your time and energy today, but also for what you put into this. I know right now at times it might feel like quite a fight, because you do have to be very strong and come at it in a certain way. And there is a lot of talk in the book about the tipping point and what's wrong and the problems. But there really is hope, and optimism actually is the [01:19:00] message, I think, and I thank you for that, too.

Maria Santacaterina: Thank you.

It's been wonderful to speak to you today, Andrea. And yeah, I hope there'll be other opportunities for us to discuss further.

Andrea Hiott: Is there anything that you wanted to say today that you didn't get to say yet, or anything that's on your mind, whatever you want to say before we close?

Maria Santacaterina: well, just to leave people with this thought: technology is neither good nor bad, but nor is it neutral. And that is not my phrase; it's a phrase, I can't remember who said it, but I think it is quite meaningful and quite powerful. And if we start to reflect on how we use the human form of natural language, choose our words carefully, use our words carefully, we can communicate and we can change the world.

Andrea Hiott: All right, let's get to it. Let's do it. Well, thanks for everything. Thank you for today.

Maria Santacaterina: It's my pleasure.

Thank you very much for inviting me.

Andrea Hiott: It was beautiful.

Next

Disrupting Expectations with Skye Cleary