Brain GPT and Rethinking Neuroscience with Brad Love of University College London

The Birth of BrainGPT, a large language model tool to assist neuroscientific research, with Brad Love, Professor of Cognitive and Decision Sciences in Experimental Psychology at University College London (UCL) and a fellow at The Alan Turing Institute for data science. Brad and Andrea discuss the intersection of artificial intelligence and neuroscience, focusing on the BrainGPT project: the role of AI, particularly large language models like GPT, in predicting scientific research outcomes and in coping with the vast volume of academic papers. Highlighting the shift from summarizing the literature to forecasting research results, the conversation emphasizes AI's capacity to augment the human ability to manage and navigate large bodies of data, specifically in neuroscience. It also explores the ethical dimensions of AI, including biases in AI models, and AI's role in augmenting human capabilities, offering a nuanced perspective on AI as a tool for knowledge discovery rather than a threat. The dialogue covers the evolution of AI research, the potential of neural network models, and the philosophical and practical implications of integrating AI into scientific investigation and our understanding of the world. It concludes with personal insights into navigating careers in science and AI research, and reflections on creativity, scientific discovery, and the pursuit of knowledge in an era of rapid technological progress.

#braingpt #neuroscience #models #hippocampus #artificialintelligence #bradlove #llms

00:00 Exploring AI's Immediate Concerns and Astonishing Capabilities
01:27 Introducing Brad: The Mind Behind PageRank and Beyond
03:09 Journey Through Academia: From Undergrad to SUSTAIN
04:33 The Quest for Understanding: A Dive into Cognitive Science and Modeling
12:32 From Music to Models: The Creative Process in Science
19:21 Unpacking the SUSTAIN Model: Concepts, Memory, and the Hippocampus
31:37 Challenging the Status Quo: General Learning Systems & the Future of Modeling
41:48 Exploring the Complexities of Neuroscience and AI
44:18 The Shift to Modeling and AI's Impact
45:06 The Evolution of AI and Its Astonishing Progress
46:39 Brain GPT: A New Frontier in Neuroscience Research
51:08 Large Language Models: Understanding and Utilization
54:08 Brain GPT: Bridging Neuroscience and AI for Future Discoveries
01:06:14 The Role of Benchmarks in Advancing AI and Neuroscience
01:16:52 Looking Ahead: The Future of Brain GPT and AI Collaboration
01:18:55 Exploring the Future of Scientific Discovery with AI
01:19:54 The Potential of AI in Enhancing Creativity and Scientific Reasoning
01:20:24 Evaluating the Predictive Power and Limitations of AI Models
01:21:29 The Role of Large Language Models in Scientific Research
01:22:33 Addressing the Ethical and Control Concerns of AI Development
01:25:28 The Evolution of Science and Technology: A Philosophical Perspective
01:36:05 Concluding Thoughts on the Intersection of Love, Philosophy, and Science
01:41:18 The Personal and Collaborative Journey in Scientific Innovation
01:43:21 Reflecting on the Impact of Life, Societal Changes and Concerns

BrainGPT: https://braingpt.org/

Team at BrainGPT: https://braingpt.org/team.html

BrainGPT paper: https://arxiv.org/abs/2403.03230

SUSTAIN: https://bradlove.org/papers/love_medi...

Video where Brad discusses BrainGPT in more detail:    • Bradley Love | Advancing Neuroscience...  

Subscribe to the newsletter at: https://substack.com/@andreahiott

TRANSCRIPT:

Andrea Hiott: [00:00:00] This is called Love and Philosophy.

It's not a word that's said in science much, in a way, with this beyond-dichotomy theme. Does it feel completely separate from your science and your life and all of this technology and model building?

Brad Love: Gosh, it's so hard to answer that. Because first we talked about like creativity and passion. And so at that level, it's definitely part of my life. Just I mean, when people say like, why are you working so much?

But no one says that to like someone playing guitar or writing poetry. They're just, they're not like stop doing it. So, so at that level, it's infused, but there's like that maybe like another level of your question, I think that you're getting out of like how you see yourself or reality.

And yeah, maybe back to the ants realizing they're ants, and so yeah, that definitely is at play too. Like, I feel like when you think about how the brain works, how machine learning systems work, and listen to people debate things, you start thinking about the nature of every person and myself and how we're [00:01:00] actually put together. I think it does turn in on itself, and I'm sure that's probably what drew a lot of people into these fields, that sort of

interest in themselves at some level, but there's another level in which it doesn't. I think some people try to make science everything and kind of replace all other ways of understanding the world and make it all encompassing. So yeah, I definitely don't subscribe to that.

So when I say scary, it probably was not a good word to use around AI, because, I mean, people talk about existential risks from AI, and I'm more worried about the concerns that are already affecting our lives.

Like, these models have biases; they're affecting people's lives today because they're being deployed in the world. So it seems like we should focus more on that. When I said scary, I think maybe astonished would be the right word. It's more that this thing that [00:02:00] wasn't even made to do this task is better at predicting neuroscience results. And it's pretty trivial to make it better yet by training it on, like, 20 years of papers. And so we could do this now with these pretty limited resources, and this whole project too, we just got in this whole game less than a year ago. So it's pretty rapid.

There's just so many obvious follow-ons, like we talked about combining humans and machines, teaming up in the future. And so we actually had some analyses showing that if you combine a person's predictions and the machine's predictions, it's better than either alone, or than two machines or two people.

So that means you could actually combine a person and a machine together to get a better prediction than either one alone.
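[Editor's note: a minimal sketch of the kind of human-machine averaging Brad describes, not the BrainGPT team's actual analysis. The data here are simulated and the names are invented for illustration; the point is just that averaging two noisy, partly independent judgments tends to beat either one alone.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 200
truth = rng.integers(0, 2, n_items)   # 1 = option "A" is the real finding

# Hypothetical calibrated probabilities that "A" is real, from one human
# and one model; each is noisy but carries some signal about the truth.
human = np.clip(0.35 + 0.30 * truth + rng.normal(0, 0.30, n_items), 0, 1)
model = np.clip(0.35 + 0.30 * truth + rng.normal(0, 0.30, n_items), 0, 1)

def accuracy(p):
    # Score a probability judgment by thresholding at 0.5.
    return np.mean((p > 0.5) == truth)

combined = (human + model) / 2        # simple unweighted average
print(accuracy(human), accuracy(model), accuracy(combined))
```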

Andrea Hiott: Hi, Brad. I'm so glad you're here. Thank you for doing this [00:03:00] today.

Brad Love: Oh, it's my pleasure. Yeah. Thanks so much for having me. I'm really looking forward to it.

Andrea Hiott: Yeah. So first thing I was thinking of is it must be really hard to have invented PageRank, uh, three years before Google.

Brad Love: Oh yeah, it's a real, real burden every day.

I'm amazed you saw that. Yeah, I guess I wrote a little blog about it and it's made it into Wikipedia, but yeah, it's funny, that's actually where my joy of modeling came from. It was an undergraduate project. And I guess, to be fair, a lot of people have had this idea, under different guises, but you know, I was interested in how different elements of, basically, concepts relate to each other.

Like, why is it hard to imagine a bird without wings, but you could imagine it being a different color or something. And so I ended up just like how the web is a graph of hyperlinks. I was like, Oh, concepts could be that way too. And I had this, uh, background that was kind of mixed up between cognitive science and computer science and neuroscience.

[00:04:00] And so I was taking some applied math courses and just realized, oh, we could do this iterative computation that converges to the eigenvector with the biggest eigenvalue. And yeah, it's funny, it didn't really get me far though. Like, I think that got second place at the Cognitive Science Society conference for the Marr Prize, but yeah.

Andrea Hiott: Too far ahead of your time.

Brad Love: Oh yeah, yeah. Still waiting to catch up.
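[Editor's note: a tiny sketch of the computation Brad is describing, power iteration on a made-up "concept graph," converging to the eigenvector with the largest eigenvalue. It is the same idea PageRank applies to the web's hyperlink graph; the matrix values here are invented for illustration.]

```python
import numpy as np

# Hypothetical directed graph of concept dependencies:
# A[i, j] = strength of the link from concept j to concept i.
A = np.array([
    [0.0, 0.9, 0.7],   # bird   <- wings, flies
    [0.4, 0.0, 0.2],   # wings  <- bird, flies
    [0.5, 0.3, 0.0],   # flies  <- bird, wings
])

v = np.ones(A.shape[0])
for _ in range(100):
    v = A @ v                     # repeated multiplication...
    v = v / np.linalg.norm(v)     # ...with normalization each step
print(v)  # settles on the dominant eigenvector (a "centrality" over concepts)
```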

Andrea Hiott: I don't know about that, because we're going to talk about Brain GPT and I think that's actually you're ahead of your time again, but maybe at the right time. But we'll get to that, first I want to try to understand models and before we get to large language models and, and Brain GPT, this very exciting idea.

 First let's go back. So where were you? Were you in Austin

Brad Love: Yeah, so, as an undergraduate I got into two universities, Brown and MIT, and for various reasons I went to Brown, because you could take whatever you wanted. I don't know if it's still like this, but you could do courses in anything.

So it wasn't because I didn't really know what I was doing. I just really had an interest [00:05:00] that spanned different fields. And I think a lot of people are this way. So yeah, it really gave me the freedom to do topics in mathematics and computer science and in more cognitive psychology, cognitive science, and a little bit of neuroscience.

And yeah, I always had that split, and I had advisors both in computer science and cognitive science. And I knew I wanted to go to graduate school because I really liked doing research, and back then that was the way you do it. Maybe there's other paths now.

Andrea Hiott: But this was in the 90s or so, or what?

Brad Love: Yeah. Yeah. Like literally last century. Yeah. And it was a really different world, like intellectually because topics and I mean, maybe like neuroscience, cognitive science has changed a bit, but machine learning, computer science has changed tons. And in some ways, it wasn't as interesting back then.

So there was a lot of, like, good formal work, and like, was it, like, PAC learning? And it was like, the rage was like, support vector machines, and Bayesian networks were just starting to get popular. So there wasn't really this, like, deep learning [00:06:00] work. There was work, connectionist work was kind of out of favor.

But to me, the really critical thing was you couldn't make models in the computer science world that process naturalistic stimuli, and so all these core problems in cognitive science about representation and how we construe the world, you couldn't really solve them. And I mean, it just didn't seem like it was set up to make tons of progress anyway. So I talked to my advisor, and I don't know if this is the best advice, and it probably doesn't apply at all now because the world's different, but it was like, well, if you want to do science and learn about the mind and brain, you should go get a PhD in psychology.

If you want to make clever machines that do amazing things, go to computer science. So it was sort of framed as science versus engineering, and I think we're beyond that dichotomy now, but yeah, I went the science route, while trying to keep relevant skills and keep working in computation, because my interests really have never changed since I was 18 years old.

Andrea Hiott: Tell me, what was the 18-year-old thinking? I mean, I think you said somewhere that you wanted to understand how the human mind did problem solving, or you wanted to understand the human mind, but is that, like, too broad? What was driving you back then?

Brad Love: Yeah, I mean, I don't know if it was anything, like, super deep. It was just, like most people, that I spent too much time thinking about weird things.

I was always like, how do we construct reality? Almost these questions that are philosophical questions, but thinking, oh, and so you could make some perhaps make progress on some of these through scientific means and computational studies. And so, I mean, that just that just really interests me.

And just the general idea of how you could have I mean, this isn't really that reflected in my research, but how you can have a bunch of small interacting elements. That could give rise to something more interesting and it has a aggregate behavior of interest. So whether it's like, I mean, so things that quickly lost interest in it, but you know, like economics is kind of like that.

You have a [00:08:00] bunch of people and then you get an iPhone out of it. But the brain's like that too.

Andrea Hiott: What were you thinking about when you were a kid, or when you were a teenager?

Brad Love: Yeah. I mean, gosh.

Andrea Hiott: Was it like birds flocking, or was it books you were reading? Were you into sci-fi? What was going on in your context?

Brad Love: Yeah. Yeah. I mean, I should have a really exciting origin story all queued up for you, but...

Andrea Hiott: You don't have to. I just wonder, I mean, what got you interested?

Brad Love: I mean, probably a lot of people had this same epiphany as a child.

Like, I just, I mean, the most basic thing: there's reality and then there's what I perceive, and I'm making it at some level. And even though it doesn't feel like that, I'm a biological machine, from some perspective that one could take. And there's just, like, how the heck does that work?

It's like a whole different perspective, and it doesn't feel that way to us, but from some perspective [00:09:00] it obviously is true. So yeah, I mean, sorry, I'm talking about this in a very childlike way, because you're asking me, this is sort of like...

Andrea Hiott: No, I think it's good. You're reminding me, like when I was a kid, too, how there's these moments where you kind of think, like, how is this possible? Actually, for me, you just reminded me, it was when my grandmother passed away, which was very sad. But it was the first time I realized, oh, we're like beings in the world creating something and then we're not.

And it was kind of this, how does that work? I don't know. Yeah. Yeah.

Brad Love: I guess kids

Andrea Hiott: come at it many different ways.

Brad Love: Oh yeah, definitely. Definitely. I mean It's just an obvious, like, lesson, but still, even as a, like, young adult, I'm still getting hit with, by that. Like, maybe I shouldn't tell this story, but, like, when I was an undergraduate, I had a friend that was a medical student, and I can't believe I'm telling the story.

Like, she snuck me into where all the cadavers were that they work on, and I thought it would be super scary, and it kind of was, with all these instruments on the walls to dissect people and stuff. [00:10:00] But what shocked me was what you're saying. There's just no life there, just kind of a machine that has sort of lost its, I don't know, life force or whatever.

Yeah, and you kind of go, like, what is that?

Andrea Hiott: Yeah, exactly. Like, but last night I was watching this thing, it just came on next, it just played next. And it's about a hiking trip that goes wrong. The father is buried under the snow.

It's a true story. And then the son is kind of trying to understand this whole process. But the reason I'm bringing it up is after something like 20 years, they found him under the snow and he was dead. Still sort of together, but of course not alive. And I was actually thinking that too, that's so strange that even the body sort of stays the body.

 Recognizable, uh, for 20 years but that, what you were saying, that life kind of essence isn't , isn't there anymore. And yeah. I mean, who knows if we ever figured this out, but it's, it is intriguing from a scientific point of view and from an emotional and personal point of view.

Brad Love: Yeah, no, it's great. Yeah, I'm surprised your [00:11:00] question's taking me back to like my mind decades ago. Thank you.

Andrea Hiott: But how did you start to think about models? Because you created the SUSTAIN model, which I want to get to, which, I mean, has been making waves for decades, since last century almost.

So, what was, where was that link to the modeling? Because I was interested in these questions and I didn't even understand about modeling and stuff. So were you naturally into computers and coding, or, I don't know, what were you?

Brad Love: Yeah, no, I mean, I think some people get into modeling because they're into the mathematics and the computers and all that.

And like that interests me, but it's not at all why I got into it. It's more I, I just felt like it was really great for creating scientific understanding scientific explanations and just an intuitive things. And just also the interactivity that you could try out different combinations of principles that are like manifested in some computer code that like, the code follows from the principles [00:12:00] and you could see how they play out.

And, like any model like, So like, I guess, I try things and it wouldn't work. Then read some papers, try to identify some empirical phenomena that I thought was relevant, get ideas, look at what other people are doing and just keep kind of going and iterating almost like writing a song or something.

And then it finally like. works and that it, it, uh, so that's sort of a different way. I mean, I know a lot of people, there's different kinds of modeling. So, if you could start to, from like a higher level, like computational level, abstract understanding, and then just try to like formalize like what somebody should do.

And I always like models that were more like process algorithmic models that, uh, were intended to go through the same steps. That we think people do and so that naturally, naturally captures. Behavior if the model works, but the models also have their own internal representations and components that could be related to, other measures that you could get later.

[00:13:00] Like, we did some work using these models to try to make sense of fMRI brain imaging data. So yeah, to me that just it felt like just a good way of just kind of knowing what your theory predicts, because you're instantiating the principles and something you could, plot, see, see if you make predictions from it.

Sometimes the models surprise you because people, like no, no humans really that smart. So like you put together a few interacting parts and almost anybody could be surprised. Of course, after The fact you're like, Oh, now it makes sense that you could trace through and see what's going on. So yeah, I really, yeah, I really liked that.

I really enjoyed it. And yeah, it just seemed like really creative. So a lot of people don't think science is creative, especially things that are more technical. But I mean, you have like an infinite, it's like, again, it's like writing a song or a poem, you have like this infinite space of possibilities, and you're trying to do something interesting in it.

Andrea Hiott: And it sounds like a scientific method to this, trial and error, and there's something addictive and rhythmic about it, in a [00:14:00] way that also reminds me of music. Were you trying to write songs and play music? I feel like there's some kind of music.

Brad Love: Yeah, gosh, I wish I still did. But yeah, I played, I used to live in Austin, Texas, like 12 and a half years ago, after graduate school, I went to University of Texas to be an assistant professor and left in like 2011.

But, uh, yeah, I played music there actually a little bit before and I maybe it's a little like how I did the modeling. It was all like, kind of mostly self taught experimental. Like, I purposely didn't learn skills or anything. And so, I'm not saying it was good, probably, hopefully my modeling is better than that, but it was, it was just sort of like, uh, getting out of the comfort zone and other things.

Yeah, so something's basically something creative that was not as analytical, but but yeah, I could actually see commonalities even in my own amateurism. Well the funny part is actually I did get paid money several times to play, which is hilarious, because if you're so off the wall people think it's like You have [00:15:00] some deep insight or doing something unusual and it's just like, you just actually don't really.

Andrea Hiott: You got paid to play music in Austin?

Brad Love: Yeah, not a lot. I mean, there's so many people wanting to be paid to play music in Austin.

Andrea Hiott: So what were you doing? Did you have some performance piece?

Brad Love: Yeah, yeah. It was kind of like a, again, I'm not a very musical person at all, and I think the academic playing music is almost a cliché I'd like to avoid.

But yeah, it was basically an experimental post-punk band, just guitar, sort-of singing, and a drummer, and yeah, it was good, it was interesting and different. And there was a really good, I mean I'm sure there still is, but there was a really great music scene and lots of little bands playing. And so, yeah, I think the pinnacle, not a big deal, but there's this club that's since relocated, Emo's, this classic club on Red River and Sixth Street, and getting to play a [00:16:00] show there...

Andrea Hiott: I like how humble you are about all of your accomplishments.

I mean, this is something some people would live on for their whole life, but okay. But it's great that you bring up this kind of going into areas that you're not comfortable with. I wonder if you were doing that when you were working on the SUSTAIN model, because here's how I understand it.

You tell me if it's not right. Sure. I feel like you were doing something that was kind of edgy at that time or not, maybe edgy in a way that people thought it was too simple even to kind of look at things in this way. I don't feel like you were doing what everyone at that moment would have said, uh, is the right orientation, but is that right or not?

Brad Love: Yeah, I mean, it's so strange because back then, I mean, you're so right, uh, for maybe a slightly different time period, in my opinion, like, so, at the time, it's hard to imagine now, but modeling was really not very popular at all then, and it was just considered, like, this kind of [00:17:00] boring, pointless thing, a lot of time, for a lot of communities, obviously not all, and so, from that, Perspective from people in psychology, it wouldn't, it wouldn't be simple, but then when I started thinking about what these mechanisms might have to do with the hippocampus and prefrontal cortex, then you're totally right, because there was this real split between psychology and neuroscience.

There wasn't really a lot of multi-level thinking. And so, yeah, exactly what you're saying: people would have to ape the details, make something that doesn't really have any clear principles, just dump in tons of details that refer to this or that biological detail, or that just have some face validity. So making a model that, since it's an incremental clustering model, would recruit its own sort of representation, a unit, a cluster, was surprising to people.

And, I mean, I always saw that, well, of course, that could just be, [00:18:00] It's like you can imagine in the brain just there's a bunch of cells and they're being like, like, kind of recruited to ensemble. You don't literally have to have like neurogenesis or something doesn't have to work like that. But people, I think, in neuroscience really want those details.

And when something was so simple, even though it would make good predictions about hippocampal activity or representations at the time, and all that work was obviously done with key collaborators, like Allie Preston, Mike Mack, and Tyler Davis and others. But yeah, you're totally right that I think there's a sense still in which people just don't take seriously things that don't have tons of detail, which is really weird, because the scientific fields at the same time have all this physics envy, towards these really simple explanations.

And you look there, and everything is like that. Yeah, so I really think there's a place to start with models that are simple, and of course they won't do everything, and then you could expand them. And you mentioned, I won't go on too long, but you mentioned [00:19:00] the flocking work before, and so a recent model with Rob Mok is trying to make this point that you could take a really simple model, related to the SUSTAIN model, and you could decompose an aspect of it to create arbitrary detail, if you need it for the scientific question of interest.

And so I think that could be a good strategy for explanatory models, I mean, we're going to talk about Brain GPT later, which is totally the opposite, but if you're making a model as a theory, as an explanation, it's probably not a bad idea to grab the basic components that you're interested in, instead of just diving into the minutiae and not really knowing where the explanation is.

And then when that model is too simple for accounting for certain phenomena, then you could drill down deeper while still retaining like this higher level understanding of what's going on.

Andrea Hiott: Yeah, I think we live in these environments that are so rich. I've heard you talk about this before too, and I, I really like it because you [00:20:00] often bring up the context and the situation and the space and it seems like when you start to look into it, kind of as we were doing when you saw the cadaver, cadavers, is that the right way to say that?

Yeah. Yeah. Or when I was talking about with my, my grandmother, this, it can seem overwhelming. I mean, to try to understand that it almost seems, of course, this is how people become feeling mystical. And, this is why we go to science to try to, to look into things. And it can seem like everything should be really hard to figure out.

So there's a weird blindness there in terms of starting with a simple model. Also, I have to say, it's not that simple. I mean, you had to do a lot of learning and there's a whole huge amount of skillset that goes into being able to do something simple. It's again, kind of like, can be kind of like a song.

Yeah. Or some kind of art form where you, it does take a kind of discipline, even if you have a natural talent or whatever. I do, I want to get into this sustain model a little bit because I think maybe we can unpack some of this stuff. So that's a conceptual model. It's about sort of concepts. So [00:21:00] I wonder like, and you're dealing with, I think, prototypes and stuff like this.

And for me. Trying to understand it and also people who, who haven't heard of it. Maybe we can explain it. You brought up the hippocampus and I love the hippocampus. So let's kind of go towards the hippocampus. So if, if the sustained model, if you're trying to understand something like how we make memories, which I think might've been part of it, right?

Uh, you can tell me, then I feel like there's a kind of prototype theory or a way of thinking about like the world as these, uh, the, the body understanding the world through these big categories or chunks or, I think you talk about birds a lot, so there's just big category of birds, uh, versus the details, where we need every little detail and every little thing.

Those are the memories and they're stored somewhere. Or, how can you, can you help me understand?

Brad Love: Yeah, yeah, sure. So, I mean, this is really taking us back, this is literally last century, though recently I've done things with this, but yeah, I want to move to that too.

Yeah, at the time there was this, I mean, other people also did related work, so I won't [00:22:00] claim this all for myself, but there were these sort of two extremes, like you mentioned. In prototype theory, the idea was we just coalesce all our experiences of some relevant category, like all the birds are represented as one node.

And Eleanor Rosch is renowned for popularizing this in the seventies and providing some evidence. And then there's this other extreme of exemplar models, where you leave all these traces and it's sort of like a lazy inference: at the time of decision, you activate the model and that's how you kind of get the abstraction.

Yeah, so this model kind of just like a lot of cluster models goes between those extremes, but the key is like, when do you, how does, how does it work exactly? And so this model basically assumes the world's really simple. And then when it falls on its face and something is really, really, some really surprising happens that throws it off.

So yeah, in papers I would talk about an example of maybe a child being told a bat isn't a bird, and they're like, why? It's small, it flies, it has wings. And then, using the [00:23:00] language people do, they talk about the hippocampus doing a kind of pattern separation, or, if you thought of it in time, an event boundary; basically, that's how you would create something almost like an episodic memory at first.

So this model kind of says that all these more semantic ideas of like categories come from an initial episode. That's just really startling and surprising. That sort of anchors it. So, if you see, so you, you know, maybe remember all the details of that first bat you see, but then of course you'd see others and they would intersect with it and become kind of like a bat prototype.

And so. The model just basically says the world's really simple, that When it surprises you then you kind of store these, these details away that could go on into their own regularities. And one thing I like about it, in contrast to I mean, I'm not anti Bayesian, even though some people might think that, is that for this model, it really depends on what you're trying to predict, because what's surprising depends on what you're trying to do, what you're paying attention to, what you're, Basically, what you're [00:24:00] trying to predict.

So it's just not about the structure of the world matters, which people again, like, all the way back to Eleanor Rosch have emphasized, but it also matters what you're trying to do in the world, like, what your goals are. And so it's kind of like this interaction or play between these 2 things ends up shaping how you think about things, which could also tie into, like, even how maybe how different languages affect how you think, because they flag up different distinctions.

different things you need to predict when you're communicating with people. Yeah, so I think there's this nice interplay between like, they're actually, you can think of there actually being like information structures in the world, but then they're overlaid with our goals and tasks we have to do. And we come away with maybe like a world model.

People use that term a lot in AI. It's in our heads, it might be generative, but it's really tailored to what we need to do in the world and the kind of predictions we have to make. Yeah.
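[Editor's note: a minimal sketch of the surprise-driven clustering idea Brad is describing, not the actual SUSTAIN equations, which also include attentional tuning, cluster competition, and supervised recruitment (see the SUSTAIN paper linked above). The threshold, learning rate, and toy items below are invented for illustration.]

```python
import numpy as np

def nearest(clusters, x):
    # Return the index and distance of the closest existing cluster.
    d = [np.linalg.norm(c["center"] - x) for c in clusters]
    i = int(np.argmin(d))
    return i, d[i]

def learn(items, surprise_threshold=1.0, lr=0.2):
    clusters = []
    for x in items:
        x = np.asarray(x, dtype=float)
        if not clusters:
            clusters.append({"center": x.copy(), "n": 1})
            continue
        i, dist = nearest(clusters, x)
        if dist > surprise_threshold:
            # Prediction broke down badly: recruit a new cluster anchored
            # on this surprising episode (the "a bat isn't a bird" moment).
            clusters.append({"center": x.copy(), "n": 1})
        else:
            # Unsurprising: blend the item into the winning cluster, which
            # drifts toward a prototype of its members.
            c = clusters[i]
            c["center"] += lr * (x - c["center"])
            c["n"] += 1
    return clusters

# Toy demo: similar items share one cluster until an odd item forces a new one.
items = [[1, 1, 0], [1.1, 0.9, 0], [0.9, 1.2, 0], [4, 0, 3]]
print(len(learn(items)))   # -> 2
```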

Andrea Hiott: So we have, we're in the world and we have memories, like we can just kind of agree on that. [00:25:00] We don't need to define it all.

And we're trying to understand how they happen. And. It kind of, it makes sense, like, from many people, including the way you just presented in the model in Rosch, to think we're somehow, there's a lot of regularities. The world comes at us in a very regular way, the bird and the bat have

they have a lot of things in common. So those are kind of regularities. So it makes sense that we would somehow with our bodies and brains be aligning, fitting to doing something, representing these regularities. So the model that you built, is it a way? The sustain model, is it a way of understanding that process?

Like what, how does it do that? I guess what I'm trying to get at is what is a model really in this cognitive sciences? How are we testing whether that's happening or not?

Brad Love: Yeah, I mean it's really, it's funny when you're asking your question, I was just kind of, maybe one reason why this, this model's worked out well or why I like it is there's this It seems like it does a good job.

How we actually test it [00:26:00] is, is really like so boring. It's like these lab studies teaching people, these novel categories that are so divorced from like human experience and background knowledge. And I've actually kind of fallen out of love with running studies like this, because there's just like the classic issues of external validity.

Is this what you're studying have anything to do with these larger issues? But maybe one reason why, like, this model's stuck around, or the ideas, at least, have stuck around, is because, like, all our discussion has been, like, these really bigger issues about how you structure your conceptual universe, or how does different languages affect what you think.

So, if you're, it's a Maya kid in Yucatan, a bat is a bird, so, like, maybe, You, that affects how you structure, your representation. So I think it's like, this model, like when you actually evaluate it and, and lab studies does a good job, but then it seems like you could still get inspiration to try to understand these things that you don't really apply the model to, we're talking about like [00:27:00] this big conceptual issues.

And then it was also good to try to make sense of maybe how different systems in the brain support. That kind of learning. So, yeah, so it's a really interesting question. Cause like, yeah, maybe that's actually what I never really thought about this, but maybe that's what makes something like interesting, at least to me that you could, it has some actual empirical support in ways that you could carefully kind of poke it and prod it and do like, like, science as usual, good internal validity, but then it could still help shape your thinking about larger issues.

And even when I was a kid, like making it was really just like, was thinking about those larger issues, but at the same time, thinking about like a stack of pretty boring psychology papers, behavioral effects that it could pick up as well. So yeah, I guess there was always this, Duality going on that I never really explicitly thought about until now, but it was definitely going on and part of the process, maybe it was so implicit that that's how it should be.

I never realized to now that [00:28:00] that's what I was doing.

Andrea Hiott: So I think that makes sense even with the more recent work, which I want to get to too relative to the hippocampus. But if we think about the hippocampus in memory So when you were creating this sustained model, and you've done so much more, we don't have to stay on this too long, but just thinking about the hippocampus and thinking about something like concepts and these, this way that something's happening with the body and brain in terms of, uh, recognizing, aligning with these regularities, and then trying to model it inside in the brain and that helping us to understand the process better.

That's kind of. One way of coming at, like, if we're trying to understand what the hippocampus does, what memory is that's a way of coming at it to, to, to understand that that's how we're remembering or we're recognizing where learning is very connected. And that's a lot of your work too is, is that way.

But for people who don't know what we're talking about, do you, by concepts and concept learning, do you [00:29:00] just mean these big associations or do you mean something like words? Because I think a lot of people think concepts are words.

Brad Love: Yeah, yeah, it's good. It's funny because Yeah, I think people, especially in psychology, cognitive science, neuroscience, like, kind of confuse the labels we give for our models and their laboratory tasks with the actual larger, richer thing.

And so, yeah, people, like, when I came, came, into the field doing work and these, like, category concept building models, they must have been like, oh, you really, you must be really into conceptual representation of that. But the way I actually thought about the models was like. Uh, the things that we call concepts or categories usually just, like, confabulated with words or labels.

And so I just see, like, more like words as action, or almost maybe like in a Wittgensteinian way, like, it's just, calling something, like using the bird example, that's just [00:30:00] like, That is like an action you have to take. You have to say, call the thing, the right thing. Otherwise people won't like you, in your culture.

Andrea Hiott: Oh, or they won't understand you.

Brad Love: Yeah, exactly. That's not the concept to me. To me, there really isn't a concept, or to the extent that there is, it's basically just these clusters, these clumps of useful bundles of information. Like you say, there's different ways to partition the world.

And to like basically regularities and clumps and the, and like we were saying, how you do that will depend on the actual structure of the world, but also how you have to make your way through the world, including like using words in your, in your culture. So, yeah, I think of these little regularities that support generalization, so it's all kind of action oriented, even though people wouldn't think of my work that way.

I always did. I just thought that, turn left or right, say, bird or mammal. It's just, these are all just actions you could take and you try to come up with [00:31:00] representations that support those, those actions. So yeah, so there's not like, so I'm not really, like, positing any, like, heavy conceptual machinery, really.

It's just to support our behavior, basically, and be able to generalize and make correct inferences.

Andrea Hiott: Yeah, generalization is an important word, and I think people also think of it as like patterns, but I don't know if that's too abstract for you, but let's just, this might get messy and I'll probably try and just edit out some of my stuff, but I really want to hear your thoughts on it.

So when we have ideas, I feel like already back then you were imagining what, uh, is it. becoming more obvious now in the study of the hippocampus, which is that something like memory and something like navigation are kind of the same thing. Just in the same way that when you, you, you often talk about, it depends on what you're looking for or it depends on how you've set up the experiment, or it depends on what all this come before.

You can't [00:32:00] separate what you're looking for and finding and labeling from this whole trajectory of the way the experiment is set up, or where we are in the world, or what the goal is. I think a lot of your work shows that in different ways, but the reason I start with SUSTAIN and go into the hippocampus is because I feel like the model itself is kind of set up to be agnostic about all of these labels, in a way that's becoming very important now, it feels like. Because, just to give a brief recap, at the time people thought the hippocampus is all about memory.

And then of course the place cells, grid cells, border cells had all been labeled found, uh, by this time too. And so it was like, Oh, the hippocampus is about navigation. And so there's been this, what is it? The hippocampus is memory. It's knowledge acquisition. It's a GPS and it all seemed very different.

But if you just look at your model and what it's doing, all those things can be the same process, which gets to later papers that you've done, but I want to throw that out there and [00:33:00] see what you think.

Brad Love: No, no, no, this is not a digression. This is spot on. Amen. Yeah, really early on in my thinking, I always saw it exactly that way.

Like, that is, there's just this one general, like, learning procedure. And like you said, you could apply it to different domains, things with different labels. Uh, different basically subfields and neuroscience and it's just the same procedure. And yeah, even when I was writing up stuff, I wanted to say, say this like more strongly, but I thought it's really funny.

I noticed one of your guests recently was Lynn Nadel and I, this is years ago, but I was giving a talk at Arizona, I think, where you, I think he still works there. And and I had a personal meeting. Okay, good. We have more time for fun still. And yeah, I brought this up, like, very sheepishly, that, oh, isn't it all this, and I thought he, I don't know why I thought he would hate it or something, but he's like, oh, no, that totally makes sense, and yeah, I mean, I hope I'm not really, this is decades [00:34:00] later, I'm sure what he said was way more subtle, but I just took it as, like, I'm not a crazy person but yeah, but what you're referring to, And what were you

Andrea Hiott: saying to him exactly that, Space is similar to concepts or something?

Brad Love: Yeah, it just seems like they're all the same operations of what's going on in these different domains. And yeah, yeah, obviously people know that the hippocampus is implicated in these, wildly different on the surface behaviors, like navigation, episodic memory. But, yeah, it just it always seemed that way.

And I think the field got off course, because there are things even from higher-level cognition, when I was starting out, going way back, there were all these ideas that space is a primitive that structures higher-level concepts. And it kind of makes sense, you use language like "I'm feeling down today," but then there are also cases where it doesn't make sense, like "I'm closer to you than you are to me," which doesn't really make sense.

But but yeah, there's a lot of evidence that that's That can be true that we could form [00:35:00] representations in one domain, like spatial ones, and apply them to others. I mean, we make analogies all the time. But the question I'm really interested in is, what's the machinery under that built those representations in the first place?

And I think it really is something like these sort of incremental clustering and whether you're talking about There's a whole other dimension that we're not even discussing, which is time. And I think it's the same thing once again. I always saw this way, like in event perception, where Why do you segment off an event?

It's because something surprising happened. People call it a boundary, but it's just basically like prediction broke down and then you noted it and then it all just kind of blurred together after that, stored all wrapped up and integrated into one bundle in your head. But but yeah, 100%.

So the more recent work, like you indicated, has pushed that, saying you can use these same simple clustering mechanisms to look at why you get grid cell responses, and also, like you picked up on very astutely, when you're going to see these kinds of responses or not see them. [00:36:00] It's not something that's built in; it's this interplay between the mechanism, the model, and the environment and the task that's going on. So I think it really is

all the same. Not because I'm just naturally a bundler as opposed to a separator. I think in this case it's just, I mean, it's not crazy: there's this brain area, and there's these related operations across these domains, and why shouldn't it be that way?

Andrea Hiott: Yeah, I think that goes back to that simplicity that's complicated.

Yeah. Because basically, I think what you've even said, it's a general learning system, which makes sense, and we can use that for what we talk about as memory, we can use it for what we talk about as GPS. I think what gets, is that right, the idea of a general learner? Yeah. But I think what gets hard is and we should be careful, it's not either or, right? Because I feel like this could be just saying, oh well, there's no such thing, and in some of your papers too, I wonder about it if you've gotten pushback. It's like saying, okay, we don't really need to, this idea of a concept cell or a place cell or a [00:37:00] grid cell.

You could almost, and it has almost become clear that we could name cells for almost anything, right? There are cells that fire every time you see Jennifer Aniston, which are called Jennifer Aniston cells, these kinds of concept cells. But actually it's very important that that was found.

And those are very important discoveries. And actually they, they add to all of this. So it's not an either, or like that we can now understand it, begin to understand it as a general learning system comes from all of that. And it doesn't mean all of that is now not there. It's just that we are going to kind of zoom out and see it a little bit better maybe.

Right. Or, yeah.

Brad Love: Yeah, I mean, there's so much to say. I guess one thing that never really resonated with me about just discovering cell types, like concept cells, Jennifer Aniston cells, place cells, and so forth, is that it almost feels like that's not at all an explanation.

It's, it almost, like we're talking about being children, and it feels like, [00:38:00] because it's going to get me in trouble, but it almost feels like a very childlike view of what science is, like, Like someone like going through the jungle, oh, I found a new kind of butterfly or this, or I went fishing, I put a mix of bunch of chemicals together and I discovered something, or eureka.

Or like, I've been fishing around in brains and I found this cell that does this, so I solved it. Like, it makes, it's like, Other things in science and psychology like that too, like even the things that look totally the opposite, like the Gibsonian direct perception always seemed like that. I mean, I really liked that in like linking to the environment, ecological psychology, but it's always like, oh, there's no representations.

You just, You just, you just know this because there's an invariant. I'm like, okay, but something has to compute that and process that. And like, just like the cells, you get a cell that lights up when you see Jennifer Aniston. That's just, it's not like someone just put that cell in your head. You don't have like a, or a scale of the brain's GPS.

It's like the brain doesn't have a GPS. Something is computing the stuff and there's things being transformed. I mean, even if you don't [00:39:00] take the computational view of the brain, there's just a lot of intermediate steps that are, Leading, uh, to that outcome. And to me, that's like the interesting thing to explain how it comes about.

Maybe this is like a little too, cause you're doing a very good thing of like trying to say, Oh, we could integrate all this stuff. Everything's valuable, but just to be negative, say what you think.

Andrea Hiott: I mean,

Brad Love: Yeah, yeah. So, like this recent paper that's under review now, where it's really actually questioning whether there is a lot of scientific value to identifying these cell types. And this isn't the kind of process, explanatory model that we were discussing earlier. This is really just taking an existing deep learning model and showing that if you put in the kind of enclosures we put rodents in to find these kinds of cells, in a VR environment, then even in a random network you'll see these kinds of cell types, and they look a lot like the ones in the brain.

It's not [00:40:00] saying they're not real, or they don't have value, or you have no causal efficacy, or that the brain can't have interpretable don't think the brain cares about us, kind of, whether it's interpretable or not interpretable. To us, but it's just sort of saying that the discovery alone of these things that might not actually have scientific value because they could come about because it's actually doing what you think it's doing, or it could just come about doing a completely different function as a network.

It does something completely unrelated to what you're studying. So that kind of goes back to what I was saying before to make it more positive that, you need to kind of do the figure out what the underlying mechanism is in the role. of the cells and like a larger computation. And so I think it's like the priorities of science are a bit screwed up because the discovery that to me, it's not a discovery.

It's just sort of like it's like finding a butterfly versus understanding how, it works or its life cycle or something, more, that's like the science. Yeah. [00:41:00]

Andrea Hiott: Yeah. I mean, I think we get confused sometimes about a couple of things, I think, I mean, One is that, of course, we want to discover new things and that can seem like science where like with the butterfly, you discover a new species and then you put your name on it or something.

Yeah. And, and I think there's something motivating about that. That's not bad, but you can go too far to where that becomes the focus and as you're pointing out that's actually not, I'm not sure that's going to help the butterflies, uh, live longer or be healthier, have a better environment or something, like, what are we really doing when we, when we talk about that? We could say that about the brain too, but also I think if I go back to like the 70s or when, when there was all this going on with the hippocampus, with HM, with the first kind of discovery of the, Actually, I feel like that was a very important shift in terms of the same thing that your model is helping us understand, which is, there are regularities in the world that are, I mean, this is hard because it's not, they don't map to, people always say map to or map onto, but, [00:42:00] There's a kind of representation or map of it as the body, like this, these things aren't separate.

So they're both showing us that there's some really amazing relationship between the way the body and the brain are and the way the world is as regularities that you can actually study and understand. This is like kind of a huge thing, right?

Brad Love: It's really exciting too. Yeah, I share your passion. Yeah.

Yeah.

Andrea Hiott: So I guess it's. I still see all that discovery, place cells, GPS, memory, HM is really important, but I feel like the story's got to widen a bit. And I, I, I think you're being, you, you would be harsher than I'm being about the importance of that stuff, which is, you would have more of a background of how to understand it.

But I guess what I'm trying to, to get at is like, can we just think of this as like a general learning system and something like or these models start to help us understand the more nuanced we can get, the more the technology gets, can we start to do what I think you've been trying to do? [00:43:00] And that's take the context into consideration, take the experiment itself and all the ways it's been developed into consideration, into the metrics is, are we, can we do that?

Does any of this make sense? What is it?

Brad Love: I mean, yeah. Yeah, I don't want to sound all negative, because things are progressing, and I think a lot of people are trying to do what you're saying. And maybe one reason that I was seeming more negative is because labeling these things by their function almost shuts down the scientific discussion. Because a lot of what we've been saying, what you just said, was about how the different task contexts affect things.

Cause like a lot of what we've been saying, what you just said was about like how the different tasks contexts affect things. That like, the place style does a lot more than, it's affected by so many things by reward and, you know, every, they're not, you need to look at the actual, I mean, they're just messier too than they're shown in the paper, like, when, how things are actually done, and that's setting aside all [00:44:00] the, Steps that went through and kind of before something ends up being recorded and reported in a paper.

But but yeah, so, but this goes on like all over neuroscience. Like, so, and again, I'm not really against these research areas, but, it seems like it's bad to label things by their function. Cause it's kind of assumes, you know, the answer. So we have, you know, areas in the brain, like the fusiform face area, like it shouldn't be called face area.

Because that sort of assumes what it does and that the story's over, but that seems kind of obvious, because then you could have all kinds of debates: is it really just faces, or is it just fine-grained perceptual distinctions, visual expertise? And so, that's not my area of research, but there's a lot of rich questions there that would take decades to resolve, and subtle things.

But in a way it's, so it shouldn't be called that the same way. Like, I don't think it's just semantics either. Cause you'd call something a place. So a grid cell and this, it's almost like you're just, assuming what it is and how it works and, and maybe like closing your, your [00:45:00] mind off to like the richer, deeper explanation that will inevitably follow, that is following, but maybe it would come along faster if we didn't have these viewpoints and attitudes.

Towards, eureka, I found this thing, or that thing.

Andrea Hiott: Yeah, and as you're talking, it's also, even if the person who's presenting it doesn't mean for it to be, it becomes that it can only be that thing, which is also not really true about the brain and body. The same cell, or group of cells, can have many different kinds of firing, or let's say names or labels, depending on the context or the way the experiment is set up or what the body's doing.

Brad Love: Yeah, definitely.

Andrea Hiott: Yeah. But to kind of try to go away from the hippocampus a little bit, although I could talk about it forever. So what we're trying to think about is how models help us understand being in the world.

And, these things of what mind is and so on. Maybe that's too grandiose. I don't know. You could tell me what you're trying to understand, but there's a many, many different, uh, models. So if we're, if we're using it to understand this generalization and [00:46:00] patterns learning system, how has, how have you seen models?

Cause you said at the beginning, right? Uh, these weren't, yeah. It wasn't like now I feel like it's, everything is about modeling and especially AI kind of, we're going to get to large language models now. And even like everyday people now know what that is. And it's like a huge part of just life. So I'm interested from your perspective, what that shift has felt like, if you saw it already way back the way you've seen these things or yeah, how that, how that's felt.

Brad Love: Yeah, no, I mean, Yeah, I want to get on to the new stuff too, but yeah, going all the way back to the graduate school times, I remember just sitting around talking with professors and fellow students, having discussions, because there wasn't any speech recognition that worked. There was no vision systems that worked at all.

And we were all like pretty high level cognitive people. People in some ways, I was like the lowest level person just trying to make these implemental models, but we sit around like, Oh, when do you think we'll have this? When do you think we'll have that? And it'd be like, maybe in 200 [00:47:00] years or something.

And so, I mean, now, I guess. Just having, just having thought about it and talk to smart people about it way back in the nineties. And now there's these things, they're not perfect, but, and they, there's all kinds of adversarial examples that both for the vision and language models where you could fool them.

And I think people just aren't impressed enough, actually, not with the underlying technology, which is pretty simple, but with just how far things have come and how rapidly. It's really astonishing. And I think in some ways people focus too much on the shortcomings.

And so like, even my own research, it's really like opened up doors because I'm, I was always interested in learning and representation, but if you just write out what the representation is, like, this is a small. square, and it's blue, like, as a feature vector, it's sort of like you skip the hardest problem of how you code this thing up.

And not that these models are perfect, but at least you need something to get in the game, to start making them better and iterating, like we discussed [00:48:00] earlier. So, yeah, to me this opened up a whole new world. But also as a tool. So we've been talking about models for explanation, to basically implement aspects of theory, but then there's the BrainGPT project.

If we're going to switch over to that, that's not a theory, it's like a really valuable tool for prediction that can help make theory. So I always see models, even the explanatory ones, or maybe even especially the explanatory ones, as offering a kind of compression of theoretical literature, because if you really understand a model, and a model covers a bunch of different studies, it's sort of like a stand in for those studies.

It's like an imperfect compression of it. So if you understood a really simple model, like the Rescorla-Wagner model of conditioning, it's not the best model of that area, but you'd probably have like an 80 percent synopsis of hundreds and hundreds of studies. And so that's a kind of compression.

We need that more than ever, and that's what got me into the BrainGPT project, because there are just thousands of papers coming out all the time. [00:49:00] And it's clear that for certain sciences, like neuroscience, you can't just look at your own little silo; you have to integrate information more broadly.

So I'm not sure it's going to be humanly possible, or even if it is now, it won't be in the future. And it already seems clear that things aren't progressing in science as fast as they could because of this flood of information. So I started thinking, I'm not really a tools person; for all the things we discussed, talking about childhood, it wasn't like, I want to make tools. But I think this is the future.

Whether we like it or not, it's going to be people and machines teaming together to do science. And so that's one reason I got into this project, which I'm happy to talk more about.

Andrea Hiott: Yeah, let's connect it to the other models, because as you're talking, I'm thinking these are kind of different models.

I think you talk about them as theoretical or process models. BrainGPT is more like an analytic model. Is that the term you use?

Brad Love: Yeah, it's just [00:50:00] like a tool. So all the models we discussed previously have been intended to go through the same steps people do: there are some principles of theory, and then you turn that into equations so that you can capture aspects of the theory, evaluate it, see how well it does with the actual data, and make predictions.

And it does the actual task. So if we're making a learning model, it learns stuff, hopefully like people do, and forgets stuff and makes mistakes. Whereas you have other models, and there are tons in neuroscience. The example I've been using in talks is mind-reading brain decoding, where you have someone in a brain scan, say fMRI, and you want to know if they're looking at cats or shoes. So you put a classifier on it, which could be an artificial neural network too, and it predicts whether the person's looking at an image of a cat or a shoe. But that's not a model of object recognition; it's just a tool to help you characterize the information available in those brain voxels.
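
To make the tool-versus-theory distinction concrete, here is a minimal sketch of the kind of decoding classifier Brad describes. It is not his lab's actual pipeline: the voxel data are randomly generated stand-ins, and the library choice and sizes are illustrative assumptions.

```python
# Hypothetical decoding sketch: classify "cat" vs. "shoe" trials from voxel patterns.
# Synthetic data stands in for real fMRI; this is a tool for reading out information,
# not a model of how people recognize objects.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

# Fake voxel activations: half the trials are "shoe" (label 0), half "cat" (label 1),
# with a small mean shift in a subset of voxels so there is signal to decode.
labels = np.repeat([0, 1], n_trials // 2)
voxels = rng.normal(size=(n_trials, n_voxels))
voxels[labels == 1, :50] += 0.5  # weak category signal in the first 50 voxels

# A plain linear classifier is enough to characterize the information in the voxels.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, voxels, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```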

So that's very [00:51:00] different from someone saying this is how people recognize objects, and how you would evaluate the model would be totally different. So yeah, BrainGPT is completely just a tool. There's no science in it in that sense. It might end up having something to do with it, but it's not intended at all to have anything to do with how people

predict things about neuroscience or how the brain works. But what we're interested in is, can we actually predict results from neuroscience studies from methods? Because if you're going to automate any aspects of scientific discovery, that seems like a prerequisite that, if you're like, Ooh, should I run this study or that study?

Or how should I design a study? You need the ability to know that if I do this, this will probably happen, and maybe put some probability on it. So something like that would be a really valuable tool to help direct scientists' empirical investigations.

Andrea Hiott: Yeah, let's unpack a [00:52:00] couple of things first, just for people.

Because, okay, let me just be kind of stupid for a minute. Those models we were talking about before, they seem magical, right? Because they learn the way the body and brain are aligned with the regularities of whatever experiment is going on. And then after they kind of learn that, you can use them to see how the brain is going to react in terms of blood flow or something, like in fMRI.

And that seems really astounding to people. That's one thing. And then you have these large language models now, right? Which maybe you can explain a little bit, just what that is. But that also seems really astounding, for a very similar reason in a way, even though I see why you're saying these are different, and they are. But it seems like something like OpenAI's ChatGPT can do the same thing.

Right. It can read the regularities and go ahead and show you what a person would do. But it's different, right? Because it's actually more forecasting or predicting or something. But maybe [00:53:00] you can better explain.

Brad Love: yeah, I mean, I think you, you said it beautifully. Like, yeah, so the really simple models before we discussed, they're intended to go through the same steps.

as people. They're really models of people. But of course, these models from OpenAI, these large transformer models, they're trained mostly on more text than any human would ever see in a thousand lifetimes. And they're trained autoregressively, mostly just to predict the next word in a sequence.

With a few extra tweaks to make them conversational. Yeah, they distill these patterns of human experience, but they're probably doing that in a way that's only distantly related to how we make sense of things and learn and whatnot.
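
For listeners who want the mechanics, this is roughly what "trained autoregressively to predict the next word" means. The toy sketch below uses a made-up vocabulary and a random stand-in for the model; it only shows the shape of the objective, not how any particular model from OpenAI or Meta is implemented.

```python
# Toy illustration of the autoregressive objective: given tokens t_1..t_{n-1},
# the model is trained to assign high probability to the actual next token t_n.
import numpy as np

vocab = ["the", "hippocampus", "supports", "memory", "vision"]
sequence = [0, 1, 2, 3]  # "the hippocampus supports memory"

rng = np.random.default_rng(0)
# Stand-in "model": random scores over the vocabulary at each context position.
logits = rng.normal(size=(len(sequence) - 1, len(vocab)))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(logits)
targets = sequence[1:]  # the actual next token at each position

# Training minimizes this average negative log-likelihood over huge text corpora.
nll = -np.mean([np.log(probs[i, t]) for i, t in enumerate(targets)])
print(f"Next-token loss on this toy sequence: {nll:.3f}")
```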

Andrea Hiott: It seems like it's accelerated whatever we're doing to some degree, like reflecting it back.

Brad Love: Reflecting back, I think, the structure of our own knowledge and culture and how we interact with each other.

It's really, I [00:54:00] think, amazing, but it has a very different flavor. Still, it could be an incredibly valuable tool. A lot of people, and this is not what I'm focusing on, but a lot of people are trying to use these to summarize the literature and make it more tractable.

I'm trying to make the literature more tractable too, but I think these models, which are amazing, have problems when it comes to looking back and summarizing things. You and your listeners have probably come across this term that I don't like but will use: hallucination, where the models make up stuff. They basically confabulate. And in a way it's because the models are cool that they do this: they're generative models that seize on the patterns of everything they've been trained on, so when they hallucinate, what they're doing is just giving you something that maybe should have happened but didn't. It's almost like when they make up citations of papers.

Andrea Hiott: They make up paper names or make up movies or something.

Brad Love: Yeah, but they're like things that probably should have existed, in a way. If it's a famous chemist, they're not going to make up a psychology paper for them; they're going to make up something on their topic that's almost like a triangulation of everything they did. Maybe I give them too much credit, but the things they make up aren't random, and again, it goes back to generalization.

Despite people being obsessed with these models memorizing things, they're really generalization engines. And to me that suggests you should use them not to look backwards, but to look forward, to predict things. And that's an important part of science, though not all of it.

So that's what we try to focus on in that project: can we give the model basically a little bit of background and the methods, and see if it can predict the correct pattern of results in a neuroscience experiment? That's interesting in and of itself, and it could be useful if you want to run a study and ask, what are the odds this is going to work out? If it's 99 percent, you probably shouldn't run it, because it's boring. But if it's like 1 percent, you probably shouldn't run it either, unless you have some deep insight into how the whole field and [00:56:00] literature has got it wrong.

And why the model is predicting that, and yes, this is going to be my high-impact paper. But I'm thinking more about way into the future for scientific discovery, helping people design the most informative experiment. And for that, you need to predict the future, basically.

Which sounds like, well, you can't predict the future. We can't, really. But it's a fairly stationary distribution: the brain's not really changing, and there are so many papers being published that no real study is truly unique. So to me it's a very minor form of predicting the future, not for us, but for the model.

And it seems...

Andrea Hiott: It's not only one future; you could predict many multiple ones in a way and see what happens, which is fascinating.

Brad Love: Yeah, go on. What are you thinking? Please.

Andrea Hiott: Well, I guess I'm thinking about tying this to where we've been, because we've been trying to understand learning systems, the body as a learning system.

The body is being in the world, and there's no [00:57:00] choice; you have to learn to survive. You can call it whatever you want if you don't think learning is the right word, but you have to align with these regularities and deal with them at all these levels. And, as you were pointing out, we then developed languages, and that's not

separate from this. It's part of that process that we developed language. And so now, in a weird way, language has become so important for humans, through all these things like papers, where, if you're in the neuroscience community, you can't keep up. As you've said, it's incredible how many papers are published.

Even on one topic, just hippocampus stuff. You have some talks, which I'll link to, about BrainGPT so people can really dig into it, but you talk about hippocampus papers and there are like 20,000 published in some small amount of time. I don't know, it's crazy. So now we're developing objects with that language, papers, all these things that we try to keep up with, and we can't.

So we've developed tools to figure out how to do that. And I feel like up till [00:58:00] now, it's been about, as you were saying, trying to clarify all that information and give us a little spit-out of it. For example, with this conversation, I could ask AI what it was about, and it'll give me a little paragraph of what we talked about. That seems really different from what you're doing with BrainGPT. It's still hard for me to really understand, but I feel like it's closer to understanding the patterns and the generalizations as already moving us in a certain orientation that we can't keep track of, because there are too many possible ones. But maybe AI can help us in a forecasting way.

Brad Love: Yeah, no, definitely. I mean, all of us are siloed, and so many related fields just use different terms for similar phenomena. Just think of neuroscience: all the different imaging technologies, different species, everything from DNA to social behavior. And I don't think it neatly encapsulates as much as we'd like it to.

It's not like an artifact that a human [00:59:00] engineered, where there are these distinct layers that are separate. Like you said, it's all kind of interacting, and information's leaking from one thing to the next. And that probably also goes back to our earlier conversation about why you shouldn't label a brain area as doing X.

Because in reality it could be more complex than that.

Andrea Hiott: It's all interdisciplinary, but once you try to hold that information, it becomes so overwhelming and impossible.

Brad Love: I don't think it's possible, even in many lifetimes, and even if you had it all, I just really think our brains aren't built to do this.

Andrea Hiott: We need tools. But that's why we get siloed, because we get overwhelmed and we're like, I'm just going to focus on this particular action potential situation right here.

And then we get siloed even though we're interdisciplinary. So it's always both. Yeah.

Brad Love: Yeah. It's interesting. People don't get uncomfortable using a calculator to crunch numbers, but somehow with this, I think people feel like it's too intimate, like predicting the outcomes of experiments is something only they should do.

And I think a lot of it, seeing people's reactions, and maybe this sounds harsh again, [01:00:00] is that people don't realize what being an expert is. Yes, you should be able to predict something, but it's really more, going back to explanation, about understanding a domain and how it's structured.

That can help you make explanations. But a good machine system that can process all the information, maybe it can't give you good explanations right now, maybe in the future it can, but I think it can give you good predictions already, and that's not the same thing. It's like a tool. I mean, does the calculator have a deep understanding of the structure of mathematics? Probably not, but yeah.

Andrea Hiott: It feels too new for us to understand it in that way. Maybe it feels like it's doing our job, or something that we should be able to do ourselves.

Brad Love: Yeah, yeah. I think it's threatening. Whereas I guess I'm most threatened by being lost in this deluge of papers. I think that's the real threat.

Andrea Hiott: And I think everyone feels it, but it comes back to why we want to give cells names. There's something about science, too, where we're motivated by our [01:01:00] personal contribution, which at the moment still feels like it should be being able to predict what the experiment is going to do. But actually, if we could see it a different way, we might have much more of a role to play if we delegate some of that to the tools.

Brad Love: Yeah, it takes maybe a different view of where the value of science lies. That's going to be difficult. But in these messy biological science domains, whether it's protein folding or neuroscience experiments, as we get bigger and better datasets, I think in the future these models hopefully won't even be reading the papers; they'll just be reading the data and the paradigm, because then you'd probably get even better prediction in some sense. It's leveraging human language now, but that also takes on all our biases.

And maybe it would be better to, [01:02:00] I mean...

Andrea Hiott: I don't know. I think there's a role for that stuff too, but I see what you're saying.

Brad Love: There's definitely a role, but I'm seeing it in the most simple terms. If you have very little data, then having the biases is helpful, because they can guide you and fill in the blanks.

But if you have a lot of data, like how these large language models are trained on English and other languages, these models don't really have a lot of biases put into them about the structure of language. And there they thrive, because they have so much more than any human could consume, and they can pull the structure out.

So I was just trying to make that analogy to science. Of course, the experiments we run aren't chosen at random, so that's already the most important human intervention into the system. But if you had tons and tons of data, I wouldn't be surprised if, first, we could remove the review papers, then remove... that's all [01:03:00] just, right.

Andrea Hiott: That is the review, in a way. That is reviewing.

Brad Love: Yeah, but I have to be careful,

Andrea Hiott: Right. In the same way that you said, it's not summarizing it.

Brad Love: So, yeah. I mean, this result of predicting results from methods was really astonishing when we got it. But then I started thinking about just how neuroscience works.

Every time a scientist, including me, publishes a paper, they think it's really insightful, that it's pushing everything ahead, that it's a real contribution. How could that be true 100,000 times a week, with all these papers? If that were true, why aren't we done in a month or two?

Why isn't everything solved? So obviously most of our papers aren't really advances. There has to be so much redundancy, so many connections between them, and so much has to already be out there. It's latent, but solved in a way.

Andrea Hiott: We're all [01:04:00] so focused on doing this work and fitting it in to wherever little niche we're in that maybe we've kind of lost the vision. It's hard to say, but

Brad Love: Yeah, yeah. So the fact that this kind of model can predict results

from methods probably says a lot about the field. I don't really think it's some startling technology. It's just good at picking up patterns, and it's been trained on a lot of the information.

Andrea Hiott: Well, I think you're being too humble again. So actually we should explain this.

I mean, when you hear BrainGPT, it sounds like ChatGPT or something, and those kinds of projects are huge, take a lot of money, and have taken lifetimes of work. So for people who don't know what BrainGPT is, how in the world did you start this?

Like you're actually close to having a kind of prototype, right. Or some, or maybe you have it already.

Brad Love: Yeah, we've got something you can download now. But the short answer is that the ecosystem has really changed in the last few years, so you [01:05:00] can build off what other people do. A lot of papers that come out in cognitive science

are doing experiments on GPT-4 or something. To us, a model like that doesn't have that much value, because it's not open source in any sense. Most of these models aren't really open source, but a lot of them are open weights, which means you can download them and run them in the cloud.

Microsoft gave us a little bit of money, and you can run them. And it turns out that with these models, without even training them, without spending the millions, you can get really good results in prediction, because they're trained on a lot of science, and they're trained on all of Wikipedia and so on.

But what we did is additional training on 20 years of neuroscience, which, again, isn't that much. My memory is bad, but it was probably something like a few modern GPUs running for 80 hours or so. So this is within the realm of what a lab could do easily.

So we did this fine-tuning. I could go into details [01:06:00] if there's interest, but basically there's this model that, like you said, costs millions to train. In our case we took Llama 2 from Meta, and that already does really well at predicting neuroscience results. It's really, really scary.

But you can freeze that model, it's 7 billion weights or parameters, and train a little pathway off to the side of it to augment it with domain knowledge in neuroscience. So we basically just trained on 20 years of neuroscience, and that does a little bit better yet on the benchmark of predicting results.
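
The "little pathway off to the side" is, in spirit, a low-rank adapter: the base weights stay frozen and only small added matrices are trained on the domain text. Below is a minimal sketch of that idea using the Hugging Face peft library. The model name, rank, and target modules are assumptions for illustration, not the exact BrainGPT training recipe (which is described in the paper linked above).

```python
# Sketch of adapter-style fine-tuning: freeze a pretrained 7B model and train a small
# low-rank "pathway" on domain text. Names and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "meta-llama/Llama-2-7b-hf"  # assumed base model (gated; requires access)
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

# Freeze the base weights; only the adapter matrices added below will be updated.
for param in model.parameters():
    param.requires_grad = False

lora_cfg = LoraConfig(
    r=16,                                 # rank of the low-rank update (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # a tiny fraction of the 7B base parameters

# From here, ordinary next-token training on a corpus of neuroscience abstracts
# (for example with transformers.Trainer) updates only the adapter weights.
```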

But yeah, I guess the amazing thing to me was that these models, even off the shelf, are better than probably me and most of my colleagues at predicting things. And again, I think it's just because there's so much they pick up in these patterns. It doesn't mean they're smart or experts, but they're really good at predicting things.

Andrea Hiott: So [01:07:00] you said a minute ago, it's kind of scary. And I, I, I know you don't really think that large language models, at least as of five years ago or three years ago or two years ago, were that scary.

I don't know how you think of it now. You seem to think of it as a tool that's going to extend what's possible in a good way, and so on. I want to talk about that a little. And also you mentioned this word benchmark, and I think that's an important word in this project, the BrainBench, or I'm not sure.

And so I want to hear about those things.

Brad Love: Oh yeah, sure. So when I say scary, that probably was not a good word to use around AI, because people talk about existential risks from AI. Like most reasonable people, I'm more worried about the concerns that are already affecting our lives.

These models have biases, and they're affecting people's lives today because they're being deployed in the world. So it seems like we should focus more on that. When I said scary, I think maybe [01:08:00] astonished would be the right word. It's more that this thing that wasn't even made to do this task, that was just trained on a bunch of stuff that happened to contain a lot of science, is already better at predicting neuroscience results. And it's pretty trivial to make it better yet by training it on 20 years of papers.

And we could do this now with pretty limited resources. With this whole project, too, we only got into this whole game less than a year ago, so it's a pretty rapid turnaround.

Andrea Hiott: Oh, because you were doing large language models for a long time, right?

Brad Love: Uh, no, just neural net models, yeah.

And models of the same interests, like learning, and trying to learn from visual stimuli. But I've been playing around with language models since back before they were any good, all the way back to the days of latent semantic analysis. Did that [01:09:00] exist in the late nineties or early 2000s?

I think it did. So people have been interested in this forever. It's just more recently, as they've scaled up and switched to this transformer architecture, that things have really taken off. And so, yeah, it just dawned on me.

So I never really got serious about it, nor did people in my lab, until less than a year ago. But the thing is, if you're already working with these kinds of deep learning models, it's kind of an easy move into it, having already worked with language models in the past. So it wasn't that hard. I mean, I shouldn't say that, because if you talk to, uh, Shelly and Ken, the first author, it'd be like, oh, it's been a really tough several months of dealing with it.

Andrea Hiott: Yeah, again, it depends on

Brad Love: Where you're coming from. Yeah, exactly.

Andrea Hiott: For you, it was okay, though. It fit your way of understanding.

Brad Love: Yeah, exactly, it fits into things. And [01:10:00] I feel like the way we set up the problem, too, was informed by being a working scientist all those years, because even some computer science collaborators we talked with were quite pessimistic about the project and whether it could work, and a number of funders were too.

I think it's because they had a different view of science. I see it as a mess, where each paper by itself isn't that valuable, with all these weak signals intermixed. I think a lot of them had, almost like we were talking about, this naive view of, Eureka! I found the cell, as if it's a logical, deductive process you can think through, where the scientist is so smart, everything has a reason, and there's a simple explanation. And I think in the biological sciences there's going to be some storytelling that makes us feel good, and some actually legitimate explanations, but for purposes of prediction a lot of it is just going to be a complete mess, involving 10,000 variables at once coming together in a way that is just never going [01:11:00] to make sense, because I think that's what the underlying reality is like.

But yeah, sorry, I think you had another question you wanted to focus on.

Andrea Hiott: Well, the benchmark, but before we get there,

Brad Love: Yeah, sure.

Andrea Hiott: Yeah, that goes back too to this overwhelming, blooming, buzzing confusion that is the world, which we then try to understand. And there's always this weird kind of in and out between reduction and emergence, with neither one of them ever really being right or true.

But when you're in one space, you feel like it's true. So these scientists are probably used to a certain way of dealing with the world, and it can seem like that can't be changed by something like BrainGPT, like it couldn't help them. But it's really just a scaling up, or a different scale, because the same things that motivate them now could still motivate them. I mean, wouldn't it be wonderful if they could still have the eureka moment, the great idea, but then simulate it in a way?

It's almost like, let's simulate these many possible ways. What if I tweak this, what would happen? [01:12:00] To me, this seems brilliant.

Brad Love: Yeah, I mean, it seems like it'd be a great knowledge discovery tool. You're like, what if I run this fMRI study as an MEG study, or run it with elderly people, or teenagers?

And you could get insights, or you could be like, oh my gosh, that prediction is completely wrong. And you could think through why. Why is the model saying this? And you're like, oh, because there are all these papers that have this artifact they didn't control for.

And so the whole literature is contaminated. So in my paper I could actually use the model, saying the model predicts this, but I predict this because of this, and lo and behold, I'm correct: this is a major discovery. So I think it could be really useful for directing people's efforts and providing a kind of forward-looking synthesis of the literature.

But yeah, to get to your benchmark question. This whole deep learning revolution really kicked off with the AlexNet [01:13:00] model of object recognition. That was huge for the whole world, but even within neuroscience, all the work in computational neuroscience using deep learning and object recognition kind of followed from that, trying to relate these engineering models to the ventral visual stream.

And that's how I originally got into deep learning. But what made that even a discovery was, how would you know something works or doesn't work? They had this kind of limited, but really good at the time, benchmark called ImageNet. It was a million images you could train on, and then like 50,000 you could test on

that were held out. So you train the model to know what a car or an ostrich or a bat is, and now you show it a new car or a new bat it's never seen before, and you see if it gets it right. And so that's how the models are scored.

Andrea Hiott: And benchmark, can we think of it as like a reservoir or something? I mean, how would people think of a benchmark if they've never heard the term?

Brad Love: Oh, yeah. I think of it just like an exam, a [01:14:00] test. Yeah, exactly, so you know where you stand. Like in the Olympics, we have world records and you can measure how fast someone runs, and you can compare people and see how good they are based on that.

It's basically a way of keeping score. And of course, if you have the wrong metric and everybody works to maximize that metric, that can lead the whole field astray, so benchmarks can be dangerous in that way. But another way they're good is they make everything comparable. In science, not just engineering, you want to compare different proposals and see which one is best, which one does the best job of accounting for the data.

And that's why we compute model selection statistics and things like that. So it's basically a yardstick to see which model is best, and it's objective in that you're going to get an answer out of it that's a number. But of course, there's subjectivity in how it was constructed in the first [01:15:00] place.

But I think it's critical when you're moving into a new area. Like we were saying, if we just said these models could predict neuroscience results and gave a couple of demonstrations, people wouldn't, I mean, I don't even think people believe it now, but they wouldn't believe it at all.

But if we come up with this benchmark of 200 carefully constructed items and show that human experts get 63 percent and the models get 80-some percent, then it's a little harder; you have to take it seriously, or figure out what's going on. And when we do additional training of the models, say on the neuroscience literature, we can say, oh look, it gets 3 percent better, so training on more neuroscience makes the model better. Or, things we discussed before, if we remove the review papers from the training set: oh look, that human insight was important, now the models are worse. Or, is psychology relevant to understanding the brain, as I think it is? Can we train on a bunch of psychology papers too and see whether that makes the predictions better or worse? So it's sort of a [01:16:00] way to take things that are really nebulous and make them measurable.

And I think it's wonderful, with the one caveat that the benchmark is never going to take into account the whole situation. So if it's not representative of what you really care about, or if people just work to master the benchmark, in some sense it can be overfit and self-limiting in that way.

Yeah.

Andrea Hiott: So you got a bunch of volunteers to help create all of this? That shows there's a lot of interest, right? This in itself feels like a very important project, just creating the benchmark.

Brad Love: Yeah, it was amazing. There were like 75 active volunteers helping to make this benchmark, because it took tons of human labor.

Just to get it right and do a lot of quality control. A lot of the people that contributed are authors on the paper, and other people just helped out a little bit. And there was a waiting list of like a hundred people too, because there just wasn't enough actual work to go around, or that I could manage, with that many people.

Andrea Hiott: people.

Well, that shows there's a lot of interest.

Brad Love: [01:17:00] Yeah. And there's a lot of interest from people too. It's really kind of tedious to complete the benchmark, but we had over 200 people do it in the week the study was open. We needed those experts to provide a baseline to compare the models to. And all that data is already publicly available, with the paper under review,

if people want to play around with the model or the human data.

Andrea Hiott: I think it's interesting.

Brad Love: Apart from just the models, it's an interesting study of how predictable our field is to the scientists within it. And there are probably a lot of nuances in the data that we didn't analyze because our focus was elsewhere, but I think that's just an interesting question.

And apart from the models, that's really the main story.

Andrea Hiott: I think it's fascinating. It's really rich. There are probably a lot of ways new patterns and generalizations could be noticed using it somehow. But I'm going to link to more about BrainGPT, and [01:18:00] I want to ask what you see now, where you are now, what's next, and then I have just a few quick little questions.

Brad Love: Sure, sure. Yeah, there are just so many obvious follow-ons. Sticking with BrainGPT, we talked about humans and machines teaming up in the future, and we actually had some analyses showing that if you combine a person's predictions and the machine's predictions, it's better than either one alone, or than two machines or two people.

And it's really interesting, because we talk about diverse teams being good for science, but for prediction that's really important too, because the models don't see the world the same way as people. They only correlate about 0.16 with human experts in how difficult they find items. The models and people see things a bit differently, in what they find difficult. And the models, impressively, also know when they're going to make a mistake.

Just like people do. They can give their version of confidence, and it's really well calibrated: the higher their confidence, the more likely they are to be correct. And that's true of humans too. So that means [01:19:00] you can actually combine a person and a machine to get a better prediction than either one alone, because they have slightly non-overlapping knowledge.
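
As a toy illustration of that kind of teaming, not the paper's actual analysis: if the human and the model each give a choice plus a confidence, one simple combination is to weight their votes by confidence. All numbers below are invented.

```python
# Toy human-machine ensemble: each judge picks option A or B on a benchmark item
# and reports a confidence in [0.5, 1.0]. Votes are weighted by confidence.
def combine(human_choice, human_conf, model_choice, model_conf):
    scores = {"A": 0.0, "B": 0.0}
    scores[human_choice] += human_conf
    scores[model_choice] += model_conf
    return max(scores, key=scores.get)

# Example item: the human leans weakly toward A, the model is more confident in B,
# so the combined prediction follows the model here.
print(combine("A", 0.55, "B", 0.80))  # -> "B"
```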

Anyway, that's a really easy thing to do next. Another thing we're doing is exploring all different ways of fine-tuning these models to incorporate neuroscience knowledge. People are going to want to specialize these for different domains, and we want to make them better than what just comes off the shelf.

What's off the shelf is pretty good on our benchmark, but maybe we could come up with better or tougher benchmarks, or just make it even better at ours. So we're looking at all kinds of ways to efficiently train the models.

Andrea Hiott: Or some kind of wiki, almost.

Brad Love: Yeah, yeah. And then we want to figure out how to host these models.

You can already download it, but we're trying to figure out how to host it on the web, so that we can put some rudimentary tools up, or people can even just chat with it. We didn't actually evaluate these models in chat; we just got [01:20:00] the probabilities out of the weights, because that was a lot more precise than trying to talk with the thing.
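
Concretely, "getting the probabilities out of the weights" can look like the sketch below: score two versions of an abstract, one with the actual result and one with an altered result, by how likely the model finds each text, and count the model correct if it prefers the actual version. The model name and the texts are placeholders, and this is an approximation of BrainBench-style scoring rather than the project's exact code.

```python
# Sketch of scoring a benchmark item from model probabilities rather than chat:
# the candidate whose text the model finds more likely (lower average loss) wins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; any open-weights causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def avg_neg_log_likelihood(text: str) -> float:
    """Average per-token negative log-likelihood the model assigns to the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

original = "Methods: ... Results: hippocampal activity increased during recall."
altered = "Methods: ... Results: hippocampal activity decreased during recall."

scores = {t: avg_neg_log_likelihood(t) for t in (original, altered)}
prediction = min(scores, key=scores.get)  # lower loss = more probable to the model
print("Model prefers:", prediction)
```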

Andrea Hiott: Gosh, there's still so much possible.

Brad Love: Yeah, yeah. Beyond the stuff I mentioned before, what I'd like to do more in terms of not just weeks or months but years, and it sounds out there, is to get into what a lot of people are interested in: automating aspects of scientific discovery.

So really, if you put in a few key questions you're interested in, like, does the hippocampus do this, does it do that, you could actually use these models and their ability to create a hypothesis, but also to evaluate how likely different patterns of data are, to come up with an experiment, a rough suggestion for an experiment, that would actually answer the questions you're interested in.

Because to me, if people are threatened by prediction, this would be even more so, because this is almost getting towards some kind [01:21:00] of scientific reasoning and creativity. I mean, the model would just be doing a kind of search.

But maybe that's all it is, asking a lot of questions.

Andrea Hiott: Well, maybe it can get to a point where it's like coding or something, where if you're really specific about what you ask ChatGPT to do, then it does it well. But you have to know those parameters, so that benchmark becomes really important.

But then it could be a kind of aid to creativity, I guess, because the hallucination could become the creativity. There's already some potential there; it's coming up with new stuff. It's just, how do you control that it's coming up with factual, true stuff?

Brad Love: Yeah. Well, I guess the great thing about prediction is nothing's factual yet, so you're kind of off the hook. And the models, again, are already calibrated in their accuracy. When they predict something, we can get a measure of confidence from the model itself, and it kind of knows when its prediction is bad or unlikely to be [01:22:00] true.

So yeah, I think this is a way to use these models. Like you said, I'm sure everyone's played around with them; they can write kind of silly poems and generate all kinds of things. And if you ask them something that's really highly memorized, like who was the president of the US in 1984 or something, they'll get that correct.

Because that's been so memorized. But most things aren't really memorized; they're just general patterns stored and shared across weights. And it'll just make stuff up if it's not attached to another system that grounds it in some specific source. I just think it's its nature to generalize.

So I think it's perfect, though, for scientific discovery, because there that's not bad. That's what you want, because you want it to try to anticipate what will happen by connecting up all the things that have happened. So yeah, it is kind of creative in that sense. And I think it'd [01:23:00] be exciting if you could just write out a few statements about what you care about, like, does it do this or that, and the system would give you probabilities of those and could search for

the study that would basically reduce your uncertainty about that.

Andrea Hiott: It sounds really exciting to me, because it's another one of those leaps, right? Thinking about your story of sitting in the room and not being able to imagine 200 years ahead, this would be like the 200-to-500-years-from-now thing.

Right now it feels like it's happening; whatever students are imagining now is probably not close to what might be possible if we can do this. But you said "they" a lot, and I do wonder, what do you think? Do you think of them, the AI, the tools, how do you think of them?

Are you scared of them at all? I know we already touched on it a bit, and maybe scared isn't the word, but do you worry that they could be misused, that it could get out of control and we can't understand anymore, [01:24:00] or keep these constraints or parameters for understanding the difference between

hallucination and something that's actually probable, like if we're going to build a spaceship or something?

Brad Love: Right, right. Yeah, it's funny, because when we put up a satellite or try to land something on Mars, that's a prediction too, but it actually works, because that's true physics and equations.

Yeah. Yeah, am I scared? I mean, I'm not, not, yeah, not, not, not terribly. I mean, I think people need to be responsible. I think the one thing that makes these models like a little bit unpredictable or difficult to deal with is like a lot of them. Models have like some objective function, something they're trying to maximize.

So you can kind of see what they're doing, and if you put in the wrong one, maybe it'll do something bad. There are all these ridiculous thought experiments, like the paperclip experiment, where it's told to make a lot of paperclips and it consumes all the resources on earth to do it. But that's a little bit

ridiculous. But when you have an objective [01:25:00] function, in some sense you kind of know what the thing's trying to do, and you can change the objective and make it not do things that are ridiculously bad. The large language models don't really have that in a simple way, because most of their training is just autoregressive, to predict the next word, and that's not how we use them.

We ask them to predict the result of a study, or how to bake a cake, or what should I do. People probably use them for online therapy, for everything.

Andrea Hiott: For me, the large language models don't seem too scary, but when you start thinking of large action models and large world models, I don't know how plausible you even see those things.

But it does start to stretch my brain a bit. Like, wait, what if this kind of capacity could be in all these other realms that aren't conceptual in linguistic terms? I'm not sure what to think about that. What do you think?

Brad Love: Yeah. Are you worried about them not performing well, or doing bizarre things, or, like, how?

Andrea Hiott: I just don't know how to [01:26:00] imagine it.

What it even is. If you're trained on words and generalizing from images and things like this, I can get a grasp on it. But if we make this internet of things and all this data is coming in from literally everywhere, again, it's the map-territory thing. If we make a map that fits perfectly to the world, there's something about that already, like, are we confusing map and territory?

Can we do that? But if the world is smart in that way, which people are talking about now with world models, so that it's not only language it's trained on, it's somehow training on everything around it, I don't know how to get a grasp on that yet.

Brad Love: Yeah. In some sense, I'm not surprised these models, even just language models, do as well as they do.

Because we had this paper a few years ago in Nature Machine Intelligence with Brett Roads, in which we looked at different spaces, like the linguistic space, the visual space, I [01:27:00] think an auditory space, and how similar they are. It turns out the same patterns repeat in the different spaces.

So if two objects are visually similar in some way, they tend to also be talked about in the same way. Anyway, so it's kind of

Andrea Hiott: A SUSTAIN thing, a general learning system, right?

Brad Love: Yeah, yeah. So I think the more of these modalities or systems you have that you can combine together, and this is what you were hinting at, probably the truer view you get of the underlying reality. But already, I think even just having language, you're going to get really far, a lot farther than most people thought was possible. But yeah, you're right.

The more you incorporate, of course. We're going to have robots walking around in factories and elsewhere, so you can see there's more coming.

Andrea Hiott: Is it a robot? I mean, I don't know what the vision is, because I feel like it's a [01:28:00] bit like where you were in that class: you couldn't have really imagined this, but it was happening.

It was going to happen. It was already kind of in the works and there's something kind of in the works with all of this, but it's very hard for me to envision what it means if it's something we're wearing on our bodies, if it's an actual robot walking around, if we're just literally interacting with the world.

I think about haptics on the phone and how that was unimaginable, and then it just became the most normal thing. I feel like we're in a moment like that in terms of how we interact with our context, but I don't know what it's going to be.

Brad Love: Yeah. Some people talk about it like your AI will just be talking to your friend's AI, setting up your date for you or something, but I don't know.

Because there are other things too. People have made a lot of progress on self-driving cars, but it's not solved, and that's something that has to deal with the complexity of other humans in the real world. So there is this strange thing where there are always these [01:29:00] edge cases that might make people not fully trust these things.

In a lot of ways, what's informed this BrainGPT project is trying to think about domains where a lot of these weaknesses or challenges really don't pose a threat or problem. But this is obviously going to change the whole world, and there are thousands of people working on all this stuff.

So hopefully somebody smart will get things right.

Andrea Hiott: Well, I mean, your team is already doing something, because this could be a template. As you say, it's not just brain science. If you figure out how to do this, it actually does start to change things. I mean, you can see it scaled and nested, right?

Not only would it change different fields, but you could also start to understand how the fields are interacting more, because for me, what's often very exciting is trying to find connections between all these fields that we don't see already that lead to actual new understandings. That might be more possible with something like this.

Brad Love: No, I know. I think so too. So, I mean, I [01:30:00] think, again, people always focus on the shortcomings of these technologies, but they're going to progress a lot, because there's so much effort and resources being put into them. But even if they don't, I think it's a lot like in the past when we had the internet, and before that personal computers: they didn't really change the world in the first 10 years they were around, they didn't have that much economic impact, and even less social and societal impact. And then they just became part of everything, and it's hard to imagine the world without the internet or personal computers or our phones now. So I think this technology is going to improve so much, but even if it doesn't, there's already a revolution in how it's going to affect our lives that's cooked in for the rest of our lives, even if nothing changed.

Andrea Hiott: Yeah, part of me thinks it won't be about robots and cars driving themselves and those kinds of things, so much as somehow the way we connect to one another changing, like with the internet, right? It's not about the computer; it's about the internet.

Brad Love: Yeah, we have no idea. Nobody [01:31:00] knew. I remember when the internet started taking off a little, it was like brochures from companies, just being like, oh, we could scan it and put it online.

It was like the pets.com Super Bowl ad or something, and now everything is on it. So everyone's trying to imagine what the future's like, but I think the lesson is that people are going to come up with applications and uses, and you mentioned ways it could go, and I bet that'll be so much different from what we imagine now. And a lot of companies betting billions will just go out of business because they'll get it wrong.

I think it'll be interesting. It's definitely going to affect our lives, hopefully mostly for the better, but it's going to be interesting and strange. It seems like the pace of change is going to accelerate, for both better and worse.

Andrea Hiott: Are you still excited about it?

Do you still feel like you want to try and understand how the mind problem-solves? Or have [01:32:00] you answered that?

Brad Love: Yeah, yeah, I mean, I do. There is a little bit of this feeling, though, that I didn't have before: the things I say in talks, I'm internalizing them too. So I am still trying to do those things, but sometimes I do feel, futility is too strong a word, but just like, gosh, there really is so much here, and I'm doing this and working hard at it.

I'm going to continue, but is this something where, in just five or ten years, we're going to do science in a completely different way and start automating a lot of these aspects of discovery? Are our papers going to be for humans to read when we put out findings, or are they just going to be these

massive databases that machines read through? I say things change faster and faster, but look how slowly things change in science. Even the publishing models that just don't make any sense, we still have them. We'll probably still have them in five years.

So maybe I'm getting too far ahead of things, but I could see the basic practice of science being different. I'm still developing ideas and pushing things and trying to do explanatory work. But one reason I put so much time into this more predictive tool work is that I could imagine waking up one day and realizing, why am I even doing this?

It's like practicing arithmetic when we have calculators.

Andrea Hiott: And by this, you mean the old kind of science?

Brad Love: Oh, like sitting around thinking about how learning or the hippocampus works, as opposed to working with some machine system.

Andrea Hiott: Yeah, I'm really glad you brought that up, because I think that feeling is something a lot of people are starting to feel now. As we were saying, everything is changing, and you start to, yeah, I don't know.

It makes you wonder, is it worth all the energy you're putting into certain things? Which, if you're not careful, can start to feel like maybe [01:34:00] I shouldn't put my energy into anything, which is actually not true; it's more about shifting it. I think it goes back to where we started, in a way. That mystery of trying to understand what's happening here is still there.

And like connecting with however that motivates you and maybe shifting into another way of exploring that instead of the ways we've been taught we should go. I think that can be really hard and scary. And I do think that's shifting somehow.

Brad Love: Yeah, yeah. I definitely don't want people to be discouraged by this conversation, because there's going to be so much to do. There's always going to be a market for explanation that other humans can understand, too, because we just crave that.

But maybe it's just going to be the natural thing where you have to give up on some things. We already have machines that can go faster than us, they can fly, they can calculate things faster. And people still play chess, even though machines [01:35:00] crush people at it.

I don't know.

Andrea Hiott: That's true. But we still enjoy it.

Brad Love: Yeah, yeah. So maybe we just have to be open, like you said, to the change, and that's easier said than done.

Andrea Hiott: So you don't feel like you lose control or something?

Brad Love: Yeah, yeah. Maybe it's off topic, but I don't really think so. I mean, we're all sort of embedded in these

huge societies anyway; we're not really in control of much of anything. Everything around me, I can't make myself, or even feed myself. So, I don't know, we're already like ants anyway, in a lot of ways.

Andrea Hiott: I think we are, but we're kind of becoming aware of ourselves as ants, and that's something. In a way, we have to stop thinking of ourselves in the same way too. Maybe we're kind of the ant that's becoming aware of itself as part of the colony, and that's weird, and that changes the colony.

Brad Love: Yeah. A [01:36:00] hundred percent. Yep.

Andrea Hiott: I'll just edit that out, I don't know what I'm talking about.

But for some reason I was thinking of this book by Christopher Alexander, have you ever read it? A Pattern Language?

Brad Love: Oh, no, I'm familiar with it, but I haven't read it.

Andrea Hiott: I was just wondering.

Brad Love: But it's like osmosis, yeah, the ideas of that, of

Andrea Hiott: Generalization, and the way he talks about it. It's kind of similar somehow, yeah.

And also category theory in math, are you interested in that?

Brad Love: Yeah, it's funny. It was so many years ago, but the closest I did to that was abstract algebra, with ring theory and such, and just for fun I've looked at it. Category theory just seems like it's sort of the next level of abstracting things away.

And you could even study category theory with category theory. A lot of people seem excited about it, and I've had some discussions on potential projects with it, but nothing's going on with it right now. Why do you bring it up?

Andrea Hiott: Just other conversations [01:37:00] I've been having. Something here reminded me of it, more or less this forecasting, or maybe even more the benchmarking, on that level, when you said you were trying to find a way to get more people involved. It just somehow came up, so I'm just putting it out there.

Brad Love: No, thank you.

Andrea Hiott: I do have one question, which I can edit out and you don't even have to answer, but this is called love and philosophy.

So I always at least try to ask about that word. It's not a word that's said in science much, and with this beyond-dichotomy theme I'm trying to push on it a little. But when I say it, does it feel completely separate from your science and your life and all of this technology and model building?

Yeah, I just wonder, I wonder like. If it just feels like something you bracket off from all of this or if it's infusing it.

Brad Love: Yeah, gosh, it's so hard to answer that. First we talked about creativity and passion, and at that level [01:38:00] it's definitely part of my life. I mean, people say, why are you working so much?

But no one says that to someone playing guitar or writing poetry. They're not told, stop doing it. So at that level it's infused. But there's maybe another level to your question, I think, that you're getting at: how you see yourself or reality.

Maybe it's back to the ants realizing they're ants, and that definitely is at play too. When you think about how the brain works and how machine learning systems work, and you listen to people debate things, you start thinking about the nature of every person, and myself, and how we're actually put together, and why we do what we do. So I think it does turn in on itself, and I'm sure that's probably what drew a lot of people into these fields, that sort of

interest in themselves at some [01:39:00] level. But there's another level at which it doesn't. Some people try to make science everything, to replace all other ways of understanding the world and make it all-encompassing. I definitely don't subscribe to that.

Maybe, like how you have these incompleteness theorems in formal systems, I wonder whether there will ever be a single way to understand the world and everything in it. I think there won't be. And a lot of scientists think they're being very open by integrating everything together.

But to me it almost seems like, at some point, it becomes one way of thinking, one way of gaining knowledge to rule them all. So in that sense, no. But in every other sense, as a passion, as a perspective on what's going on in myself and other people around me, [01:40:00] it's definitely integrated. Just maybe not as one way to rule all experience and understanding and meaning in the world.

Andrea Hiott: Yeah, that's a good answer. Do you ever feel this kind of sense of flow? I gather you like music, you played it well, and it seems to motivate you. I don't mean it in a mystical way, but just feeling kind of connected, or having, like we talked about, these eureka moments. There's something built into science too, where you want those moments of transcendence or transformation or feeling at one. Without getting too sentimental, do you ever have that?

Brad Love: Oh yeah, definitely. I think it relates to feelings of self-efficacy, where you put in some work, some time, you think about something and you have a real insight. Maybe it doesn't matter in the big [01:41:00] scheme of things, it could be really esoteric, but sometimes you're like, wow, I probably understand this weird little thing better than anybody on earth. Usually when that's true it doesn't matter, because it's some weird little offshoot of your super-siloed topic.

But it's not just being special relative to other people that makes it matter. It's more that feeling that you just got somewhere, got something. Like you said, wonder or flow or transcendence. Everyone has to deal with so many problems and so much BS in day-to-day life.

And you're like, ah, that's something real that I did, and none of that stuff can screw it up. I think that's maybe a way to stay a bit happy when things are going wrong.

Andrea Hiott: Yeah, I think that's really good to bring up; that's actually a good point. Even doing something like going for a run can give you that feeling that you did something, and [01:42:00] it kind of scales up when you're working like that.

And also you do a lot of collaboration. That feeling that you're part of a group working towards something, and when it's really done, that together you've really created something, there's something about that too.

Brad Love: Oh yeah, definitely. If it's a peer, it's really great when you've affected each other and you come to see things in a different way that's more compatible. Or if it's someone more junior, it's really exciting to see them grow and get better and better over just a few years.

Andrea Hiott: You have your own lab now, so you're really... yeah. We didn't even talk about it, but you have a lot of this going on.

Brad Love: It's cool, just seeing people develop and then go off and do their own things. It's good.

Andrea Hiott: Yeah, go ahead.

Brad Love: Oh, no, no. I was going to say, people make analogies to parenthood or something. I [01:43:00] don't have kids, so I have no idea, but this happens a lot faster.

And also, luckily my parents didn't do this, but some parents make their kids pursue the parents' passion. Here, people come to work with you because they share that same passion. So it's kind of an unusual and rewarding process when it works well: you work with somebody, they develop and get better and better, and they get to fulfill their goals.

And yeah, it's cool.

Andrea Hiott: Yeah. And you're still pretty young, so you'll see those people go out into the world and do things and it's quite beautiful, I think. And I don't think that's going to change with our technology and AI, that side of all this humanity stuff. Yeah.

Brad Love: Yeah, I hope so.

Andrea Hiott: And I also think there's a lot of different kinds of reproduction. You know, I've always thought it's weird that we only focus [01:44:00] on... I mean, having children is wonderful, beautiful, but we're also creating and reproducing things all the time in all these other fields, and I think that's also very important in terms of whatever's important for the world.

Brad Love: Yeah, yeah. So we might be ants, but we're strange ants, because we can also sort of decide what we value.

Andrea Hiott: Yeah, I think we're changing what it means to be an ant. Let's just say that.

Brad Love: Yeah, yeah.

Andrea Hiott: But, it depends on the colony. It depends on the context.

We're just ants after all, but still there's something happening.

Brad Love: Yeah. I'm with you on this.

Andrea Hiott: Thank you so much, Brad. I really, really appreciate your time. And I really appreciate the work you're doing, truly, like I have for many years.

Brad Love: Yeah. It's a real, real pleasure. Thanks so much for having me and thanks for your insights.

It's really great. With all this talk about how nobody reads things or understands things, it's great to talk with someone who actually has done that. You really made the time for this, so thank you. [01:45:00]

Andrea Hiott: Thank you. And yeah, I will continue to do so. So is there anything that we didn't say or that you want to make sure I put in?

Brad Love: Uh, no, gosh, I mean...

Andrea Hiott: It's only been two hours, so, I mean...

Brad Love: Yeah, it didn't seem like it. It's funny, because I have the kind of personality, especially since the pandemic, where I just can't sit still for half an hour. When I watch online talks, I usually exercise while doing it.

Andrea Hiott: Oh, well, I should have told you, you can get up and walk around some.

Brad Love: No, no, I guess I didn't feel that way. For once I wasn't stir-crazy. So yeah, thanks. It must have been a really good conversation and good questions.

Andrea Hiott: Well, I really care about these things, and I really see a lot of value in all the stuff you've done.

So it's coming from that place. I think that's...

Brad Love: Well, I mean, this is tremendous value. It sounds like you've built up a really good community around your show and your channel.

Andrea Hiott: It just started. I mean, really it's [01:46:00] research. I was just trying to understand things, and someone said, oh, you should post it, because you have such good conversations. And I was like, okay, and then it just kind of happened. But it is wonderful. It's like you were saying: there are so many little worlds in this world, and it's fun to see them and how they connect, and people really care and are really trying to understand the world and connect.

And I find that very beautiful. That's the best part. So anyway, that's all to say thanks for that; I probably needed to hear it.

Andrea Hiott: And any advice? Because I really don't know what I'm doing.

Brad Love: Yeah, I would solicit advice from everybody and then maybe not follow it all. That's my advice.

Andrea Hiott: But isn't it kind of like that for you too? I think there's something about the more you put it out there, like with BrainGPT. This could become a very big thing, and that's wonderful, but it comes with a lot, right?

Because then you have a lot more people wanting something from you, or criticizing, or not, you know; there's a lot more stuff that comes, I guess. Yeah, it's back to being ready for it, right?

Brad Love: It's like a [01:47:00] series of getting thicker skin, or just realizing that sometimes when people say really nasty things, it's not really about you; it's how people react. If something gets attention, this is always going to happen. It's just part of how social systems work.

Yeah, but it is hard.

Andrea Hiott: Science can be really hard in that way. For me, it's about staying true to that thing we were talking about, that intention that motivates me. Because, like you said, it's easy to get... I mean, you could get it too, right? Someone comes and wants to buy your ideas and turn it into a company, and maybe that's the right way, but you could also easily lose touch with what's meaningful for you.

Brad Love: Yeah, that's really hard. Trying to figure out what to do next or what to focus on, just like you're saying. I take a lot of long walks, just asking, okay, what do I actually really care about? Or what's the best move?

Andrea Hiott: Walking is so important.

Brad Love: Yeah, [01:48:00] I used to be, honestly, really maladjusted about this: oh, if you're making money off of it, then it can't be your passion. I grew up in that time when it was still possible to sell out; you're talking about artists selling out.

Andrea Hiott: That sellout thing has really been detrimental to my mindset a bit, I think.

Brad Love: Yeah, I mean, I think it's kind of an outdated notion now. Now it seems like, if I could find something where I could do what I want to do, what I believe in, and there was money involved, that would just make it better. It just seems obvious. It seems like a weird subculture that I grew up in, like music, where people would just stop listening to bands because they started on a major label or something.

It doesn't really make sense unless they actually lost control.

Andrea Hiott: And big time with that whole Nirvana thing. Yeah. [01:49:00]

Brad Love: Yeah, everyone was dumping bands left and right because of that.

Andrea Hiott: Selling out.

Brad Love: Yeah. It's curious.

Andrea Hiott: I've really thought about it a lot, because it affected me too.

And only recently do I feel like I can be healthy about it and say, okay, money is good, it's okay to make money, as long as it's opening up scales in the way we were talking about earlier. Yeah, exactly.

Brad Love: Yeah, yeah. You just have to figure out what your goal is, and I think both of our goals are clearly not just to make money. It would be kind of simpler if that were the goal, but it's not. And the weird thing is, sometimes it's not even a trade-off. It could just be the better option for what you want to do.

Andrea Hiott: And maybe it's even what you need to do at some point. Maybe that's been an obstacle. Let that happen.

Brad Love: Sustainable, yeah. So I don't know. I wouldn't be scared to go for it.

Andrea Hiott: Yeah, I think that's been a balance, [01:50:00] because I'm also a little bit restless and always working and thinking. So I won't make time to create a set, or even put makeup on and look all polished; that's not going to happen because that's just not me.

Anyway, maybe we should go for a walk.

Brad Love: Yeah, yeah, we should go for a walk.

Andrea Hiott: Alright.

Brad Love: It was really good. It was really nice talking. Thanks for doing it.

Andrea Hiott: Yeah, it was really great. And good luck with everything.

Brad Love: Yeah, you too.

Andrea Hiott: Same to you.

Brad Love: All right. Thank you. All right. Have a good one.
