(Lovely. Wow, amazing. Thank you. Thank you. All right. Let's see here...) Hello. Hello. Ah, good. Okay. Managed to get through whatever thicket of permissions was there. Finally. Excellent. Glad you're here. Glad to be here. I somewhat unadvisedly chose to install a full range of updates about 20 minutes before we started, which was probably not the best idea. Okay. Now at least you're fully updated. It seems to have made it in under the wire anyway. Yeah, fully updated before the next one. Yeah. Nice. So I'm just seeing if I am recording things on my side. So you're getting sound okay from me? I am. At some parts I hear your breath in the microphone. Oh, okay. Is that something you can do anything about? I was lifting the mic up to my mouth, so it's probably unnecessary. Ah, okay. Great. Yeah. Now it's not there. Lovely. Well, it's more sensitive than I thought. Yeah. I see a friend of mine, Nikki, who's joined us in the audience. Thanks for being there, Nikki. Yeah. I believe the space itself is being recorded, so we'll have that, and then I can share it so that it can be listened to in the future as well. Okay. Great. So, up for starting? Sure. Well, AI. So what was the comment that provoked it? There was that post by Pedro Domingos about power requirements. Yes. For AI as opposed to the brain. Yeah. And how that indicated that something's wrong. Yeah, that's right. I've gone through so many of the different things that you've written in so many places that I was like, yeah, where did this start? But yeah, one of the things I saw you wrote about power was the efficiency difference between current models and the brain, because the brain can run on a sandwich. That's a pretty good way of thinking about it. Exactly. So you can see why I was triggered by Pedro's post, because it was pretty much one of my talking points. Yeah. What was Pedro's post? He was saying that the power requirements of AGI, of current tech... I know we can find it, actually. Let me just go back to some notes I made here today. Right, cool. And where was his post? Actually, when we were chatting the other day by text, X, Twitter, was incredibly slow, so I was kind of hoping the update might speed it up. Let's see if I can find it. Well, if you go off your own memory of it, because I think you know the topic really well... Was he saying that the systems don't require that much power and therefore it can just scale to AGI? No, completely the opposite. Yeah. No, I understood it to be pretty much the same as my talking point: that there's clearly something wrong. Because, I mean, it's just absurd. We're talking... what was Sam Altman talking about? I don't know what happened to his request for seven trillion to build new... And Trump is talking about doubling the power generation capacity of the US, which is nice in itself, but for the purpose of AI, which is all right too, but it's just like, you know, doubling the power requirement of the country, or the world, because AI requires as much power as everything else combined. Rather than spending seven trillion on that, I would rather that perhaps 0.01 percent of that be spent on a little bit of research to understand why intelligence requires so little power. Right.
And in researching your work, one of the common grounds I found between your perspective and my own was this notion of power not determining truth. That seemed to be a very foundational baseline of how, I'm not sure if this is the wording you would use, but how thinking happens, or how processing reality happens. Well, the power... I mean, that's kind of a different tangent. I had an interesting path, because I grew up in New Zealand, but I spent a large part of my life in Asia, and I was studying language. So I've been studying language from a long way back. When you say language, you mean literal human language, right? You're not talking about a programming language or something. So, I mean, growing up in New Zealand, you're so isolated, I was fascinated by the idea of language when I was a child. Because I couldn't imagine what another language would be. The thoughts in my head were as much a part of me as an arm or a leg. And the idea that anybody could exist without those echoes in their mind, I was like, well, what's in their mind? What's it like not to have these thoughts, but to have thoughts that are another shape? Right. And so I was fascinated by it from an early age. I mean, if I'd grown up in a sort of polyglot community, I probably would be like, oh, yeah. You meet people in Europe and they say, oh, it's just a pain in the neck, all these different languages. But for me, it was fascinating. So I was motivated by that. I was studying language; I was motivated just to know what that felt like. And also, having a technical background, I was interested to know how that worked. And then I got a job working on machine translation. So that kind of crystallized it into something palpable... When you were a kid, you only had English, I presume, right? That's New Zealand. Well, the Polynesian language of New Zealand, Maori, has been pushed a lot since then, so you get a lot of that. But yeah, it's still mostly a monolingual society. Yeah. So when you were a kid, you only had this one language. What got it started? Because I imagine there are so many people that grew up in societies with only one language, but I've never heard of people having this thought or curiosity of, oh, what must it be like to think in these different shapes? Right. I've never heard that before. Have you heard that? Are there that many other cultures that are so... Uni, uni... Isolated. So isolated. I mean, most people grew up, or used to grow up, in a monoculture of some kind, but not perhaps so isolated. You think of people growing up in England; well, maybe they come into contact with French people more often. They're quite close. I mean, I suppose going back, living in villages, nobody went from one village to another. I mean, you had other people around you, right, who were also in the same culture as you, same type of language structure as you. Did you find that they were also curious about this sort of thing too? Ah, well, I suppose not everybody's the same. So no.
Yeah, that's what I mean. Yeah, like, there's more to it than just the environment. That's a very specific curiosity, actually. It's super specific. That's right. I mean, but you know, people grow up and they become curious about all sorts of things. For sure. And I'm a curious nature. When I went through formal study, I focused on physics, and the reason for studying physics is that you understand how the hell the world works. So I guess there was this "how the hell things work" gene that was working. Yeah. I guess I was just wondering, was there someone specific you met for the first time, or saw on TV or something, who spoke a different language, and you were like, oh, I wonder if they think in this different way? I mean, I obviously became aware of other languages, otherwise I wouldn't have been fascinated by it. I had an uncle who was of Dutch heritage, and he was reading stories to his son in Dutch and that sort of thing, and I thought that was cool. But I don't think I met anybody who couldn't speak English. I mean, I clearly remember the first time in my life that I physically met someone who could not speak English. Right. I was about 18 years old, and Japanese tourists were just starting to visit New Zealand at that time. And some Japanese man asked me a question, and the communication was difficult. But my fascination was earlier than that. Yeah. I mean, I can't push it back exactly, but I must have been about eight years old, and I was fascinated by what it would be to speak another language. Yeah. Very interesting. Yeah. So there was that. And then I became interested in how it worked. Originally I wasn't, because I remember somebody telling me about a talk on the radio, Noam Chomsky, of all people, and he was talking about his theory of grammar. And I was not interested. I remember clearly at that time thinking, well, I don't really care, I just want to know what it feels like. But then, yeah, I became interested in that too. So I had the fascination with language, and then I got employed by a company that was doing machine translation. So that was something I needed to do. And that came down to learning grammar. So, you know, what are the nuts and bolts of language? Well, I guess it's grammar. So what's grammar and how does grammar work? And I was trying to learn grammar. And anyway, that's a long story of how I came to truth equating to power. Because I became convinced that grammar contradicted itself. Grammar could not be learned, because language was full of contradictions. And when you learned patterns of grammar, you found that you had these patterns that would go one way sometimes, and another way another time. Yes. And it was just not possible to codify it completely and precisely. Yeah.
And so this became my big thing: we can't learn grammar. We have to accept that truth actually contradicts, that in one way it's like this, and in another way it's like that. And that fitted with my physics background, because I was like, oh, that's a bit like an uncertainty principle in physics, and it's funny that it works in much the same way. So I was like, oh, maybe some of the maths is the same. But it became my big thing: we can't learn grammar, we have to accept that truth contradicts, right? And so that's got to be a fundamental part of how we try to model intelligence on computers. And this was in Asia. And then I would come back to the West and notice odd things happening in the West, in different ways, but in terms of the "truth is power" thing, especially when woke started to impinge on everybody's consciousness. And then I heard this connection with postmodern philosophy: that truth is contradictory, that there is no truth, that it's constructed. And that, because anything can be true, it just depends on the power of this group or that group to impose their interpretation. So it's like, oh, this thing that has been my thing, that I've been pushing for so long, that truth is contradictory, is actually destroying society. So it was a collision with something that I'd come to privately, but from the opposite direction, because I was coming from a direction of saying, oh, I can see how this happens. Yeah, so can I flesh this out with you? Because how you got to it is not a logical step in my eyes, but clearly you... I've gotten to a similar place, but from a very different angle. So I actually would love to map out with you how you got to where you got to. Sure. I mean, I got there bottom up. I got there because I was looking at language and saying, well, what's the structure of language, how do the rules of language work, and I'm going to learn the rules. So I learned the rules, and then I said, well, this rule is different from that rule. So I was actually creating contradictions from first principles. There was no mystery about it, in the sense that you learn these things and they contradict. Whereas what I ran into from the philosophical point of view was that the postmodernists were coming at it from the other direction. They were saying, well, we're seeing that we can't really pin down the truth of this or that. We don't know why it would be contradictory. We can't see any system at all. So it must just be dependent on who has the most power to impose their own design. Yeah. Truth is determined by the person with the biggest stick. Yeah. Yeah. Which was sort of consistent with what I was saying, but from a completely different perspective, because I could see that it clearly did not depend on how big your stick was. Yes. It depended on the context, and what was relevant information at a given moment.
So, if we were to have some kind of first-principles approach to grammar itself... because grammar is a couple of layers up from physics, because it's got to work through humans, it's got to work through history. The reason why there are contradictions, as far as I can see, in grammar is because grammar is a thing made up by people. And when you have people from different tribes who use language in different ways, and then they mesh, it would make sense why there would be contradictory bases for their rulemaking. Does that make sense as a reason it would contradict, because the different cultures would contradict? Yeah, like language is in some ways very arbitrary, as far as I understand it. Well, it proliferates and changes and mutates. Yeah. I see it the other way, actually. I mean, you're saying it's two stages away from base reality, because we've got the physical and then you've got the intellectual. But I see it differently. I see that we don't have access to anything except through our own cognition. So cognition is at the bottom. And above cognition comes physics, actually, because what is physics? Physics is our observations. And then we're constructing a model for the world on top of what we observe. But if you don't have a model for how you observe things, you can't observe them. So, for me, it's absolutely fundamental, and the contradictions... if we have contradictions in language, it means that we have contradictions in our thinking. Well, I wouldn't say that it puts them right at the very basis, but it puts them below everything else that humans can do. And it doesn't necessarily mean that the universe is contradictory, but whatever humans try to apply to the universe is going to involve contradiction. Yes, absolutely. Yeah. Because, right, humans are trying to structure it in some ways. Humans are trying to simplify the universe so that they can benefit from it and not suffer from it. Sure. And those constructions do appear to contradict, on whatever level I can look at them. And that becomes most clear when you're looking at language. Because the conclusion I came to is that language is kind of special because it's a perception. I mean, I think this happens for all perception, so vision and sound, that's a thing. But language is a perception that we create. So I think it's a window into the way the brain is putting things together. Yes. Absolutely. And that's why I think that studying language is where you come to these contradictions most easily. And that's one reason why I think that large language models, the LLMs, have caused this sudden leap forward. I don't think it's by accident. I think that the language problem sort of pulls you there, unawares. And I think the field doesn't really understand what happened when it started studying language. Yeah, that was a repeated pattern in the things you had written in so many different places: that we just don't understand why there is a sudden leap now in our ability to make progress. Yeah.
No, I mean, I think we've stumbled around for decades. You look at the history of AI, 70 years really, and look at neural networks; neural networks is ridiculous. I mean, they just stumbled around, and then Nvidia made good GPU cards for people to play video games, and then it's like, oh, we can use this, and it just turned out by accident that that was useful for people modelling things on neural networks, and neural networks seemed to do interesting things all of a sudden. And then it's like, oh, so we've got neural networks, how do you apply them to language? And you apply them to language and something really interesting happened. Nobody knows why. They're all just sort of playing around, putting computer game hardware to use, and things happen by accident. And now we've got this big surprise when we started studying the language problem with large language models. As soon as it works, people say, oh, that's good, but, you know, the next step is obviously not to look at the language problem. So everybody trying to do something new is saying, oh, well, we need to look at other data, we need to stop studying language to make the next advance. And it seems crazy to me, because I think that this advance has been because you started looking at language, and it's led you to the solution. You don't see the solution, and you think that the way to solve the next set of problems is to stop looking at language. So, yeah, I've described that as: language worked, so let's try something else. Yes. We found something that works, so we need to move on to the next thing. Yeah. Let's not do that. What is the next thing that is being moved on to, then? Is it just increasing the computational power? Other data. Other data. People are saying... I mean, Yann LeCun is sort of prominent in pushing for real-world data. I don't know what he's calling it. Physical data, world models. I think it's world models he's talking about. They're saying, you know, language has all this tacit knowledge... or rather, does not have all this tacit knowledge. It doesn't tell you what's connected. So in order to... I mean, it's obvious to everybody that they can't reason very well. And so people are saying, oh well, to make them reason, we need to give them proper knowledge, right? And that's crazy, because they don't realize what language is teaching us. What language is teaching us is that we can't structure things unambiguously. I mean, AI started with attempts to structure things unambiguously, with symbolic systems and logic. What do you mean by unambiguously? Can you define that word for me? Well, so that there's a single truth. You can abstract things so that you have a rule which applies in all situations. You make a law, or you make a mathematical model. So mathematics is unambiguous. If you have ambiguity in mathematics... Yes. Then people will say it's a disproof of whatever your theorem was. Right.
So from the beginning, we've been trying to... you see, the general opinion is that you don't want ambiguity, you want certainty. True. And so people started from certainty with symbolic systems and logic. And then it's like, well, this is certain, so this is going to give us a way to apply this, and we'll figure out how our intelligence works. And, well, that didn't work. And then they took a step back from certainty with the statistical systems, which became the mainstream. So in the 90s, you move from symbolic systems to statistical systems, which have probabilistic models: we don't know for sure, but most of the time this seems to work. And then through the noughties and the 10s, Bayesian statistics was the default model of artificial intelligence. I mean, the first, what do you call it, massive open online course that taught AI, which came out, I think, in 2010 or 2011, Stanford, I think, had an AI course, and it was all Bayes. It was all Bayesian statistics. It didn't mention connectionism at all. So this was 2010 or so. So you've got AI as symbolism, and then in the 90s it sort of moves to statistics because it works better for things like speech recognition. And you actually got some speech recognition sort of working about 1997; some companies started. Bill Gates gave a keynote address talking about how computers would be talking, I think in 1997, I remember, at Comdex. So statistics worked a bit better, and then in the noughties it was all Bayesian statistics. And then with GPUs, you had neural networks explode on the scene about 2011. But it's like, well, why? So you go from symbolism to statistics to networks, and at each step it's a movement away from abstraction. But people are still convinced that they're trying to abstract everything. So what are they trying to do with the neural networks? They're trying to learn abstractions. So it's like, well, we're finding these abstractions, but it's funny that the more distributed and the less focused our systems are, the better they get. We're going to learn those abstractions. So neural networks started with supervised learning, where you would show your system a lot of examples of the abstraction you wanted to learn. And then there was unsupervised learning, where the system would automatically learn some of these abstractions, which were going to be universally applicable. So it's all about learning abstractions. And then you come to language models. And I think what happened with language models is that they are the final step away from abstraction. Because you not only give up on symbolism, there's no symbolism anymore, and people complain about that, and it's not probabilistic, but you're not even trying to supervise the learning in any way. You don't provide it with examples. Because with language, the problem moves completely away from abstraction. It becomes prediction. And so it liberates... I mean, the way I'm understanding it, it's the final step away from abstraction. You've got no abstraction anymore. You've just got prediction. Can you define abstraction here? Well, the same thing. Yeah. So abstraction is compression, or it's taking a lot of things and unifying them. So it's like saying, well, you've got a lot of individuals, they're all humans, and "human being" is an abstraction of individual humans.
Or you've got "planet" as an abstraction of all these lumps of rock floating around, that sort of thing. So you try and reduce an infinity of variation to get to the essence of it. Got it. And so that's what people are convinced we're doing when we're doing artificial intelligence: getting to the essence of meaning, abstracting messy reality into abstractions. And the function of abstracting is that it lets you essentially talk about very large amounts of information using very few words. Yeah. Well, it means you can interact with an infinite universe with a finite mind, I suppose you'd say. Yeah. I suppose the word "infinite" is the biggest abstraction, then. Well, that's right. I mean, if we didn't have that word, we certainly couldn't conceive of it. You would have the longest run-on sentence ever. It's like, well, there is the paper and the rocks and... yeah, everything's in it. So I think this has been the lesson of language. But I think it's been ignored. And what's happened after that? So you get to this final stage of completely dumping symbolism and abstraction, and you just do prediction of the sequence, and you don't imagine there's any kind of abstraction there at all. And that worked best. And so what's people's reaction to that? They're saying, oh, yeah, but we really need abstraction in this. And where are we going to find the abstraction? Oh, the abstraction must be in the world. It must be in real-world data. We need to have all this exposure to vision and real objects, and that's where we're going to find it. So they're still searching for this anchor, this single, definite, non-contradictory abstraction which is going to be the anchor of meaning. Yeah. And so all the theories about how to move forward, away from or beyond the success that we have with language models, they're all talking about using other data, mostly, I think. I mean, relevant to our conversation, Pedro Domingos, his big thing, I think, is tensor logic. So tensor logic, as I understand it, is that he wants to bring logic back into it. Because that's the thing that's missing in language models. That's what everybody hates about them: they're stupid, they can't reason. Right? I mean, there are a few patches being put on which sort of make them do rational things. But the big problem is that they're terrible at doing anything requiring abstract reasoning, like mathematics. And do you know how the models can tell the difference between good training data and bad training data? Because there's a lot of nonsense on the internet as well. How do they tell the difference? Because I've done a lot of conversations with ChatGPT, and for the most part it has a very, very high make-sense rate. How is that? Because the internet is not as good, I think, as ChatGPT is. I don't know. I don't know how they curate their data, or if they do. My inclination would be not to bother. I mean, if you're going to make your system be emergent from data, then the best thing is to assume that the noise is going to be swamped by the signal. If you don't assume that's going to happen, then you're not going to get anything at all.
So I would say that the easiest way to do it would be to simply assume that the signal, which is something meaningful, whatever meaningful means, and for language models that's defined by prediction... OK. So you would say that what's predictive is going to swamp what's not predictive. And what you think of as nonsense on the internet might actually be quite predictive. So it might not be as useless as you think. I mean, you're thinking of bad information as being a wrong statement? In a sense, yeah. But the wrong statements are probably grammatical. Grammatical, you said? Yeah, in the sense that even nonsense, somebody saying the earth is flat or whatever, I don't know what the misinformation is, they're probably saying it in a grammatical way. If you fed the language model completely random sequences of tokens, then you're not going to get much out of it. I mean, but I'm trying to square what you're saying, that it doesn't have much ability to reason, with my experience of it. It sounds very reasonable to me. How does it pull that off? Well, since these things first started to get a lot of investment, they've received a hell of a lot of hacks to try and cure the hallucinations and the errors that they do produce. And I'm not up to date with all the hacks that people have applied. There's this thing called RAG, retrieval-augmented generation, where I think they basically do a Google-style lookup on the query. So instead of relying on the language model alone, they're also doing some kind of regular, indexed keyword search, just like an ordinary Google search has worked for years, so that they can anchor it in things that people have said are true. So there's this RAG, retrieval-augmented generation. And there are probably two or three different systems that people use. They may also explicitly push some kind of maths-style or logic-style co-processor onto it. I believe that if you tried to make these things do maths two or three years ago, they were terrible. They would come up with confident statements in the form of a mathematical statement, but saying that 5 plus 7 equals 3 or something. And so I think there's been a lot of effort put into stopping them from doing that, probably by just diverting the specific maths questions to something which does maths well, which is not hard to do; you just pick up a pocket calculator. Interesting. So it's not the same engine, you think? I'm sure. I mean, the language models don't do logic well. That's a big problem. We don't know how they're structuring the world. We can't see it. I think of it as like when you have human students go and memorize a textbook, memorize the literal words of a textbook without understanding what the thing is actually talking about. And it's like, I can say the words perfectly, but then you present them with a real-world scenario, slightly altered, and it's like, ah, this is not what I was trained for. And I'm like, yeah, dude, you did not understand, then. Well, that's because you're saying understanding is having an abstract model which you can apply to different situations. And that's what everybody sees as the goal. But it's how reality separates from that which is the problem.
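(A rough sketch of the retrieval-augmented generation flow mentioned a moment ago, for anyone following along. The corpus, the keyword-overlap retriever, and the prompt format here are toy stand-ins for illustration, not any vendor's actual pipeline.)

```python
# Toy sketch of RAG: do an ordinary keyword lookup first, then condition the
# language model's answer on what was retrieved. All names here are made up.

CORPUS = [
    "The Earth is an oblate spheroid, not flat.",
    "Water boils at 100 degrees Celsius at sea level.",
    "Wellington is the capital of New Zealand.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Anchor the generator in retrieved text before asking the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    question = "What is the capital of New Zealand?"
    prompt = build_prompt(question, retrieve(question, CORPUS))
    print(prompt)  # this prompt would then be handed to the language model
```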
I mean, when people started looking at intelligence 70 years ago, they thought that the highest achievement of human thought is logic and reason, so you just apply that and then it's all nice. But intelligence does not actually work like that. There's also this thing, Moravec's paradox, which is kind of a summary of that. It says that the things which you think are going to be easy turn out to be hard, and the things you think are hard turn out to be easy. So your pocket calculator or your phone can do miracles of mathematical logic, but it can't see. But Tesla, well, I don't want to be sued by Tesla, but say company X's self-driving AI has difficulty seeing. I don't think it's Elon's X company; it is Tesla you're talking about. Company X is now the one we're using for speech. Oh, well, I mean, so this is ambiguity. Yeah, but now there is. The fact that you can't see a firetruck but you can calculate the cube root of 22 or something. Yeah, from what I know about the self-driving, they actually moved away from a method they were using up until now. I don't think it was even a year ago when they decided, and this is my very layman's understanding, that they removed all of the code from the neural net. I'm using terms I don't understand. And what they did was they essentially had it try to match what it's seeing with what they have determined to be good driving, and then it would try to just mimic that. And that led to it becoming more smooth, because it's not going through a system of if-this-then-that; it was just mimicking what good drivers do in that situation. I would think that they were always doing the mimicking, because the problem with if-then is a bit what you're missing out. It's "if situation A, then action B", and you can do the if-then. But it's the A and the B that are the problem. Yes. I mean, you can have "if enormous truck stopped in front of you, then apply brakes". Yes, yes. It's blindingly obvious. But the problem is: when is a massive truck stopped in front of you? And that's what you have difficulty with. Yes. Or "if the road is clear, then proceed". But when is the road clear? I mean, I think that with self-driving they've gone through several iterations and several complete code updates. I don't know if you came across a clip I've posted a few times of Musk talking about his issues with self-driving, probably from a year back now, where he says... The possible asymptote, that one? Asymptote, yeah. It seems like you're getting there and you're making good progress, and then it starts to asymptote. Yeah. And then he says, I think we're out of the local minimum now, and now it's starting to be like, yeah. And so that's this problem. So I think they're always meeting this. And for me, the asymptote problem is this contradiction. I think that there's always contradiction in the world. So any time you try and learn the world without embracing that contradiction, you're always going to hit a ceiling. You hit the ceiling where things contradict, right? Because it's like, well, it's sort of both. And when it's sort of both, then you can't establish which one it is without having some way of accepting that it can be both. I mean, this happens with humans as well. So language is leading the way again.
If you get humans and you tell them to label the structure of language, humans also disagree. You say, well, label all the nouns in this text, and they go through and they label all the nouns, or the sort of grammatical categories, and they don't agree. Any kind of structure you try and put on it, you're going to have some people say, well, it feels a bit more like this to me, and others say it seems a bit more like that to me. And so even for humans, trying to unambiguously categorize language, they also hit an asymptote. The problem is the labels you're trying to attach. There is no set of labels which fits the whole world. Yes. And I think this is coming out all over the place, but it just isn't filtering down to what we're doing in practical terms. So, I mean, I came across this when I was trying to learn grammar, and I was pushing it, and I was interested in the idea. And then I talked to people and they would give me leads. They'd say, well, that sounds a bit like something else. So one of the things I was told was: that sounds a bit like what was happening with mathematics a hundred years ago. So there's this thing in mathematics, I don't know if you've come across it, it's pretty deep in mathematics, which is this incompleteness theorem. No, I haven't heard of it. What is it? So Kurt Gödel. Well, the first thing that happened was that you had Bertrand Russell. Okay, so Bertrand Russell, about 1900, was frustrated by the fact that mathematics has axioms. So mathematics has four or five axioms, right? Five is the usual number, I think. And you sort of list these axioms, and then you can build the rest of mathematics from those axioms. But he didn't like that. He's like, why do you have these five random things? Where did they come from? So he was like, well, surely we can build mathematics without having to randomly assume five axioms. And so he was trying to do that, and he did this big magnum opus, between about 1900 and 1910, with another guy, Alfred Whitehead, I think. But he came to a problem, which was that he was doing it based on looking at mathematics in terms of sets, which was the theory of the time, I think. And he found these contradictions. And the contradiction is, I think, best summarized by an old paradox, which is the liar paradox. Have you heard of that one? In researching you, I learned about it. I'll summarize it by saying: I am a liar. Okay, exactly. And so this was this contradiction. I think Russell came at it as: does the set of all sets that don't contain themselves contain itself? If it does, then it doesn't, and if it doesn't, then it does. And he didn't know what to do with it. He just said, well, that seems a bit silly, so we'll exclude that. And then he went and built his model, but it was just a bit uncomfortable. And then this guy Gödel... By the way, just to clarify for Nikki or anyone listening, the reason why the statement "I am a liar" is a paradox is that if the statement is true, then I really am a liar, which means the statement itself should be a lie. But if the statement is a lie, then I'm not a liar, I'm telling the truth, which makes the statement true again. So it's a forever ongoing cycle of truth and untruth unto itself. Exactly. So it's like a beautiful, I think of it as a beautiful Möbius strip of meaning. Yes. And that's a nice image.
Meaning itself contradicts itself. And so there was this thing that Russell had, and he said, well, it's a bit of a problem, we'll exclude that. And then this guy, Kurt Gödel, about 1930-ish, came along and said, well, that's a funny thing, I'm going to systematize that. And what he did was he proved that actually you were always going to have that sort of contradiction in any sufficiently powerful mathematical formalism. So mathematics will always have things in it which cannot be proven within the mathematics itself. It basically means there are always going to be random things within mathematics that are true but cannot be proven. And this is a big thing for mathematicians. I mean, imagine you're a mathematician. You want to prove everything. They're all about proof. You want to say, well, this is true, and why is it true? Well, I'll prove it, and here's the proof. But what he put his finger on was that there's always going to be something in mathematics where it's like, well, this is true, so prove it... well, you can't. There's always going to be this randomness within mathematics. And that put a bomb under mathematics. And I can give you a link, actually; another one of my favorite references is a lecture given by a guy named Gregory Chaitin talking about the effect that this whole proof, this observation, had on him. And the way I visualize it is by looking at the axioms, the axioms of mathematics. One of them, the fifth postulate, as posed by Euclid 2,000 years ago, was that parallel lines do not meet. OK. So that sounds like a definition of a parallel line, right? Two lines, and they don't meet. But mathematicians were never completely happy with that, and they were always trying to prove it just from the other axioms, so they didn't have to have a separate axiom. But they couldn't prove it. And then one day, the traditional story goes, the mathematician Gauss, a couple of hundred years back, says, well, OK, I can't prove this thing, let's just assume it's not true and see what happens. And he invented non-Euclidean geometry. And non-Euclidean geometry is the geometry of curved surfaces. And if you think of a curved surface, like the surface of the Earth, and you have parallel lines on that surface, they do meet. True. Right? Like the lines of longitude: they meet at the poles. Yes. They're parallel as far as you can see when you're sitting on the equator, but then you go up and they meet. So this is the thing: with mathematics, you can have one system of mathematics where parallel lines do not meet, and that's the one people have been using for thousands of years, and it's very, very useful, and you want to have that axiom that they don't meet. But you can also have an axiom where they do meet, and it's also really, really useful in certain situations. In any situation where you're dealing with a curved surface, you want to have that. So they're both true. They meet and they don't meet. Yes. So I was finding this. I mean, coming from grammar, I was finding, well, sometimes it's a noun, sometimes it's not a noun. And sometimes there's a rule, and there's an exception to the rule. And then it's like, well, OK, that's kind of like what they found in mathematics. And so I was finding these contradictions.
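(For reference, the two contradictions just described can be written out compactly. This is a sketch only, with the spherical case standing in for the non-Euclidean geometry mentioned above.)

```latex
% Russell's paradox: the set of all sets that do not contain themselves.
% If R contains itself it must not, and if it does not it must.
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R

% Euclid's parallel postulate and its negation both give useful geometries:
% Flat (Euclidean):   through a point not on a line L there is exactly one line parallel to L.
% Curved (spherical): there are no parallels; "straight" lines (great circles,
%                     e.g. the lines of longitude) always meet.
```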
So these sorts of contradictions. And then you find it coming out in philosophy, in postmodern philosophy, which was also motivated by language. These things are clearer in language. But we still haven't embraced that in our research. So the people who are working on the systems are still convinced that they're going to be able to abstract things in a single consistent way which will encapsulate all the structures they need, and you just need to find the right set of data and it's going to do it for you. Right. My understanding of this, and my approach, for what it's worth: I do take a sometimes drastically different approach when it comes to psychology, oddly enough, because of this very thing. Which I find so interesting, because at this point you were basically largely talking about... I mean, it starts from the human mind and language and then works its way into how we try to program things. Sure. So yeah, having an abstraction that can allow for variables, that I have found to be essential. So if I show up in a situation where there is a person that gives me this set of facts, these data points, I have, like, "if this", but then I have to check what each of these facts is in its specific context to know what my "that" is. Whereas I've found that a lot of other people who work in the psychology space have a much more set-in-stone path of: if the surface of this situation looks like this, this is always the right path to take. And it's like, oh, but sometimes it works. I'm like, yeah, and sometimes it doesn't, and it's very detrimental. So you need to have an abstraction that allows for fluidity, certainly within some parts of it, because, like the math example you gave, we don't know which type of geometry this is. Is it the one where the parallels meet or not? And it makes a lot of difference. Exactly. Without the context, you don't know what you want. Yes. You don't know the truth you want. Truth depends on context. Truth is contingent on context. And sometimes, yeah, it's true in one way, but it's not true in another way. That's what you need. Yes. Yeah. And that's very, very interesting. And that's what we're not doing. That's what we're still not doing in AI, not consciously doing. I think that's happening largely by accident in language models. And I think that that is the reason that they're large language models: because they're just sort of cataloguing an enormous and growing collection of special, context-dependent situations. So hence why, in their data set, they have a "sometimes it's this type of scenario, sometimes it's that" structure. Is that what you mean? Well, they don't have it in their data set, but I think that's what they're building when they train them. I mean, with these language models, they become enormous. They require the enormous power requirements mostly because of training. And they train them for months and months and months, and they get bigger and bigger and bigger. Yeah, yeah, significantly, I imagine. And so, it's like, why are they getting bigger? Well, we don't know. We don't know what's inside them, but it looks good, huh? But doesn't it strike you that maybe what they're doing is just building up a whole lot of special cases, which then unpack as a user asks for them? In which case it's like, well, look, isn't that just a stupid way to do it?
If you're going to have a system which finds context-dependent truth, just wait for the context. Wait for the context and let the system find the truth you need then. Whereas what they're doing now, I'm convinced, is that the systems are automatically finding truth for every possible context they can, and then your user comes along afterwards and puts in some new context, and it comes down and it's like, oh, it made this thing which works. And maybe they have a thing which works, because they were able to index this particular context. But it sort of means that we're trying to make these systems... it's like the old monkeys typing the works of Shakespeare. We're trying to make them type everything beforehand, rather than waiting for the situation to arise and then typing the thing that you need. So my big thing is, I think that AI needs to become smaller and cooler by turning it upside down: instead of trying to learn everything in one enormous training step, which drains all the power in the world for six months and then sits in an enormous data center, humming in its entirety, but is never complete, because there's always some extra case that it didn't get to, have a nice, simple, small, cool system which can create all of these different contexts and situations, but doesn't until you need it. OK. Hypothetical scenario. How much would you need in terms of funding to, and to be very clear, this is extremely hypothetical... Yeah, I'm sure. Elon sends me a billion, and then he's like, all right. So what would be your first steps for building this small, cool, efficient AI system? Right. Well, I mean, I've been working on it myself for too much of my life. And my first solution was not good enough. Basically, I think that it works the same as language models; it just does it constantly. Instead of having a giant training phase, it finds things as it needs them. But the trouble is that all the technology at the moment uses this backpropagation, right? And backpropagation sort of has to do everything at once. It builds a network, and then it compares the network with all the things it needs to predict, and then it fine-tunes a single structure. Right. So all the technology at the moment is based on backpropagation. I think that what backpropagation is doing with language models in particular is different; with previous systems it didn't do this. Previous systems' backpropagation fit a network to a supervised training set. So you'd show it a lot of faces, and it would fit the network to those faces. With language, for the first time, we're not fitting the training set to any specific fixed solution. We're fitting it to prediction. So what these networks are learning to do is find symmetries of prediction. They're finding groups of things which predict the same thing. So it's finding a kind of symmetry, where in language maybe you have the sentences "nuts are tasty" and "apples are tasty", and then you say, well, OK, nuts and apples are both tasty, so we're going to group them together. They have a predictive symmetry, because both nuts and apples predict the sequence "are tasty". So they're like a symmetry group in the language. Right? Yeah, for sure. Yeah, so it creates a category, and it's a meaningful category.
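(A toy illustration of the "predictive symmetry" just described: group words by the continuations they share. The sentences and the group-by-exact-continuation rule are simplifications for illustration, not the actual training procedure.)

```python
# Words that share the same continuation get grouped together: a crude stand-in
# for the kind of grouping the speaker suggests training discovers.

from collections import defaultdict

sentences = [
    "nuts are tasty",
    "apples are tasty",
    "rocks are heavy",
    "anvils are heavy",
]

# Map each continuation (everything after the first word) to the words that precede it.
groups = defaultdict(set)
for s in sentences:
    first, rest = s.split(" ", 1)
    groups[rest].add(first)

for continuation, words in groups.items():
    print(f"predict '{continuation}': {sorted(words)}")
# predict 'are tasty': ['apples', 'nuts']
# predict 'are heavy': ['anvils', 'rocks']
```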
It's actually things that you eat and which are tasty. And it does it by finding a symmetry in the predictions. But it finds that symmetry in the predictions using backpropagation. It connects the network, and then it compares the error between that and a slight change, and it sort of narrows down. I think we need to find that same symmetry of prediction, but not by backpropagation. And so then the question becomes: how do you find symmetries of prediction? OK, so you have a network which is connected. You've got a lot of sequences of language, like "I like apples", "I like tea", whatever. And where two things connect to the same continuation, you have a link. Well, how do you find those nuts and apples? How do you find those things which are joined by the same continuation? And so I was looking for a way of finding things which are connected to the same continuation in the network, and how you could find that in real time. And then I found this paper that said, well, a way that you can find these densely connected sub-networks within the network is to set the network oscillating. I don't know what that means. Well, you can say that you hit it with a hammer and set it vibrating. Right. I mean, so you have a network of sequences, and some of the sequences go down the same path. So you think of lots of words which all continue down the same path, and you can sort of think of them as being like little diamonds in the network. Imagine all the words which fit in the same context. So the sentence could be "the wheels on the car go round and round", or "the wheels on the car", "the seats on the car", "the paint on the car". OK, so all these things join; you can think of them all as forming a sort of little diamond in the network of sequences. OK. And then if you were to make that network vibrate, because all of those words are joined to the same continuation, they're going to tend to synchronize. Sure. Right, because they're connected to the same thing. If you're shaking them, they're all going to shake at the same time. OK. Yeah. So I thought, well, the way to find these predictive symmetries, which are being found now by backpropagation, maybe the way to find them would be to set the network vibrating, oscillating. What would that mean in terms of software, or the mind? Well, it would mean that your model of the mind would be a network of sequences, which is what we have with language models now. And instead of grouping the elements in those sequences by backpropagation, you would be grouping them by shaking the network and seeing which ones shake at the same time, which ones synchronize their shaking. Yeah. Like, what does the shaking look like? Well, I think it's... I mean, look. Have you seen the movie Zoolander, by the way, with Ben Stiller? I haven't. No, it's a ridiculous comedy. He's a male model. There's a whole thing about how unintelligent he is. And there is a scene near the end of the movie where his boss tells him, "the files are in the computer". And then he goes up to this Mac computer, and he's like, of course, the files are in the computer. And then, like an ape, he begins to hit the computer with a stick, trying to break in to get them. So when you tell me you need to shake the thing, I just want to let you know what goes through my mind.
I was like, all right, I'm going to take a computer and shake it. So if you can give me some context, just so you know how primitive a mind you're speaking to right now. Well, I think I would shake it electronically. I would apply energy, but not with a physical hammer, with an electrical hammer. So I would give charges to the elements in the network. And basically, I think what happens is the spikes synchronize. Because this was the idea I found: a way you can find sub-networks which are richly interconnected within the network is to set it oscillating. And you can do it physically as well. Actually, there's a company which is doing something I think is quite similar, and you may have come across this, because I've mentioned it in a few comments recently. There's a company named Extropic, Extropic AI. And they're working, on the basis of previous work with quantum computers, on trying to do machine learning using noise, heat noise, literally, and I think they're doing it in silicon. They have a prototype which is working. I don't know how well it's working. It's super-cooled, because they need to super-cool it, because they're looking at vibrations, and they don't want a lot of heat noise burying the signal. But they are really hitting it with a hammer. I mean, you hit something with a hammer, it heats it up. So they're sort of looking at heat noise, and then they're saying which things oscillate together when they're hot. OK, so, if you like, things which get hot are maybe going to glow red, and the stuff which is connected most closely probably glows with the same color, because the heat is transmitting itself. And heat transmitting itself means that vibrations are transmitting themselves, or matching. And any kind of oscillator, anything which has a signal that goes up and down, can synchronize. So this company, Extropic, is trying to do it with literal heat in atoms, atoms vibrating. Yeah. I think that you could do it with something like a spiking neural network. OK, what's a spike in the brain? A spike in the brain is an electrical charge that builds up to a certain point and then sets off a jump across the synapse. So you have an electrical signal which is inside a neuron, and it's oscillating, right? It goes up and down. And you get masses of these electrical signals combining, and they're visible to experiment through EEG. I mean, we know that there are oscillations happening in the brain, oscillations of electrical signals. So for me, an obvious way to try to do it would be to have some sort of model of electrical charge, allow the charge on the elements in the network to oscillate, and then see which ones synchronize their oscillations. And so what I'm trying to do is find a way of collecting things according to their predictive symmetries. How are they symmetric? How are they joined in the way they predict? Yeah. And so the idea I came to was that maybe the way that cognition is finding these predictive symmetries is by synchronizing the charges on neurons.
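(A speculative sketch of "set the network oscillating and see what synchronizes", using a Kuramoto-style phase-oscillator model as a stand-in; the conversation doesn't commit to any particular oscillator or spiking model, and the word-to-continuation graph here is a toy.)

```python
# Words are coupled to the continuations they precede. With attractive phase
# coupling and no other forces, each word tends to phase-lock with its
# continuation, so words sharing a continuation end up synchronized.

import math, random

words = ["wheels", "seats", "paint", "nuts", "apples"]
continuations = ["on the car", "are tasty"]
edges = {  # word -> continuation it precedes in the toy corpus
    "wheels": "on the car", "seats": "on the car", "paint": "on the car",
    "nuts": "are tasty", "apples": "are tasty",
}

nodes = words + continuations
phase = {n: random.uniform(0.0, 2.0 * math.pi) for n in nodes}  # random start
K, dt = 2.0, 0.05  # coupling strength and time step

for _ in range(2000):
    new_phase = {}
    for n in nodes:
        # Neighbours: a word's continuation, or a continuation's words.
        neighbours = ([edges[n]] if n in edges else []) + \
                     [w for w, c in edges.items() if c == n]
        pull = sum(math.sin(phase[m] - phase[n]) for m in neighbours)
        new_phase[n] = (phase[n] + dt * K * pull) % (2.0 * math.pi)
    phase = new_phase

for w in words:
    alignment = math.cos(phase[w] - phase[edges[w]])  # ~ +1.0 means phase-locked
    print(f"{w:7s} alignment with '{edges[w]}': {alignment:+.2f}")
```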
So what I would like to do, you were saying, if Elon were to bung me a cool... Yes. Because then you could run an experiment that's actually funded. Something which costs less than building another 200 nuclear power stations in the short term. It might be fun to... Or, I mean, look, people have built spiking neural networks, right? Intel has had one for about 10 years. IBM built one called TrueNorth, but they didn't know what to do with it, and I think it hasn't gone anywhere. Intel built one, what do they call it, Loihi. And they're trying to figure out... But what they're all doing is trying to figure out how to use them. Because they like spiking neural networks; they're better in a lot of ways, a lot cooler, surprise, surprise, because only the elements which spike use power, right? So it's much cooler than what the language models currently run on, because those are all based on weights, which means all the links are powered up all the time. It's very hot. So people like spiking networks, and they're trying to figure out how to use them, but they can't figure out a way. They can't think how to use them, because they've got the things which work, which use weights, and then they've got their spiking things. Yeah, so I would like to try and find the same symmetries that are now found by backpropagation, but find them in real time, using oscillations in the network to find the symmetries. Super interesting. I had never heard this before as an idea, and I certainly don't have enough of an understanding to be able to simulate in my head what this would look like. So, I mean, there are quite a lot of people working on oscillations in the brain. I've come across chunks of software where people are trying to replicate what they observe in different parts of the brain, and they've built software which oscillates in different ways. But they don't have a model for what they might want. I mean, when we're looking at the brain, we're sort of looking from the outside in, and people see oscillations. There's actually this thing which goes back 30 years or more called binding by synchrony. Okay. And binding by synchrony is an experimental observation that perceptual categories seem to correspond to synchronized spikes, oscillations in the brain. So they did experiments, and when people were looking at things, they would perceive an object, and when they perceived it, their brains would be synchronizing; the oscillations would synchronize. This was a theory for how objects are perceived in the brain, which was called binding: binding something together as an object, by synchrony. It was observed experimentally. But this was also coming from the outside, and they didn't have any idea how and why those oscillations might be synchronizing. I'm coming at it from the other direction. I'm coming from the direction of saying, hey, you need a network which is connected sequences of observations, and the sequences will synchronize. Thirty years ago, they weren't looking at sequences. They were just saying, oh, things synchronize somehow; wonder how that synchrony is done. I've actually spoken to the guy who I think coined the term binding by synchrony, a guy named Christoph von der Malsburg.
And he said he gave up on the idea because he couldn't make a model that worked. He didn't have any way of making things synchronize according to their membership of some kind of meaningful category. Well, I've got a way that could happen right now, because things which make the same prediction might be a meaningful category, and those are the meaningful categories we're now stumbling into with language models. I want to see if I understood you correctly or if I'm hallucinating this understanding. In the research you cited about synchronization when you see an object, is it that my brain sees an object and then the neurons related to that object all oscillate together? I'm not sure. For a start, you don't know which neurons are associated with the object, except by presenting an object and seeing what happens in the brain. As I recall, they present an object and then observe that somewhere in the brain you get some kind of synchrony. So that seems to be significant as a response to an object, but they didn't know where it was coming from. And in current LLMs, is it literally the entire database that somehow gets cross-checked, as opposed to just the related things getting activated? What do current LLMs do? Well, current LLMs are finding the same thing. They're finding groups of things which share the same prediction. They're finding predictive symmetries in a network of sequences. But the problem is that backpropagation is not real time. It happens once, very slowly, over a ten-month training phase, and the whole network gradually narrows in on the best global solution for everything. So you don't have any discrimination of this context or that context. It's optimizing the whole enormous data centre of processes and all the data, all of it at the same time. And because it has to do all of it at once, it can't capture contradiction, right? Contradiction goes against the whole idea of optimizing everything at once. So they fundamentally cannot get contradiction. Now, how does this relate to what I've read from you about intelligence as compression versus intelligence as expansion? Well, this is the thing. If you're doing things in real time, you can have contradictions, right? Because in this situation you get this kind of symmetry, and in another situation you get another symmetry. So you have the same set of data, but it has two symmetries. And then each of those symmetries might have two interpretations, and each of those another two, and it grows. At the moment, people assume that you're compressing. You've got all this data, it's too much for us, we want to push it down and make it smaller. And then, paradoxically, the models get enormously big. They call them small models, but they're enormous, right? So they think they're getting smaller. But if you embrace contradiction, then it's natural that things will get bigger. It's just that they're not going to be bigger all at once.
They're going to get bigger over time, right? As time goes by, you arrange things one way and it's like, oh, that's useful. And then, oh, it can be arranged another way too, which is also useful. So it gets bigger and bigger and bigger. And the things are still useful, because they're predictive, right? Things which predict are useful. But they're not smaller than the data; they're actually bigger than the data. So it means you have to change your thinking. People have assumed that models of meaning would be useful because they would be smaller than the mess of detail in the world. They assumed that when we understood meaning, it would be a small set of principles that would explain the whole world. The utility would be that you'd be collecting things, but you wouldn't have to collect the whole world, because you'd just have these principles that explain it all. True. So the usefulness would be that it would be smaller. But when you give up on the idea that things are fixed abstractions and you focus on the task of prediction, prediction is decoupled from what the actual grouping is. You can have two different groupings, and if they both predict, they're both good. So it's decoupled, and you can have an infinite number of groupings. And then you come up with a new grouping and it's like, hey, I've had a new idea. It's actually a model of creativity. What's creativity? Creativity is a new grouping of the world which predicts it in a way that nobody thought of before. Yeah. So we have these language models and people say, well, they're not really creative. And of course they're not creative, because you're assuming that you're going to compress everything and make it smaller. But if you accept contradiction, then the meaning can grow. There's no reason why you shouldn't have an infinite amount of meaning. If the groupings are all different and they're all useful, why not? So it's an expansion of the world rather than a compression. It fundamentally turns upside down the way we're thinking about AI at the moment. We're thinking about AI as compressing everything into a smaller set of core abstractions. But I don't think that's what it is. I think what intelligence is doing is finding things which are useful, but it's finding new things. So expansion instead of compression. And it works in all sorts of other ways too. It's very suggestive for all kinds of little puzzles. Because if something gets bigger and bigger and bigger, it's relevant for... I mean, I don't know if you've thought much about mathematical chaos. Mathematical chaos is when any small difference in a system grows. This is the famous butterfly effect, right? The butterfly effect says you've got a butterfly flapping its wings in Africa.
And so it flaps its wings, that causes a little flurry, and then the flurry grows and grows and eventually becomes a hurricane off the east coast, wherever. Sure. So that's chaos. If you have a system where differences get bigger, it means it's a system which cannot be abstracted more than the system itself, because any difference will propagate and grow bigger, and any difference will result in a completely different outcome. Right. So this means you cannot copy such a system, and the weather is one such system. The weather on earth is a chaotic system, and that's why you cannot predict what the weather is going to be in your hometown in six months. Otherwise you'd be able to say, oh, we've got the equations, and it's going to be light drizzle and a bit of wind in twelve months' time. You can't do that, because the differences grow. But if you have a system like that, it's actually a model of free will. Because if you're God and you create such a system, you can create it, but you don't know what it's going to do. Right. Only the system itself can tell what that system is going to do. The only way you know whether it's going to be raining or dry in your hometown in nine months is to sit there, look at it, and wait and see what it does. Yes. So you could say the weather has free will, right? Even you, creating the weather, don't know exactly where the rain is going to occur. For sure. Yeah. So this idea of a system which expands, instead of a system which you think of as being compressed, is also suggestive of solutions for our intuition of free will, because it means that even if you create a person, you cannot predict what that person is going to do. It's also suggestive of something like consciousness, because consciousness is awareness of something for itself. So it's sort of like something being bigger than itself. Yes. At least for me, the idea that something can become bigger than itself is suggestive of what might be the source of our intuition of consciousness. So I think this idea of things that get bigger, rather than getting smaller, is first of all a very practical solution for a problem we have when we learn grammar. Yeah. But beyond that, it's suggestive of solutions for other things which are still mysteries to us. And that's not completely new. You can find other people saying similar sorts of things. There's a guy with a very famous book, I don't know if you've read it, Douglas Hofstadter. Godel, Escher, Bach. No, I'm not familiar. It's a very famous book in geek circles, and well worth a read. He sort of goes back to that maths I was talking about, and he follows different little flurries of intuition on pretty much the same lines.
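The butterfly-effect point a moment ago, that a tiny difference keeps growing until only the system itself can tell you what it will do, can be made concrete with the logistic map. This is a minimal sketch, where the parameter r = 4 and the two starting values are arbitrary choices for the illustration, not anything taken from the conversation.

```python
# Sensitive dependence on initial conditions ("butterfly effect") in the
# logistic map x -> r*x*(1-x). Two starting points differing by one part in
# a billion end up on completely different trajectories, so no summary
# smaller than the system itself predicts it far ahead.
r = 4.0                      # fully chaotic regime of the logistic map
x, y = 0.400000000, 0.400000001
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```

The gap roughly doubles every step, so within a few dozen iterations the two runs share nothing, which is the sense in which such a system cannot be abstracted into something smaller than itself.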
And it was wonderful for me, because I was saying all these things about contradiction, and maybe that the thing can't be described by anything less complex than itself, and people were like, oh, you should read Hofstadter. And I read it, and he's saying the same thing. But like the folks who observed binding by synchrony, Hofstadter had no actual network which created this expansion. He just had the idea that you're probably going to need some kind of expansion. So it's a similar sort of thing, but they didn't have the nuts-and-bolts idea of what kind of network might do this. So, in my own experience of having a neural net, which is a fun way of talking about my own self, what I've seen in its development, as I've been fostering this non-artificial intelligence... And yourself, and your conscious self. Exactly. You being bigger than yourself and looking at yourself. Yeah. Literally. I actually have an image of that. It's essentially a cube within a cube. Oh, yeah. I have that as a symbol for consciousness, essentially. Okay. And the self that I normally think of, like, hey, I'm Serenna, is actually the little cube that exists within this larger cube that I can become aware of. And it helps recontextualize things, actually. So anyway, in my development of myself, I've noticed both things happening. There is expansion happening, and that's literally learning. And I don't remember where you said it, but there was something about learning in real time, and I was like, oh yeah, that's absolutely necessary. And then there is also, I don't know if compression is the right term, but maybe distilling. And I feel like there is a difference between compression and distilling. Well, I think when we're learning, we are compressing. It's just that there are contradictions, and when you have contradictions, it means there are two ways of compressing. But they do compress, and you need to compress things, because our brains are puny. Right. You need to compress things down to some lame-ass principle in order to do anything. Why lame-ass? I have some badass principles. Just the humility of the human condition. Yes. Of course, on the scale of the universe. And another thread you might like to go off on, actually, was a talk about the brain, left brain and right brain, left brain being logical and ordered, and right brain being very intuitive and open to novelty. And they were talking about how that sort of left-brain thinking enforces its own rigid patterns on the world and resists change, and how this is symptomatic of problems we have in society. But we need that logical brain, otherwise we can't do anything. We can't build a jet airplane if we're going to sit there and say, oh yeah, but this fan blade might be slightly different to that one. You need to simplify things to get things done.
So they are compressions, but the new idea is that on some level there's going to be contradiction, and we need to embrace the contradiction if we're going to have creativity. Yes. I would also add to that: you need to embrace the contradiction to be able to distill whatever deeper truth it might be that gives rise to the possibility of the contradictions. Yeah. Because if you try to remove the possibility of contradiction, then you also disallow yourself from seeing whether there is a deeper truth that gives rise to the multiple options. You can't see novelty. Yeah. You won't come to any new solutions unless you do. But so you were saying that in your experience of your own neural network, you see both, right? You see compression and you see the other thing. Oh, absolutely. Yes. So you might be interested to listen to this talk between Peterson and the other guy, McGilchrist. Okay. Because he's written a book, basically saying left brain, right brain, which I sort of resist, actually, this whole left brain, right brain thing. Sure. I've heard it from a long way back, and it was a bit of a simplification. But he's quite heavy on it. He's very left-brain about it. Yeah. Very rigid. Maybe, in an open sort of way. But he was talking about both aspects. So, you know, you're seeing both aspects. He was saying, yeah, there are both aspects, and both are necessary. It's a bit like one of Peterson's themes, which is that you need to have logos and you need to have chaos. You need both. I'll see if I can find the guy's name; I should be able to. Here: Iain McGilchrist, wisdom, delusion, consciousness, and the divine. You might like that, because I think you're a bit spiritual. So, Dr. Iain McGilchrist. Perfect. Thanks. I'll check that out. Anyway, so yeah, I think you're right, both are there. But the bit that we're missing at the moment, in the technology specifically, is the expansion bit. And I think language models have led us to it blindly, stumbling without thinking, just saying, oh, these GPUs I was using in my gaming rig, I'll use them for neural networks. And then it's like, oh, if I do it for language, suddenly I've got this creepy thing that seems to be talking to me. So I think we're blindly stumbling away from structure towards this kind of openness. And then we see it with the size, right? I mean, these models get enormous. Why are they expanding? Well, I don't know, it seems to be a flaw in our compression. We're stumbling onto it, but we're just not thinking about it. We're not saying, hey, maybe this lack of structure, which we see as a problem, maybe that's the solution. Super interesting. And, yeah, from my side, right now I'm almost exclusively focused on human development, and over time I have come to see it as developing neural nets within humans. And I've noticed humans don't always appreciate being regarded as neural nets within biological bodies. Humans sometimes seem not to appreciate almost anything.
That is true. Lack of appreciation runs deep in the biology. Yeah. So I'm listening to everything you're saying, and reading everything you've written, through that lens as well, and trying to see where I see parallels between the biology, how learning happens, and how... Yeah, well, I think learning, or rather, the interesting thing is how creativity happens. Yes. How we change, how we jump out of what McGilchrist would probably call the left brain into the right brain. There was a guy I came across years ago, Snyder, I think his name was, working at Sydney University, or somewhere in Australia. And he had this idea about autism, or I don't know if it was his idea, but anyway, he was trying to jump people out of their fixed conceptions by literally putting a helmet on them and scrambling their brains with a magnetic field. Okay. And what was he trying to do? Well, he literally was scrambling people's brains to try to make them more creative. And did it work? I think so. You can look it up; it's Snyder, I think. Okay. I mean, I can find the link if you can't find it. Yeah, searching creativity, Sydney University, that sounds like it. I don't know if it went anywhere after that. I haven't heard of it recently, but it fit with me. So, for instance, working on my idea that a thought is an organization of information, and the problem, or rather the solution, being that those organizations can contradict, I was looking at autism. Some autistic people, anyway, have fantastic memories for detail. Right. My thesis was that when we learn something, we don't only learn the abstraction. We actually keep the detail, because I'm saying that you need to be able to abstract at runtime. You need to be able to order things in different ways as you go. Sure. So you can't throw away the information. You must remember the details, so that you can find the new order in them when you need it. Yes. And I was fascinated to find that autistic people seem to be evidence that humans do retain the detail of perception; it's just that we can't usually access it. Interesting. When we access things, we seem to access them only in terms of these broad abstractions. But you have people who are autistic, and some of them can retain fantastic detail. There's a guy who goes on a helicopter ride over Rome. I don't know if you've seen that. I've heard of it, actually. Yeah. He flies over Rome for an hour, they take him back to the lab, and he spends a week sketching all of Rome, right down to the number of windows on the building on the corner of the Vatican. He just retains everything. So it seems that autistic people are like a window into what the brain is actually retaining, but they seem to have lost the facility to order it. And that's, I suppose, why some of them can't handle too much stimulation: they're receiving the stimulation direct, like a fire hose.
Whereas you and me, in our maturity, we're closed off to most novelty, right? We're like, you know, those children asking us big questions: I know everything already. When it comes to maturity, I would say, please speak for yourself, Rob. Well, I'm saying maturity in terms of the inability to see novelty. Sure. Yeah. So as you get older, as you develop more certainties about the world, you're less able to see things in a new way. Yeah, generally speaking. Absolutely. And relative to my kids, they definitely see a lot more things in ways that I had not thought of, so I learn through them to see things in ways that I hadn't. And I think you get less pleasure from novelty as you get older as well. It actually feels like physically harder work. It's kind of interesting for me, actually, because I like learning languages, and I'm finding that as time goes by... I've always got a lot of pleasure from making the different patterns of a new language in my mind. It's wild, but it's hard work, right? It's like doing arithmetic in my head. Right. Yes. A lot of the learnings I've had have been through essentially meditating with psychedelics and then facilitating those for people. And it's a very interesting field when it comes to looking at it through the lens of neural nets. Yeah. And one of the really, really big things for people is their experience of novelty, actually: in that different state of experiencing things, there is just so much novelty in the same moment. Previously, this is my understanding, we have much more of a top-down understanding of the room we're in. So you're like, oh, I know this room like the back of my hand, and I know the back of my hand. Yeah, exactly. Because you know it already. You don't want to spend a lot of time on the bloody room; been there, done that. Yeah. It's like, I could even tell you a list of all the things in it. And then in that state, it feels as if you've never seen this room before. So everything is novel, and the back of your own hand, you realize you haven't really looked at that before either. It's like fresh data. So that's probably what we're talking about: it's sort of scrambling the old order, finding new patterns in it, combining things in new ways. And that's what language models can't do, because with backpropagation they have to do everything at training time. Yes. They can't find the new patterns. Yes. And they don't imagine that they would want to find new patterns, because they can't imagine that patterns contradict. If patterns don't contradict, you only want to do it once. And then you forget it, like your room, right? You just want to learn the room once. You don't want to be looking at the room every day and going, whoa. Yeah. To be clear, I don't want to be on psychedelics every day. Yeah.
Well, I guess that would be the extreme of what I'm suggesting: every day you wake up and you never get out of your room, because you're just like, oh man, look at that too. Yeah, exactly. So it does help to distill things down into some degree of functionality, so you can explore further. But at the moment we're completely ignoring the extent to which things can contradict, even the idea that they can contradict. So we're completely missing out on the creative aspect. Yes. And it should be simple. It may be really, really simple to incorporate that. We just have to dump the backprop, embrace the idea that things can contradict, and use the same learning mechanism, this predictive symmetry over sequences. And it needn't only be for language. The brain may work with predictive symmetries over sequences way beyond language, in everything else it does. Like, for instance, why does the eye perceive the world by performing saccades all over the place? You know saccades? When you look at something, the eye is not stationary. It's only sharp in the very centre, and it flicks back and forth, with saccades. Oh, I was unfamiliar with this. Yeah, the eye is constantly moving. If it stops moving, you stop seeing. So I don't know how deep this whole sequences thing goes, but it makes sense to me that when the nervous system was first evolving, in the first worms, sequences were probably significant even for the proto-worm. The proto-nervous system may have evolved from a very early time to find symmetries in sequences of experiences, so that it could better predict the future. Even if those sequences were just, I taste sugar and it gets warmer, so I will combine those things. That's a symmetry, a predictive symmetry: I'm going to get more sugar, or something like that. Yeah. So this whole business of grouping predictive symmetries may be very deep in cognition, going right back to the earliest nervous systems. Yeah. So it should be very easy, once we accept that we need to do it, that we can't do everything all at once in one enormous training phase. Yes. That's how I've been approaching my own development, and then the development of the people around me: what is in front of you right now, and what needs to be developed here, and then let's make sure we develop it really, really well. I don't know if this relates at all, but that's how I'm hearing what you're saying about non-human neural nets. Well, it's all about perception of the world and what meaning means, what meaning is at root. So it's going to be relevant to anything humans do, self-development or recognizing a fire truck. Yes. Yeah. Something I would love to just put on your radar. I don't know what to make of it or what you could make of it, but it's something I've noticed from my own experiences with psychedelics, specifically meditating on high doses, because that's a different state from low doses, or from approaching it in a non-meditative way. Yeah.
It seems to be humans quite universally, so it's not a me thing or any person specifically, it's just our operating system, that there is a level of our consciousness, or a depth of our consciousness, that is pre-language. Uh huh. Yeah, I'm sure. I mean, I don't think language is fundamental at all. I just think it's kind of like a cheat code. It's definitely a gateway. And when I speak with people, obviously I'm not there in some telepathic way; I do think there's a lot that happens non-verbally as well, of course. Yeah, no, I mean, a lot of the criticism of language models and that sort of thing is attacking the idea that language is absolutely fundamental. I don't think language is basic to cognition at all. I think it's just another example of the same kind of thing; it's just a particularly transparent example of it. Yes. Okay, so then we are aligned there. I'm curious then about what you were saying with the oscillations. Would it be accurate to say nodes of information? Is it the nodes that you're oscillating? I think if you took it down to the metal, so to speak, down to the lowest level, it would be sequences of perceptual spikes from whatever sense organs you have. If you're going to do it with language, it's going to be vibrating cilia in your inner ear, right? And that produces a sequence of spikes. But they're all going to be perceptual sequences. And then you need to cluster those sequences and group them into things, right? You can't just have spike trains. So yeah, if you wanted to go right down to the baseline, you would be stuck with sequences of spikes coming from whatever sense you're looking at, whether vision or hearing. Or the worm tasting sugar and heat. Yeah. You start with a sequence of spikes, and then you find how those sequences of spikes cluster in some way, and you can build structure on top of that. At the very lowest level, there may also be clustering in terms of different associations and reward mechanisms. For language, at least for syntax, it's very clear. Syntax being how groups of words combine, as opposed to morphology, which is what makes up a word. And syntax is the interesting one, because syntax is the part which is new in every sentence; every sentence you say is a little bit new. Right. So syntax is where you find novelty. So I think this grouping on symmetries of prediction is vital for creating novelty, or syntax. There may be a layer below that which is just in terms of things which are repeated, repetition. Repetition is something people already use as a basic mechanism, what they call unsupervised learning. Lots of people have tried to build AI models for language by grouping things which repeat. It's an obvious thing to do. You group sounds which repeat, or groups of letters. You group letters which repeat, and you get words. Right. Sure. So that's a mechanism which probably exists as well: things which repeat get grouped. Yes.
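A toy sketch of the repetition mechanism just described: strip the spaces out of a scrap of text and count which character sequences recur. The text and the chunk lengths are arbitrary choices for the illustration.

```python
# Toy illustration of grouping-by-repetition: remove the spaces and count
# which character chunks recur. Frequent chunks tend to be word-like units,
# with no grammar involved anywhere.
from collections import Counter

text = ("the cat sat on the mat the dog sat on the log "
        "the cat saw the dog and the dog saw the cat")
stream = text.replace(" ", "")

counts = Counter()
for n in range(3, 6):                         # look at 3- to 5-letter chunks
    for i in range(len(stream) - n + 1):
        counts[stream[i:i + n]] += 1

# 'the' dominates; the other frequent chunks are a mix of real words
# ('cat', 'dog') and boundary junk ('hec', 'edo'), which is why repetition
# alone is noisy, and it never gives you grammar.
print(counts.most_common(8))
```

Repetition gets you word-like units out of raw sequence, which is the point being made here; the next mechanism in the conversation is the one repetition cannot supply.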
But I think the bit that we're missing is the creative one, which is things that are grouped because they make the same predictions. Yes. So yeah, going down to a lower level, with these sequences of perceptual spikes there may be some grouping in terms of repeated patterns as well. But the one I think we're missing is the grouping of things which make the same predictions, even if they're different. Can you give a specific example of this, just so I can ground it in my mind? Well, language is a very simple window into it. Repeated sequences of words, right? You can easily find the words in a language just by looking at letters that repeat. If you take out all the spaces in an English text and then list sequences by how often they repeat, you're going to get a list of words. Yeah. But you're not going to get grammar that way. What you get is things which sort of repeat, but they're a little bit different, and when you try to make rules you find yourself hypothesizing categories like noun and verb and noun phrase and that sort of thing, and you can't get a precise definition. So, okay, you were asking for an example. I think language is the easy case. The mechanism which I think is being missed at the moment, or which does exist in large language models but is mistaken because they attempt to do it one time only instead of embracing contradiction, is to group things according to shared prediction. You can do it as a gap-filling test. Take any sentence. Okay, I've got a letter here, and on it is written: in event of non-delivery, please return to. So you could delete the word event and imagine other words that could have come in that sentence: in the case of non-delivery, in the... I don't know, what other words could you fit in there? Happenstance. Yeah, happenstance. So you could make a grouping of words which all have roughly the same meaning, because they all fit in the same context. Right. And that's the mechanism: grouping things according to shared prediction, or shared context. And that's what LLMs do. You'll see they talk about it as embeddings, because a word is embedded in its context, roughly. So LLMs do that, but they try to do it all at training time. I think that with those groups of things which can be substituted, you have to keep all of them, because you can't find a single group of words which fits in every context. So those words were: in the event, in the case, in the happenstance. Okay, so you've got a grouping: event, case, happenstance. And then you might have another sentence like, we're having a big event on Tuesday. Well, you can't use the same grouping, because if you say, we're having a big case on Wednesday, it doesn't really work. Having a big happenstance on Wednesday? Yeah, not really. No. Right. So what's a good grouping in one context is not quite a good grouping in another context. Yes.
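A minimal sketch of the gap-filling idea, with a tiny invented corpus built around the event / case / happenstance example: words are grouped by the slot they share, and the grouping you get depends on which slot you ask about, which is exactly the contradiction being described.

```python
# Sketch of grouping-by-shared-context ("gap filling"): words that occur in
# the same surrounding slot get grouped together. The toy corpus is invented
# for the example; the point is that different contexts yield different,
# even contradictory, groupings of the same words.
from collections import defaultdict

corpus = [
    "in the event of non-delivery please return to sender",
    "in the case of non-delivery please return to sender",
    "in the happenstance of non-delivery please return to sender",
    "we are having a big event on tuesday",
    "we are having a big party on tuesday",
]

slot_fillers = defaultdict(set)        # (left word, right word) -> words seen in between
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        slot_fillers[(words[i - 1], words[i + 1])].add(words[i])

print(slot_fillers[("the", "of")])     # {'event', 'case', 'happenstance'}
print(slot_fillers[("big", "on")])     # {'event', 'party'}: a different grouping
```

Connecting this back to the oscillation idea from earlier is then a hedged possibility rather than anything established: counts of shared slots like these are the sort of quantity that could set coupling strengths between word nodes, so that event and case pull into phase when one context is energized but not when the other is.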
So you need to keep the original text, so that you can find all of these groupings at runtime, at the time that you want to make the generalization. Yeah. So, is this what current LLMs do? Yeah, but they only do it once, because they're dependent on backprop. Their whole paradigm is dependent on learning abstractions one time only, by backpropagation. That's the only tool they have. So how could you... I know I asked a similar question before, and you said the thing about oscillations. But how could you do it without the one-time training? What I said before. Yeah, exactly. So, I understand, but I'm stuck in my own... If you can think of a different way, that's good too. How could you find those groupings over an entire body of language at the time that you're speaking, or hearing, at runtime? I mean, think of a language: what is a language? Let's say a language is a bunch of sentences that you've heard in your lifetime, going back to your childhood. And that's all stuck in your head. An autistic person might have all of those raw; they might be able to repeat them. They might not be able to say anything meaningful, but they might be able to repeat the exact sentences. So they're all stuck in your brain. Now, because you're an intelligent entity, you're going to group those sentences in some way which has meaning for the situation you're now in, right? Yeah. So you're going to think of a new grouping for them. Say you've just gone into the bread shop and you want to buy a loaf. You want to put things together in a way which is going to achieve your goals. So how do you group those sentences, and the elements of those sentences, in a way which is going to get your loaf of bread, and do it at runtime? What LLMs do now is learn groupings, and because buying bread is a fairly common situation, if you ask one to buy bread it will probably do it quite happily. But if you wanted to think of a new way of conceiving bread, you want to be able to do that at runtime, differently at each moment. So yeah, if you can think of a way to group words which can be substituted in a context, or which share a context, and do that in real time, I think that's what we need. For me, the one which strikes me most forcibly is to have a network of those sequences. Words which share contexts will share rich connections with those contexts. So if you put energy into that system, if you charge it so that it's producing spikes and oscillating, the rich interconnectivity should cause those oscillations to synchronize. And that will be a way of identifying the grouping. Right. And this would be on a literal hardware level? Well, you can do it in software as well; you can simulate it with software. But yeah, if you look at what Extropic are doing, and I love what they're doing, they don't have the idea of contradiction.
They just have this idea that oscillations might be a way to find things which are tightly interconnected. So they're actually doing it, but with oscillations which are literally just heat noise. Yeah. So, I mean, it could be hardware. Any system where you've got interconnectivity, where things are connected, so that stimulation of one element transmits itself into closely connected elements. And that could be a software activation or a physical activation. For the brain, I think it's electrical, mostly; obviously chemistry comes into it too. But I think when we group words in order to buy a loaf of bread, we're synchronizing. I think we're synchronizing all the words which are similar to bread. Yeah, yeah, I understand. Or maybe we're synchronizing with butter, so we can buy butter too. Right. Yeah, that's the idea. But for that to be interesting, you have to accept that meaningful groupings contradict. And that's something which is completely ignored by the current state of the art, for all of Sam Altman's seven-trillion-dollar request for investment. I would be willing to bet he's never conceived of the idea that the patterns his systems are learning might contradict, and that that could be the reason his language models get so large. Very, very interesting. Rob, thank you for your time and for taking the time to actually explore these things with me. Thank you for your curiosity. You wouldn't believe how rare that is. I would believe it. I have been curious about that as well, so yeah, it's nice. And on your side, taking the time to actually explain your stance is not that common either, actually. You'd think people would talk till the cows come home about their hobby horses. You would think so. Yeah, at least this is my perception of it. So maybe you can confirm for me what led to it, because I just asked if you were open to an X Space and you said yes, right? I have asked many people who, for the most part, never responded. They would be very much there in the chat, on text, and then the second a richer, higher-bandwidth exchange became possible, they'd either disappear or say no. Mostly disappear, actually. I have a suspicion that the reason for that might be that they simply don't understand what they're talking about. Right. I mean, it would be interesting to look at specific cases. I don't know why people argue things, but I think a lot of the time people just don't understand them, and they may just be repeating. I'm thinking of Kamala Harris, sorry to get political. I literally frame those types of situations as people being LLMs. Right. I think a lot of the time people are just repeating dogma. And if you ask them to explain their reasoning, to elaborate and expand, possibly they're just uncomfortable because they don't know. It's like a child asking, well, why is the sky blue? And it's like, don't ask stupid questions. It's blue. Yeah. I have really great interactions with my kids, actually, because that sort of questioning is very much welcome with me.
And the best-case scenario for me is that they ask me a question and I'm like, wow, I actually don't know that one. Let's find out. And then, yeah, it's great. There you are in the sort of rigid logos mode, I know how the world works, I don't need to think about that again, and they're saying, oh, you haven't thought about this one, buddy. Yeah. It's lovely. And there was a time, I had this recently, when I had a conflict with my stepson. There is something interesting there, where the neural net has been programmed in other ways, so now I'm encountering someone else's programming to an extent. And this was a really fundamental thing, where in a moment of conflict I was asking him, hey, why do you trust yourself? Which is a really foundational question. And his response was, because it's me. And I was like, that is a terrible reason, because I have learned with my own self that I should only trust the things there has been evidence for. Beyond that, I'm open to learning. So I wouldn't assume that I am necessarily good. I wouldn't assume literally anything other than what I have seen to be true. Because if you assume that, for example, because it's you, you have to be good, then there's going to be a lot of evidence you're not going to want to look at that might contradict that. So people don't like to look at contradictory evidence. It's uncomfortable. Yeah. And for some reason I find pleasure in that sort of process. Yeah. So we're not all wired the same. And the ones that are curious, they're the ones that are open to the novelty and get pleasure from the novelty, at whatever age. So yeah, probably the resistance, the reluctance you find in people to explain their thinking more deeply, is because it doesn't actually go much deeper. Right. Yeah. If you look at the people who have put a lot of thought into something, I would think they'd love to bore you stiff with the details. Yeah, for sure. Thank you also for mentioning the different resources: Extropic, the Jordan Peterson guest, and the Snyder brain-scrambling link. I will look into those, and if you have anything else you think would be interesting, please share. Look, if you have any questions on this, fire me a message, or we can chat again, whatever. I have a lot, actually, because this is different. Different personalities work on different things, and I work on understanding things, so I've gone down a lot of trails on this and have a lot of interesting references and suggestions. But anyway, you've got a bit to work with there. And feel free, I mean, if you think you can pull it apart, come and pull it apart. Yeah, sure. I mean, I brought up the whole idea that there's a depth of consciousness that's pre-language or sub-language, and you had no issue with that, so I was like, okay, cool, then you're actually super on point; our foundations align, actually. So, you know, if somebody can find a thread which will unravel my sweater, that would save me trouble in the long run. So I'm very happy to have any criticisms. Yeah, cool.
Yeah, I'm still going to loop on the whole idea of oscillating networks. I have a picture of it in my head, so I'm just going to sit with that for some time and try to see if I can understand it, or connect it to my... Which part? For me, it hit me like a brick, because I was looking for a way to group meaningful elements in language sequences, and then it was like, maybe oscillations could be that. And you're telling me we know that the brain has oscillations in it. I'd always just assumed, oh, oscillations, that's superficial, that's just an implementation detail, not the relevant system. Yeah. Look, the other day on Twitter I came across this group of neuroscientists, and they were having a group discussion about whether the oscillations are meaningful. Okay. So neuroscientists are still debating this. Everybody knows oscillations are happening in the brain, right? But nobody knows why. So this group of six or seven neuroscientists were sitting there for an hour or two, arguing, discussing whether the oscillations matter. People still have no idea. For me, it was like: this thing I just stumbled on as maybe a solution for my narrow technical problem is actually observed in the brain. The thing I'd always thought was insignificant might actually be my solution. So for me it hit like a brick. Of course, that's how it's happening. Do you know what I mean? Oh, sorry, go ahead. I was just going to ask: why do you not find it intuitively obvious? Why do you find it difficult to picture? Or do you not see that, if you have a network and some sub-networks are tightly connected within it, those sub-networks will probably synchronize more closely than the wider network? Oh, for sure, for sure. Yeah. I don't have any objections. It's not that I'm trying to make it make sense; it's more that I want to let it sit with my existing network of understanding, see how it connects, and then re-understand everything I currently have. You know, this is my model of understanding, and I use it all the time. My model of understanding is that you have to organize experience. Yes. If you don't have the experience, you cannot organize it. Yes. And so, actually, have you read Thomas Kuhn? He's another reference for you. No. Thomas Kuhn is the guy who, I believe, invented the word paradigm in the modern scientific sense. Right. Oh, wait, actually, I have. I just didn't know he was the author. The Structure of Scientific Revolutions; I've read that one. That's the one. I love it. I mean, this is another one I stumbled into, which was just like, oh my God, this is exactly what I'm saying. He describes a model which I think was in an appendix to his book.
It's something like: a model of understanding which is misconstrued if it's interpreted as something which is abstracted and thereafter functions in its place. I talk about Kuhn so much, let me search for it. Which is the same as what I'm saying. So, paradigm: apparently he originally got it from the Greek, where it means example, or a set of examples. And he's saying that his model of meaning, or his model of thought, is that it is a set of examples which are organized and cannot be abstracted beyond that organization. And there's also the fact that you've got these paradigm shifts, and people can't get from one to the other. Unless you have a body of experience, which is the examples, which is the paradigm of the science that you're doing, you cannot organize them in the same way as somebody else. Yes. And what I've found with things like philosophy, too, is that I will read some philosopher, and because it resonates with what I am already saying, I can understand it. I think if you don't have the experience, you cannot understand what they're saying, because understanding is organizing your experience. If you don't have the experience, you cannot understand. Yes. Yeah. So maybe you just need to have more experience of oscillating networks. Yeah, that's what I was thinking. Or maybe get a ripple tank or something like that and play around with it. A what? A ripple tank? Oh, it's just something you use if you study physics, because physicists are looking at oscillations all the time. This is so much of physics, you know? Yeah. I recently was watching videos where there's a sheet with sand on it, and... Yeah, I know the ones. It's so lovely. It forms very distinct shapes depending on the frequency that is being played, because of the oscillations of the sound waves, or something to that effect. The grains of sand eventually move to the places where the sound is most concentrated, or least concentrated, I forget which. Right. Yeah. Well, this is exactly it. So now take that: all the grains of sand are connected in the same way, because it's just a lump of sand, right? So you're only getting the sound pattern. But now imagine those grains of sand had little strings between them. Correct. Okay. So some of them were more clumped together than others. Yeah. And then imagine how that might affect the patterns that emerged. Yes. And that's exactly what I'm saying. That's the exact system I'm proposing, or suggesting, happens. This is really cool, because this is like my understanding of my own mind, essentially. I just didn't have it framed this way, and it had nothing to do with AI. For me it feels very esoteric. I'll describe it, and then maybe you can connect it for me. Okay. I've seen that the state of love, or appreciation, is a different vibe from the state of, something like blockage, something that blocks the flow of love, or is depreciative.
And it organizes information differently within, like, the vector space of my mind. So that's why I started to look up the videos of the sand, to help myself see, in the physical world, the thing I'm seeing. I was like, yeah, it's just like that. Yeah. Well, that's it. Love might be one way of organizing it, and hate another way of organizing it; they're just different ways of organizing the same information. Yeah. And so there's no single way of organizing that information which captures everything. There are contradictions, and I suppose hate and love contradict. Yeah. So you need to be able to do both. Yes. You can construe the same information in different ways. I mean, that's what you see happening in politics all the time: people construe the same thing to mean what they want it to mean. Yeah. That's what the postmodernists went on about a lot. Yeah. But yes, that is exactly the way I'm seeing things working: as a vibe. Something vibes with your experience, your experience vibes together in one way or in another way. Exactly, a vibration. Okay. Super cool. This connects significantly with my understanding. Yeah. I mean, I think it fits in all sorts of ways. It's just a matter of the nuts and bolts. It would be fun to actually marry it with what people have already done with large language models, because I've been thinking about this for a long time too; I just haven't had it all polished out. The oscillation thing, I came to that idea around 2016, so it's roughly contemporary with language models first taking off. So it would be nice. I think people are going to stumble on it eventually. It's just frustrating that it's difficult to get people to jump out of their current paradigm, which I suppose is because it's making billions of dollars for them, I don't know, and to try something new and look at the problem in a different way. Sure. Cool. Okay. Yeah, good talking, and feel free to let me know what you think of those references, or whatever. Yeah, absolutely. Thank you very much, Rob. And, yeah, see you around. Bye. Bye.