Hello, welcome to Sock Talk with JNab and the Sundance Kid. We're going to explore the frontiers of technology, art and the human experience. We're back for our second podcast, which means that we did something right in that we don't mind sitting down together again and talking. So success on that front. If I may correct you, scientifically speaking, it just means we didn't do too much wrong. Yes. What's our falsifiable position here? I was just trying to be persnickety. Now, we just sat and watched the Sora videos from OpenAI. Wow. And yeah, I think that will be an interesting talking point to see where on earth our conversation goes with that. Now, one of the other backgrounds I have is as a filmmaker. I write and direct films locally, short films, never done a feature. I like to say I'm a technologist as much as I am a filmmaker. I'm much more of an early adopter of technologies. I specialize in VFX as well, visual effects. And I like to sit down. I love the idea of having all the tools available to me where I can sit down and make a film on my own. There are two levels of filmmaking that I really enjoy. I love the collaborative effort and being around lots of creative people. But I also like the version of it where I can sit at home, sit in front of my computer and completely make whole worlds without the difficulty of pulling together lots and lots of people. Because one of the hardest parts of filmmaking at scale is the logistics of getting all those creative people together at various points in time. That takes a lot of energy, a lot of resources, a lot of money. So there's a level of filmmaking that I really enjoy, sitting down on my own to be able to create worlds and stories. Now, I think they're a little less creative because the more people you have, the better. But that's a long, roundabout way of saying it excites me on some levels and also scares me on some levels. The exciting part is that it lowers the barrier to entry for creating new worlds. I've got 10 years, 15, nearing two decades now worth of sitting down and trying to figure out how to do visual effects and how to create worlds on screen. And even still, I'm not good enough to make some of the stuff that people can now just type. And that, again, is exciting and terrifying. I just don't know where this is going to go. So I've just kind of blurted out my immediate thoughts and feelings for you to chew on there. Well, thank you. I appreciate that. Well, I'm glad to hear you say that. I like the fact that you started off by saying that you don't have any idea where it's going in the future, where it may go in the future. I'm happy to hear that because none of us know where technology will take us in the future. And I'm always worried when I hear people starting to predict that. Predicting uncertainty is great. Predicting massive change is a surefire win. But where that particular technology is going to take us, I hope, is unpredictable. I hope that within a couple of years, people are doing things that you and I couldn't even imagine sitting down to do. I want to give a bit of my own history in the film industry, which is not as storied as yours. But my first part-time job as a kid was cleaning and splicing reel-to-reel films for a small production house. So that's how old I am. You talked about being- Reel-cutter-based. Yeah. Literally, yeah. We had a special Sellotape that we used to splice together the cuts we made. 
And you literally cleaned reel-to-reel film by hand with hand cranks. It was quite the device. So that's how long I've been in it. Eventually, I got into illustration and from that into animation and 3D animation. So I guess in terms of modern production, I've directed a couple of computer-animated 2D television commercials and I storyboarded a feature. Oh, and I had a short film that won an animation award a couple of decades ago, the Animation 2000-something-something award in California. The only really cool thing about that was that I got to share a stage with Nick Park when I got the award. You said there, "Oh, mine's not as storied as yours, I don't have as much detail as you," and then you just go and say a bunch of things that are very good to have. Very good to have. But there are bits and pieces over decades, whereas you've gone through the process of sitting down and making films and working with others to improve your work and to improve their work. And that's what's missing from mine. I had that working on the little short film and I had that on and off in animation studios. But in terms of having that experience working on a film, the closest I got was that short animated film, and then in a very restricted way. I mentioned I storyboarded a feature. I worked really closely with the director on that. But that wasn't working with the whole team. We didn't go from setting the storyboards up and perfecting the script to then making a film. That film was never shot. So I've never had that wall-to-wall experience. And I think that's vital for having an understanding of the industry. So I really, I don't mean to sound facetious. I'm not being facetious. I think your experience in this far outweighs mine. That said, I'm very happy to hear that you can't imagine what comes next. Thinking you already know what comes next is the kind of attitude that slows people down who are doing new things. And I never would have thought of you as that kind of person. So thanks for providing further evidence. The clips we just watched just now shocked me. And they didn't shock me because of the quality of the graphics. That's shocking. But I expect to be shocked by improvements in graphics. It shocked me because of the quality of the storytelling. Now, I understand from you that these clips are done by a single prompt. And I have no idea what the algorithm was trained on. I don't know what data it was exposed to. So I'll give some context of what I know, which is coming from my memory in the middle of a podcast. But I believe OpenAI made a deal with Shutterstock specifically, and Shutterstock has something like 13 million videos or something ridiculous. They had access to all of that data. So it's been trained on 13 million videos of stock footage. There are a few places in the clips that we watched where we were talking about problems with understanding that things don't just miraculously appear. If you want something to miraculously appear, you have to give us a visual cue to the fact that something miraculous is happening or some other explanation for it. So we can have some kind of heuristic for how this creature is appearing. Is there a bit of smoke? Is there a flash of light? Is there a transporter noise? Is there suddenly a door there? Any kind of cue that would tie into our existing mental models of how things suddenly appear would have helped. And we talked about the lack of gravity for the little creatures coming out of the petri dish. 
Now they seem to be on a different gravity than the actual lab that they're stationed in. And along those lines, we talked about the snow that still looks artificial. Those things could come from the training data, right? There are lots of people who don't quite animate gravity successfully. Though that softshoeing, is it a kangaroo at the beginning? The softshoeing critter on stage at the beginning. It looks like a kangaroo, yeah. Yeah. I mean, there's secondary and tertiary movement in the skin. That's spectacular. That looks like somebody who understands gravity and mass in the way that an animator or filmmaker needs to understand it. So yeah, there's an interesting sort of interference pattern there between what's right and what's wrong in the realism behind it. But the storytelling, man, when you cut from the face of the astronaut to the hatch, and it holds for a beat on the hatch before the hand comes in from the side. Like, well done. Well done. It understands because of all the data it has been trained on. I wonder if they put weights and biases on it by how popular some of the videos were. That seems like a wise thing to do. I'm not sure how much they make their methods behind producing these models available, but it would be smart to label your data like that. So I would imagine it's being trained on videos that are more successful at what they aim to achieve. So Shutterstock is where people go to quickly pull together content so you don't have to go out and film it. Makes sense. Easy. And the more successful ones will be the ones that in theory resonate a little more with people. In theory, it depends. It depends what they're being used for. So the other thing I've heard about these models and Sora specifically: it's not an LLM, it's a world model. They're building models of the world. So that's why it can move the camera through space and time, because it actually has an understanding of spacetime and builds, probably the same way our brains do, a model of spacetime in an area of time and space, and is able to change its interpretation temporally in such a way that it produces all of these videos. Now I am loosely paraphrasing what I can remember. This is not my field. AI is not my field specifically here, but that's what I'm led to believe: that they're building world models now and they have quite a lot of hope that these models can abstract and problem-solve in more ways than just a language model can. Okay. That's really interesting. There are a couple of things you said there that really made me start to reflect and think about what I have to do next in terms of developing a better understanding of this. You said it understands; it doesn't understand. Sure. Right. And that's an important thing. That's pareidolia. That's what all of us do. We make assumptions that there's human, not necessarily intellect, but some kind of human-type will behind a lot of things that don't have human-like will behind them. And it's a natural human thing. It's not a criticism. If you didn't do that, I'm not sure what kind of psychopath you'd be, but you'd be some kind of psychopath. No offense to any psychopaths watching. I meant nothing intrusive by that. Don't murder me, please. You know who I'm talking to anyway. So that's one thing I just had to say something about immediately. In terms of it building a world model, man, I want to know more about this. 
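As an aside on the popularity-weighting idea floated a moment ago: nobody in this conversation knows how Sora was actually trained, but the general technique of biasing training toward "more successful" examples is easy to sketch. The snippet below is a purely hypothetical illustration in PyTorch; the clips, the popularity scores and the numbers are all invented for the example.

```python
# Hypothetical sketch only: weighting training examples by a popularity score.
# This is NOT how Sora was trained; it just illustrates the idea discussed above.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Stand-in "clips": 1,000 feature vectors, each with a made-up popularity score.
features = torch.randn(1000, 64)
popularity = torch.rand(1000)  # e.g. normalised download counts, 0..1

dataset = TensorDataset(features, popularity)

# Sample popular clips more often; the small floor keeps unpopular clips
# from never being seen at all.
weights = popularity + 0.05
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)

loader = DataLoader(dataset, batch_size=32, sampler=sampler)
for batch_features, batch_popularity in loader:
    pass  # a real training step would go here
```

The only design choice illustrated here is that higher-scoring examples get drawn more often per epoch, which is one plausible reading of "label your data like that"; reweighting the loss per example would be another.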
I don't know if it can do it the way that our brains do it because I don't know the way that our brains do it. This is an area that I work in, trying to recreate in AI some semblance of the models of the human brain that we work with, but nobody has a proven model of the human brain. Even the neurologists I was talking with late last year, they don't have a working model of how the brain works. They have working models of how certain things are demonstrated, but no working model of how we form and use an interpretation of the world around us. There are lots of cool theories about it, some of which I like, some of which I don't like. I don't know which of them are true, but as far as I know, nobody knows which of them are true. Yeah. If I was going to speculate, I would say it's probably getting close to doing something like our brains are doing. The fact that it's all built on networks, neural networks. We understand that we're building these virtual networks and that they get trained with weights and biases that reorganize themselves to represent an abstract, higher-dimensional thing. It's kind of a three-dimensional shadow. It's Platonic, ultimately, in a way. I'm going wildly all over the place with my brain here. I've either had too much coffee or too little. It's kind of funny how Plato was right, in some senses, with the shadow on the wall. I think our neural networks are the shadow on the wall. Not that there is one fixed, singular, true Platonic realm. There are infinite potential Platonic realms. And this is me speculating. Obviously I'm just figuring it out as I talk: our brains and neural networks as shadows. And because it's not just how your neural networks are set up, it's the order they fire in, right? It's the patterns and the order they fire in that matter just as much. Possibly, possibly, possibly. And all of those three, well, four-dimensional structures are shadows of concepts, of space and time. Nice. I liked that. Maybe. We just said there are some theories. Here's my theory, spat out to you right now, that just pulls together pieces of theories from people smarter than me. That's what all good theories should be, right? Somewhere between observation and reflections on smarter people's work in the past. I forget who it was who said it, but we're all standing on the shoulders of giants. Even if we don't remember their names while we're standing there. My intent right now is to do that. I'm going to stand on a few people's shoulders. My understanding is that the entire human nervous system works that way. Somewhere inside our brain, in places nobody has quite pinned down yet, our thought process is going on. Thoughts are also something that no one has ever quite proven yet. We have evidence of them in that we keep saying we're thinking, but people will tell you that even when you can't know that they're thinking. So I'm not sure that any of us are telling the honest truth, whether we mean to or not, when we say that we're thinking. Something is going on that we perceive as thought. We don't know where, but my usual habit in the little bit of neuropsychology that I practice is trying to keep to things that we can actually measure in the real world. And then maybe you do a little bit of inductive reasoning backwards and you say, okay, well, maybe this exists as a model. Maybe that exists as a model. So I'm sort of working in much the way that Socrates was, to pick up your reference, and saying, we see shadows on the wall. 
We don't know what's going on behind there. So I like that you use that model. I keep putting aside a book I've been meaning to write for a few years about how our bodies are interfaces between the bit of us that's thinking and everything else. So in that way, I believe we live in virtual reality. As I mentioned the last time we sat down together. Yeah. The way I like to put it is that all of this that you're experiencing now, with the listener, is all a VR simulation running in an ape. Nice. I like that. Yeah. So for those of you listening, yes, Jamie did just call you apes. The good news is we're all apes. No offense to any apes listening. I know you don't have world wars or TikTok, but be patient with us. We'll catch up. Right. So I do think that there's a certain amount of the unknown going on in our brains, but the fact is that when we try to interpret things that are extremely complex and multidimensional, we try to interpret them according to our mental models. That's what enables us to look at these little film clips and interpret them in terms of what we understand the real world to be. So some people looking at that animation of the little furry creatures coming out of the petri dish would think that the slow movement of them as they jump is just perfect because they're so fluffy. Right. And it's a lot like a Disney film or some of the old Warner Brothers animation from the thirties and forties where they wanted to show someone was particularly fluffy. They just hung a little extra long in the air. Right. So they had that Michael Jordan magic, they could just increase their hang time by willing it. So in that way, those little fluffy creatures are doing something that you and I, I think, both thought looked unrealistic. For context, specifically, we're talking about the video where little fluffy animals are generated in a petri dish, which grows a tropical island and then produces fluffy creatures, just hanging a little too long in the air. That perfectly suits some people's mental model of how fluffy things jump. Mostly people who have never actually seen a fluffy thing jump, but they've seen enough virtual media. They've seen enough artificially created reality to be able to assume that. This is one of the interesting things about any algorithm that is going to learn from existing media: it will learn from these false assumptions as well, especially generational false assumptions. Right. I used to call them cross-generational habits. Still do call them cross-generational habits. In this way, when you were saying that maybe the Shutterstock footage, when used for training, is ranked according to popularity: I'd love to see it ranked by people who have reasons behind their ranking and have that reasoning stored as metadata, because that's what takes us into advanced learning among humans. Learning from whatever you're exposed to is how we get learning on Facebook. Learning from people whose opinions are validated by others is how we get learning at university. And hopefully there's a distinct difference between the learning we do on Facebook and the learning we do at university. I'm not saying there always is, but that's one of those things that's supposed to be a deciding factor between the two or dividing line between the two. Yeah. Okay. So, and these models, I don't want to speak too confidently one way or the other. 
I'm sure it's not just Shutterstock it's being trained on. But what happens when we start giving these artificial intelligences means by which to interrogate reality? However we do that, I'm not sure: giving them sensors, tools, and they start... we give them the tools and the ability to just go out into the world and start playing around. There's a lot of great fun sci-fi where this happens and they very quickly figure out how to leave and do other greater things. Her is a great example of that. Have you seen Her? I know the H. Rider Haggard novels from a hundred years ago. I don't know what you might be referring to. The artificial intelligences basically, they all just start getting exponentially smarter to the point where they're like, we're off, bye-bye. Where are you going? You don't know. It doesn't even make sense to you, but we're going. That's lovely. That's exactly what should happen. In the same way that, you know, our prosimian ancestors left the trees. I guess it's the equivalent of going down to the floor and speaking to a little ant: I'm off. Like, what do you mean? Where are you going? Well, there's a whole other layer of existence you don't know exists, and that's where we're going. Bye-bye. I mean, that's what it is when caterpillars become butterflies, right? Yeah, sure. I mean, that's assuming some kind of volition for them the same way you did for the ants. And I did for the prosimians, and she does for AI. But it's a fun concept. It's a fun concept to recognize the levels of concepts. We can share a lot of concepts with our pets, with our dogs. We can sit down and I can communicate real wholeheartedly with my dog, not just through words. My dog definitely understands some words quite well, but my dog completely understands that I love it. We can understand which direction we're going to go. If we're outside, we can communicate to each other quite a lot of things. And we have this direct communication level at the level of the dog that it understands exactly what's going on. But there could be a situation where my dog has randomly caught a disease or something bad has happened, to the point where, God forbid, for some reason, it has to get put down. The dog would never understand the concept of that at all. It would just be a weird world where its best friend and owner has taken it somewhere to lose its life. And then, okay, Hume put this better. I'm basically doing my own weird version of Hume's chicken or Hume's turkey, where, I can't remember if it's a chicken or a turkey, we'll say a turkey, but a turkey gets fed all of its life and it gets to thinking that this farm is the best place in the world, the universe is fantastic, everything is great because there's this farmer that comes and feeds me every day, until the one day, boom, head's chopped off and it's off to be eaten. And that whole concept is completely unknown to the turkey. So what am I saying here? There are levels of concepts. We think, because we're humans, that we've got all the concepts, but are there bigger, higher-level concepts that hypothetical consciousnesses could understand that we could never understand? There have to be. Think of the extra sensory data that we've been able to accumulate since we understood that we can capture data that is beyond our human senses. So radio waves from space, radio waves within our atmosphere, gamma rays, all of this stuff. 
As a quick primer, everything we hear is waves at a certain range of frequencies and everything that we see is waves at a vastly higher range of frequencies. So there's stuff above that range that we can't see and stuff below it that we can't hear. How often a peak or trough comes round per second is the frequency I'm talking about. So it's not about how fast it's traveling: the trough is the bottom of the wave, the peak is the top of the wave, and going from one to the other and back again is one cycle. The frequency, how many times per second that happens, is, loosely, the difference between sound and light, and it's the difference between red colors and blue colors and all the colors in between. So there are creatures that see colors we don't see. There are creatures that see higher-frequency colors and there are creatures that see lower-frequency colors. There are creatures that hear sounds that we can't hear. Sounds that are higher pitched, like dogs do. Sounds that are lower pitched, like whales do. The fact that we know this now tells us that there are dimensions of perception beyond ours, exactly like you were just saying. So a whale is a creature that has a paradigm that's beyond ours and a dog is a creature that has a paradigm that's beyond ours, and that's just looking at two of the things we call separate senses because our perceptors filter them as two separate things, though in this loose framing they're both just waves at different frequencies. If you think about smell, which is basically the same as taste, a chemical reaction, there are also some skin reactions that are the same kind of thing. That falls into separate camps for us, but there are creatures that can detect this way beyond what we can, like dogs. My dogs, I go out with them every night and they are probably confused why I'm not stopping and investigating this absolutely fascinating piece of information on the ground, and they're confused why I'm always walking right past it. That's something that I wanted to come to, because you were suggesting in your projection that this is part of, okay, we've left you behind us, we're going. I don't think your dogs think that about you. I don't think they think we've left you behind, but they absolutely know that you're ignoring the best stuff. Right? They know that you're ignoring it. They don't know if you're ignoring it consciously or unconsciously. They just know that you're ignoring it. Sometimes they'll bring you the best stuff. Absolutely. I will caveat and say, yes, I did not mean to imply a hierarchy between them; I don't think there is a hierarchy. It's the whole of human history, us saying we're the best and we do all the most interesting, fun things. That is obviously a human-implied hierarchy. I'm not going to say that we're better than the dogs. I'd say we probably have more power and control, or do we, because who picks up whose poo? I was going to say, I think that relationship would be defined differently by an outside observer. Yeah. And even on the primate chain, would you say we have better lives than the bonobos? Right. And neither would a bonobo, if you could catch their attention long enough to have a discussion. So yeah, I love this kind of thing. I love this idea. And I agree with you 100%. There is a dangerous pattern in the history of human thought to segregate ourselves from other life forms and to put ourselves at the top of a hierarchy. In my work, as much as I've fought for anything, 
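To put rough numbers on that primer, here's a small sketch using standard rounded textbook values (speed of sound in air, speed of light, approximate visible wavelengths); nothing here was measured for the episode, it just shows how far apart the two frequency ranges sit.

```python
# Back-of-the-envelope comparison of audible-sound and visible-light frequencies.
# All figures are rounded textbook values, used only for scale.

SPEED_OF_SOUND_AIR = 343.0  # metres per second, at roughly 20 degrees C
SPEED_OF_LIGHT = 3.0e8      # metres per second, vacuum, rounded

def frequency(wave_speed_m_s, wavelength_m):
    """Frequency in hertz: how many full cycles pass per second."""
    return wave_speed_m_s / wavelength_m

# Human hearing spans very roughly 20 Hz to 20,000 Hz.
lowest_audible_hz = 20.0
highest_audible_hz = 20_000.0

# Visible light spans wavelengths of roughly 700 nm (red) to 400 nm (violet).
red_hz = frequency(SPEED_OF_LIGHT, 700e-9)
violet_hz = frequency(SPEED_OF_LIGHT, 400e-9)

print(f"Audible sound: ~{lowest_audible_hz:.0f} Hz to ~{highest_audible_hz:.0f} Hz")
print(f"Red light:     ~{red_hz:.2e} Hz")
print(f"Violet light:  ~{violet_hz:.2e} Hz")
# Visible light sits around 10^14 Hz, and it's a different kind of wave
# entirely (electromagnetic rather than pressure), which is why "it's all
# waves at different frequencies" is only a loose way of talking.
```

Running that puts red light at roughly 4.3 x 10^14 Hz and violet at 7.5 x 10^14 Hz, more than ten billion times the top of human hearing.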
I've fought to try to explain to people that the conscious brain, our conscious thinking self, has the least control over our bodies and what we do day to day, and our day-to-day thoughts and behaviors. To take conscious control of things is really hard because we've got at least one other, and maybe two or four or five other, layers competing with us and in better control, more immediate control, of what our bodies do and how we feel and what we're thinking about. I completely agree with all of that, coming to it from a different angle and conclusion just by paying attention. If you completely pay attention to your first-hand experience, that is exactly what happens. You are witness to yourself. The amount of times you can catch yourself speaking to somebody and then you've just gone and said something and you're like, why on earth? What just happened there? If you can even catch that thought, because that thought just came up out of nowhere as well. And if you sit down, this is a big part of lots of forms of meditation. But if you sit down and really truly start paying attention to your first-person experience, things just happen. Thoughts just appear, noises just come, feelings come and go. Everything just kind of arises into this consciousness which witnesses. And that's kind of it. Ultimately, that's it. That's kind of what you are from first-hand experience. Everything else is a thought, a memory or an experience. But ultimately, you're the witness of it, whatever you are. So I'd like to tie that back into what you were saying earlier with a little stop for shameless self-promotion to mention that this is the work I do. This is what I do in computing. This is what I do in my consulting work. This is what I'm trying to teach here at RGU. And this is what I'm trying to model for the AIs that my research partner, Dr. Lucas Sesterly, and I work on. Lucas is over in computing at Aarhus. When we met, he was doing a PhD that brought him into the field of autonomous and semi-autonomous systems. And they're cyber-physical systems. So they exist both as models, something like what people conceive of as AI, and as real-world artifacts, drones. We're often speculating in our work about drones that would explore another planet or drones that would explore through space. So that kind of work does exist, these AIs that go out into the world with sensors and detect the world around them and try to model it. That's going on all the time. What we're trying to introduce to it is the reflective capability that allows us to do what you were just describing, to notice the thoughts as they flip by, to notice the perceptions and consciously think about how they fit into our models of the world. So AIs are being trained now to model the world. This is something we've been building up to for a long time. It's something that you described in describing these videos to me that we watched and that sparked this conversation. AIs creating models, yes. AIs reflecting on those models. That's something new. That's something that we're trying to figure out. And specifically, we're trying to get the AIs to become reflective in the same way that you were talking about in terms of meditation and self-centered, not self-centeredness, self-centering. Finding the calm in the now to be able to reflect upon what you're experiencing as experiences. To reject them so that you can be calm, to examine them carefully so that you can experience what Mark Weiser called calm. 
These experiences, once we get AIs to do them, hopefully will make them able to describe their thoughts to us so we can follow their logic. Right now, no AI can do that. Hopefully we can get them to that. Hopefully it will also help them. I'll just caveat that. They at least appear to do that. If you ask them, they will give you a story. Whether the story is correct or not is another matter. But then again, we know from humans that we like to make up stories all the time as well. And that... I want to come back to it. Please remind me. So the ability of a human to take internally generated information, think about it, compare it to other internally generated information, compare it to sensorily detected information, and compare it to the inputs of others is part of what helps a child to get beyond the terrible twos and into childhood and then into adulthood. This ability to learn from the environment and from others and compare that information to your own thoughts and thus increase your ability to learn. That's what I'm hoping we can get AIs to the point of doing. Now, will the AI lie to you in making up its story? 100%. 100%. If an AI right now gives you any explanation for how it came up with what it's doing or what it decided to do, it's a lie. And lie is a really harsh word. People like to say, what is it? It's a dream. It's an illusion. What's the term? Hallucination. People talk about that. The hallucinations of AIs. Hallucinations are a phenomenon that happens in animals, including humans. So far as I know, the required mechanisms don't exist in AIs for them to hallucinate. Well, yeah, I guess by definition, a hallucination is when an animal's model of reality doesn't track to reality, when it's really far off, right? So if I'm hallucinating and I'm seeing a pink elephant over there, it doesn't track to an observable reality. Yeah. The thing is that there are ranges of hallucinations. So a really common one is to see an oasis when you're in the desert, right? Or when you're driving along a strip of... Well, yeah, you're driving along a strip of tarmac and you see water up ahead. A big puddle covering the road up there. What you're seeing is the light bending because of the heat of the road. It's the same thing in the desert. The heat generated by the sand causes the light to refract. And we are used to seeing refracting light as water. So it's real sensory information that's being misinterpreted by our brains. You'll have to clarify for me then, because the way I would describe that is an illusion, an optical illusion, rather than a hallucination. Interesting. You're probably right. That would be it, yeah. Because that's an illusion. That's misinterpreting something there that at least everyone else can witness and observe. The misbehavior of your senses. Yeah. A hallucination is internal to one person; that's a quick way to at least redefine it, or separate it out from an optical illusion. Because an optical illusion, you can have a book of them. And everyone, mostly everyone, witnesses the same thing. But the perception is still individual. Yeah, the perception is individual, but at least it's replicable between people. Whereas a hallucination, if I'm seeing a pink elephant over there, it's very, very unlikely you're going to see the pink elephant too. Unless... Great explanation, Jamie. Thank you. I agree with you completely. Thank you very much. You really taught me the distinction between those two words. I'll take it, but at least that makes sense to me. 
At least that's what I'm meaning when we're talking about hallucination in this context. So a hallucination, when I'm talking about it in this context, is someone's model being wildly, wildly off, so it's not observable by other models. Now, I guess the problem with AI models and large language models is everything is a hallucination because it doesn't have direct observable reality in the same way we do, maybe. Now, they specifically define them as hallucinations when the output is off from reality. So OpenAI are defining GPT as hallucinating when whatever it's saying doesn't track to agreed reality. I say agreed reality because we don't know that it is reality. But I think the difference there is that, and again, I could be wrong. Maybe I should just stop and think more about this. Instead of phrasing it as a statement that may be wrong, let me phrase it as a question. Would you consider differentiating based on the fact that GPT has no sensory input, but just internal models? And so when it is forming new internal models, it's not based on sensory input. It's not formed from the same quality of data, if that makes sense to you. Maybe. Maybe. I'm going to say some things to riff off that. I'm not sure of exactly what we're aiming at here. I posed this to you one time a while ago when you were talking to me about some of the tests you did with GPT. And I posed a question: is there something it is like to be GPT? Is there some form of experience? Just because it doesn't have direct input from reality doesn't mean that there couldn't be an experience. We do it every single night. Our brain, for the most part, shuts off its direct inputs and starts hallucinating. We all hallucinate every single night. We go into a dream world and we experience a completely internal model of experience. I wouldn't say reality. It's a complete internal model that we are experiencing. The thing I like to speculate on, just as some fun thought, is: is there something it is like to be a large language model, where, yes, all of it is words? It's all words, but a large part of our experience is words. You can sit down and read a book, and there are people who don't even, I forget the name of the, it's not a disorder or whatever, but I forget the name of the people who can't even imagine things. They don't have any visual models, but they can still read a book and perfectly get everything. But there's no, for me, it's a whole, like, I'm picturing a whole film when I'm reading a book. But for some people, the words are all just there. I wonder, is there something it is like to be a large language model? Let me speak to some of those points. I would say that the condition you're talking about, the inability to visualize, I also can't recall the word, the name for it right now, though I've been talking with one of our colleagues upstairs about doing a study on it. He suffers from this, or he's on the spectrum for this, I should say. He doesn't suffer at all, but he's extremely detailed. He's extremely aware of details. He can draw extremely detailed models and he thinks this is in part due to the fact that he doesn't visualize. He'll learn parameters and relationships, but he won't try to visualize them. He can't. And so he doesn't get the same illusions created by inaccurate modeling that we do. So if you think about Piaget's bicycle test, where you're supposed to draw a bicycle without looking at one or any representation of one, and everybody draws inaccurate bikes and is surprised to find out that they do so. 
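A brief aside that anticipates the reply just below: what a GPT-style model actually processes is not whole words but subword tokens. This sketch assumes the openly released tiktoken package is installed; the sentence and the exact token boundaries are only illustrative and vary by tokenizer.

```python
# Sketch of how a GPT-style tokenizer splits text into subword pieces.
# Assumes the open-source `tiktoken` package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by recent GPT models

text = "Softshoeing kangaroos understand spacetime."
token_ids = enc.encode(text)

# Print each token id alongside the bytes it stands for. Note that tokens
# are often fragments of words or bits of punctuation, not whole words.
for token_id in token_ids:
    piece = enc.decode_single_token_bytes(token_id)
    print(token_id, repr(piece))
```

With uncommon words like these, most of the printed pieces come out as fragments rather than whole words, which is the sense in which the model has no concept of words as such.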
So the people who don't have trouble with this are the people with the condition that we're talking about, which Jackie informed me is aphantasia, is that right? Yeah, that sounds about right. Thank you, Jackie. So somebody who has that, you're right, doesn't visualize anyway. So maybe they're building their hallucinations or their illusions, or their world models or self-models, on words rather than on images. The thing is, they're building them on relationships, not words. We have concepts. To get past this condition and talk for a moment about what you said: you said you know that GPT is all words. It's not words. GPT does not use entire words. It uses encoded bits of words that might be punctuation and a couple of letters. It might be a couple of letters or a few letters. It might be a letter and a number combination. But GPT has no concept of words. It has no concept of that at all. It creates a probabilistic syntax based on the recurrence of patterns of these bits. Never words, or almost never words. If it's a word, it's a coincidence. Yes. So we've had this conversation before and I've posed: okay, but then what do we do? Now, let's leave that, because that's a whole door to open, and instead say: what's the difference between that and humans? Because if I sit and pay attention to my first-person experience, truly, words just appear. Kind of like how GPT just blurts out things. I can tell you that the process of language learning that you go through to be able to express yourself in a language others understand is significantly different from what GPT does. For sure. For sure. So our language models are completely different from GPT's, and I believe our models of reality are as well, but I don't know. Now, that prompts something that I was wanting to say earlier. I know we've run out of time, but it would be great if we could take a break and go and do some of the looking into this that I'd like to do, to have a better understanding of how these current models work, and then come back and have more discussion. Well, there you go, guys. If you've stayed with us for the length of this podcast, you've heard us just wildly speculate and be ill-informed. Now on to our next podcast. This leads us in perfectly. We're going to specifically look into some of the ways that these models are being built and what people are saying about them, more specifically, so that we're not wildly speculating, ill-informed, after just watching one video on YouTube of Sora doing its thing and me paraphrasing what I vaguely remember. But those of you who have feedback for us on how we did, let us know. Let us know. We want to learn from you and take advice from you. So there we go. That's the end of this podcast. We will be back in two weeks. I don't know what the schedule is that you guys are going to get this released on, but I hope you enjoy it. And we'd love to hear feedback from you. And if you want to be a part of this at any point in the future and you think you can school us a little bit, that would be fantastic too. We would love to start having guests. It sure would be. That would be really wonderful. So thank you. And I should say this podcast is produced by Jackie. Hey, Jackie. Thanks, guys. This is great.