
The Role of Embodiment in the Construction of Integrated Models of the World

Session 5: Foundational Aspects of General Intelligence and AI

Adam Safron (Johns Hopkins School of Medicine) – The Role of Embodiment in the Construction of Integrated Models of the World


Transcript:

Sheri Markose

Now, to Adam's talk. Adam is at the Johns Hopkins School of Medicine, where he's a postdoctoral research fellow at the Center for Psychedelic and Consciousness Research. I don't know which part he's in, you know, the psychedelics or the consciousness. In any case, he has a number of interests, and his ultimate purpose is to explore how individuals can be adaptive, creative and free in all aspects of their lives. As I said to Adam, it sounds like joining a hippie collective. Anyway, the floor is yours; we look forward to your talk. I'm hoping Georg will rejoin us; it's very early in Canada, as far as I could make out.

Adam Safron

Um, he sent me a message saying he has to leave, but he's hoping the recordings will be available so he can catch up later.

Sheri Markose

Yes, he was looking forward to your talk. Absolutely, we will be recording, and hopefully we'll revisit many things.

Adam Safron

All right, can everyone see my screen? Yes? Excellent. Okay, so today I will share a few thoughts on the roles of embodiment in constructing integrated models of the world, and I think this will connect, to varying degrees, with some of what was just discussed. In trying to reverse-engineer human intelligence, or just figure out our own intelligence, we face the challenge of explaining how we are so smart: we are nowhere close to building systems even remotely comparable to humans in intelligence. With very few learning opportunities, based on ambiguous information, sometimes facing ill-posed problems, and within very underconstrained inference spaces, we somehow manage to figure things out with incredible rapidity and robustness. We are limited, of course; we can get confused, and our folk notions can sometimes deviate from reality. But it is quite amazing relative to anything we can engineer. So what a lot of people are saying is that we need some sort of useful priors, in the Bayesian cognitive science paradigm, or what machine learning calls inductive biases. But with a biological system we face a certain challenge in porting this over as an explanation: what forms might these evolutionary priors or inductive biases take? What is the complexity of what you can preload into a nervous system? There is this mismatch of complexity between the genome and the connectome. I don't know how much I want to make of that, but still, you have this massively self-organising system where a lot of the information is not just a complex function of what happens; it is unknowable a priori, because it is information that will only become available to the organism as it interacts with the world. Much of what determines what particular neurons mean will be a function of idiosyncratic histories of experience. This isn't to say there isn't extensive canalization of the phenotype by genes, but the question remains: what kinds of biases could we have? We might be very limited; natural selection might have had rather fat fingers for the engineering problem of sculpting us in particular ways. So how has biology handled this? I suggest that the primary way nature handled it was by using the body as a prior, or, you could say, as a source of very reliably learnable posteriors. You may never get opportunities for learning that are as well-posed and fruitfully constrained as your own body. Picture the young infant just learning its hand. It's always there; it feels like something, so the infant cares; and the infant can move the hand to disambiguate hypotheses. All of the modalities are giving rich, time-varying data streams that constrain each other in real time: the look, maybe some sound, the feel from without and within, body-on-body interaction. You get massive constraints that you wouldn't get as a passive observer just looking around and trying to infer what's out there. So as opposed to the naive Cartesian observer, it's the opposite of that: very enactive, and very constrained.
This, I would argue, creates a near-ideal training-wheels system for learning the first essential lessons: a learning curriculum provided by the body. So that's the idea: the body could be a source of very useful evolutionary priors. Other people have spoken of some of this; Rolf Pfeifer talks about it very eloquently. Even the joints of the arm are part of the learning curriculum: the fact that the hand reaches out in front of you and places things for inspection, or the fact that fingertips are squishy, soft robotic systems that help with control, so that intelligence is offloaded onto the morphology itself. This is further scaffolding, further training wheels. It's not clear we're doing things like this in AI, and I would suggest it might be essential: you might need these kinds of lessons to constrain hypothesis spaces enough that you can actually bootstrap your way to a robust causal world model.
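
To make the motor-babbling idea concrete, here is a minimal Python sketch, not from the talk itself, of how the body can supply self-supervised training pairs: a simulated two-link arm generates proprioceptive and visual streams that share a common cause, so each modality can supervise predictions about the other. All names, dimensions and noise levels are invented for illustration.

```python
# Toy "body as training signal" sketch (all names, dimensions and noise
# levels invented): motor babbling with a simulated two-link arm yields
# paired proprioceptive and visual data, so one modality can supervise
# predictions about the other for free.
import numpy as np

rng = np.random.default_rng(0)

def forward_kinematics(angles):
    """Visual hand position implied by two joint angles (unit link lengths)."""
    x = np.cos(angles[:, 0]) + np.cos(angles[:, 0] + angles[:, 1])
    y = np.sin(angles[:, 0]) + np.sin(angles[:, 0] + angles[:, 1])
    return np.stack([x, y], axis=1)

# Self-generated movement produces the paired training data.
angles = rng.uniform(0, np.pi / 2, size=(1000, 2))      # proprioception
vision = forward_kinematics(angles) + 0.01 * rng.normal(size=(1000, 2))

# Fit a simple cross-modal predictor (least squares on basis features),
# standing in for whatever learning machinery the brain actually uses.
feats = np.hstack([np.sin(angles), np.cos(angles), angles, np.ones((1000, 1))])
W, *_ = np.linalg.lstsq(feats, vision, rcond=None)

pred = feats @ W
print("mean cross-modal prediction error:", round(float(np.abs(pred - vision).mean()), 4))
```

The point of the sketch is only that paired, mutually constraining data come for free from self-movement, which is exactly the kind of well-posed learning problem the talk attributes to embodiment.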

Sheri Markose

Can I jump in here? In evolutionary time, the idea of this self comes very much later; I would say something like 150 million years ago. But there's a group of us, as you well know (Sylvia Ramos is not here, but she was there last time, and hopefully Georg) who are saying that there is a precedent for the self, and that was the adaptive immune system. As you're aware, at a molecular level, the knowledge of self genes, the genes which when expressed create your morphology, the organism itself, needed to be projected somewhere from which you can make inferences about the other, the external. So your account comes further downstream, right? We need to go right back to when the self became integral to all aspects of inference. The body itself first came about with the adaptive immune system, 500 million years ago.

Adam Safron

It seems like that would be yet another potential constraint provided on this engineering project, upstream, I guess, of the things I'm talking about. So I'm saying that the Bayesian cognitive science programme and deep learning, just on their own terms, seem to need some sort of trick. They'll talk about things like Spelke's core knowledge: that we somehow have objectness, cause, space, time, basic content categories in there. But it's never explained how they got in there, or what their nature is; in a biological system it's just, let there be priors, and it was good. For what we're talking about, quasi-Kantian categories, the self itself may be needed as a kind of inductive bias for world modelling; people have written about this with reference to Kant's transcendental unity of apperception. Part of selfhood you can get just from learning the constraints and affordances of embodiment: the body is a unitary object and causal system over which you have asymmetric control and knowledge relative to things external. But even upstream of that, and not just upstream but continuously interacting, you have the autonomic and immune systems, which are probably also essentially important for the construction of selfhood, grounding it in homeostatic and allostatic considerations. So yes, that's good to know.

Sheri Markose

I totally grant you that everything at the base has to have priors, and what better than the embodied self. But the macromolecules that emerged 500 million years ago with the adaptive immune cells are exactly the macromolecules in question; we're hoping that in the next three months we will have that written up. This was never the case in prokaryotes: those cells never had a concept of the other as a projection of self. So why did it happen? It happened for homeostasis, for having to fight off viruses; that was part of the problem. And of course, at the level of the adaptive immune system it is sub-personal, it's not conscious. But by the time it came into our central nervous system, the sentient self became apparent, yet the macromolecules are the same. Very few people deny this now. Chalmers is an exception; as a dualist, none of these things will mean very much to him. He doesn't believe in the identity of mind and body, one of the rare Cartesian relics left, or so I'm told. It is now virtually universal among most neuroscientists, the importance of the body itself. But I think we need to go much earlier to find out where that started. So those are my two points.

Adam Safron

No, I suspect you're right that both the mechanisms, the developmental pathways, and the principles of selfhood and agency have this older vintage, and that this will end up being an essential part of the story, both for illustrating the principles and for understanding ongoing functioning. Something I'd like to explore later is the way similar self/non-self discriminations of the kind laid down in immunity are reflected even when decoupled from it. It's the same type of problem: the challenge of interfacing with a world where some of the dynamics can be challenging, where you can face different types of adversarial attacks that you have to overcome. The ways the immune system handles this challenge might actually be reflected in things like body maps, and maybe even be continuous with them, in terms of the ways we model self and world. That's a really deep rabbit hole, and something I'm still updating on. What I've mostly been focusing on is this: from the idea that we handle these inductive challenges by bootstrapping off embodiment, with this learning curriculum, you get a developmental legacy that makes embodiment the core architectural principle of the brain, so that almost everything in the brain should be pegged back to cybernetic control: a predictive cybernetic controller for an embodied, embedded agent. Ultimately, what I want is a kind of Marrian neurophenomenology, where we go from computational/functional to algorithmic to implementation levels, have rich connections across these levels, and then connect them to the phenomenological level. I think this will be essential both for understanding the way we work and, I believe, for constructing systems as intelligent as we are; we might need to build something like an artificial brain, because otherwise the problem space might be too underconstrained, and we may flounder there for a very long time. For this bridge, I've been thinking about machine learning principles as the algorithmic bridge between the computational and implementation levels. Within the free energy principle and active inference there are notions of machine learning architectures that could do things like predictive processing, which could help map neural systems onto these functions, and I've been exploring that in my recent work. I don't want to go too deep into consciousness; how much time do we have?

Sheri Markose

You've got a good 20 minutes, with Q&A.

Adam Safron

So I'll just go through a couple more slides and leave plenty of time for discussion. I actually think that if we're going to have systems that can navigate the world and do things like natural language processing, so that they inherit the cultural endowment, we probably need something like artificial consciousness: definitely the kind that Bengio and LeCun have started talking about in terms of System 2 properties, the ability to reflect and have different types of awareness of what's going on, a kind of knowledge; and maybe even artificial phenomenal consciousness, to provide the basis for certain types of empathic processing, or just as an efficient computational structure for world modelling and body-world modelling. I've been working specifically on theories of consciousness, cross-referencing different theories and seeing where they intersect. As part of this I've been claiming that most of these theories have merit in seeing different sides of this elephant, the different sides of what we're talking about when we say consciousness, but most of them beg the question of why we should think of any of it in terms of experience, of what it is like. Ultimately, embodiment is the thing that's missing from most theories of consciousness. This is where cognitive science and theories of consciousness are going to come together: what does it take to realise a lived body, to model it and its interactions with the world, such that you can stay on top of those interactions, such that you can both inform and be informed by action-perception cycles and processes of sense-making and learning? I argue that this is the functional significance of phenomenal consciousness: a kind of modelling of system and world and their interrelation. I think Georg would be very sympathetic to that.

Sheri Markose

We are dreadfully missing Georg. You know, we had to pick a date, and whoever can be there, is there. He's very much involved. We may have another session with Georg in it sooner rather than later, I hope. Yeah.

Adam Safron

So, in very broad brushstrokes, with devils in the details, and I don't want to be glib about this: I think there is a sense in which the fictitious stimuli we see produced by different kinds of generative models are roughly in the ballpark of the entailment relationship we're looking for, a bridge between mind and brain. The idea would be that instead of filling in pixel arrays, you would be filling in a multimodal experiential world, and the particular combinations of what you fill in would be the contents of your stream of experience. Very broad brushstrokes, many devils in the details, but that's the rough shape of the intuition pump and where I'm going with this. So I proposed a theory I call Integrated World Modelling Theory, which says that you need quasi-Kantian content categories, coherence with respect to space, time and cause, and maybe a very basic, minimal selfhood, in order to bring forth a world of experience. To have a system that can actually model the world robustly, you need things situated relative to other things, with particular properties. For this you will likely need space, or some kind of locality, for the situatedness; time, or relative changes in this space, to capture process; and cause, or some sort of regularity in these changes, such that changes in phenomena can be modelled. We might also need a kind of minimal selfhood, where some of the principles might actually be continuous with the principles of adaptive immunity, maybe even mechanistically continuous. And I've been exploring different machine learning architectures as ways of thinking about predictive processing. Specifically, if you take certain types of autoencoders, fold them over, and give them recurrent dynamics, that could be a decent model for cortex as a predictive-processing hierarchy. And then if you stitch multiple hierarchies like this together,
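
As a rough illustration of the predictive-processing picture sketched here (not the specific architecture from Safron's papers), the following toy Python loop shows the core move: a latent state is recurrently updated so that its top-down prediction explains away the input, leaving only residual prediction error. All weights and sizes are arbitrary.

```python
# Toy predictive-coding loop (arbitrary weights and sizes): a latent
# state z is recurrently adjusted so that its top-down prediction W @ z
# explains away the sensory input x, leaving only residual error.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.3, size=(16, 4))  # generative map: latent -> sensory
x = rng.normal(size=16)                  # one sensory observation
z = np.zeros(4)                          # latent estimate at the top

for _ in range(200):
    prediction = W @ z                   # top-down prediction of the input
    error = x - prediction               # bottom-up prediction error
    z += 0.05 * (W.T @ error)            # recurrent settling on the error
    # (a full model would also slowly learn W from this same error signal)

print("residual error norm:", round(float(np.linalg.norm(x - W @ z)), 3))
```

A full predictive-coding model would stack such loops into a hierarchy and learn the weights from the same error signal; this sketch keeps only the settling dynamics.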

Sheri Markose

Can I just press you on predictive processing? Going back again to the immune system: the whole thing about self-reference, in my opinion, is not an end in itself at all. Self-reference, in the actual mathematics of Gödel and so on, as you know, is a means to access open-ended search. In the immune system, the point is to work out which novel, non-self antigen is going to attack your self-codes. So going back 500 million years, somehow (I don't know how these things happened) the RAG operators, the recombination-activating genes, can generate something like 10 to the power 20 or 30 alternative scenarios, billions of trillions of them. But this is done offline. So do you make a distinction between online and offline? The online is whatever the machinery runs and produces, your body's cells or whatever; and then offline, you generate this whole platform where you try out all these hypotheses. It's a pity Karl can't be here today, because the whole business of predictive inference is exactly this: you make hypotheses, and then you test them against experience. All of these things done in this offline venue are never pre-recorded in your genome; they are not structured in your genome, that's the other point, they are learned on the fly. That is the nature of a lot of this learning. The experientially driven stuff is so vast that it could not have been pre-recorded in your genes; it's done outside all of that, in the adaptive immune system, and even in the learning we do in the brain. So, predictive coding, in your mind: why do we need to make predictions of things, and what are we predicting?

Adam Safron

Predictive coding does seem like a multiscale story that could apply to neural systems, and there have even been treatments of immunity as a predictive-processing system. But in terms of handling novelty, there is a kind of fundamental conservativeness to predictive coding as described: you are explaining away as much as you can, to minimise your cumulative precision-weighted prediction error. On its own it does not incorporate novelty. Associated with the free energy principle, though, there is the process theory of active inference, which acknowledges that we need specific mechanisms to forage for new information, to explore the adjacent possible in different ways; the actual implementations might be quite heterogeneous, and not well mappable back to predictive processing. One of the things that connects the interests I'm describing is consciousness as a kind of data-augmentation-slash-exploration process: basically imagination, where you do offline rollouts of possibilities of varying degrees of plausibility, at varying temperatures of search, from the conservative (what am I going to do next, what am I going to eat for lunch) to the more impressionistic and dreamlike, and maybe, for those really thorny edge cases, loosening up your system to handle things that would seem improbable, entertaining more implausible counterfactual possibilities. So predictive processing would be part of a broader architecture for world modelling, and one thing you would want from this world-modelling process is the ability to support imaginative rollouts of different possibilities. Depending on how you do it, this gives you some ability to handle novelty and to learn offline, when you're not in the midst of a task, and even in a quasi-offline way when you are in the midst of it, with very brief rollouts. Like a driverless car; I'm not saying driverless cars are conscious, but they're on the road, entering traffic, and they're already doing this sussing out of possibilities just to make it through. So it matters not just offline (the Tesla's output gets uploaded to the Dojo and they do the training) but even when it's on the road: it's going to have to be doing some of this exploring of the new, of the uncertain.
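
The "varying temperatures of search" idea can be made concrete with a toy generative model. In this hypothetical Python sketch, a learned transition model is sampled at low temperature for conservative, plausible rollouts and at high temperature for looser, more dreamlike ones; the states and probabilities are invented for illustration.

```python
# Toy "imagination" rollouts (states and probabilities invented): a
# learned transition model is sampled at different temperatures, from
# conservative next-step planning to looser, dreamlike trajectories.
import numpy as np

rng = np.random.default_rng(2)
states = ["kitchen", "desk", "outside"]
P = np.array([[0.7, 0.2, 0.1],           # learned transition probabilities
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

def rollout(start, steps, temperature):
    """Sample a trajectory; higher temperature flattens the model's
    probabilities, entertaining less plausible possibilities."""
    s, path = start, [start]
    for _ in range(steps):
        logits = np.log(P[s]) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        s = int(rng.choice(3, p=probs))
        path.append(s)
    return [states[i] for i in path]

print("conservative:", rollout(0, 5, temperature=0.5))
print("dreamlike:  ", rollout(0, 5, temperature=5.0))
```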

Sheri Markose

Sorry, so it will have a number of hypotheses, like red, green, yellow. It has to have a number of these predictions, or quite simply hypotheses, I'd say. It entertains various hypotheses, then it sees green, and the hypothesis that it could be green fits with the green. In my book, that's a fixed point. A lot of people take it as a prediction error, but I don't see how it could be a prediction error: it can only be red, orange or green. You hypothesise any one of those, and when it is green, you know it's that one, and then you take off, if you're a self-driving car. Is that what you have in mind?
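
Markose's reading can be written as a one-line Bayesian update over a small discrete hypothesis set: the observation selects the hypothesis that fits, rather than generating a graded error signal. A toy sketch, with invented numbers:

```python
# Toy version of the "fixed point" reading (numbers invented): a small
# set of hypotheses, one observation, and a single Bayesian update in
# which the fitting hypothesis simply wins.
import numpy as np

hypotheses = ["red", "orange", "green"]
prior = np.array([1/3, 1/3, 1/3])

# Likelihood of the sensor reading "looks green" under each hypothesis.
likelihood = np.array([0.01, 0.05, 0.95])

posterior = prior * likelihood
posterior /= posterior.sum()
for h, p in zip(hypotheses, posterior):
    print(f"P({h} | observation) = {p:.3f}")
```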

Adam Safron

Predictive processing does seem to be a multiscale story that could apply anywhere from message passing in cortex to even low-level intracellular processes; even there you might be able to tell a predictive-processing-type story of minimising prediction error. The question is, as you say: what's being predicted? Within the free energy principle, the word 'prediction' is used very broadly, but the primary thing being predicted is the adaptive phenotype of the organism and its relationships with its eco-niche. If what's being predicted is the organism engaging in the kinds of things it needs to do to survive, which includes going out and foraging for information and exploring the unknown, then what you end up doing is minimising prediction error with respect to this model of you being in the world. You're not just minimising prediction error full stop, because otherwise you get the dark room problem. Which, actually, I would suggest is not completely solved: there is some evidence for something like prediction minimisation, in that we do often minimise complexity, sometimes in pathological ways. But in general, when things are working right, the organism is predicting itself going out, exploring and finding new things, and epistemic value is part of its implicit objective function. So with expected free energy, the prediction is with respect to that expectation of you being configured in a way which would involve
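
The contrast Safron draws between merely minimising prediction error and expecting yourself to explore can be illustrated with the epistemic-value term of expected free energy, which scores policies by their expected information gain. This is a schematic sketch with invented numbers, not a full active-inference implementation:

```python
# Schematic sketch (invented toy numbers) of the epistemic-value term in
# expected free energy: policies are scored partly by how much they are
# expected to reduce uncertainty about hidden states, so a purely
# uninformative "dark room" policy scores zero.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -(p * np.log(p + 1e-12)).sum()

def epistemic_value(prior, posteriors, outcome_probs):
    """Expected information gain: prior uncertainty minus the expected
    posterior uncertainty, averaged over predicted outcomes."""
    expected_posterior = sum(q * entropy(post)
                             for q, post in zip(outcome_probs, posteriors))
    return entropy(prior) - expected_posterior

prior = [0.5, 0.5]  # current belief over two hidden states

# Dark-room policy: either outcome leaves beliefs exactly where they were.
dark = epistemic_value(prior, [[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5])

# Foraging policy: outcomes are informative about the hidden state.
forage = epistemic_value(prior, [[0.9, 0.1], [0.1, 0.9]], [0.5, 0.5])

print("epistemic value, dark room:", round(dark, 3))   # ~0.0
print("epistemic value, foraging:", round(forage, 3))  # ~0.368
```

Under this scoring, the dark-room policy earns zero epistemic value, while the informative policy is preferred even before any pragmatic (preference-related) term is added.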

Sheri Markose

new things? That's all very well, but here is my issue with what I would call these thermodynamic models. Karl seems to think that at each and every level we just somehow have the wherewithal to do these things. And I keep pointing out, on the dark-room question, that there are single-celled beings that have not changed for almost billions of years; they have been in the dark room, they batten down the hatches and just live like that. To do the sorts of things we are doing, a lot of structure has to come in. Take this thing called the RAG operators: without the RAG operators we wouldn't have any diversity there, right? Something like 10 to the power 20 different alternatives. And wherever you choose to look, people are trying to work out which area of the central nervous system has the RAG operators in large quantities; apparently it's the hippocampus, because that's where you need to be generating lots of hypotheses, I suppose. We don't know yet. So I keep telling Karl: you either have to talk the molecular stuff, or, if we go down the route of the thermodynamic models, suddenly you have self-organisation, and I want to know how. I don't see any of these things just naturally self-organising. Either there has to be the software, or you have to be given the wherewithal for how anything became as complex as it did. That is the difference between hand-waving, just assuming it's going to happen, and a mechanism.

Adam Safron

We need specific mechanisms, yes. There is an assumption that is normative, almost like a kind of anthropic principle: if you exist and you're alive, you must on some level have been able to refine your models by getting new information. You had to be exploratory, you had to be working at the adjacent possible, you had to have escaped the dark room somehow, otherwise you wouldn't be here. That is still somewhat hand-waving, even though it's hard to see how it couldn't be the case, because it's necessary. But now we need the mechanisms. So at the level of the normative process theory of active inference, you say exploration has to be there; but then you need additional process theories to actually bring in the novelty bonus, the artificial curiosity. Things like RAG operators in the hippocampus would be on the table as potentially fundamental mechanisms, especially considering that the hippocampus is the top of the cortical hierarchy, in some ways the highest level of policy selection for the overall system, the agent, the organism as it moves through time and space. You control the trajectories of overall locomotion through space there; that's a very high level of integration at the organismic level. So if you're injecting novelty right there, and connecting it to immunity and selfhood, there are all sorts of rich ways in which the many aspects of what you need to be alive, intelligent, exploratory and curious could be put in there, and that could be one of them. This idea of transposon-type elements shuffling things around, creating diversity within the neuron: absolutely, that would be a good way to do it. There are likely multiple ways this will turn out to be reflected, and that could be a really essential one.

Sheri Markose

Anindya, I want you to hear this. This is something I read in Adam's paper: the turbo coding. Tell us something about this turbo coding.

Adam Safron

So, the turbo coding idea. In order to create something like a world model, part of what you do is multimodal synthesis: each modality gives you a different perspective on things, with non-overlapping ambiguities and non-overlapping details, so you can get synergy across them. The question is, what is the source of these cross-modal priors? How can we think of them? One way is to think of each modality as a kind of noisy channel. As you move up the different hierarchies, association cortex is heavily reentrant, less directed, and the tops of these hierarchies have this intense reentry. This can be thought of, I would suggest, as having parallels with something called turbo coding, which is basically what Judea Pearl called loopy belief propagation: approximate inference over graphical models with cycles. It was also discovered in communications engineering; 3G and 4G communications depend on it, and it approaches the Shannon limit for efficiency. So, to pull off multimodal synthesis, you have this rich club of the brain stitching together all these hierarchies, taking something like 50% of overall metabolism, it seems. A lot of people talk about this as perhaps being the business end of consciousness; but also, what are its functions, what is it doing? Some of what it might be doing, at some level of abstraction, is turbo coding, which gives you a very advanced way of doing multimodal synthesis that approaches the Shannon limit for efficiency. And in general, if there is some sort of optimality available, nature might have found it; maybe it did here. This could be a useful way of thinking about cross-modal integration.
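
A stripped-down Python sketch of the noisy-channel picture: two "modalities" each observe the same binary latent through independent Gaussian noise, and their evidence is fused by adding log-likelihood ratios, the same evidence-exchange step that turbo decoders iterate between component decoders. This is a cartoon of the analogy, not a real turbo code:

```python
# Toy illustration (invented numbers): two sensory channels observe the
# same binary latent variable through independent Gaussian noise, and
# their evidence is fused by summing log-likelihood ratios (LLRs).
# Turbo decoding iterates this kind of extrinsic-evidence exchange
# between two component decoders until beliefs stabilise.
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.8                               # channel noise level
signal = +1.0                             # the true latent bit, encoded as +/-1

def llr(y, sigma):
    """Log-likelihood ratio for a +/-1 signal observed in Gaussian noise."""
    return 2.0 * y / sigma**2

# Each "modality" gets its own noisy view of the same cause.
y_vision = signal + sigma * rng.normal()
y_touch = signal + sigma * rng.normal()

# Belief propagation fuses independent channels by adding their LLRs.
fused = llr(y_vision, sigma) + llr(y_touch, sigma)
p_one = 1.0 / (1.0 + np.exp(-fused))
print(f"P(bit = +1 | vision, touch) = {p_one:.3f}")
```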

Sheri Markose

Sorry, so is this in terms of minimising your metabolic inputs? Is it optimal in that sense?

Adam Safron

Within predictive coding it would be optimal in that sense: the better your predictions, the more you can explain away, so you are minimising thermodynamic expenditure through minimal activity. But there is also efficiency in terms of sheer speed, and in terms of data efficiency, getting to intelligence more quickly. Nature being a world of the quick and the dead, a small marginal improvement in your predictive power, your energy efficiency or your action selection gets amplified exponentially by differential reproduction. So you should expect this to be discovered, if the fitness landscape is nice enough. So yes, that's the turbo coding idea; I've been calling it the conscious turbo code.

Sheri Markose

I think, yeah, we should start winding up; you've got about two minutes left.

Adam Safron

Yes. So I've basically been wondering whether some principles of the brain can be understood in terms of geometric deep learning, but I can't get into that right here. Coming back to safety: I presented, to a very poor reception, some of the ideas we were talking about earlier about path dependence, about natural language understanding creating path dependencies through which value and intelligence become coupled. I don't think it was particularly well received. But it is basically the argument I made earlier: you're going to need some degree of human-likeness, something like empathy, basically an affective common ground, and a substantial degree of common ground in your embedding in the environment. Something like developmental robotics might be an essential part of making our way to AGI. And that's it.

Sheri Markose

So many questions, so many new things. Vincent, do you want to add to the issues that Adam has raised?

Vincent Müller

I'm just curious, Adam: have you published something on that last point? It's obviously interesting to me, and I'd like to read it.

Adam Safron

So, in terms of the claim that embodiment provides the bootstrapping within a basic cognitive science paradigm, I published that in Entropy, in an article called 'The Radically Embodied Conscious Cybernetic Bayesian Brain'. That one really gets into the importance of embodiment for bootstrapping models, and how this might be a central architectural principle of the mind. Later in it I go a little into different types of selfhood, and into what we might mean, mechanistically, by meta-awareness. So there's that paper; and then the consciousness ideas are in an article in Frontiers called 'Integrated World Modelling Theory', from 2020. In terms of the connection to AGI, I've been more quiet about that, because, you know, people like Elon can talk about it; unless you have a certain amount of social currency, you can't, otherwise... you know.

Vincent Müller

Yeah, I know that risk. Okay. And I just spotted your paper, so I'll have a look at that. I think there is something interesting to be said in the direction you indicate. It's obviously tricky to see where in the argument, exactly, that bit slots in, so to speak. Is it where you say how we get to superintelligence, or is it about what consequences it will have if we do get it, and so on? I think that's interesting. But of course the really interesting question, at least for the moment, it seems to me, is exactly this: what are the conditions for actually getting any kind of machine intelligence that properly deserves the name? And that's obviously something you have contributed to, so that's really cool stuff, and I'm trying to learn. I sent you another reference in the chat, by the way: Kevin O'Regan. You probably do know him, but if you don't, I'm pretty sure you would like him. He's radical, and works on similar things to yours. He was in the EU Commission network that I used to organise, as one of the co-organisers.

Adam Safron

Funny that you mention him; his work is some of the most interesting work out there, I think.

Sheri Markose

What’s his name?

Adam Safron

Kevin O'Regan, yeah. But, you know, I called the paper 'radically embodied', and the people who are into radical embodiment would actually not like what I said, I think. Because I argue that we shouldn't think of the brain as having just one principle throughout; it's a complex architecture. If you're thinking of things like the different sensory hierarchies, maybe lower down it is more enactivist; and if you're thinking of things like representation and modelling, lower down it might be more of an implicit, enactive variety, a coupling involving the environment. But I also think that once we get to the deeper portions of cortex, we get these attractor dynamics that can kick free of immediate engagement with the world, and you have things like bona fide representations. These are what you would use for things like doing imaginative rollouts and exploring novelty. And you can do all of these things enactively; in fact, nature has been doing it for hundreds of millions of years. But the idea is that these are the things you need for active inference and exploring the new.
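
The phrase "attractor dynamics that can kick free of immediate engagement with the world" can be illustrated with a classic Hopfield-style network: once a pattern is stored, internal dynamics alone recover it from a degraded cue, with no further input from the world. A minimal sketch with arbitrary sizes:

```python
# Minimal Hopfield-style sketch (arbitrary sizes): a stored pattern is
# recovered from a corrupted cue by internal attractor dynamics alone,
# with no further input, illustrating activity that "kicks free" of
# immediate sensory engagement.
import numpy as np

rng = np.random.default_rng(4)
pattern = rng.choice([-1, 1], size=32)    # one stored representation
W = np.outer(pattern, pattern) / 32.0     # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

state = pattern.copy()
flipped = rng.choice(32, size=10, replace=False)
state[flipped] *= -1                      # degrade the cue

for _ in range(5):                        # offline settling, no new input
    state = np.sign(W @ state)
    state[state == 0] = 1

print("recovered stored pattern:", bool((state == pattern).all()))
```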

Sheri Markose

So it seems, to both of us, that embodiment and representation are one and the same thing, right? But there are a lot of people who think they are two different things. The other day, when I was reading David Chalmers, I kept thinking: for me, there are two things happening, the machine running and the record of the machine running. And that mapping is the important thing. The mapping then fuses the syntax and the semantics, because when the machine runs, gene expression, that is semantics; I read a quotation from somebody to that effect the other day. And then it's mapped offline, and then you have representation. But a lot of the people who talk about embodiment (I'm a newbie to this) say they don't have representation; they think the two are different things. There is still a distinction here, and the people involved somehow think they are not representational, whereas you and I, Adam, are we not both? I mean, it's both.

Adam Safron

And I think there's no clear slot for me here; I feel like I'm in the middle of the road getting hit by both sides. The world is its own best model, as Brooks was saying, and embodiment is doing a lot of the business that you would want from representational models. But then I also think, though,

Sheri Markose

Yeah, so I now realise I have a battle on my hands, because when I keep talking about embodied learning and embodied cognition, I'm thinking of the machine running the code, and then the representation of it, the self-rep. You have to have both: one running in real time, and the other recorded somewhere else. And it's a very expensive investment that was made; Georg has this whole account of how expensive this offline representational mapping is. You know: why on earth would you take anything offline and represent it there?

Adam Safron

Well, the endogenous activity is dominating; it's the bulk of the metabolism, the bulk of what's happening, this offline, self-generated activity.

Sheri Markose

Absolutely. But what is it doing? So, Rodney Brooks, this idea; what did you say the world is?

Adam Safron

Its own best model. So no representation or modelling is needed, because the enactive coupling with the world, that is the information. And I think that's right, to the extent that you can offload; that is what happens. But this imaginative process, and all the things you get from it, that is, I think, something additional. And among the people talking about radical embodiment, the enactivists, which I consider myself to be, or at least not anti-representationalist, they say no. And I had hoped for something, I think we could have a more,

Sheri Markose

We have a battle on our hands. I'm now finding that what I'm thinking, and what I'm saying embodiment is, really means both things: you have the body, and then an exact mapping from there, offline. That is the basis of cognition; that's what self-reference is, at least where I come from. And I realise that's not what a lot of people mean when they talk about radical embodiment; they simply deny any representation and symbolic stuff, right? So anyway, we're hoping to get people to be part of a little group, and hopefully next time we'll have Georg and somebody else who knows more about the embodied self, because at the end of the day we have to get that straight. It's the same sorts of people, though some who were there last time are not here this time; but I've widened the group a little by getting Georg in, and there are other people working on similar stuff. So hopefully, maybe end of April, we'll have another go at this. Today, for instance, Stuart could have turned up; he knew about it, but Stuart has become his own agent these days, doing all sorts of things. And I told him, you know, we still have to justify our existence and give very deep reasoning about whatever it is we want to do. But nevertheless, hopefully,

Adam Safron

One can get a little nihilistic, just hearing the OpenAI people saying GPT-3 is a little bit conscious, or, you know, DeepMind saying reward is enough and we're already on our way, we just need to scale this. They're just,

Sheri Markose

That's like telling them, I mean, you know, love is all we need, that sort of thing. And I say, no, you need some structure. Anyway, hopefully we get,

Adam Safron

I'm sympathetic to love, as always. But yeah, one can't help being a little nihilistic about these things. Ultimately, though, the role of embodiment: I think it's the thing that gives us semantics; there is no semantics without it. I think it has to be the common touch point for everything, and to a shocking degree. That's part of why I said 'radical', even though I know the term was taken. Maybe 'the surprisingly embodied', 'the shockingly embodied', 'you won't believe how embodied this cybernetic brain is', I don't know. But it really does seem, in all these discussions, that the role of embodiment in mind is the key; that's the Rosetta Stone. Right now LeCun and Bengio are starting to talk about world models, and maybe they're on their way. But unless the body is in there, unless the nature of our embedding in the world, which is via a kind of body that lets you interface, and which actually gives you the content of the common ground and, I would suggest, the necessary priors in a learning curriculum, unless that's there, I don't think any of it works. I don't think we get to AGI, the problems remain underconstrained, let alone superintelligence. And if that is part of the bootstrap, that would be a deviation from orthogonality, and an opportunity to entangle intelligence and values, because those path dependencies, those developmental bottlenecks, would be part of a lot of the challenges people are worried about with safety. That's how we would handle them.
