
Is AI Ready for the Real World?

2:15 pm – 3:00 pm GMT

Keynote Talk

Balaraman Ravindran (Indian Institute of Technology, Madras) – Is AI Ready for the Real World?

To watch the video on YouTube, click the ‘YouTube’ button above.

Transcript:

Anindya Chakrabarti

Thanks, everyone, for joining. It gives us great pleasure to come back; this is a follow-up to our earlier conference. Some of the talks, unfortunately, we could not hold at that time, so this is a follow-up session. It is a great pleasure to introduce Professor Balaraman Ravindran, who is from the Computer Science and Engineering Department at IIT Madras, and who also heads the Robert Bosch Centre for Data Science and AI. For quite some time we have been trying to get him on board. He has done fantastic work on machine learning, especially in the domain of reinforcement learning, and it's great that he could finally join us. And it's a great topic: is AI ready for the real world? So without further ado, I would request Ravi to give his talk. You have around 40 minutes; please feel free to take questions in between, and towards the end of the talk maybe we can have a discussion around some of the ideas that he will share with us.

Balaraman Ravindran

So thank you. I have tried to keep the talk fairly lightweight, and to throw up some ideas that I would really love to see us talk more about. So the topic is: is AI ready for the real world? And some of you might think, hey, AI is already everywhere, right? What do you mean, is AI ready for the real world? My talk is done, right? And in fact AI is doing all kinds of amazing things, not just the social media stuff; it is actually making a difference in the real world. AI is used in making bail decisions. AI is transforming the financial industry, making lending and loan management decisions. In fact, for AI in science, I had to update the slide yesterday, and there is another new article today on how AI is helping solve some of the important questions, looking at chemistry, looking at nuclear fusion. And AI can even flip burgers. So why are we even talking about whether AI is ready for the real world? It seems to be already there, right?

So let me talk about the prevalent thinking. The prevalent thinking in building AI models is that all you need to succeed is large volumes of data, large volumes of examples from which the system can learn. These examples are preferably tagged examples, where somebody tells you what the actual output is, and then you can learn from this. And for some problems you only need a simulation model; you don't even need tagged examples, you just need a way of generating interactions with the system. The most popular examples of this right now are game-playing systems, and they have created a lot of buzz.

AlphaGo was one of the earliest success stories from DeepMind, and it reignited interest in reinforcement learning, which is my primary area. This is always going to be a landmark event for us: AlphaGo beat Lee Sedol, who is arguably the strongest Go player in history. And it did not just beat him; it beat him four to one. That's pretty strong. And it basically learned through simulations, playing against copies of itself and looking at some past games, but it was never given step-by-step tagged examples; for the most part it learned by itself, and it could do things like this. There is this very famous move 37 in the second game of the match, where even the world's best Go players were reeling. "That's a very strange move," said one commentator, himself a nine-dan Go player. "I thought it was a mistake," said another. And then Lee Sedol took nearly 15 minutes to formulate a response to it. It was something that people had not seen before, and it turned out that this move 37 was very crucial in establishing a significant advantage for the AI. It was basically doing things that humans hadn't realised could be done.

All of this sounds really fantastic, right? So a lot of investment is going into building more data sources, better and more detailed simulators, how to go from sim to real, or how to crawl the web and use the data from the web in building models. Data is called the new oil, the new water, the new electricity.
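
To make the "all you need is a simulator" idea concrete, here is a minimal sketch in the spirit of reinforcement learning, not AlphaGo's actual pipeline (which combined self-play with deep networks and Monte Carlo tree search): a tabular Q-learning agent that masters a toy environment purely by interacting with a simulator, with no tagged examples. The environment, reward and hyperparameters are illustrative assumptions.

```python
import random

# A tiny deterministic "simulator": states 0..9 on a line, actions move
# left/right, reward 1 only on reaching the rightmost (goal) state.
N_STATES, GOAL = 10, 9
ACTIONS = (-1, +1)

def step(state, action):
    """One simulated transition: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Tabular Q-learning: values learned purely from simulated interaction,
# with no examples of "correct" actions ever being provided.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    s = 0
    for _ in range(1000):  # step cap per episode
        # epsilon-greedy, breaking ties randomly so early episodes explore
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: bootstrap from the best action in the next state
        Q[s][a] += alpha * (r + (0.0 if done else gamma * max(Q[s2])) - Q[s][a])
        s = s2
        if done:
            break

# The greedy policy now heads right, straight for the goal.
print(["R" if Q[s][1] >= Q[s][0] else "L" for s in range(N_STATES)])
```

The same recipe, scaled up enormously and combined with search and self-play, is what sits behind the game-playing successes described above.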

But one of the things that we really have to keep in mind is that this is not really how we learn, right? We have teachers. We don't learn just from examples; the teachers actually tell us a lot more, and they give us a lot more structure. So learning just from these examples causes some problems.

All of this is fine when the going is good. Here is an example where AI is learning to describe a picture: "a person riding a motorcycle on a dirt road", or "two dogs play in the grass", or "a group of young people playing a game of frisbee", which I think is amazing, because it even managed to get the frisbee. But then you get all kinds of craziness, like "a skateboarder does a trick on a ramp" where there is neither a ramp nor a skateboard, but somehow it sees a skateboarder; and likewise "a little girl in a pink hat is blowing bubbles", and so it goes on. Now, I really want to know why the AI did that. When the going is good, it's all fine; we rarely ask for explanations. But when the going goes bad, then we start asking questions: why did it do that?

And the going can really go wrong. For facial recognition, we have heard stories of wrongful arrests, of the police arresting the wrong man. Likewise, bail decisions going really, really wrong because the software was significantly biased. GPT-3 reflects societal biases in gender, race and religion. And here is another famous example, where you give the system a low-resolution picture of Obama and it reconstructs a white man as the subject of the image. So this starts raising serious ethical questions, and then you have to start asking: why did this happen?

So the fundamental problem with a lot of this AI is that it is learning associations from data, but it has no knowledge of the structure of the world. The data, in effect, is an imperfect representation of the world, and there is only so much that you can expect to gather from the data. It has no clue about the structure of the world, nor about the processes of the world. For example, a doctor could potentially tell you why some artefact she saw in an image is the reason for a particular diagnosis; not merely "because 30% of the people who had this turned out to have cancer, you have cancer". That is a very unsatisfactory response, but that is the only kind of explanation AI systems can give. And then there is a whole bunch of other things. The AI wouldn't be able to understand what causes bias in the data set, because it has no idea of the societal inequalities in the process that actually generated the data. So all of these are really fundamental issues. AI has been drunk on the success stories for the last decade or so, and slowly people are sobering up and starting to look at some of these hard questions. We'll talk about a few of these.

So the prevalent thinking, like I said earlier: you have data, you feed it into an algorithm, and it spits out a model; then you go ahead and use the model in the real world. But that's not the case. Here are some examples of what happened. I don't know how many of you have come across this, but some time back there was a huge hue and cry in the Indian press: if you searched for "the ugliest language in India", Google came up with Kannada, right.

Sheri Markose

People who speak Kannada? Kannada is spoken in Bangalore and...

Balaraman Ravindran

By 40 million people. So Google actually had to apologise for this. And this is the apology, quote unquote, that they came up with, which is kind of stretching the word: "Sometimes the way content is described on the internet can yield surprising results to specific queries. We know this is not ideal, but we take corrective action when we are made aware of an issue." They are basically saying that there is no way they can proactively prevent all of these things from happening; the only way is to fix the problem post facto.

And here is another famous example, which came from the bail decision-making problem. There were these two people: Vernon Prater, a white male, was classified as a low risk of being a repeat offender, and Brisha Borden, a black female, was classified as high risk. On a scale of one to ten, Prater got a score of three; Borden got a score of eight. But if you look at the records, Prater had two armed robberies and one attempted armed robbery prior. Based on whatever bail decisions were made, he was released, and he was subsequently arrested for grand theft; this happened after the AI labelled him as low risk. Borden, on the other hand, had four juvenile misdemeanours, and after she was released there were no further offences from her.

So basically what was happening is that black people were generally being assessed as high risk. Here is the thing. The AI labelled as high risk people who did not reoffend; it made that kind of error for nearly 50% of African Americans who did not reoffend, while only about 25% of whites who did not reoffend were labelled high risk. And similarly, it labelled as low risk people who went on to commit further offences; that was the case for about 30% of African Americans and nearly 50% of whites. So you can see that there is a real bias here, and it was literally affecting people's lives. After the study was done, the US court stopped using this tool for making bail decisions. And likewise, Amazon had a recruiting tool that was shown, on analysis, to have a significant bias against women, and they had to drop it.

So there are all these things: you can take the data, train a model, and end up with all these biases. So one area that we should be looking at, in terms of improving AI to make it more reliable for the real world, is fairness, and also the notion of how the social impact of AI would play out. We really need to formalise notions of fairness and ethics in the systems that we tend to deploy and in the domains where we tend to deploy them. Of particular interest to me and our centre is the fact that notions of fairness and equitable distribution, both legally and in accepted social norms, differ very much from culture to culture. There has not been much work done in the Indian context on what constitutes fair decision making; I would say not much has been done even outside the scope of AI. But that's a separate discussion; this is something that we are very interested in and keen on pursuing.
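
As a concrete illustration of the kind of disparity described above, here is a small sketch that computes false positive and false negative rates separately per group. The records below are invented toy data, not the actual COMPAS figures.

```python
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended).
# Toy data, invented purely for illustration.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("B", False, True), ("B", True, True),
    ("B", False, True), ("B", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, pred_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not pred_high:
            c["fn"] += 1  # labelled low risk, but reoffended
    else:
        c["neg"] += 1
        if pred_high:
            c["fp"] += 1  # labelled high risk, but did not reoffend

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"group {group}: false positive rate {fpr:.0%}, "
          f"false negative rate {fnr:.0%}")
```

Equalising these error rates across groups is one formal notion of fairness; the COMPAS debate showed it can conflict with other notions, such as calibration, which is part of why formalising fairness is hard.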
And the other question, of course, is: can you explain the decisions of the model? Apart from being mandated by some of the EU regulations recently, especially in the FinTech domain, it is very important that we are able to explain the decisions that we make. It's not enough to say "your loan was declined"; you need to say why, and how the applicant can redress it. All of this is important, and AI systems, like I told you earlier, are not really in a great position to do that. Here is a quote from Feynman: if you can't explain something to first-year students, you haven't really understood it. And here is the self-driving Uber car crash in Arizona; they had no way of explaining what really went wrong.

And then you look at this: we now talk about billions of parameters in models, and it becomes more and more challenging. What is a billion-parameter model basing its decision on? How do I explain that to people? So explainable AI is becoming harder, especially because more and more of the state-of-the-art solutions are coming from these billion-parameter neural networks. And there are other concomitant issues. You also have this notion of debugging the AI system: how robust is my network, how robust is my deep neural network? Often you need explanations in order to do the debugging; you need to understand why it went wrong in some way, so that you can actually fix it. So even if the system is not explaining itself to the end user, you need explanation as a debugging tool. That's another area we should be looking at.

I'm interested in a more human-in-the-loop approach to this kind of explainability, because I'm not a huge believer in building end-to-end, standalone reinforcement learning agents. I believe AI agents will be coexisting and co-working in the workspace with humans, and at that point you need to be able to explain decisions: first of all, to post facto explain why you did something, but if you're going to be in a team, you also need to a priori explain the decision so that the human buys in to whatever you are trying to do.

Explanations nowadays mostly work like this. Here is the original image; the system says "this is a dog", lights up a few pixels in the image, and says, I said it's a dog because of those pixels. And then it says "it's a cat", lights up a few more pixels, and says, I think there is a cat in this image because of these pixels. Now, to a human looking at it, the first one seems like a reasonable explanation, because it lit up the face of the cat; but here it just lit up the two hind legs of the cat. When you are trying to find out if there is a cat in the image, especially when the full cat is visible, you are not going to hang your hat on the two legs at all. So for a human to understand what these explanations are is challenging...
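
A minimal sketch of the pixel-level explanations described above, assuming a PyTorch image classifier: a vanilla gradient saliency map, which "lights up" the input pixels to which the predicted class score is most sensitive. The input file name is a placeholder.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained classifier; any torchvision model would do for this sketch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("cat.jpg").convert("RGB")  # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)

# Forward pass, then backpropagate the top class score to the input pixels.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency: max absolute gradient over colour channels, per pixel.
saliency = x.grad.abs().max(dim=1)[0].squeeze()  # shape (224, 224)
print(saliency.shape)
```

As the example of the cat's hind legs shows, such a map tells you where the gradient is large, not whether the highlighted evidence would actually convince a human.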

Anindya Chakrabarti

There is a question from Shyam. Would you like to take that?

Balaraman Ravindran

Sure, I don't mind taking it. I'm sorry, I turned off all the notifications so that my slides are completely visible on my side, so I was not able to see the question.

Shyam Sunder

Okay. I think early on, you made a distinction between AI extracting information from data, versus humans having information about the structure and the processes. Now, is it not the case that ultimately humans also extract information about whatever they know about structures and processes from data? Is it just a different level of analysis, and a different level of data?

Balaraman Ravindran

That's a great question; that's a fantastic question. The point about many of the things we understand about the structure of the world is that the structure was not derived from the data by every human who is using it. You have Newton, who understood gravity and formulated the laws of motion, and everybody uses them. But it's not that everybody who sees objects in motion is able to derive that structure from the data they have. The fact that we are able to communicate and transfer the knowledge of the structure from one human to another is what makes us so efficient at generalising and solving new problems. And in fact the problem is, I don't have a way to communicate that structure to the AI, to a deep neural network, efficiently. So I completely agree with you: we don't have a way of transferring the structure that I already know, that some other process has analysed and derived. I don't have a way of giving it a priori to the AI and saying, hey, look, here's the structure; you go ahead and build on top of this. So the reason really...

Sheri Markose

You can't transfer what is already learned from one AI to another? They have to learn it from scratch?

Balaraman Ravindran

In very limited settings you can, right. In very limited settings you can, but not in general.

Shyam Sunder

What would be an example of the ability to transfer structural or process knowledge in AI settings?

Balaraman Ravindran

Okay, so one example that people use a lot is that these deep neural networks have this kind of a process where the initial layers are trying to extract some kind of features: it could be small textures, or repeating patterns, or edges and changes in colour, colour gradients, and things like that. It picks up these kinds of patterns from the images, and then the later layers kind of put them together to form objects, and finally it recognises objects and makes a decision. Those early feature-extracting layers can then be carried over to a new task. So I agree with you on that. But then I have to go back and ask: what are the most successful large-scale deployments of AI that are out there? And most of these turn out to be deep learning systems nowadays, right?
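
A minimal sketch of that transfer recipe, assuming PyTorch and torchvision: keep the pretrained feature-extracting layers frozen and train only a fresh classification head on a new task. The five-class task and the dummy batch are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Start from a network pretrained on ImageNet: its early layers already
# detect edges, textures and colour gradients, as described above.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for a hypothetical 5-class task.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are trained.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
x = torch.randn(8, 3, 224, 224)   # batch of 8 images
y = torch.randint(0, 5, (8,))     # dummy labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```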

Now, far be it from me to argue that AI is deep learning. In fact, I'm going to say that we shouldn't be blindly following deep learning. But you asked me for an example of how transfer works, and I'm just telling you the most popular example people quote now, which happens to be deep learning. And even if you want to say, okay, I want to use AI systems to do inference, rather than deep learning systems, and I'm going to transfer knowledge and things like that, again I would say it is limited, in the sense of: what is the language you have to describe the models in? And it's true, even humans are not perfect in that sense: you need to know the language of physics before you can adopt models of physics and transfer them. So that limitation is still there. But like you said, success also has currency.

And of course, these kinds of explanations become even more critical in healthcare. Since we have enough discussion going, I had a small fake dialogue here, so I'll run through it very quickly. Suppose the AI looks at an image and says, hey, this eye seems to have diabetic retinopathy. Somebody asks why, and it comes back and says, this looks like lesions that I saw only in people with that particular retinopathy. And the doctor says, no, this is not that retinopathy, just normal age-related macular degeneration. And the AI says, well, I never saw that earlier; you did not show that to me during training, so it's your fault, not my fault; based on my training data, I can only tell you that this is that particular pathology. Of course, an AI is not going to say any of these things, but the question we really have to ask is whether it was our fault; it is obviously our fault in building it, in not making the data set representative.

And of course I can talk about robustness. That's a school bus; add some noise; man, that's an ostrich. Whether or not this is a peculiarity of deep learning, it is something you really need to understand: how do these adversarial inputs affect the performance of these deep learning systems? Likewise, panda plus noise becomes a gibbon, and so on and so forth. Most of these deep-learning-based systems being deployed are not really robust.

And here is an example from a football domain. Here is an agent that is learning to kick the ball into the goal. Mostly that's fine; but then the goalie does something completely unexpected, like lying on the ground and flailing its legs, and the player gets completely confused and basically wanders off the playing field like a drunkard. That's because the goalie doing that unexpected thing is not something it had seen during training. So it's not just a question of getting a limited model: even when there is a simulator capable of simulating all kinds of behaviours, you typically don't explore the entire input space that you would need in order to build robust models. And that particular fragility is there in the RL case; it would have been there even if I had been using a lookup-table representation.
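
A minimal sketch of the "panda plus noise becomes a gibbon" phenomenon, using the fast gradient sign method (FGSM) as the attack; the model choice, the random stand-in image and the perturbation budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# A stand-in input; in the famous example this would be the panda image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
label = model(x).argmax(dim=1)  # the model's own prediction as the "true" label

# Gradient of the classification loss with respect to the input pixels.
loss = F.cross_entropy(model(x), label)
loss.backward()

# FGSM: take a small step in the sign of the gradient, which is the
# direction that most increases the loss.
epsilon = 0.03  # perturbation budget, an illustrative choice
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())
```

On a real, correctly classified image, a budget this small is often imperceptible to a human, yet frequently enough to flip the predicted class.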
And of course, then there is all this question about data confidentiality. I give data to build a model, and then somebody can actually de-anonymise it. For example, there is the famous story from when Netflix released their data for the Netflix challenge: researchers were able to reverse Netflix's anonymisation. There are all these stories about other kinds of anonymised data being de-anonymised after being released in the public domain, like the health records of a governor, and so on and so forth. So when I start giving data to AI systems for building whatever they want, there is this whole notion not just of privacy but also of trust, both in how the data is being handled and in how much I can trust the decision. So there are all these issues with regard to privacy and security that start affecting how reliable, or how ready, the AI is for the real world.

Here are a few things I put together from one of the documents the Indian government produced on notions of building responsible AI; it is largely derived from the principles of responsible AI put out by the World Economic Forum. You say that AI is responsible if you can understand the functioning of the AI, which leads to more robust development. The second thing is if you are able to explain the functioning to end users, which engenders greater trust in the operation. It should behave consistently across stakeholders, which leads to fairness. And you should pay attention to inclusion: make sure the system does not deny service to anyone, not just in the sense of fairness but also in the sense of accessibility and exclusion. Then there are other issues, like accountability and liability: who is at fault? There are so many stakeholders involved in building an AI system, and without a clear chain of responsibility and a way of fixing liability, building large-scale commercial systems is going to be fraught with problems. And of course I already talked about the privacy and security risks, and there are also societal concerns, in terms of using fake images and other things for maligning people, and the impact of the use of AI on jobs.

Then there are a bunch of other things we should be talking about. There is the transfer learning issue, and not just transfer in the classical sense: the world is not fixed. There is going to be drift in the data and drift in the processes that generate the data, and just the act of putting AI into the system is going to cause changes in the way the data is generated. So we have to worry about those issues as well. Models have to be transferable, because we want to bring in domain information to build on and not have to reinvent everything; but more than that, the data is constantly changing.

And we have to worry about what I call the whole bunch of software engineering issues for AI, which we have really not started thinking about when we think about AI in the real world. Things like: what is the actual life cycle for AI systems? How do we maintain AI systems? Every AI system needs to have some kind of performance monitoring happening, and you have to make sure that there are ways in which you can continuously adapt it, and so on and so forth. A lot of these issues need more careful thought when you are trying to build these systems.

And then we have this whole thing about the delivery of the AI system. Building everything on the cloud is great; but, just as you saw now, I was not able to connect from my primary network and had to use a secondary network. That's fine for me, because I have the choice; but quite often you need to make sure your AI is compact enough to run on the edge. These are all engineering considerations that need to be taken into account when we are building AI systems.

So, the way forward. AI is undoubtedly successful in very specific domains. I always say that the current big waves of success in AI have been in domains where the cost of failure is not very high. For example, if my Alexa voice recognition system doesn't understand me, I just repeat something a couple of times; that's not a big deal. In such cases maybe the AI can be let out into the world unattended and it will still perform okay. But there are cases where the cost of failure could be very high, especially in the finance domain, in legal settings, and in healthcare. There we still need some kind of human oversight on AI systems; we are not at a point where we can use the AI unattended. On the contrary, I would say the current AI-based systems' understanding of the world is about that of a two-year-old; I'm talking largely about the currently popular systems, mostly the deep learning and RL systems. And we have a long way to go to human performance, human-like performance.

But then I have to ask this question: when I say human-like performance, which human are we talking about? There is this Moral Machine experiment run at the MIT Media Lab; I'm pretty sure most of you know what I'm talking about. There is a car with some passengers in it, and there are people crossing the road.
You could either hit a barricade and cause some kind of injury to the people in the car, or you could swerve and hit the pedestrians and cause some kind of injury to them. So what would you do: would you hit the barricade, or would you hit the pedestrians? It turns out that there is significant cultural variation. Look at some of the samples I have: people from Japan, Norway and Singapore were most keen on sparing pedestrians, while the least emphasis on sparing pedestrians came from countries like China, Estonia and Taiwan. Likewise, in some of the surveys, there was most emphasis on sparing young people in France, Greece, Canada and the UK, and less emphasis on sparing the young in Taiwan, China and South Korea, to name a few. Now, I want to build a self-driving car, and it is going to make these kinds of decisions. So which humans should it match? It will potentially depend on whatever space it is operating in. This is a very simplistic example, but it raises the point that even what we mean by human-like performance is not clear.

This is my fun slide to close off with. In the meanwhile, while we are working on all of this, we still have to handle all these AI systems with care: AI is like a two-year-old, but more like the two-year-old on this slide than any real two-year-old. So that's basically it. My summing up is this: we are closer than ever to functional AI, but I believe we thought this back in the 50s, with the perceptron. So we still don't know how close we really are to functional AI. But importantly, it has started affecting many aspects of everyday life. We know that for sure, because lawyers and lawmakers are talking about it. In fact, I don't know about the UK, but our Prime Minister mentions AI in at least a couple of talks every year.

Just give me one second; this is my last slide and I'll be done. Okay. So for me, the biggest danger from AI is humans rushing to market before the technology is ready. AI itself is not the danger; it's all of us. So we really need discussions on regulation for AI, on who is at fault when something goes wrong ("really, it wasn't your fault? whose fault was it?"), before we let AI out into the world. So my answer to "Is AI ready for the real world?" is:

Maybe not.

And that's our centre; a quick plug for it, do check it out. Now I'm ready to take questions.

Shyam Sunder

Thank you, Professor Ravindran. Just to follow up on this: I think lawyers have had to deal with this for a long time. I'm not a lawyer myself, so bear with me. When automobiles and locomotives were introduced, almost a century and a half ago, people raised questions: when there was an accident and somebody was killed, was it the car or the driver? Who was responsible for it, the machine or the man behind it? And I think the argument you mentioned about accountability of AI appears similar to me. Even today we can see shades of that argument in the gun debate in the United States. The gun lobby argues, as you know, that guns don't kill people; people kill people. I'm not arguing that that's a correct argument, but it is an argument. So the accountability issue is not that new, and I'm sure it goes way back before automobiles and locomotives. What is different here?

Balaraman Ravindran

So the problem with AI is that we still don't really have a complete understanding of it. There are the people who generate labels, the people who build models with that, the people who actually deploy them, I mean who build systems around the models, and then finally the people who use those systems. Now, with automobiles, at least, I know that there are some experts somewhere who know exactly what the automobile is doing and why it is doing that. Can people still hear me? I see a message saying that my internet connection is unstable.

So at least there, there is somebody who can sign off saying, I am completely satisfied that my system is passing all safety checks. But when you're talking about the modern generation of AI systems being put out, I don't know if anybody is willing to sign off like that. And even if somebody signs off saying the AI is 100% reliable and it is the person who used it who is at fault, I'm not going to buy it, because we are working on this stuff and we know how unreliable it can be. So that's one part of it: we need different regulations.

But even when the AI goes wrong, I have no idea why it went wrong. So it's just that the technology is not, at this point, robust enough for me to say, hey, I'm happy releasing it to the world and it's the end user's responsibility. You can always put in a disclaimer saying, I'm not sure how this thing will behave when you actually start running it on real data. This is something that happened a while back: Google had this product they built for diabetic retinopathy; it was fantastic, it was doing great. Then they came and used it in the Indian hospitals from which they had gathered some of the data. It turns out they had curated the data, picked the best of the images, and built the model on top of that; when they tried to put it out into the real world, there was so much variability in the data that the model was not doing well. So now, if Google had tested it under very nicely controlled conditions, or even in a Western hospital where there is a lot more quality control in how those images are made, and they come and tell people, hey, I've done this testing, go use it in the real world, and somebody uses it in the real world and finds that it's not working, then whose fault is that...

Shyam Sunder

What I'm trying to get at is: is it a quantitative issue, that AI is in an early stage of development, and we could wait until its reliability improves, until its error rate is reduced to some specified value of epsilon or less? Or is it a qualitative issue of principle, that AI is somehow different from other technologies that we have lived with for thousands of years?

Balaraman Ravindran

There is a strong quantitative argument to make, to say that we are still not ready to put it out there. But there is also a qualitative argument, partly. When we say that AI is a technology, that's not correct; it is actually a collection of technologies and subsets. The kinds of checks and balances you should have for AI that works with vision, versus AI that works with language, are different, because the technologies are very different. It's not like we have come to a point where we use the same architecture and the same training regimes for working with one versus the other. They are, in fact, a collection of technologies, and I believe that for each vertical in which you use the, quote unquote, umbrella term "AI technology", you would need to evolve regulation separately. Once we have that kind of clarity, that we are actually talking about different things in different domains, for example that FinTech needs a completely different set of regulations from healthcare for AI adoption, then we are in a better position to talk about readiness. So that would be the other differentiator for me: not to treat AI as a monolithic technology and try to deal with it as one.

Anindya Chakrabarti

Excellent, thanks for sharing your wonderful thoughts. Sheri, would you like to ask your question quickly? Yeah.

Sheri Markose

I can see Vincent getting ready for his talk. But, I mean, Ravi, about structure and learning: you see, that video you had about the football and the goalie. When the goalie just lay flat on his back defending the goal, the footballer got confused, right? Because, as you said, that was not in its training. But humans have similar problems. I mean, the entire great financial crisis was because of a particular instrument: these were commercial paper. In the old days we knew what commercial paper was, right; it is one of those things that formed credit chains, vulnerable credit chains. But then we got a new avatar of commercial paper in the form of securitised assets. It was classified as commercial paper, and suddenly that threw people off; we couldn't understand that it could cause problems, so we didn't respond to it. So how is it any different? You're saying, okay, the AI wasn't trained for it, but humans make such big errors as well, because we are blindsided. We are fighting yesterday's war. We are also hidebound, isn't it?

Balaraman Ravindran

Yes, we are, we are. But the problem here is that if there is somebody who can understand this and start explaining it, humans can pick that up and use it, right. We are talking about very simple physics there in the football domain: it really doesn't matter what the goalie does; I have a much clearer path to the goal and I should just, you know, lob the ball in. But all this existing knowledge is not available to the AI, which is only able to absorb the data. So that is the challenge, more than anything else.

Sheri Markose

This is the point: a lot of people say AI doesn't have common sense. Instead of saying, now the pathway to the goal is completely clear, because I don't see a defender, so I should just kick it in, it says, I expected to see a defender; I was trained with the defender standing upright; suddenly I'm lost. Yeah, I get it. That's excellent; actually a very, very good talk. Any more questions? Vincent, do you want to, because both of you are talking on similar things, do you want to say something on Ravi's talk?

Vincent Müller

I think not at this point, because I think that would be unfair; I only heard a little bit of the talk. So I can just say something about the bit that's in the chat there: the "guns don't kill people, people kill people" point. That's a pretty standard discussion in the philosophy of technology. So we know that, yes, the moral agents that are responsible for actions are not the guns. But we also know that certain technologies enable certain kinds of things. So it is clearly the case that it's way more dangerous if you and I have a horrible argument and we both have a gun in our pockets. If we have no tools in our pockets, we will both go away with a bloody nose. If we have knives, we might injure each other. If we have guns, then very likely somebody is going to get killed. So technologies have affordances, and these influence the outcome of human actions quite significantly. That is why we do think that there is a point to having an ethics of technology.

Sheri Markose

Georg is joining us from Canada. Georg works a lot on the embodied self and self-structure learning. Anything you want to add at this point?

Georg Northoff

No, not really. I think what is important, as was pointed out, is that if there is a computation, it is the structure that matters, not just a specific content. And what that computational structure is, of course, is the second question; I would argue it is this particular dynamic and topographic structure. And the other thing which is important from my point of view is that the self is not just inside the brain or the body; it is sort of a virtual expansion, what philosophers call the point of view. It's not just the first-person perspective; it anchors you within the environment. We were speaking about affordances; I think that's a key point here.

Anindya Chakrabarti

This is a follow-up on this idea of structure. I don't know the mathematics of many of these machine learning techniques, but one thing that has always surprised me is that if I show a picture of, let's say, a horse to a very young kid, twice or thrice, then after three times the kid will figure out that this is a horse, this is a cow, and this is a goat. Whereas if I have to train something like an ML kind of model, it usually requires thousands of iterations. So does it mean that there is a separate structure from which we infer about things to begin with, or is the learning somehow fundamentally very different? Essentially the question is: why does it take only a couple of examples for the human brain to figure out that there is a pattern, rather than thousands?

Sheri Markose

Georg is the neuroscientist here. I mean, yeah, the belief is that we learn very differently to AI. That's the whole point; that is the big debate, between people who say we need a lot of structure, that it is already keyed into the genome or whatever, that all the structures are given, like Chomsky talks about us being born with the whole structure of language, versus an AI that has to learn from scratch, or however we think we need to train it. It's our projection that the AI could sort of pick up on it, you know, take large numbers of alphabets or words and then string them together using statistics. So I think that is the debate of the century.

Balaraman Ravindran

So there is one other point: as far as vision tasks go, recognising a horse from a cow is not the first thing the baby is solving, right? Even if the baby is not born with innate structure, it has been constantly absorbing visual stimuli throughout its life since it was born. Some of these features that are needed for representing the image have already been built by looking at other things. So there is a very natural, strong transfer that humans are very good at, and that doesn't happen with a tabula rasa AI system. It is literally learning from...

Anindya Chakrabarti

So it's more like cumulative learning for us, and therefore we sort of extract the patterns better.
