Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 

Season: 4, Episode: 5


AI & Transport

To solve the big questions of autonomy and transport, collaboration is essential. Our guest on this episode is Professor Siddartha Khastgir, Head of Safe Autonomy, WMG, University of Warwick.

Siddartha emphasizes some of the huge challenges in transport with regard to AI, and we discuss these safety-critical scenarios.

Podcast production by boardie.com

Podcast Host: Sean Riley

Producer:  Stacha Hicks

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at
www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.

Episode Transcript:

 

Sean:                  Welcome to Living With AI from the Trustworthy Autonomous Systems Hub. In this podcast we’re going to talk about the transport sector in relation to AI, and specifically trust. I’m your host, Sean Riley, and the date we’re recording this is the 25th of April 2024. Our guest for today is Siddartha Khastgir. Siddartha, thank you so much for making time for the podcast. Can you just give us a brief introduction?

 

Siddartha:        Thank you Sean, thank you for having me here, an absolute honour. I’m Siddartha Khastgir, I’m a professor at WMG, University of Warwick in the UK, where I lead the research on verification and validation for connected autonomous systems, which includes autonomous vehicles on land, autonomous ships in the sea and autonomous drones in the air. So anything to do with autonomy anywhere in transport is my area of interest.

 

Sean:                  Excellent. That’s fabulous. Well, we’re going to talk about some of the challenges and some of the things that are going well in autonomous vehicles, but I was thinking, as I was researching before this chat, that we use it already, right? Aircraft use autopilots. There are rail systems that use it. Why is there quite a lot of talk about it now, and even worry about it?

 

Siddartha:        I think the trick to the answer to your question actually lies in the question itself. So the fact that you think that we use autonomy already is also slightly concerning, because we use aspects of autonomy but we don’t quite use autonomous systems. The fact that I’m saying autonomous vehicle, there isn’t anything called an autonomous vehicle that we have right now. We have some levels of automation in our vehicles, but we don’t have autonomous vehicles that you and I can buy. So there’s a lot of misconception around terms, around understanding of what these systems do. So I think that’s the reason why there’s increasing focus now, because as autonomy is increasing in our systems, we need to be very careful that we use them in the correct manner, and using them in an incorrect manner can be actually more safety critical and more dangerous, and defeat the purpose of actually making our roads safer. 

 

Sean:                  Well, I think we’ve mentioned on the podcast a few times that actually, in terms of roads anyway, it would almost be better if we could switch everything overnight, and often the problem lies with the mixture of some autonomy and some kind of manual driving, doesn’t it? I mean, is that one of the big areas of concern?

 

Siddartha:        I won’t say that’s an area of concern. I would say that’s the world that we live in. That’s the reality, and the challenge that we have as researchers and in industry is to make sure that we’re able to create a system that works for the world we live in. There’s no point creating hypothetical imaginary systems which don’t serve the purpose. So, in a sense, yes, you’re right to say if we switch- If you flick a switch and suddenly everything is autonomous tomorrow, then it’s an easier world, I would say yes. But that’s not going to happen, so we need to come up with a way where autonomy and human-driven systems, or autonomy and human interaction, can be done in a safe and responsible manner. 

 

Sean:                  And one thing that we often come across when we talk about autonomy and autonomous systems is that people hold them to a higher bar than a human equivalent. It’s not enough to be as good as a human. You have to be that much better than a human, or the system has to be that much better than a human.

 

Siddartha:        I think that’s a very good philosophical argument to make, but at an engineering level, if I ask my fellow engineers what does it mean to be even as good as a human, we don’t quite know. As in, what does it mean at an engineering level? What metric am I using? So I think it’s- I’ve heard this a lot, not only in the UK ecosystem but also internationally, oh, we need to be better than human-driven vehicles or we need to be as good as human-driven vehicles. But what does it mean? So I think we need to move away from some of these very, very philosophical discussions to be a bit more practical and a bit more tangible in what exactly it means to say your system is safe. 

 

Sean:                  I mean, you know, obviously the classic thing to sort of judge all of this by, certainly, and we’re focusing on cars a little bit here for obvious reasons, you know, is if they crash less than humans and cause less injury or death, then that is, I suppose, a metric which we’re open to.

 

Siddartha:        So- But that is a very retrospective metric. So if I introduce an automated vehicle today, or an autonomous taxi today, how do I prove that they will crash less? 

 

Sean:                  Yeah, good question, yeah. Well that’s the $100,000,000 question. 

 

Siddartha:        Yeah. So the fact that you’re saying if it crashes less is a metric, I fully agree, but that’s a retrospective metric. So that’s when you’ve already introduced them for, let’s say a year or two years or three years, and then you see how many accidents did the automated vehicle cause? How many accidents did the human vehicles cause? And then you do a comparison. That’s fine, but what do I do on day zero when I’m introducing it? How do I prove that? And I think that’s where we feel there needs to be a slight shift in the conversation to say the ambition should always be to be better or as good as human-driven vehicles but at an engineering level we need to move a few levels down from the philosophy to say at an engineering level this is my requirement that you need to meet and if you meet the requirement of X, Y, Z, then your system is safe and I’m going to approve you, be it the autonomous vehicle on the road or the autonomous ship or the drone.

 

Sean:                  I think there’s also- There’s an element here of okay, we’re talking for obvious reasons about safety and I think it’s really an important topic, but also if these vehicles are costing more in terms of resources and energy to drive than a human driver, then is there an issue there as well, you know? If we’re having to ship, you know, terabytes of data to the cloud to be processed to allow this vehicle to drive properly, then should we not just use a human? 

 

Siddartha:        That’s a really interesting point Sean, and I think a point that- It doesn’t receive as much importance as it should. The carbon footprint of data is not something that the ecosystem wants to discuss. The whole hype and push for AI is, okay, it’s a great thing and we should have that, but it also has a carbon footprint to it; the data in cloud services and data storage have a carbon footprint too. So that’s something we need to consider, absolutely. But I think the bigger question to answer over here is the fact that we right now are living in an ecosystem where data, or the amount of data, is increasing tenfold if not a hundredfold as we speak, and the way we actually are treating data or storing data may not be viable for the times that we have a much larger population of automated vehicles. If you speak to any developer today, they would say for one day of operation they are getting terabytes of information, and they’re not going to store terabytes of information for years and years, so the whole concept of data and data processing, managing, storing will need to be reinvented when these systems become- Or go mainstream.

 

Sean:                  Makes a lot of sense. I mean, it kind of brings us to one of the overarching questions of this podcast: what are the challenges, and what’s meant by responsible AI? So I suspect that kind of ties into that. But what else is under that responsibility umbrella when it comes to transport?

 

Siddartha:        I think you really obviously always start with safety when it comes to responsibility. From a- Responsibility goes beyond safety, so that’s the bare minimum you need to have. You need to be societally acceptable. Your system needs to be societally acceptable, which brings in the concepts of ethics and morals associated with that. But I think your point about data and the carbon footprint is very important, because responsibility also means considering that carbon footprint. You cannot say, oh, we should move towards electric vehicles, but then generate a huge carbon footprint storing the data that these systems will bring in. So I would say safety, societal acceptability and carbon footprint would be, for me, the three key areas of responsibility. 

 

Sean:                  We’ve kind of rather obviously gone straight in, as we’ve said, for safety, the potential harm, the concerns, the worry. What are the upsides? I’m sure there’s lots of them and I can think of them, you know, from just a very, very kind of self-centred going to the pub and getting a taxi back that doesn’t have to worry about whether- Or you know, a vehicle back that doesn’t have to worry if I’ve had two drinks of beer or not. But what else is there out there? What are the upsides to this?

 

[00:09:46]

 

Siddartha:        So I think for safety, we have this incorrect mental model that the safety benefits of automation or automated vehicles come only when you can go from the pub back home after a few drinks. That’s not what it’s actually for.

 

Sean:                  Sorry about that.

 

Siddartha:        But if we’re able to design an automated system that can work very nicely and coexist with the human-driven vehicles, that’s actually a bigger safety benefit, because if you look at the kinds of accidents that are happening in the UK, but also internationally, just in the UK we’ve got around 1,600 to 1,700 people dying on UK roads every year, and that number hasn’t gone down for the last 10 years. Globally, it’s 1.3 million people dying due to road accidents. If you look at the highly severe injuries that are caused on UK roads, there are roughly around 29,000. The impact of those accidents on the NHS goes into the billions. So, if the introduction of automation can help us bring those numbers down, that’s the safety benefit. Going back home from the pub is, I would say, a byproduct of that. So for me, the focus should be how can we actually bring automation and design it in a way that people can use it as part of their normal driving, while helping them in areas where they’re not, I would say, at their peak performance.

 

Sean:                  This has always been a kind of selling point of any automation or automated system, to take away those repetitive, very boring jobs where people will eventually make a mistake because they, you know, have to do the same thing over and over again. I know that you’ve been working on a TAS project, verification and validation safety research in autonomous vehicles, was that right?

 

Siddartha:        Yes, yes. I might say automated vehicles, but yes. 

 

Sean:                  Automated vehicles, yes sorry. Close, close. I was nearly there. So is that an ongoing project? Can you tell me a bit about the project and how that’s gone and what’s going on there?

 

Siddartha:        Absolutely, I’m very happy to. It is an ongoing project, because if I had solved it then I would be a billionaire today, actually. But we haven’t yet. But we are on track to solving it. So our research focuses on how do we actually prove that automated vehicles are safe. What are the things that are needed, what are the building blocks that are needed to actually put together the argument to convince you and your mother that these vehicles are safe? And we’ve been doing a lot of work in the UK but also internationally with our partners in Europe, in the Far East, in Japan, in South Korea, Canada, US, but a big part of our work is also on working with the policymakers and regulators, because a lot of the safety questions are answered and need to be answered by regulators who need to consider the wellbeing of society. You cannot let the developers or industry decide how safe a system should be. That decision needs to be made by regulators and the policymakers who have the best interests of society in their hearts. So that’s the reason why we do a lot of work with policymakers, providing them with the research evidence on which they can frame their policy. So we’re looking at research in the area of scenario generation, as to what kind of scenarios should you test these automated systems on, be it road vehicles or autonomous ships or drones. You will not be able to test everything in the real world, so you’ll need to go into simulation and virtual reality. How do you actually qualify or prove that what you see in the simulation or the virtual world is how they would react in the real world? Because only then can you use the evidence that you generate from the simulation as part of your bigger safety argument. And then finally, there is this concept of, at an engineering level, defining how safe is safe enough, and the metrics and the thresholds for those metrics is what we do. We also do a lot of work in the area of standards, because not one- There’s no one single developer. There are multiple developers in multiple countries and all of them need to speak the same language, so we do a lot of work at international standards bodies, ISO, SAE and ASAM, to get people to speak the same language. Yeah. That’s what our research focuses on.
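
To make the scenario-based testing idea above a little more concrete, here is a minimal illustrative sketch of the kind of loop involved: sample concrete scenarios from a parameter space, run each one in simulation, and check the outcome against an agreed pass threshold. All of the names, the vehicle dynamics and the 1 m clearance threshold are hypothetical assumptions for illustration only, not the project's actual tooling or metrics.

```python
# Minimal sketch of scenario-based testing for an automated driving function.
# Everything here (Scenario, run_in_simulation, the pass criterion) is a
# hypothetical illustration, not the project's actual tooling.
from dataclasses import dataclass
import random


@dataclass
class Scenario:
    ego_speed_mps: float     # speed of the vehicle under test
    pedestrian_gap_m: float  # distance at which a pedestrian steps out


def generate_scenarios(n: int, seed: int = 0) -> list:
    """Sample concrete scenarios from a simple two-parameter space."""
    rng = random.Random(seed)
    return [Scenario(rng.uniform(5, 20), rng.uniform(10, 60)) for _ in range(n)]


def run_in_simulation(s: Scenario) -> float:
    """Stand-in for a simulator: returns the minimum clearance to the pedestrian (m).
    A real pipeline would call a validated simulation model here."""
    braking_distance = s.ego_speed_mps ** 2 / (2 * 6.0)  # assume 6 m/s^2 braking
    reaction_distance = s.ego_speed_mps * 0.5             # assume 0.5 s system latency
    return s.pedestrian_gap_m - (braking_distance + reaction_distance)


THRESHOLD_M = 1.0  # "how safe is safe enough", expressed as a minimum clearance

results = [run_in_simulation(s) for s in generate_scenarios(1000)]
failures = [r for r in results if r < THRESHOLD_M]
print(f"{len(failures)} of {len(results)} scenarios breached the {THRESHOLD_M} m threshold")
```

In practice the simulation model itself would have to be shown to match real-world behaviour before its results could count as evidence, which is exactly the qualification problem described in this turn.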

 

Sean:                  I was thinking when you were talking then about simulation that, you know, how many simulations are enough, you know? How many times do you have to run different scenarios? And I suppose there are equivalences in some of this in a different field, with generative AI, where we’re asking generative AI to come up with pictures that have never been done before, and it seems able to do that. So is there a parallel here, you know? This kind of, I think it’s known as one-shot, isn’t it? Where you give it something it’s never seen before and hopefully it’ll be able to deal with it. Am I talking the right language there? 

 

Siddartha:        Partially, I would say, partially, partially, absolutely. I think your first question about how many simulations, that is again the million dollar question that we’re trying to answer. That goes back to the bigger question of how safe is safe enough. So how many scenarios are enough. So a lot of our work is actually focused on that itself, as to how do we make an argument that X number of simulations is enough? So that’s our research ongoing right now within our team. We have published some work in this space also. But I think the bigger issue in this space is what might be enough for you and me may not be enough for another person on the street. So this concept of safety is a very emotive and a very personal thing, and how can we get an agreement on this concept at a social level is a bigger question that engineers cannot answer and should not be allowed to answer. And that’s where I said the policymakers have a big role to play and I would like to see that collaboration between policymakers and the researchers and industry to say- It’s up to the industry and the researchers to provide the evidence to the policymakers, who ultimately make the decision. 

 

Sean:                  And there is actually an equivalence here as well, which is that a human who is going to drive a car has to pass a test, and then there is experience that informs their abilities as they get more and more experience and they encounter more things. There’s presumably an equivalent here, isn’t there, with automated systems, that they’re going to in theory learn and get better with time?

 

Siddartha:        Yes, that would absolutely happen. That would absolutely happen. And if they’re not, then we’ve done something wrong, actually. One thing I just forgot to mention in your previous point: this whole concept of gen AI that you raised earlier, I think it’s a fantastic piece of technology, but what we try- What we often forget in this space is it’s a different concept when you put that in a safety critical application as compared to asking it to generate another image for you. So, whenever you bring in concepts like gen AI, or AI in general, in a safety critical application, the amount of rigour and the evidence that you need to provide to prove that it’s safe is a very different ball game as compared to a recommendation on Netflix or on Amazon. I would not die if somebody gave me a different book recommendation than what I wanted. But if my automated vehicle didn’t detect a pedestrian, then maybe I’m not dying, but another person could, and I think that’s the huge difference between using AI in safety critical applications and a normal recommendation application. 

 

Sean:                  Absolutely yeah.

                            

Siddartha:        And that’s the bit that I would like people to appreciate, because in my conversations in the space of AI, people do not appreciate the difference between using AI in safety critical and non-safety critical applications.

 

Sean:                  Understood, yeah. I mean I suspect- My reason for mentioning it was more the thought of nobody else may have asked for, I don’t know, a pink kangaroo jumping over Sydney Harbour Bridge, or something, before. So it has to kind of come from nowhere without actual training of that specific thing. 

 

Siddartha:        Absolutely, and it is useful for those kind of aspects absolutely, but maybe not for making the decision itself. 

 

Sean:                  Yeah, okay. I mean, the other thing I did wonder is if some of these autonomous systems had different layers, so you know, you might have a system that is quite creative but then it’s being bounded or contained by a system that’s more safety critical. Is that the kind of architecture that gets used?

 

Siddartha:        That’s an architecture that has been doing the rounds a few times now. A lot of people have said that. I think in a lot of people’s minds, that’s the only way they can prove these systems are safe, but I think this area is so fast-developing, I would not make the claim that this is the only way right now. But yes, that’s an architecture, absolutely. We call it a kind of a watchdog model. You’ve got the more deterministic core- Sorry, the more deterministic outer covering, and you have the AI core. 
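
As a rough illustration of the watchdog idea described here, the sketch below shows an AI planner whose proposals are clamped by a simple deterministic outer layer before they become commands. The planner, the limits and the field names are all hypothetical assumptions made for this example, not a real vehicle stack.

```python
# Minimal sketch of the "watchdog" architecture: an AI core proposes actions,
# and a deterministic outer layer overrides anything outside hard safety limits.
# All names and limits here are illustrative assumptions.

MAX_SPEED_MPS = 13.4   # hard limit, roughly 30 mph
MIN_HEADWAY_S = 2.0    # minimum time gap to the vehicle ahead


def ai_planner(sensor_state: dict) -> dict:
    """Stand-in for a learned planner; in principle it could propose anything."""
    return {"target_speed_mps": 15.0, "headway_s": 1.2}


def deterministic_watchdog(proposal: dict) -> dict:
    """Deterministic outer covering: clamp the AI proposal to the safe envelope."""
    safe = dict(proposal)
    safe["target_speed_mps"] = min(safe["target_speed_mps"], MAX_SPEED_MPS)
    safe["headway_s"] = max(safe["headway_s"], MIN_HEADWAY_S)
    return safe


command = deterministic_watchdog(ai_planner({"speed_mps": 12.0}))
print(command)  # {'target_speed_mps': 13.4, 'headway_s': 2.0}
```

The appeal of this arrangement is that the outer layer is simple enough to be analysed exhaustively, so the safety argument does not have to rest entirely on the behaviour of the AI core.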

 

[00:20:00]

 

Sean:                  So what do we see coming down the line with regard to AI and transport? You know? What are we going to see in the next few years?

 

Siddartha:        I think one thing’s for sure, Sean: the technology is coming. AI is coming. Nobody can stop it right now. Regulators can be ready, or get themselves ready, I would say, rather- I don’t think any regulator is ready. So regulators can get themselves ready. They don’t have a choice. So they just need to get their house in order, and in order to do that, I don’t feel regulators alone can do that, and that’s where academia and industry have a responsibility also; this needs to be a collaboration, a three-way collaboration between the parties, for them to regulate AI in a manner that is both safe and responsible, but also enables the developers to innovate the way they want to innovate. And I think that’s the bit I’m right now slightly worried about, that the hype is allowing a lot of innovation without any regulation. 

 

Sean:                  There is always an issue of regulation lagging behind though, isn’t there? As you’ve said before, this technology moves very, very fast, you know? Regulation, legislation is often very slow to get put in place, never mind get it right first time. So I mean, there must be some onus on the industry to try and self-regulate a certain amount, mustn’t there?

 

Siddartha:        So I think that’s where a balance between standards and regulations and guideline documents is very, very helpful. So people tend to think regulation means a black and white, this is the document. But you can have pre-regulatory documents, you can have standards, industry standards, you can have guidance documents, codes of practice and so on and so forth. When I say regulation, I’m actually using it in a very loose manner, encompassing everything. So what I’m looking for is more like a broad agreement that this is the way to do AI in a safe and a responsible manner, more so for safety critical systems, because that’s my area of worry right now, because I see AI being used in safety critical applications but not with the level of, I would say, thinking in the area of safe and responsible use to the extent that I would want.

 

Sean:                  If people want to get in touch and collaborate, I mean we can obviously put any links and things in the show notes. Is that something you’re interested in? Collaboration?

 

Siddartha:        Oh absolutely Sean. For every presentation I’ve made in any forum in the UK or internationally, I end up by saying we, on our own, as Warwick or WMG, or UK on its own will not be able to solve this. We have to collaborate. There is no other way. You collaborate or you die. That’s the only truth in this space. So, for anybody who would like to focus around the topic of safety or responsibility of AI, or for safety critical applications especially, please feel free to reach out to me. There’s a lot of different areas of activities that we are doing and we can explore further. 

 

Sean:                  So collaboration in its own right is not, you know- Unless there’s something kind of tangible that happens as a result of it, then collaboration is wonderful, but often it’s people chatting at conferences and then nothing else happens. How do you go from there to maybe making something actually happen that’s tangible?

 

Siddartha:        I think that’s a really good point Sean because in a lot of cases, nobody in any forum, in any public forum would say I don’t want to collaborate. It’s like nobody would say that. But there’s a difference between saying let’s collaborate and actually leading a collaboration and providing a tangible output and I think that’s where sometimes I feel we’ve got some of our conversations in this space, and in the academia space, but also in the space of AI in general, there’s a lot of- The good thing is that there’s a lot of conversation. The not so good thing is there’s a lot of hollowness in those conversations. So my appeal from this conversation that you and I are having, actually, is to ask the ecosystem to add a bit more depth to those conversations. Focus on tangible outputs that we can then use as an ecosystem to move the ecosystem forward rather than just having those conversations over a few biscuits and tea and coffee, and maybe a pint of beer somewhere. So just focus on a bit more depth and creating something tangible.

 

Sean:                  Siddartha, thank you so much for sparing the time to be on the Living With AI podcast. 

 

Siddartha:        Thank you Sean, a pleasure. 

 

Sean:                  If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS Hub website at tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited and it was presented by me, Sean Riley. 

 

[00:25:27]