
Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 1, Episode: 7
Exploring Trust & Driverless Cars
00:30 Paurav Shukla
00:39 Gary Burnett
00:44 Christian Enemark
00:58 Sean Riley
01:42 Amazon's first Robotaxi (The Verge)
03:00 Google/Waymo Ride Sharing (Tech Crunch)
03:35 Connected Everything Driverless Pods (Computerphile)
06:25 Computer industry aside that became the viral 'Bill Gates vs GM' joke (Snopes)
08:40 Mohammad Mousavi
09:17 Honda Level 3 Car (Autonews)
12:25 Internet of Things Problems (Computerphile)
22:15 Will your driverless car be willing to kill you to save the lives of others? (Guardian)
42:30 Unexpected Item in the Bagging Area (Twitter Thread)
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: Welcome to another episode of Living With AI, the podcast on Artificial Intelligence and its impact on us. Maybe you think it's a bonus, maybe you have concerns about the extent it governs parts of our lives. We try to look at all aspects of AI, but particularly centred around trust. Our feature today is driverless cars. Would you trust one? We're about to speak to Mohammad Mousavi, Professor of Data Oriented Software Engineering at the University of Leicester, but first I'll introduce our panel members for today. Joining us are Paurav Shukla, a regular here on Living With AI, Christian Enemark and Gary Burnett.
Paurav is Professor of Marketing at the Southampton Business School, Gary is Professor of Transport, Human Factors and Head of the Human Factors Research Group at the University of Nottingham, and Christian is Professor of International Relations in the Faculty of Social Sciences at the University of Southampton, currently leading a project on ethics and drone violence; his hobbies are running, reading and travel. And throwing the odd ridiculous question in while dividing the time as equally as possible will be me, videographer, lover of snowboarding and sausage rolls and occasional computer science spectator Sean Riley. As we record this, it's the 17th of December 2020, so if we all seem a bit keyed up you can blame the mince pies.
So this week we're talking about autonomous vehicles, and I believe there has been something about autonomous vehicles in the news. I mean, I saw something which looked like Uber stepping out of the field of autonomous vehicles in a recent report in The Verge, but Paurav, you spotted something about Amazon and autonomous vehicles, did you? Shall I start saying AV instead of autonomous vehicles? It's getting a bit-
Gary: Yes it would make sense.
Sean: So Paurav, you saw something about Amazon and AVs, did you?
Paurav: Yeah, so on Monday I think, today being the 17th of December, the Amazon-owned company Zoox unveiled an electric autonomous vehicle as part of what is likely to be their attempt to create Robotaxis. So, you know, a retail giant is also coming into this AV business. What a fascinating story in its own way.
Sean: Yeah, everybody wants in on this, and obviously those are the players with the deepest pockets, the Amazons, the Googles. Mind you, we've seen big tech firms get into what appeared to be AVs in the past. We saw a Google car, didn't we, a few years ago? Whatever happened to that, you know? Gary, is this-?
Gary: Well, that's still going, but under the guise of Waymo.
Sean: Ah, so it's still there, but it's been spun out.
Gary: Yeah, so Waymo is significantly owned by Google and essentially they've continued very much doing that research, particularly in America, especially on the west coast. And yeah, they've got millions and millions of miles of autonomous driving experience now. I'm pretty sure there was a story recently about them starting to allow members of the public to use those as sort of early Robotaxi-type services. So yeah, the world of Johnny Cab is not as far away as you might realise.
Sean: Well, I remember there is a trial project in Milton Keynes actually doing that, isn't there? I can't remember the exact name of the project, but there are autonomous vehicles, only they're on tracks or something, aren't they? Or are they on a very specific path they have to choose?
Gary: Yeah, the Connected Places Catapult has worked with sort of pod-type vehicles in Milton Keynes for many years now. And they're very slow-speed vehicles for just sort of ferrying you around between the train station and the shopping centre, on the various footpaths that exist in Milton Keynes. But there are some great videos online that you can see of these pods being terrorised by children and animals, and, yes, what things will they stop for and what things will they not stop for? And these are good examples of what you would call SAE Level 4 vehicles, where they're very much designed for a particular purpose and there is no human driver in there, but they're very limited in their use cases. And so they're, yes, the beginnings, I suppose, of our understanding of these things.
Sean: Christian, your kind of expertise, ethics, I mean, what are the sort of headline ethical problems with any AV?
Christian: The ethical problems are the mystery about how decisions are going to be made by a system when human life is on the line. That's from a kind of immediate user perspective. But having said that, the ethical story about AI-controlled cars includes the ethical importance of dealing with a big road safety problem that exists around the world, causing injury and death to millions every year.
And, of course, the huge environmental problem that comes from inefficient traffic flows, again, all around the world. So when we're thinking ethically about AI-controlled cars, we're thinking about why we're doing it for ethical reasons, but also we're thinking about what are the possible ethical uncertainties and ethical downsides to bringing AI-controlled cars into transport systems.
Sean: One thing we've touched on a few times in this conversation is the fact that these are big tech firms who seem to be kind of pioneering this AV research and trials. But there was a famous story of big tech and big automotive getting a little bit tangled up a few years ago, wasn't there, Paurav? Didn't Bill Gates say something and get some kind of response, I seem to remember?
Paurav: I think it was Comdex 1997 when Gates made the remark. In a way, his original point was that we shouldn't compare different industries, because the PC industry is so different from other industries. But people only remember the later part of it, which became a funny joke. He said that if the automobile industry and the cereal industry had had the same state of development as the PC industry, then an automobile would be available for $27 and cereal for a cent. But that's not the case. And to that end, after the internet really took off, a kind of response was created, allegedly by General Motors employees or engineers, though it was probably just created by people for the fun of it.
But then they came up with ideas like, you know, would you want to drive a car which crashes twice a day? Or, you know, every time they repainted the lines on the road, would you have to buy a new car? Or, for that matter, I still remember one of them was that the oil, gas, alternator and all those other meters would be replaced by one light, the 'general car default' warning light, and you have to decide whatever that is.
So, and the last thing, which I still remember, was that the airbag system would ask, "Are you sure?" before going off. And I think, you know, these were really interesting problems being posed to the PC industry in some ways. But I think it also captures the phenomenal growth that we have seen in the computing industry. And now that computing industry is slowly moving into the automotive industry. And I think we are going to see some very fascinating ideas emerging out of that.
Sean: The Bill Gates automotive industry joke I seem to remember is something about pressing Start to stop the computer. And I'm not even going to mention progress bars, which were extremely flaky in that era. This week's feature is driverless cars. And our guest is Mohammad Mousavi, Professor of Data Oriented Software Engineering at the University of Leicester. His speciality is testing. So welcome to Living with AI, Mohammad.
Mohammad: Thank you very much. It's a pleasure being here.
Sean: I must confess that driverless cars are a real interest of mine. Now, I'm not an absolute expert on this, but I do have a real interest in it, even from just the simple assistive technology in the cars that are there now all the way through to fully autonomous cars. We did mention on the podcast recently an announcement by Honda about a Tier 3, I think it was, car. What do these tiers mean in the world of autonomous cars?
Mohammad: Right. So there are different levels of driving automation that were defined by an international association, just to distinguish the different levels of autonomy that you put in your car. So anything below Level 3, you are basically in charge as the driver. Anything above Level 3, and it goes up to Level 5, the car is in charge. Level 3 is a bit of a tricky level because the car may actually ask you to take control back from it, and hence you should be alert. So you're not allowed to watch a movie or sleep, but the car does the driving for you. So that's where the actual autonomy kind of starts, right?
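To make the level boundaries Mohammad describes concrete, here is a minimal illustrative sketch in Python. It is not from the podcast; the level names follow the SAE J3016 standard, but the helper function and its wording are invented for illustration.

```python
# Illustrative sketch of the SAE J3016 driving-automation levels.
# The level names follow the standard; the code itself is a
# hypothetical helper, not from any real API.

SAE_LEVELS = {
    0: ("No Driving Automation", "human drives"),
    1: ("Driver Assistance", "human drives, one assist feature"),
    2: ("Partial Driving Automation", "human drives and must supervise"),
    3: ("Conditional Driving Automation",
        "car drives, but the human must stay alert to take control back"),
    4: ("High Driving Automation", "car drives within a limited design domain"),
    5: ("Full Driving Automation", "car drives everywhere, no human needed"),
}

def who_drives(level: int) -> str:
    """Summarise who is in charge at a given SAE level."""
    name, duty = SAE_LEVELS[level]
    return f"Level {level} ({name}): {duty}"

for level in SAE_LEVELS:
    print(who_drives(level))
```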
[00:09:58]
Sean: The assistive technology I'm accustomed to using in my car is things like Lane Assist, which sort of gives you a nudge if you get too close to the edge of the lane, or Adaptive Cruise Control, which tries to sense if there's a car in front of you travelling slowly and adapts your speed accordingly. I must say it's not without challenges. I have come up to, say, a filter lane where a car is slowing down because it's pulled over into the filter lane, and my car slammed the brakes on thinking there was an obstruction up ahead, when I was just going to continue along in my lane. So this is the only thing that concerns me about fully autonomous cars: how far along are we with getting past, let's say, bugs like that?
Mohammad: Well, I think this is a very challenging issue anyway, and some of the challenges, as you pointed out, are in the technology. So we should improve our AI technology. We should improve our image recognition. We should improve our user interaction. But part of it is also about the social side of the story. So actually, communication and interaction with human beings is non-trivial. And one of the most challenging periods is where you have a combination of autonomous and non-autonomous cars on the road. Then the autonomous car should be able to communicate and negotiate with a non-autonomous driver and actually be able to make itself understood, which is a very non-trivial task.
So there are some technological issues which may be solved in a few years. So we are already on the right track to solve them. And our programme is actually part of the solution to those technological problems. But also part of our problems are about the social side of the story. So we should make sure that the communication goes smoothly and that we have the right means to establish a kind of trustworthy communication between human beings and the driverless cars.
Sean: That's the classic thing, isn't it? The hybrid situation. It's one thing to set some rules and allow systems to work based upon those rules, and that's one side of it. But when you start having hybrid situations with, let's be honest, frankly unpredictable humans in the loop, who don't necessarily follow those rules or don't follow them in the same way, that can be a problem. I think the other side of it is, a few years ago I did a Computerphile video with Professor Ross Anderson and we discussed an idea, and I think this hopefully comes around to the idea of trust that we're going to discuss: the car or vehicle MOT, which, in the UK anyway, is the test that a car has to pass once it's three years old; every year you have to test whether it's roadworthy.
So it's simple things like, do the brakes work? Is the steering in order? And are the emissions safe or within certain safety levels? How does that work when you've got the possibility of software versions and the liability of, you know, we're all used to having pieces of software that aren't supported anymore, or mobile phones which go beyond the support period. What's that going to look like with like a five or ten-year-old car?
Mohammad: Right, so I think there are lots of very interesting questions that you raise here. One is the liability issue in itself. So the more autonomy you put in your car, the more you are releasing control to the developer, and it's only natural that the developer of that component, or the manufacturer who takes responsibility, also picks up the liability for the driverless car, right? So there is an inevitable trend that the liability will move more and more towards the manufacturer, and from the manufacturer perhaps also spill over to the suppliers, right?
So yes, there is certainly a trend in that direction; that's one point. The other important point is that the whole sector is being disrupted, in the sense that car manufacturers used to be mostly mechanical engineering dominated companies, maybe with some electrical engineering as well, but they were by no means software engineering companies. Whereas if you look at the value of a modern car, even a non-autonomous one, more than half of it goes to the computer systems, the combination of hardware and software, and it's very likely that very soon software is going to dominate the cost of the car.
That's why you see lots of software companies like Google and Apple are entering that domain. And this is disrupting the sector because companies that were traditionally mechanical engineering dominated companies are now becoming software engineering companies. So, they have to learn how to develop trustworthy software, which is not the expertise that they currently have in house, and the liability issues and the MOT and all that will extend to software.
Just one more point and I'll stop discussing because there are lots of different aspects to it. Even the existing cars do update themselves on the fly. So first of all there's a huge amount of software on the car itself, much more than what you get on an airplane for example, and the other aspect is that an airplane does not get updated while it's flying, while a car does get updated while it's running. So, you get lots of interesting testing, verification, safety assurance issues about this continuous life cycle of software in the car, which are very interesting research questions that are being discussed and studied.
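As an aside for readers, one way an "MOT for software" could plausibly work is an integrity check against a manufacturer-signed manifest. The sketch below is hypothetical: the component names, manifest format and digests are invented, and no real standard or tool is implied.

```python
import hashlib

# Hypothetical sketch of an "MOT for software": compare the digest of each
# installed ECU software image against a manufacturer-signed manifest and
# flag anything missing, modified, or unapproved. Component names and the
# placeholder digests are purely illustrative.

APPROVED_MANIFEST = {
    "brake_controller.bin": "3f2a...",   # placeholder digests; a real manifest
    "lane_assist.bin": "9c1d...",        # would carry full signed SHA-256 values
}

def sha256_of(path: str) -> str:
    """Compute the digest of an installed software image on the vehicle."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def software_mot(installed: dict[str, str]) -> list[str]:
    """Return failure messages for missing, tampered, or unapproved components."""
    failures = []
    for name, expected in APPROVED_MANIFEST.items():
        actual = installed.get(name)
        if actual is None:
            failures.append(f"{name}: missing")
        elif actual != expected:
            failures.append(f"{name}: digest mismatch (modified or outdated)")
    for name in installed.keys() - APPROVED_MANIFEST.keys():
        failures.append(f"{name}: not in the approved manifest")
    return failures

# Example: a tampered lane-assist image and an unapproved add-on app.
print(software_mot({"brake_controller.bin": "3f2a...",
                    "lane_assist.bin": "deadbeef",
                    "karaoke_app.bin": "1234..."}))
```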
Sean: I probably did throw about 20 questions into that last question that I asked there. I do also have a vision of cars in the future with more and more software. If you imagine the early days of computing, a computer was a piece of equipment in a guarded room with attendants looking after it, and the evolution of that brings us to these smartphones which we can carry around in our pockets, where we can download and customise different things. I'm concerned about what happens when cars are able to have apps on them, and the sort of trust implications of downloading something malicious.
Mohammad: So first of all, you should be concerned already, because cars, as I said, do download apps, and actually some of those apps are quite mission critical. So certainly there is this issue of continuous integration and continuous safety assurance for cars even nowadays, before we move to full autonomy. Secondly, what you pointed out is a very important aspect, because much of the software on the car is not mission critical: it's about infotainment and lots of other things that do not concern the basic functionality of the car.
But there is certainly a security aspect, even in those non-mission-critical apps, that could spill over to the mission critical behaviour. So lots of the attacks that we have seen recently on cars actually do exploit the infotainment system, for example, to get access to the other aspects of mission critical behaviour of the car, including steering and brakes and stuff like that. Another trend in this field, which is very interesting and very challenging, is that safety and security, which used to be two kind of separate and isolated fields of research, are now coming together, and a safety case for a car will actually involve lots of reasoning about the security of the car, because if you don't make the car secure, then the safety is going to be jeopardised. And that's a very interesting aspect of what's going on.
Sean: The TAS Hub is obviously tasked with investigating trustworthy autonomous systems. How can we trust an autonomous car? I mean, do we have to rely on- I think the problem is, and you've kind of alluded to this anyway, that we're used to big tech companies like Google, like Apple, creating bits of equipment that we're accustomed to seeing from them with software on, like mobile phones, like tablets, like computer devices or search engines. We're used to companies like Volkswagen, like Ford, like Mercedes-Benz coming up with vehicles. Where's the middle ground there? Is that Tesla? I mean, how do we trust something when we are so used to it being one thing and it's becoming something else?
Mohammad: Right, so trust has been a subject of much research recently, and trust in autonomous systems indeed is a very interesting new topic that is arising, and lots of research is being dedicated to it. So at least part of trust, a major part of trust, is about the state of mind of the users, right, so that they believe the system is not going to harm them, it's going to be helpful, it's going to do what they think is the right thing, right.
And much of the research has gone into finding out what the contributors to trust are, and actually that is also part of the research we are carrying out in this programme. We are looking at things like design transparency: is the intention of the designers clear to the users or not? Often you see that when you buy a car, something is advertised about the car in terms of emissions, by which I mean particle emissions, so environmental aspects. You see lots of signals on your dashboard, and the question is whether they actually do reflect what the designer has put into that car, right.
[00:20:04]
So our research is about making this connection between the design and what is being communicated to the users, and whether that is a transparent process, which currently it often is not, right. So, currently you advertise something about the car, and when you measure it in a real environment, you see something slightly different, or sometimes actually very drastically different, right. So, design transparency is something that does contribute to trust, and more so in the autonomous system setting, because the autonomous system is going to take very complicated decisions for you, and if you don't know how it is going to make those decisions, you're unlikely to trust it, right.
So that's one aspect. The other aspect is of course those assurances of safety and security: if you can communicate them to the user, that's likely to lead to more trust. A third aspect is how the system communicates all those aspects to you, and whether it explains itself properly to you or not, right. So there are all sorts of interesting contributors to trust that play an even more prominent role when it comes to driverless cars and autonomous systems, right. So to answer your question, can we trust a driverless car manufacturer? We can only trust it if it builds sufficient evidence and sufficient means to, first of all, make its design intentions clear to the users. Secondly, if it can build sufficient means so that the decisions of the driverless car are explained to the user in an understandable way, probably using many different types of media and means. And thirdly, if it provides understandable and trustworthy evidence, and I'm using the word trust in two different meanings here: things that you can actually check yourself, that the car has been tested and is safe and secure under various challenging circumstances. So these are possible ways to increase trust in users, and definitely there is more to do in this field.
Sean: There was an article a couple of years ago, and it was obviously a very clickbait-y headline, so apologies if I paraphrase it here: "Would you buy a car that's designed to kill you?" And you may recall this was a story about a car coming up to, I don't know, some kind of bus stop where there were several people, and the way to save those people's lives, because there's an oncoming articulated lorry or something, was to drive into the lorry and kill the passenger of the car; ergo the car was saving more lives by protecting the people at the bus stop than the person inside it. And obviously this is kind of a one in a trillion chance, but those things do happen. Was there any truth in this, or truth is a horrible word, was there any kind of validity in this article?
Mohammad: So I think there is certainly a thread of truth in the story you just described, namely that it's upon the manufacturer to provide a very clear description of the high-level decision making that is built into the car, right? So the car should be able to transparently describe upon which ethical principles, upon which legal principles, it has taken very high-level decisions autonomously. And this is certainly a very important task. You should be able to understand what ethical principles have been used to take high-level decisions in such challenging scenarios, for sure.
Ethical principles are just one example of high-level decisions that you take in such autonomous systems. But it's kind of a prototypical example in the sense that you should first of all make them very clear to the user and there are also lots of interesting discussions about whose ethics to implement in those cars. So when you move a car from one-
Sean: Might it be a choice?
Mohammad: Yes, well, I mean, first of all there should be some agreed-upon principles, perhaps even as a legal basis, right? But if you import a car from, say, Japan or China to another country which may have slightly different norms, is the car going to be reprogrammed? Or vice versa, if you import a car from Europe to Japan or China, is it going to be reprogrammed, right? So there are all sorts of interesting challenges before us. But I think the prerequisite to dealing with those challenges, which is also part of our research, is to make those design intentions clear first, right? So you should be able to explain upon which ethical principle the car has been programmed. And then of course later the question arises whether you can reprogram it and which ethical principles to choose for your car.
Sean: Because actually those sorts of versions happen right now, okay, with existing cars, don't they? I mean, there were certain emissions criteria for California, there were certain safety considerations. I know some sports cars 20-30 years ago looked very different in America because the bumpers had to be a certain size to comply with the local safety regulations. And then you start thinking, well, are those safety regulations for the passengers and the driver, or for the pedestrians, etc. etc. So perhaps we will see different versions.
There is one other thing to this that again plays into trust, or, you know, kind of strays into the trust area, which is: if I go and test drive several of these autonomous cars and I find one that feels comfortable, is it comfortable because it's not as safe as the one that was slamming its brakes on every five seconds? And how many miles would it take to prove that? Does that make sense?
Mohammad: Right, so I think you're touching upon a couple of different subjects. One is a concept that we could call drivability, which is slightly different from the safety and security that we discussed before, right. So a car might be very safe but not very drivable, not very comfortable, in the sense that it may brake very suddenly or may not move at all in certain situations, right? So certainly there is room, and there is ongoing research, for adapting drivability to the autonomous setting.
How could we make a car that sets the right priorities when it comes to the compromise, or the trade-off, between safety, comfort, agility and aggressiveness, and also, within the acceptable range of whatever high-level framework we are working in, in terms of ethical norms, legal frameworks and things like that, chooses the right level of drivability for the passenger, right. So again, there is very interesting ongoing research, part of which we are also involved in and looking at, to make sure that the car is not only safe but also drivable for the passenger that is using it.
Another point that was in your question, and I think it's definitely a very valid point, is that for autonomous systems the safety and security assurances have to follow a different type of regime and use different types of techniques than we are used to. So in the past, we would develop a safety case, we would enumerate a number of, say, scenarios that we think are typical uses and misuses of that function, and that would constitute a safety case for your function, for your car function, right.
Now, in this case, enumerating those scenarios is simply impossible. An autonomous car will be confronted with an infinite number of scenarios, some of which are even completely unknown to us from the beginning. So thinking of those scenarios and going through them manually is out of the question. So what we are doing currently is coming up with automated techniques that find those classes of scenarios that are most challenging or most interesting for the function you're considering, and for that we define different metrics of being challenging, of being interesting, and push the verification and safety assurance case development towards those classes of scenario, and provide some evidence that it has covered them.
So it's impossible to drive the car in all such scenarios and come up with a safety assurance case. What we will do is first make an intelligent analysis, and for that we may use AI as a means. So AI is not only used in the car itself, but it's also used as a means to provide safety assurance cases, right, to explore those parts that are most challenging or most interesting or most relevant for the function that you're considering. And then once you have proof of coverage of all those cases, that will be an automatically generated safety assurance case, right?
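A toy sketch of that search-based idea, for readers: parameterise a braking scenario, score how challenging each instance is, and steer testing towards the hardest cases. The parameters, ranges and the challenge metric below are invented for illustration; real tools use much richer scenario models and guided optimisation rather than random sampling.

```python
import random
from dataclasses import dataclass

# Toy illustration of search-based scenario generation: parameterise a
# lead-vehicle braking scenario, score how "challenging" it is for the
# function under test, and keep the hardest cases for the simulator.
# Everything here (parameters, metric, ranges) is an invented example.

@dataclass
class Scenario:
    ego_speed_mps: float     # autonomous car's speed
    lead_gap_m: float        # distance to the vehicle ahead
    lead_decel_mps2: float   # how hard the lead vehicle brakes

def challenge(s: Scenario) -> float:
    """Toy metric: hard braking with little time headway is 'challenging'."""
    time_headway = s.lead_gap_m / max(s.ego_speed_mps, 0.1)
    return s.lead_decel_mps2 / time_headway  # higher means harder

def random_scenario() -> Scenario:
    return Scenario(ego_speed_mps=random.uniform(5, 35),
                    lead_gap_m=random.uniform(5, 100),
                    lead_decel_mps2=random.uniform(1, 9))

# Sample the space and keep the ten most challenging scenarios; a real
# tool would use guided optimisation and coverage metrics instead.
candidates = [random_scenario() for _ in range(10_000)]
for s in sorted(candidates, key=challenge, reverse=True)[:10]:
    print(f"challenge={challenge(s):7.2f}  {s}")
```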
Sean: One thing that we've touched on before in the podcast is the idea that you might find, you know, a horrible case of an autonomous car having a crash where somebody is killed, and those situations are horrendous and horrible and you wouldn't wish them on anyone. But are there fewer crashes or, you know, problems with autonomous cars than there are with the equivalent traditional cars or vehicles?
Mohammad: I think this relates very well to the last question you asked. So in order to answer those types of questions, we should either have statistical data on the past safety of those autonomous vehicles, which would not be responsible: you wouldn't want to put them on the road first and then come up with the statistical data, that wouldn't be the way. Or rather, you should come up with some kind of assurance cases that are exhaustive in some sense, right? Exhaustive in the sense that they do cover those cases that you have seen in the past in actual, say, accident scenarios of cars.
And fortunately, we have that data available. So from many different sources, that data is being kept and published, even publicly. So we could actually use that to inform our process of developing safety assurance cases automatically, in the sense that I talked about before. So we are actually doing research on this, but I think this is at the moment at least quite a maturing field. So we are entering the phase where we could actually build this into tools that will be used in actual industrial cases.
Sean: This is something that's new, and whenever something's new, obviously, problems with it are often blown out of proportion to the reality. So when you bring in a new technology that people aren't sure about, the smallest problem with it can lead to lots of trust issues. And is it possibly a marketing issue as much as anything, rather than an actual physical problem? You know, getting people's trust in these vehicles, I mean?
Mohammad: Yeah, so I do think that the issue of trust plays into this as well, right? So we should have design transparency, we should have means of communicating the safety assurances to the users. And this will take some time. So, it's not too long ago that we actually started researching the issue of building safety assurance cases for these autonomous systems. And we are still in the research phase; I think very little of that has propagated into industry. So, I think we will need some time to come up not only with the technique, but also with the way of explaining to the public that the technique actually does work in practice, right?
So I think it's partly a technological issue that is being researched, also in the TAS programme, and we are developing new techniques. But part of it is also an interesting social problem: whether we can translate that into a language that can be understood and discussed by the public, so that together we can come to the conclusion that this is now safer, on average, or in challenging cases, than the actual human-driven car.
Sean: Thinking of all these kinds of questions, and we've touched on marketing, we've touched on AI development, we've touched on all sorts, how do you bring all that together in your research?
Mohammad: Right, so in this particular node project that we have just started within the Trustworthy Autonomous Systems Programme, what is unique about it is that rather than being just one individual project where you focus on one aspect, be it, say, testing and verification, or AI and training, or human-computer interaction, what we do is bring a whole range of expertise together in one team, so that we can look at the different aspects, of course, but also communicate those different aspects, and build a platform where these aspects can be translated and checked against each other.
So we not only do the safety assurance case research, but we also look at the human behaviour aspects: how human beings trust autonomous systems, and how their trust changes against different types of evidence. We build models of that, and then use them in the verification of the system itself. So, when you first see a robot, you may behave differently than once you get used to the robot, right? And what we do is build those models, check them against each other, and make sure that the system is still trustworthy after those types of evolution as well.
Sean: I would like to say, Mohammad, thank you very much for joining us on the Living With AI podcast.
Mohammad: Thank you very much, it was a great pleasure.
Sean: Time to return now to our panel. One of the things we mentioned briefly in that interview, or in that chat, was the idea of the hybrid system, autonomous cars and human-driven cars functioning on the roads at the same time. I mean, is it more likely we'll see more autonomous cars with humans ready to take control under certain circumstances? Is that where we're headed with this, at least to start with?
Paurav: In a way, that is what Level 3 is about, to be truthful. Level 3 of autonomous vehicles is where the driver has to be ready, always. It's only that they are just sitting there, not touching anything. But at any given point in time, the vehicle may say, "Take control." And that in itself creates anxieties. Can you imagine sitting there, waiting: when is it going to tell me? When is it going to tell me? I would be freaking out almost all the time.
Sean: This is something Gary's got some research on. This is directly your research, isn't it, Gary?
Gary: Yeah, so we've done quite a lot of work with Level 3-type vehicle scenarios. One set of our studies had people in a driving simulator in a future Level 3 vehicle situation for an extended time period. So it was like a simulated commute. They would come back every day for a week to the driving simulator and do a simulated commute to work, where they would start off driving manually from their home. Then they'd get onto the motorway, where they could choose to have automation and the car would go into Level 3. And then when they came to come off the motorway, they would go back to manual driving and do the last bit to get to their virtual workplace.
And they did that every day. We thought they might expect an emergency takeover to happen on Friday, so we made it happen on Thursday. And then we could see how their trust sort of built up during the week, or how it was affected by this emergency takeover. And what became clear is that you will get panic responses in that situation. But the human machine interface in the vehicle can go a long way to mediate this, by providing some feedback on the status. So you're not suddenly dropped in; you've got some awareness that there might be some issues with the vehicle. And that helps.
And the other thing is that it really showed the importance of having some degree of training, so that people had the right sort of mental models, or mindset if you like, of how they should interact with this vehicle. And this was our sort of big thing, I suppose: that there need to be some new forms of training for people to operate these sorts of vehicles. Whether it's a Level 3 or a Level 4 vehicle that you could still drive but don't necessarily suddenly get brought back into driving, there needs to be a view where you understand your relationship with that vehicle.
So we sort of coined this phrase, this mnemonic, CHAT, where basically you Check yourself, check the objects that you have around you, that you've discarded those, that you've got yourself into the right sort of state; Assess the situation around you by looking in the mirrors, by looking where the other vehicles are; then Take over. And don't just go straight in and grab the steering wheel, because that's what people will naturally do.
We found there were a lot of people who would just stick with their non-driving related tasks for an extended time period and then just go: phone, phone, phone, phone, phone, steering wheel. And you really need people to bring themselves back into driving appropriately.
Sean: It's like my experience of going, say, go-karting: you come out of the go-karting and you get in a normal car, and initially there's a temptation to drive in the same way as you've just been driving the go-kart. So these mode changes or mode switches are really important. But it's interesting as well, what you're saying about the panic thing, because there are certain driver aids in the car that I drive. It's not a fancy car, but it has certain driver aids, and I mentioned them in the feature chat.
One of the things it does is make a very loud bleeping noise if anybody up ahead has stopped. And I've found this induces more panic than it helps, because I'm often already halfway to the brakes, and then suddenly the bleeping noise and the big warning appear in front of me, and I press the brakes much harder because of it. In actual fact, it's causing panic in the car.
Gary: So this is the importance of the human machine interface: not inducing that sense of panic. And an auditory tone can easily do that, because biologically we respond with this sort of fight or flight, don't we? We're suddenly ready to treat this as a threat. So you have to be very careful. So when Paurav mentioned before about the car saying, "Take control," that may not necessarily be the most appropriate thing to do. Particularly if this is an extended time period, so you've got maybe 20-30 seconds before you actually need to take control, then you can bring people into the driving situation more gently.
[00:39:52]
Sean: And kind of a bit like we see in sci-fi, where a real gentle voice says things like "emergency imminent", telling you, hey, you're about to crash. There was something we got into a little bit in that feature, which was this idea of the clickbait-y title, "Would you buy a car that's designed to kill you?" I'd like to bring Christian in on that one, because we did discuss, well, we'd like to discuss now, I think, the idea of how you make that decision as to what the car does in certain situations. And it feels like an ethical thing and also a safety thing, right?
Christian: That's right. I mean, a car is not going to kill you in the sense that the word kill carries this sense of intention. But we do have to contemplate a situation where an AI-controlled car, a fully AI-controlled car, will make a decision to cause the death of a human, and that could be the human who is sitting inside the car or, for that matter, someone outside of that car. I think that's kind of where we're heading. I mean, this earlier chat we've just had about a kind of human-machine-team scenario, it seems to me, is at best a kind of stepping stone.
I'm with Paurav in saying that having to be this kind of emergency co-pilot in an otherwise AI-controlled car sounds like hell on earth; I'd rather drive it myself or be completely driven. So I think the idea of being completely driven by the car itself is kind of where all this is heading, and that's where the much more interesting ethical questions arise, because on what basis indeed will the system make a decision when human life is at risk, or at least injury to humans is at stake?
Sean: Thinking about the laws that have been brought in about humans and, you know, alcohol: don't drive with alcohol, don't drive holding your phone. There's the amount of attention that we are having to give to driving that car, and then we're saying, well, actually, you're going to be able to not put that attention on driving the car. I mean, one thing I find if I'm doing a long journey is that it can be relatively tedious, and you can almost switch off unless there are things going on, unless there are decisions to make about the route or about, you know, the traffic. It can be more difficult on an easier road because you can't concentrate quite so well.
But there's one thing that was a Twitter thread a few years ago that you guys I'm sure have heard before: how can we trust cars if we can't even get automated tills in a supermarket to work? You know, "unexpected item in the bagging area", the hellish kind of sentence of doom for anyone who's trying to operate one of these things. We can't get that to work, and that's been, as far as I understand, in development since the 1980s. How are we ever going to trust autonomous vehicles?
Paurav: That is something which worries me, to be truthful: our human ingenuity to crash out, let me put it that way. In a way we are very, very good at this, because we are pattern-seeking animals. So we quickly set up certain sets of patterns, and that is where our ingenuity lies, and thereafter we become very reactive psychologically. So whenever a new problem arises, we add that. So, for example, "unexpected item in the bagging area": that is something a software engineer understands, so they add that item, and a new item, and another new item into the skill set. So the skill set keeps on developing. But here you don't have a second take. You are on the road. It's not a film. It's real life.
So this becomes quite a tricky situation in that sense, from a psychological perspective, because unique situations may occur. You put your foot on the pedal and suddenly it doesn't respond; we all remember those types of situations, and then it responds and suddenly relief comes through. Or, for that matter, one of the things that was mentioned in the interview was that there are going to be other people on the road who are going to be driving.
Anyway, remember when our spouses and friends are driving and we are sitting in the passenger seat, and there is some sort of tricky situation: have you not seen yourself putting your own leg down on an imaginary pedal, and holding your seat, and all those kinds of patterns we develop? So in those kinds of scenarios, we know that we don't trust other people, even people whom we trust so much, who are driving the car right now. But now we are going to trust other people who are driving a different vehicle? We are going to trust ourselves sitting there thinking that, no, everything is going to be hunky-dory? And then we are going to trust an autonomous system altogether and say, you know, "Take over"?
Sean: Every time we drive, though, we trust everybody else on the road, right, to follow the rules. Gary, you're itching to say something here?
Gary: Yeah, I was just reacting to this whole thing about not necessarily trusting other people driving when you're in a car with them. A lot of that comes from us having very over-inflated views of our own driving abilities. There have been some great surveys on this: about 90% of drivers think they're better than the average driver, and it's statistically impossible. But this always happens. I think we do very quickly trust technology, and one of the things that has been talked about in human factors for many, many years is that it's not about having high trust or, you know, getting as much trust as possible. It's about getting appropriate trust, trust calibrated to the levels of reliability that we're talking about for the technology.
And this development of trust that we have with technology is not a linear relationship. When we have an early experience with technology, we'll have very low levels of trust, and you can see this in surveys now. People haven't actually experienced autonomous vehicles, so you give them a survey and they'll all say, "Oh, I'll never get into one of those." But then when you have experience, and that experience is positive, your trust quickly goes from next to zero to a hundred percent, and we develop these very high levels of trust.
And this is exactly the same thing that happened with sat-nav technology. When it first came out, you know, people would go, "Oh, I'm not sure. I like my paper maps." And then before long everybody uses it and doesn't know how to use a paper map anymore. It quickly becomes something we not only have very high levels of trust in, which then causes problems, and we all know the stories of people nearly driving off a cliff, or the wrong way down a one-way street, or into weirs, and all these sorts of things, but something we very quickly become reliant on.
So what's most important here is a level of trust that is in keeping with the level of reliability of the technology. And of course, you know, we want our autonomous vehicles to be 100% reliable. But the reality is that they will never be 100% reliable; there will always be, you know, it came up in the interview, edge case scenarios, things that the programmer, the AI programmer, never realised would come up, because of the complexity of our traffic situations.
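Gary's notion of calibrated trust can be illustrated with a toy model: trust starts near zero, climbs quickly with positive experience, and collapses after a failure, while calibrated trust would settle near the system's true reliability. The update rule and parameters below are assumptions for illustration, not taken from any study.

```python
import random

# Toy model of the trust dynamics Gary describes: near-zero trust before
# first use, rapid growth with positive experience, sharp loss after a
# failure. The update rule and parameters are illustrative assumptions.

def update_trust(trust: float, positive: bool,
                 gain: float = 0.25, loss: float = 0.6) -> float:
    """Move trust towards 1.0 after a good trip, towards 0.0 after a bad one."""
    return trust + gain * (1.0 - trust) if positive else trust * (1.0 - loss)

random.seed(1)
true_reliability = 0.98   # the level a calibrated trust should settle near
trust = 0.05              # surveys: very low trust before any experience
for trip in range(1, 21):
    ok = random.random() < true_reliability
    trust = update_trust(trust, ok)
    print(f"trip {trip:2d}: {'ok  ' if ok else 'FAIL'} trust={trust:.2f}")
print(f"calibrated trust would track reliability ~ {true_reliability:.2f}")
```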
Sean: One thing, I don't know if it's got any bearing on this whatsoever, but we mentioned about, you know, pressing the brake when you're sitting in the passenger seat and, you know, whether you trust that driver. I mean, obviously some of those things are kind of animal gut reactions. The other thing that happens is, if there is a tricky bit of driving coming up and you're not driving, you know to keep quiet to allow the driver to concentrate. Your conversation will pause, well, for most people in my experience. I was just thinking of that idea of passengers in the car, and then suddenly you're having to have that concentration on the driving?
Gary: So the key thing here is that these sorts of vehicles will almost certainly have occupant monitoring systems, so there's an awareness of the status of the individual in there, for a number of reasons. For cars that can be driven, that may well be in order to make a decision about whether or not you would be allowed to drive in that situation. For vehicles that can't be driven, more the sort of Robotaxi-type scenario, then we're talking more about knowing that the right person is in the car, monitoring for lost property and for, you know, dodgy things going on inside the vehicle. But all of that will come with those sorts of vehicles, and, yeah, I think that's the sort of thing we will increasingly expect.
Sean: One of the other things we discussed a little bit in the feature interview was the idea of the MOT, and for those who are not in the UK, this is the yearly test, I think I might have mentioned it in the feature anyway, that a car must go through once it's three years old to prove it's roadworthy, to make sure that all of the parts of the car are in working order within certain guidelines. The question then came up about, what about the software? How do you check that nobody's hacked it, modified it, downloaded a patch that wasn't appropriate?
[00:49:58]
You know, we've all got different flavours of computers running different versions of software, different apps, and different phones with different apps. How do you keep track, in an automotive sense, of whether you're going to cause problems by having your software crash because it's, I don't know, running the wrong software or something?
Paurav: One other thing I found in the interview was the whole idea of the power that software is going to wield in the new vehicles, and that is not surprising. But at the same time it kind of puts people on edge, because we have to remember, as you rightly say, that people are differently capable when it comes to software, and software systems are differently capable themselves.
So, what you have is that if software is going to become more powerful, and it is actually going to contribute more to the cost of building an automobile, it really raises some interesting questions in terms of compatibility, in terms of the different sets of compatibilities and opinions each piece of software would have, and how it will operate. So I feel a little worried about that, and I also feel a little worried about, I think, you know, what Christian was saying earlier in terms of the ethics associated with it.
Christian: The thing no one wants to go anywhere near yet is which ethics will be the foundation for system decision making. And you see this in policy documents and in reports, and even in statements of principles produced by government organisations and NGOs and the like. They'll say things like, "And yes, decisions should be based on shared values and consistent principles." And then they proceed not to tell you what they are. And at some point, I hope fairly soon, someone is going to have the guts to stick their neck out and say, "Right, these are the principles upon which, when human life is at stake, the system will make a decision to kill one human rather than another, or to kill these humans rather than those humans."
Now, obviously, we don't want this to be a prevalent thing; largely the whole point of doing this is to improve road safety, and we don't want to be obsessing about it. And yet in terms of public trust, people are going to be obsessing about it; even if it is a tiny risk, they will obsess about it. So who will stick their neck out finally and say, for example, "This AI-controlled car will be a utilitarian." Or, "No, this AI-controlled car will use deontological, duty-based ethical reasoning."
Now, if you're reasoning in a utilitarian fashion, then that will include a scenario where the car decides to cause the death of the sole passenger in the car rather than, for example, the group of three small school children who've just run across the road against the lights, because the greater loss of life would happen if the car struck the school children. So the car is deciding to kill the one passenger, who may well be the owner, the purchaser of that AI-controlled car. Now, that makes perfect sense in terms of utilitarian ethics, but I dare you to sell that to people on the market. But maybe you can, maybe it can be done. People will need to be well aware, though, that that is a possibility if utilitarian ethics is what is driving the system's decisions.
Gary: My best understanding of these sorts of ethics comes from a document from the German transport ministry, a set of some 20 ethical principles for autonomous vehicles that was developed over several years with various experts. And yes, it has that sort of principle there, Christian, as a sort of minimising, yeah, essentially providing protection to human life, regardless of whether or not it's your vehicle.
I suppose this partly gets into whether or not vehicles, when they get to this level of sophistication, will be consumer products or not, and they may well not be; they may actually be more sort of shared vehicles, which will give people a different view of them. But when I was starting to think about this again, it reminded me of some public engagement workshops that I was part of recently, with the Department for Transport and with Sciencewise.
We had all these workshops around the country, and I remember being in one where there was a little demo of a driverless car, with a safety driver in place, on a test track. I was there with members of the public, and the biggest issue that affected their acceptance was that this vehicle would not stop for animals. The ethical principles associated, you know, lots of people talk about human life, but you then get into this whole thing about animals, and at what point does an animal count? In the German ethical document it talks about higher animals, so animals that have a greater priority.
Sean: Like dolphins yeah?
Gary: Legal framework, yeah, yeah.
Sean: But absolutely, they're edge cases, but they're important to people aren't they? You know, people have that issue.
Gary: Well, during the test, this safety driver had to take over because a pheasant ran out, you know, came out in front of the vehicle. The people in the car would say, "Well, it would have stopped for it, wouldn't it?" And they go, "Well, no, it wouldn't." Then it was like, "That could be someone's cat or someone's dog." And you get to, at what point does this go? And these are the things that are the Daily Mail stories of the future that will influence the public's acceptance, I feel.
Sean: Absolutely. And the other thing I was thinking of, and kind of alluded to when we talked about the MOT, is that you can imagine somebody, if these are privately available at the kind of buying-a-Maybach or a Lamborghini level of money, paying a software engineer to say, "Make this thing never kill me. Adapt it so that I am the priority here." You know, it's kind of like James Bond villain stuff. But it's possible, right?
Gary: It's a bit dystopian that, isn't it?
Sean: It is. We mentioned there the idea of animals and whether a car would stop for them. There's a bigger point here, isn't there, which we've almost leapfrogged, which is the idea of pedestrians and their interactions with these vehicles. So an AV and pedestrians: it's not going to be the same as me, say, for instance, driving my car along the street and spotting that there's a three-year-old toddler there and slowing down because I wonder what's going to happen. There's like a relationship, isn't there, between pedestrians and drivers, where you can see them and you can talk to them. What do you think, Gary?
Gary: Yeah, so this is a big area, and there are lots of research studies going on as to what form of what they call an external human machine interface might need to be in place as a new way of communicating between a vehicle and other road users. You know, we have lights, we have indicators, we have sort of formal methods. But we also have informal methods, when you might flash someone to let them go. There are some interesting issues there, because it's limited bandwidth: you might flash someone and they think you're telling them that they're going too fast, or that you're angry with them, when actually you might be telling them that they haven't put their lights on or something.
So this sort of confusion is interesting. But to communicate intent, so that you know that vehicle is giving way, you know that vehicle's in automated mode, you know that vehicle is about to set off again, all those things might need new lighting systems, maybe certain sounds. You know, sort of the most bizarre one I've seen, and there are a few concepts, is cars that have got eyes, so that the eyes follow you as you go across the crossing, to basically say, "I've seen you." The problem there is what happens when you've got multiple people crossing: is this car sort of cross-eyed, boss-eyed? It doesn't work in the truly social system that we operate in here.
So, yeah, I think ultimately there will have to be a standard external HMI. You can't have different car manufacturers having different systems that people need to learn. There will have to be some form of standard for what this is, whether that's a combination of lighting and possibly sounds.
Sean: Yes, I remember when they brought the trams into Nottingham a few years ago, and they all have a very distinctive bell sound. So if you're a regular in Nottingham, you know that noise means there's a tram coming. So it's kind of like a learning process. It doesn't stop the people who pulled out too far and expected the tram to swerve around them, which did happen a few times in the early days. That just about does it for us today on Living With AI. I'd like to thank Gary Burnett, Christian Enemark and Paurav Shukla for joining us today.
Gary: Thank you everyone, been, yes, been good fun.
Christian: Indeed thank you.
Paurav: Oh thank you very much.
Sean: If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts from and we hope to see you again soon.
[01:00:38]