
Living With AI Podcast: Challenges of Living with Artificial Intelligence
Am I Safe in an Autonomous Vehicle? (Projects Episode)
In this 'Projects' Episode we've chosen a few projects that tie in to the umbrella of safety in AVs (Autonomous Vehicles)
Inclusive Autonomous Vehicles Project: Mohammad Naiseh, Post-Doc Research Fellow, University of Southampton
SA2VE Project: Associate Professor Helge Wurdemann
RoAD: Dr Jo-Ann Pattinson
Inclusive Autonomous Vehicles: Tom Webster, Connected Places Catapult
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 2, Episode: 3
Episode Transcript:
Sean: This is Living With AI, a podcast where we get together to look at how artificial intelligence is changing our lives. In many cases it already has. AI is ubiquitous, whether it’s using intelligent sat nav or simply setting a cooking timer via voice control. Welcome to our second season. I’m Sean Riley, host of the Trustworthy Autonomous Systems Hub’s own podcast. In case you’re wondering, we just call it the TAS Hub. We’ve already done a whole season of podcasts, so if you’re new to Living With AI, check out the back catalogue.
All the links will be in the show notes or search TAS Hub using your favourite method. A quick reminder that we’re recording this on the 5th May 2022 so if you’re from the future, cool your jetpack down, park the hover car, pull up an antigravity beanbag because here we go. Today we’re doing something slightly different. The TAS Hub launched about 18 months ago so we’ve got actual data and some running projects to discuss. So the three projects we’re going to talk about today all centre around autonomous vehicles or AVs.
Representing each project is an actual researcher. Joining us on Living With AI today are Mohammad, Helge and Jo-Ann and joining us from industry is Tom. So I’ll get each one of them to introduce themselves and the name of their project and then we’ll delve a little deeper into each of the projects in turn.
Jo: My name is Jo Pattinson and I’m a research fellow at the Institute for Transport Studies at the University of Leeds, and I’m with a project called RoAD. That’s a collaboration between the University of Leeds and Oxford, and RoAD is about the responsible use of autonomous vehicle data. The objective of RoAD is to consider how data is recorded by autonomous vehicles, and what happens, and what should happen, to that data.
Mohammad: My name is Mohammad Naiseh. People in my project call me Mo, so you can call me Mo as well. I am a postdoctoral researcher at the University of Southampton. I am working in the TAS Hub on many projects, and one of them is Inclusive Autonomous Vehicles, where we are trying to understand public perception under some risky scenarios. So we are presenting some risky scenarios to the public and trying to capture their trust perception and their intention to use. I will speak in more detail later.
Helge: I’m Helge. I’m an associate professor at UCL. I’m working on SA2VE, which is actually led by my colleague Bani Anvari, and we are looking into understanding the effect of situational awareness and takeover request procedures on trust between drivers and highly automated vehicles.
Tom: I’m Tom Webster, an engineer at Connected Places Catapult. Here at Connected Places Catapult we’re the UK’s innovation accelerator for cities, transport and places. Our mission is to harness UK innovation to drive growth, spread prosperity and eliminate carbon in the way people live, work and travel. A large part of that mission is helping to connect the cutting-edge research done by academia, such as the people here, to business and to public sector leaders.
So part of the work I’ve been involved in in this space is working on building the enablers for self-driving vehicles on roads, to help deliver benefits to society and build the economy so hence I’m here so to speak.
Sean: Helge, you just happen to be at the top of my screen so we’re going to start with you. Tell me about SA2VE. Is it SA2VE?
Helge: Yes, it’s SA2VE and I’m Helge. I’m an associate professor at UCL and, together with Bani Anvari and colleagues from King’s College London and the University of Southampton, we’ve been working for about 12 months on the SA2VE project, which stands for situational awareness and trust during shifts between autonomy levels in automated vehicles. I think it’s worth mentioning here that there is a difference between automated vehicles, highly automated vehicles maybe, and autonomous vehicles, because most of us think: what is a human driver doing in an autonomous vehicle that is autonomously driving across our countryside, maybe?
But on the way towards fully autonomous vehicles, we will be facing vehicles that have a certain level of autonomy, so they can drive along a straight road, maybe also along a curve, maybe do lane changing. At some point on your journey from A to B, the autonomy will drop to a lower level and then the human has to take back control, because the AI and the sensors that are feeding data into the AI component of the vehicle cannot steer and guide the vehicle. That’s what our project was focusing on: how does the transfer from higher autonomy levels to lower autonomy levels work?
How is the situational awareness of the human who is driving? Because when you take back control from the robot, if you wish, the driver has to be in a position to understand what the scenario is: who is walking from left to right, how can I manoeuvre the vehicle safely to manage the situation that the autonomous system cannot handle. So maybe you have been driving a Tesla or another vehicle that has different levels of autonomy, and you are well aware that there are different types of feedback. One type is visual feedback.
On the dashboard there is a little light that flashes, and at the same time you will hear a beeping sound. It’s sometimes difficult to tell whether that beeping sound is coming from the environment you’re driving through or from your own vehicle. So we have been looking into a haptic driver’s seat, and how a haptic driver’s seat can help increase the situational awareness of a driver and also build trust between the autonomous or highly automated vehicle and the human. The haptic seat, just to explain, is a seat that gives you tactile, mechanotactile stimuli. It gives you the sense of touch on your body, a different means to auditory or sound feedback.
Sean: Fantastic. I just had a mental image of it tapping me on the shoulder then but it is probably going to be a bit like rumbling controllers on consoles and things. Is that more like it?
Helge: Yes, that’s a very good example. Those of you who play games consoles will probably know that if you drive, for example, a vehicle off the street into the open environment, you get vibrotactile feedback through the controller. There are vibrotactile actuators that you can implement. We have chosen mechanotactile feedback, and these are literally lower frequency, so it doesn’t vibrate but just gives you, if you like, a little poke, which is very gentle though.
Sean: So it is a bit like being tapped on the shoulder then?
Helge: Yes. In fact, we are looking into one application, maybe some of our audience know when you’re overtaken by another vehicle and in the side mirror you will see a little light showing that there is another vehicle approaching you from the right hand side or back right.
Sean: In what we’d call the blind spot, isn’t it, so it goes between your rearview mirror and your wing mirror. Yes, you get a little orange light or something.
Helge: Yes, absolutely. In order to understand that there is someone overtaking you from the blind spot, essentially you have to move your head and move your eyes away from the main road. With our haptic feedback system, you could just tap the person on the shoulder, as you said, and inform the driver without them moving their eyes away from the main road: there is someone coming on the right-hand side, please do not change lanes now.
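To make Helge’s blind-spot example concrete, here is a minimal sketch of how such a warning might drive a seat actuator instead of a mirror light; the sensor and actuator interfaces are hypothetical stand-ins, not the SA2VE hardware.

    # Hypothetical sketch: routing a blind-spot warning to a haptic seat
    # actuator rather than a visual or audible alert. SeatActuator and the
    # read_gap_m callable are illustrative stand-ins, not a real API.
    import time

    ALERT_DISTANCE_M = 5.0  # alert when a vehicle is within 5 m in the blind spot

    class SeatActuator:
        """Stand-in for a mechanotactile actuator embedded in the seat."""
        def poke(self, side: str) -> None:
            # A real driver would pulse a low-frequency tactor here.
            print(f"gentle tap on the {side} shoulder")

    def monitor_blind_spot(read_gap_m, actuator: SeatActuator) -> None:
        """Poll the blind-spot sensor and tap the driver when a vehicle is close."""
        while True:
            gap = read_gap_m()          # distance to nearest vehicle, or None
            if gap is not None and gap < ALERT_DISTANCE_M:
                actuator.poke("right")  # eyes stay on the road ahead
            time.sleep(0.1)             # 10 Hz polling loop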
Sean: Jo, could you tell us a little bit about RoAD and what you’ve been doing?
Jo: Sure. So in relation to today’s topic of am I safe in an autonomous vehicle, the data from autonomous vehicles can be used to help make vehicles safer. This is because the vehicle data can be used to determine the cause of accidents. It can potentially be used to tell drivers if they were in a near-miss situation. It can be used to determine liability and causation, and that can hopefully lead to improvements in safety.
So specifically at RoAD, what we were interested in was looking at what type of data needs to be recorded to ensure safe autonomous vehicles, but also who should have access to that data, under what circumstances, for what purpose the data should be collected, what format it should be presented in and who should control it. How we went about this is we carried out expert and stakeholder interviews with manufacturers, insurers, the police and the aviation industry, the last used as a case study because air accident investigation is very data driven.
We talked to engineers and academics, but we also spoke to the public. We interviewed cycling groups and horse riding groups and we also did a public survey of approximately 800 people. The findings were really interesting. From the public perspective we found that, in terms of AV data, people were far more concerned about safety, and the potential for the data to be used to improve safety, than they were about other issues such as privacy.
So we found that people are extremely happy to have data recorded about them, and even video data, whether they’re a pedestrian or a driver or a cyclist; well, actually, particularly if they were a cyclist. The more vulnerable they were on the road, the more keen they were for data to be recorded. This is because they were very keen to have that data available to determine the cause of any accident, should one happen, or to determine liability accurately.
The only place they drew the line was video or audio being recorded inside the car; anything outside of the car, we found people were very supportive of. Now, in terms of the international regulations at the moment about what needs to be recorded, there are frameworks there, but the number of data parameters which must be recorded according to the legislation is actually quite narrow.
There are not that many parameters that must be recorded. So, speaking to the experts and assessing public expectations, we came up with additional data parameters that we thought should be recorded, stored and accessible for the purposes of accident investigation, safety and liability. Specifically, the main headlines were near-miss data, which is not currently recorded, video data and location data. These parameters are not currently required under international or national legislation.
Now, when we spoke to manufacturers, they are currently designing vehicles and systems which might produce some or all of this data. They might produce video data and near-miss data and location data, but it’s not readily recorded and stored in an accessible format. They’re not doing this because it’s not mandatory; it’s not required by legislation. So they’re not currently designing these vehicles and systems with that in mind.
Now, the type of data that we need, we’re talking about huge amounts of data. So the storage of this data, how it is transferred and in what format, that is a real issue. Also, there are no strict rules at the moment about who can access this data. So if you were hit by an autonomous vehicle, you’re not going to be able to get hold of that data very easily. Even the police might struggle to easily access the data. Compare that to the aviation industry, where every single scrap of data, if there’s an accident, goes to an investigating body; they control it, they’re responsible for it and they work out why the accident happened.
Now, the difference between autonomous vehicles and the aviation industry is scale. There are potentially going to be a lot more vehicles than planes, so the aviation model isn’t necessarily the solution, but it’s certainly one that we have looked at at RoAD in terms of coming up with potential solutions. So those are some of the major findings from our project, and I look forward to hearing from the others about their projects.
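As a rough illustration of the kind of record RoAD argues should be mandated, a minimal event-data structure covering the headline parameters Jo mentions (near misses, video, location) might look like the sketch below; the field names are the editor’s assumptions, not a published schema.

    # Illustrative AV event-data record covering the additional parameters the
    # RoAD project recommends (near-miss, video, location). Field names are
    # assumptions, not a published schema; note no in-cabin video or audio.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AVEventRecord:
        timestamp: datetime
        autonomy_level: int            # SAE J3016 level in effect at the event
        location: tuple                # (latitude, longitude)
        speed_mps: float
        near_miss: bool                # not mandated by current legislation
        external_video_uri: str = ""   # outside-the-car footage only

    record = AVEventRecord(
        timestamp=datetime(2022, 5, 5, 14, 30),
        autonomy_level=3,
        location=(53.8008, -1.5491),
        speed_mps=13.4,
        near_miss=True,
    )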
Sean: Great stuff. We look forward to discussing that in a bit of detail shortly. Mo, can you tell us a little bit about inclusive autonomous vehicles, please?
Mo: Yes, sure. So the name of the project is Inclusive Autonomous Vehicles: The Role of Perceived Risk and Trust. In the future we are not expecting all vehicles on the road to be fully autonomous; there will be some semi-autonomous vehicles, some vehicles with low automation, and some fully autonomous ones. With these different levels of automation on the road, we were very interested to study how the public will face different scenarios and how they will react to them.
This is because we know that handing over control to another entity, such as an autonomous vehicle, is one of the deepest fears in humans in general. We sometimes have psychological, and sometimes also physical, issues when we face uncertain scenarios. That’s what we did in our project: we designed some scenarios that we called risky scenarios. When we say risky, it doesn’t only mean physical risk; it could be financial risk or psychological risk, any risky scenario that the public could face when driving autonomous vehicles. So, for example, I’ll tell you one of these scenarios.
We asked the public: you are travelling abroad, you take this autonomous vehicle to get to the airport, and on the way a football match has finished earlier than expected, so there will be a lot of traffic on the road. You have three cars: one with low automation, one semi-automated, one fully autonomous. In this scenario we asked the public, okay, this trip is, for example, to Europe, and you bought a non-refundable ticket worth, let’s say, £100, so the financial risk here is quite low.
Then we manipulated this financial risk: okay, you’re not going to Europe, you’re going somewhere far away and you bought, for example, a £500 ticket and it’s non-refundable. We also increased the risk further, to where you are losing, for example, £1,000. We were trying to see how the public’s trust in, and intention to use, these different levels of automation changes when there is financial risk. It was mostly an experimental approach.
The output of this project, let’s say, is to design some sort of model to capture this trust across different levels of scenario, a model where we can actually compute trust when we have some values for trust in different scenarios. That should be based on real data coming from the public.
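As a sketch of the design Mo describes, the experiment crosses automation level with financial stake and records trust and intention-to-use ratings for each combination; the names and values below are placeholders to show the structure, not the project’s actual materials.

    # Hypothetical sketch of the experimental design: each participant rates
    # trust and intention to use for every combination of automation level
    # and financial stake. All names and values are placeholders.
    from itertools import product

    AUTOMATION_LEVELS = ["low automation", "semi-autonomous", "fully autonomous"]
    FINANCIAL_STAKES_GBP = [100, 500, 1000]   # non-refundable ticket values

    def present_scenario(level, stake):
        """Show one risky scenario; in the real study the ratings would come
        from survey responses rather than being left empty."""
        return {
            "level": level,
            "stake_gbp": stake,
            "trust_rating": None,        # e.g. a Likert response
            "intention_to_use": None,    # e.g. a Likert response
            "blame_attribution": None,   # who is blamed if the flight is missed
        }

    trials = [present_scenario(lvl, stake)
              for lvl, stake in product(AUTOMATION_LEVELS, FINANCIAL_STAKES_GBP)]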
Sean: What sorts of things did you find? I mean, it’s interesting you’re talking about that choice between those different vehicles and then the various risk factors. What were people leaning towards?
Mo: Yes. Actually we found very interesting results: when the level of automation increased, people’s intention to use the higher level of automation decreased. So that means people still do not really trust autonomous vehicles that much yet. We were also measuring another factor, which is blame. We said to the public, okay, let’s say you are late to the airport, you couldn’t catch the flight and you lost £500. Who would you blame? Would you blame the robot who drove you there?
We found that the public will blame the robot more when the level of loss increases. So that raises an interesting question for us as researchers and also as system designers: how can we mitigate that perception of risk so people don’t feel uneasy when they use these kinds of systems?
Sean: On the podcast before we’ve had a professor talking about robot surgery, prostate surgery. The point there was that people were choosing to have the robot do the work rather than the human. I think the suggestion was that there’s an element of experience and repeatable, very high quality results. It’s possible that over time, as people trust these robots to do the job for us, they’re going to choose them over themselves. What do we think about that? Tom, is it a good time to bring you in, as somebody who’s been dealing with autonomous vehicles for quite a while by the sounds of it?
Tom: There are lots of challenging areas here with regard to self-driving vehicles. A piece of work I did a little while ago [unclear 00:20:11] with the Department for Transport was looking at a comprehensive framework for how you would actually go about assuring fully autonomous vehicles on the road. So, for example, the requirements that a regulator would set for self-driving vehicles: how do you actually define the finer points of what those requirements are?
From the work I’ve done, somewhat anecdotally, we built up the perception that society will not accept the same level of accidents from an autonomous driver that they do from human drivers. Anecdotally, if a human driver makes a bad driving mistake and that results in an injury, they can be taken to court and punished. But you can’t do the same thing for an autonomous system. You can’t punish an autonomous system, so people aren’t going to tolerate mistakes.
So you need to have a different threshold for the requirements. This is where the kind of academic work that Mohammad and others are doing comes in, in terms of actually building an evidence base to help make the decisions about what those requirements should be.
Sean: Thinking back to what Jo was saying about the RoAD project and aviation versus autonomous vehicles, because we have autopilots and things like that working in aviation all the time. Where does blame or fault sit in that instance? I don’t expect anyone to be an expert on this, but there is, again, yet another parallel with an industry where automation, or autonomous vehicles in a way, are used extensively, aren’t they?
Jo: Well, from my limited knowledge of the aviation industry, and I don’t think this is the correct term, it’s a no-blame culture, in the sense that when there is an airline accident, what the investigators are looking for is the cause of the accident. The aim isn’t necessarily to assign blame to, say, the pilot or a particular person. It really is to just get to the bottom of things and make sure it never happens again.
I think certainly the Law Commission, in the consultations they have been doing on the autonomous vehicle industry, would like to take the same kind of tack, in that the priority really is to make these vehicles as safe as possible rather than to worry about assigning liability to an actual person.
Sean: Understood, yes. I think it’s interesting that they don’t then take that same kind of approach when we’re talking about things like the data, isn’t it? Because actually, what are they saving there? A little bit of, I don’t know, flash memory. If they’re saying we can’t save that data because of the scale, actually they probably can, but they don’t have to, so they don’t want to.
Jo: Exactly. Yes, certainly. If you were to just tell manufacturers, you’re going to have to save this data, they would. It’s just that they don’t have to at the moment.
Helge: I think there are also other systems. For example, if you look at London, there is the DLR, the Docklands Light Railway, which is also fully autonomous I believe. I think there is not even any real interaction with a human driver. They might take over control to park the vehicle at the end of the day, but you hardly see anyone sitting at the front of the train and driving it. I’m not working in aviation or the railway industry, but when comparing them on a blank sheet of paper, you can clearly see differences in terms of scale, as Jo-Ann already said.
So we will have a vast number of vehicles that will drive either highly automated or fully autonomously. They are usually steered by trained people, maybe, who should have a driving licence, but they are probably not as trained as a person working for the DLR or a pilot.
Sean: But presumably there’s an element here of potential for disaster that scales up in that sense as well. If I crash a car, heaven forbid, hopefully fewer people are going to be injured or killed than if a train or an aircraft crashes into something. So there are elements of scale there in a different way, obviously.
Helge: And safety drivers. So when you are driving fully autonomously, you expect to be able to go to the back of your campervan and make a cup of tea, which is of course not recommended and should not be done by anyone, but this is the expectation: that your vehicle is driving fully autonomously. I think in the aviation industry there are always two pilots, one main pilot and a co-pilot, and they are both monitoring, essentially acting as safety officers for the system.
We are in fact looking, as part of a different project, into replacing safety operators within fully autonomous shuttles in cities in Europe. We are really facing pushback from authorities who say, if you want to have a fully autonomous shuttle inside our city, inside our country, we require a safety officer in this vehicle who can push the emergency button.
Sean: Sometimes that comes down to job protection as much as anything, doesn’t it, potentially anyway. People don’t want the robots taking their jobs.
Helge: I think that one of the reasons is that if accidents happen and these accidents are fatal, policy makers will step in and restrict the use, as we’ve seen with drones. We have restricted areas where we can only operate drones because they have been misused for other purposes. So they will come and restrict the deployment of these systems, and then of course the whole industry and more jobs are at risk. But that will also have an effect on trust, I believe.
Sean: Definitely. I can see that even for a safety operator, having a haptic chair might be very useful, because the less you have to do, the harder it is to keep your eye on what you’re supposed to be doing. Again, anecdotally speaking, on a very quiet road you can drift off, whereas if you’ve got jobs to do and things that take your attention, you’re often a lot more alert, I would say.
Helge: Yes, we have to keep in mind that driving is of course a very visual task. So our eyes should really be on the road we’re heading down rather than anywhere else, as we know by legislation of course. I’d also like to bring to the attention of our audience that this haptic feedback seat can provide one way of providing inclusivity. What I mean by that is, as I mentioned earlier on, the ultimate goal is to head towards fully autonomous vehicles that hopefully anyone can use, no matter what condition or disability they have.
But in our society there are people with certain conditions, for example hearing impairment, who would maybe be able to drive now, and maybe also to use a fully autonomous vehicle, but who, on the way towards fully autonomous vehicles, with highly automated vehicles where audio feedback is required, would be disadvantaged and excluded. So we are also looking into aspects of inclusive feedback and inclusive systems, and how to use, for example, a haptic seat for these purposes.
Tom: I’ve got a couple of titbits to contribute following that discussion. The first is that I see one of the key differences between these industries as the shape of the ecosystem: who the customer is differs. In aviation and in rail, the customer operating the vehicle is a business entity, whereas for automotive, although there are actually some models where the operator of an autonomous vehicle could be a business entity, what we’re used to is that it’s a member of the public.
So when it comes to things like data sharing, for aviation and rail you’ve got a business entity. They have this tight-knit relationship with their insurer and they are all going to get together and work out how they’re going to manage the data, whereas for automotive, traditionally it’s an end customer, a member of the public, who isn’t going to be around the table in that discussion. I think that’s where Jo’s work is really very important, because a manufacturer who’s selling a vehicle to an end user has a duty to protect the privacy of their customer.
They also have a duty for safety and cooperation, but in effect they’re only going to share that data when it’s regulated. Again, you need the evidence base for what data needs to be shared for what purpose, and, objectively, how much do end users actually want that data to be shared when it’s all fully explained to them? Because if you just ask off the cuff, do you want all your data shared, people are going to be tentative and say no. But when you do a more thoughtful experiment, where things are explained to the customer, their opinion is going to be different.
In terms of which industry is automated to what level, I think it’s generally helpful to consider levels of automation as describing the split in responsibilities between systems and humans. So high levels of automation are not necessarily about high levels of complexity or capability; they are about the system having a high level of agency, taking on lots of responsibility while the humans take on diminishing responsibility.
In the aviation industry, the level of automation that is out there in the market is not as high as for rail. For rail you have vehicles that are completely automated; they can operate, and when failures happen they can fail safe without intervention from humans. Whereas for aviation, outside of military and that kind of thing, in civil aviation there is always a pilot who is responsible. The automated systems are only assisting; they are not taking over responsibility.
This is similar for automotive. So on the 20th April there were announcements regarding changes to the Highway Code for self-driving vehicles, but the first self-driving vehicles brought to market will have some limitations. Those are going to be so-called level three vehicles in a technical sense.
Sean: Would you be able to just run us through those levels just for people listening?
Tom: Most people refer to the levels described in a standard published by the Society of Automotive Engineers called J3016, and there are levels that go from zero to five. Autonomous vehicles, in effect, come in at level three. Levels 0-2 are assistive systems, so the driver is completely responsible for driving and is being assisted.
Sean: Things like cruise control and more advanced versions of that?
Tom: Yes, adaptive cruise control is a level one system. Level one is where the system assists with either longitudinal or lateral control, and level two is when it’s doing both. Then level three is where the autonomy, so to speak, comes in: the vehicle is actually taking responsibility for the driving task, but at level three the driver is still responsible for being receptive to a takeover.
So if there’s some kind of issue, the vehicle alerts the driver, and the driver is responsible for taking over. At level four there is increased responsibility, in that the system can take responsibility not just for driving but also for falling back if there’s an issue. At level five, the only real difference is that the system has, in effect, the equivalent of a human driver’s licence and can drive anywhere.
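Tom’s run-through can be condensed into a simple lookup from level to the split of responsibility; the wording below paraphrases his description rather than quoting the J3016 standard itself.

    # Paraphrase of Tom's summary of the SAE J3016 levels; a simplification
    # of his description, not the standard's exact wording.
    SAE_LEVELS = {
        0: "No automation: the driver does everything",
        1: "Assistance: longitudinal OR lateral control (e.g. adaptive cruise)",
        2: "Partial automation: both axes assisted, driver fully responsible",
        3: "Conditional: system drives, driver must stay receptive to takeover",
        4: "High: system drives and can fail safe without the driver",
        5: "Full: drives anywhere, the equivalent of a human driving licence",
    }

    for level, meaning in SAE_LEVELS.items():
        print(f"Level {level}: {meaning}")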
Helge: It’s always very interesting that the SAE, which defines these autonomy levels, always defines them from the technology point of view: what type of technology do you have in your vehicle that makes your car autonomy level x? Very interesting, and what we are also trying to do with our project is to understand, from the human point of view, what does level three mean?
Sean: What can it do for me?
Helge: What can it do for me, and what do I need to do in order to make it work? How much situational awareness do I need to have? How much engagement do I need? Can I actually take my attention away from the road and do a secondary task, for instance? I think that starts with levels three and four, and then of course you have fully driverless cars at level five.
Tom: This is a very important topic, because these products are starting to become quite complicated, and explaining it all to the end user so that they understand is easier said than done. Referring to those announcements made in the UK on 20th April, people are confused. They’re like, so I can watch a movie on the on-vehicle display but I can’t use my phone, why is that? There is joined-up thinking behind it, because if there is a system fault and the vehicle needs you to take over, it can of course alert you through its own in-vehicle systems but not necessarily through your phone.
So there is joined-up thinking behind the rules, but making all those rules make sense to the wider public, when things are paraphrased and when you can’t expect a member of the public to really understand the whole system of systems, is quite tricky.
Sean: Also there’s the external element, of course: the fact that pedestrians need to understand what’s going on as well. We mentioned a little bit earlier, I think it was Mo talking about inclusive autonomous vehicles, the signalling of the change from one level to another. It was just making me think of the signals we have out on the roads now, like L plates for instance, which tell every other driver on the road that that car is being driven by a novice, or maybe it’s their first time in a car, who knows. It just gives you that visual cue. Is there something like that for autonomy potentially, an A plate or something?
Mo: I read some experiments about what Sean is describing now: if you have a car with a visual cue that there is a robot inside, or a visual cue that there is a human inside, and you are a pedestrian, how much would people actually trust this car [unclear 00:38:00] and so on. So yes, there is some work around this. What I actually wanted to add to what Tom was saying about the six levels of automation: this is what’s very interesting to us in our experiment, because we were trying to understand the public’s intention to use different levels of automation.
When we started designing our experiment, we thought, okay, telling the public about level one, level three, level six, I don’t know, they don’t really understand that. So we came up with some sort of novel solution to address this. We presented three levels of automation. We told the public: in the first car, you are not in charge and the car is fully responsible for the driving. In the second, you are the supervisor of the car and you have to take responsibility in certain situations. In the third, you are fully in charge.
When we did our experiment, we found that with this sort of classification the public had a really clear understanding of what they should do in the different types of car. Maybe what this result tells us is that communication between autonomous vehicle manufacturers and the public needs a bit more clarity about these kinds of automation levels.
Sean: It’s a language thing?
Mo: Exactly, yes. So people will actually understand their responsibilities, as Tom was saying about having the phone or watching a movie in the car.
Jo: What I was really interested in, Mo, is that it may be that people understand, okay, in this level of vehicle I am a supervisor, but how much do you think people understand about their own, I guess, physiological and psychological restrictions, in terms of how long it’s actually going to take them to go from looking at the entertainment system to regaining the type of situational awareness they need to take over? That’s quite a complicated concept to explain to people, I think.
I really think that’s quite a big stumbling block when we’re talking about these different levels of automation. I was just wondering whether your research looked into that at all?
Mo: It is indeed very difficult, but we haven’t reached the point yet where we have some sort of scenario as you described. No, we didn’t really look at that yet.
Sean: Because actually the interesting thing there is, I mean, we see there are emergency response people who have to go from 0-100 percent in a very short period of time. But when you’re driving usually, you get in a car and it’s very slow, because you’re turning the car on, you’re accustoming yourself to the controls and you’re setting off slowly, and then by the time you’re five minutes into the drive, hopefully you’ve, I don’t know, warmed up, I’m going to say, and hopefully you’re alert and you know what’s going on.
Whereas if you’re halfway through watching the latest Marvel movie and then something actually happens in real life, who knows how you’re actually going to respond, potentially completely wrongly.
Mo: Do you want to miss the scene or do you want to miss the accident?
Sean: It’s fine. We’ll just bump into that because it’s a really important bit of the movie.
Mo: I have got good insurance. But on this point that Jo-Ann was raising, what is the time really required to go from your secondary task back to fully driving again? We have actually been looking into this as part of our studies, and we are basing our models on the Endsley Model, which divides situational awareness into three different levels: perception, comprehension and projection.
All of these happen, of course, in a fraction of a second before a decision-making process is concluded and an action is actually taken by the driver. But you can imagine all of these processes happening, from getting the sensory input, most of which is of course through vision, to the brain processing all of this, and then an intuitive action that should happen to resolve the situation that the autonomous vehicle cannot handle.
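One way to read the Endsley model as described here is as three sequential stages whose latencies, plus the action itself, add up to a takeover time; the timings below are invented purely for illustration, not measurements from the project.

    # Illustrative only: treating the Endsley situational-awareness stages as
    # sequential latencies that sum to a takeover time. Timings are invented.
    ENDSLEY_STAGES_S = {
        "perception": 0.4,      # registering the sensory input (mostly visual)
        "comprehension": 0.6,   # understanding what the situation means
        "projection": 0.5,      # anticipating how it will evolve
    }
    ACTION_TIME_S = 0.8         # executing the manoeuvre itself

    takeover_time = sum(ENDSLEY_STAGES_S.values()) + ACTION_TIME_S
    print(f"Illustrative takeover time: {takeover_time:.1f} s")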
I’d like to mention one interesting statistic that we found. We did a survey of all accidents that happened in the state of California, I think between 2014 and 2020. You can imagine that there is a high density of autonomous vehicles in that state, and the reports are all publicly available. We looked into the incidents or accidents that were either caused by the autonomous vehicle or not by the autonomous vehicle. For each of these categories, we looked into what mode the autonomous vehicle was in: was it driving manually, so that was level zero?
Was it fully autonomous, level five, or was it in a transition mode, where you don’t really know whether the human or the vehicle is in charge of the driving situation? When you look into the category where the other party is at fault, so not the vehicle that can potentially go autonomously, then I think about 60 percent of the accidents in that category happened when the vehicle that can go autonomously was at full autonomy. So in a way you can say there is an attraction of other people bumping into autonomous vehicles, and the question is why that is.
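The 60 percent figure comes from categorising each reported incident by who was at fault and what mode the AV was in; a sketch of that tabulation, with invented sample rows standing in for the California reports, might look like this.

    # Sketch of the tabulation behind the statistic: among incidents where
    # the other party was at fault, what share happened while the AV was in
    # fully autonomous mode? The sample rows are invented, not the dataset.
    from collections import Counter

    incidents = [  # (at_fault, av_mode)
        ("other_party", "autonomous"),
        ("other_party", "autonomous"),
        ("other_party", "autonomous"),
        ("other_party", "manual"),
        ("other_party", "transition"),
        ("av", "transition"),
    ]

    other_fault_modes = [mode for fault, mode in incidents if fault == "other_party"]
    share = Counter(other_fault_modes)["autonomous"] / len(other_fault_modes)
    print(f"Other party at fault, AV fully autonomous: {share:.0%}")  # 60%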
Sean: Well I look forward to a future project that ascertains the reason for that. I think that’s just about all we have time for on this episode so it just remains for me to say thank you to our guests this week. So thank you very much, Jo.
Jo: Thank you very much. It was a pleasure to be here.
Sean: Thank you, Helge.
Helge: Thanks, Sean.
Sean: Thank you, Mo.
Mo: Thank you very much.
Sean: Thanks to you, Tom as well.
Tom: Thank you. Yes, it’s been very interesting being part of this, thank you.
Sean: If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited. Our theme music is Weekend in Tattoine by Unicorn Heads and it was presented by me, Sean Riley.
[00:45:28]