Living With AI Podcast: Challenges of Living with Artificial Intelligence

Legal and Ethical Problems with AI

Sean Riley, Season 1, Episode 12

Featuring Richard Hyde on the legal and ethical ramifications of trust and AI, with panel members Alan Chamberlain and Paurav Shukla on everything from selling your home cooking on WhatsApp through to wondering how much we want to know about how clean the factory that made our food is.

 Alan Chamberlain

 Paurav Shukla

Richard Hyde


Home Cooking Food Standards

Passengers Movie (Internet Movie Database)

12 Angry Men (Internet Movie Database)

Nestlé, Carrefour Put Mashed Potatoes on the Blockchain (Techmonitor)

Uber stripped of London licence due to lack of corporate responsibility (The Guardian)

Texas Energy Crisis (The Verge)

Podcast production by boardie.com

Podcast Host: Sean Riley

Producer: Louise Male

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at
www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.




The UKRI Trustworthy Autonomous Systems (TAS) Hub Website




This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 


Episode Transcript:

 

Sean:                  Welcome to Living With AI, where we discuss artificial intelligence and how it's changing our day-to-day lives. Our feature today surrounds the idea of the legal and the ethical aspects of autonomous systems. We will be joined by Richard Hyde, Professor of Law, Regulation and Governance in the School of Law at the University of Nottingham. Before that, let's meet this week's panel chatting all things AI. This week it's Paurav Shukla and Alan Chamberlain. Alan is Senior Research Fellow in the University of Nottingham's Mixed Reality Lab where he mainly works in human-centred design. Welcome back Alan.

 

Alan:                  Thanks, hi.

 

Sean:                  And regular listeners will know Paurav is Professor of Marketing at University of Southampton's Business School specialising in luxury goods. Nice to see you again Paurav.

 

Paurav:              Hello, hello. 

 

Sean:                  Well caught in the middle of all this intelligence it's me Sean Riley, usually holding a video camera but I’ve still got this old microphone leftover from when I used to play in a band so it’s, it’s good it's getting a little bit of use and we're recording this on the 18th of February 2021. Well we yeah, we normally start with a little chat about what's going on in the world of AI, well one thing I noticed was that in Texas the energy grid’s suffering some severe problems because they're having some unseasonable weather. 

 

Now the- what's the word, the cynical among you may wonder if there's some connection between the fossil fuels, renewable fuels, misinformation that's going on and the fact that Texas is the Lone Star State and appears to have isolated itself from all the other energy providers in the area. But I'm wondering if perhaps AI couldn't have provided some kind of solution here you know, we have some smart infrastructure in the UK, we don't have the problems they’re having but mind you, we don't have the extremes. Anyone got any thoughts on that? No, that's good then let's move on.

 

Paurav:              I think Sean, one of the things I would like to point out here is it's not just a Lone Star State type of an issue, it's that its impact on global value systems and global supply chains is humongous. For example companies like Toyota, their supply chains are right now really hampered because of this outage. The same is happening with other automobile manufacturers like Honda and others who have plants in Texas and the surrounding areas, because they are unable to connect with their own systems.

 

And so the global world that we live in now when major [unclear 00:02:34] parts of those world in a way like I, you say sometimes to my students, “When Shanghai sneezes you know, London freezes too.” And so that is how it is happening nowadays and that event we cannot see it in isolation that you know, a Lone Star State is you know having some trouble, we are going to see its global impact as we move along. And I think that is something, that is something we need to think about with AI also that you know, AI in one area failing could have its own global impact throughout.

 

Sean:                  And throwing a global pandemic on top of all of this and yeah, I mean this is the, this can be seen as a, as a, a problem with having kind of corporations who are obviously running things leaner and leaner and leaner. Alan what's your thoughts on this?

 

Alan:                  Well my, my initial thought is that AI needs- is a digital thing really and needs some energy to produce the energy to, to you know, to control the computer, to power the computers and such like. So if you get a big outage for a long amount of time in a factory can you imagine how long it's going to take to restart all the stuff, reset it? I think, I think my other thing was I was a bit surprised because I, I remember a few years ago I did a, did a keynote at a workshop. And the, the project was a European project and one of the people got up there and discussed how the energy systems in the UK that they have to be defended a lot of the time because you can hack into them. 

 

And when you can start messing around with people's energy or you know how much somebody's consuming and you, and you can then guess, I suppose, their patterns of behaviour, it gives insight into the public and, and you can kind of infer things or assume maybe about the way those people behave. But I, I guess, I mean batting it back to Paurav, I mean how fascinating that you might be able to steal your competitor's production methods, if you can see what machinery they're using, and assume or infer and then make your production methods a bit leaner and a bit cheaper, and you could be anywhere in the world.

 

Paurav:              Very much so but that's what is happening already Alan in the global automotive industry. They are learning from each other continuously, they are learning from each other’s supply chain, there are many competitors who are actually using each other's supply chain so they are working together. Many times if you see some of the cars being towed away into you know, in those big trucks and so on and so forth you will find that many of these competitors are working together in terms, not just in terms of you know, selling and supply chains but also in terms of design elements, in terms of production. 

 

So if you look into some of the cars, especially mid-size cars from many of the Asian manufacturers or American manufacturers, you'll find exactly the same design in terms of chassis and, you know, overall body outlook. What you will see is a different logo or a different grille design and a different seat interior, and you would, you would start asking that question. I don't want to name those companies but you know, if you just go into any car showroom of multi- you know, utility vehicles, you will find that suddenly you start thinking, are these actually competitors? And that is also something which is happening in the automotive world.

 

Sean:                  But thinking of that, that, the, the hacking angle of that- I mean I was just thinking then, when you think of these places like Texas and these big open spaces over in America- this is a slight segue, but there's automated agriculture out there isn't there? I'm now worried about hacking combine harvesters- I'm making a big leap from automotive to, to agriculture. What do you reckon to that? Because often these are, these are guided aren't they, by computers?

 

Alan:                  Oh well food, I mean we were just discussing this earlier off- sort of before the session. Food and agriculture it's a ginormous sector and it’s, it’s, it's- I suppose what you start- I think of the world more as- and AI more as things are part of the system. So when you think about Internet of Things or you think about data coming from somewhere and then you think about combine harvesters and, and satellites reporting weather imagine if you could- this, this probably isn't going to happen but if you screw up your competitor’s data that they've got about the best days to harvest, harvest you, you could effectively cause some sort of famine or you could knock prices up of wheat or- and of course this has a knock on effect on the, on the stock exchange, it has a knock on effect on currencies, has a knock on effect on, on employment, poverty, so on. It, it's- I don't know it's a scary, it's a scary world really isn't it that we're in when we think about what can happen. Ugh yeah, anyway-

 

Sean:                  Let's also think about the benefits of this because it's easy for us to jump into the killer robots are going to take over the world side of things. But of course as you say you know these, these organisations are getting data in which helps them work out the optimal time to harvest. And I know- slightly different thing but on a small scale here, some breweries are sharing the information of how long hops might take to you know, I don't know what the technical term is, but you know ferment is that right ferment? I don't know.

 

Paurav:              Yeah, yeah.

 

Sean:                  But yeah and, and when the optimal time is to do things based on the climate. So if one small producer is working out that now it's the time to, to  I  don't know decant the beer I suppose, I don't if that's the right technical term, they will share that information with their competitors. Again it's similar to the automotive industry, so it's great to know that tech can help do this as long as, as long as we trust the decisions being made I suppose.

 

Paurav:              Yeah, and I think this is, you know, the importance of agriculture- not just when we think about the United States but particularly in emerging markets like India where, you know, small-scale farming is still the norm. At that point in time the Internet of Things could really, really, you know, bring productivity up, because the Internet of Things can tell the farmer about the land and, you know, the soil in terms of its humidity and temperature and moisture, and, you know, the crop health and so many things that can be picked up through that. And that could lead to a real revolution in agricultural science I think, and that is already happening- we are, we are already seeing parts of this happening in different, different areas of the world including Africa, and I think this is, this is, you know, the next green revolution as we may, we may be seeing.

 

So as we're talking about this, you know, farming and, and Internet of Things, one of the things is that all of this is about data. Who controls it, who uses it, where does it get stored and so on and so forth. And so it has so many ramifications, because a small-scale farmer would not have the clout that a large-scale farmer or a company has with that AI company or that IoT company which is dealing with that data. So how are we- again you know, we are talking about AI as the next great equaliser, is it going to be that because of that data? And so we certainly need some sort of regulation and ring fencing around it.

 

[00:10:12]

 

Sean:                  I think it's interesting that- I don't know, do a lot of the small-scale farmers in India, do they form cooperatives or? I'm just wondering, I know in some, in some kinds of agricultural industries, like wine for instance, that small producers join a cooperative, and would it be the cooperative that holds that data? I don't know, I'm just kind of spitballing I suppose.

 

Paurav:              Yeah, so what we, what you find is in, in some states, in some areas and you know small-scale farmers do create cooperatives. But at the same time there are still so many lone farmers because there are village hierarchies, status hierarchies, there are so many complexities involved in this village life when you go into emerging markets. And this is across the emerging markets, so we are talking about Latin America or you know, Africa or in, in the Indian subcontinent or parts of Asia. Wherever there are small farmers and there's village communities there are, there are at times cooperations but there are also frictions and so there are going to be you know, different, different takes to it.

 

Sean:                  And there'll always be the, the old hand who says, “I'm not trusting that newfangled computer, I know when my crops are ready.”

 

                            The feature interview today is with erstwhile Living With AI panellist and expert in law regulation and governance Richard Hyde, welcome back to Living With AI Richard.

 

Richard:            Hi Sean.

 

Sean:                  Richard's research spans food and regulation through to regulation in advertising and marketing. I do hope I'm not overselling you Richard, but welcome back. It's such a large topic to, to cover, this you know, regulation and governance- where's the best place to start with this?

 

Richard:            I think it is, I think, absolutely right, it's, it's a huge, huge topic to, to kind of think about. I, I think what might be worth kind of starting out with is sort of a, a broad sketch of, of what it is we're kind of talking about here. So we're, we're talking about how we as a society choose to ensure that what we do with AI, and broadly with autonomous systems, is going to be helpful to us right? It's, it's ensuring that in terms of safety these sorts of systems are not going to be responsible for injuring people; in terms of kind of the economic interest in our pockets it's not going to lead to us you know, suffering losses or being misled or, I don't know, being in some sort of way put upon so that we make decisions that we wouldn't otherwise do. And it's also thinking about you know, how we can regulate these systems to ensure that the benefit that we're actually likely to derive from autonomous systems can properly be realised, and we don't have this sort of situation where we have autonomous systems getting such a bad rap that, that they can never be adopted.

 

And so, so the balance and the, the challenge for kind of, policymakers, for lawyers working in this area is to kind of think, well what do we want and how do we write our laws and our regulations to make sure that we actually get what we want out of these autonomous systems? Which of course we know that it's difficult right? It's- because also not only are we, we writing laws to govern things that are quite complex we're writing laws potentially to govern things that don't exist yet. But and so we we're facing quite a challenge as lawyers and hopefully we're up to it but we'll see.

 

Sean:                  I think that's, that's interesting, you mentioned there- it struck me straight away when you said we're writing laws to govern things that perhaps don't even exist, because you know, traditionally the law is often quite a way behind, for want of a better word, real life, and I don't mean that in a kind of derogatory way. But what I suppose I mean is it's often fighting to catch up. So for instance social media was governed maybe by the laws for television when actually it's so much more dynamic and so much more, I don't know, easy to do than to get onto a television programme that it, it needs different laws right? So how do you start to approach something that doesn't even exist?

 

Richard:            Well this is the thing, you're absolutely right that the law has a history of being quite reactive to things, and you only have to go back- you know, social media is a great example, but think about cars right? There wasn't a speed limit until the 1930s; partially that was because cars couldn't get up to a decent speed at all, but it was just that the law was kind of reacting to, to these things. And similarly radio waves right? When radio started to come out, when Marconi invented the radio, there weren't laws in place to deal with what you were going to do with radio waves, that came later. And so the law has a history- it's a lot easier to write things for problems that you've identified.

 

The problem with that is- and we've seen it In some ways with social media, is that by the time you come to write the regulations kind of behaviours are baked in and, and things start happening and it becomes a lot more difficult. So one of the things that you have to do when you think about how should we regulate in, in future is we kind of have to have a level of imagination and that's, that's you know really kind of difficult somehow sometimes. Particularly when lawyers are not necessarily totally in tune with what's happening with the kind of designers and the people who are programming in this sort of area. 

 

I mean one of the, the things that I think is, is really exciting about the TASHub is actually that I get to talk to people who are doing these sorts of things, who are thinking about you know, what it's going to be like in 10, 15, 20 years and get to kind of have that preview to think about what the law is going to be in that, in that period of time. But generally if you're sitting in, in government or you know, in the European Union thinking- or in the US or wherever thinking about how you're going to regulate these things in the future it takes a lot of talking to people and a lot of thinking about what's going to happen in the future to try and future proof your laws, and even then it's not necessarily easy to do that's why-

 

Sean:                  And it's presumably a, a moving target anyway while you're trying to ratify something it’s, it’s still changing?

 

Richard:            Oh absolutely and I was, I was chatting about this this morning to, to one of my colleagues about the regulation of misleading advertising and the regulation of these sorts of things. The underpinning law for that was written in or started to be written in 1992 by- when the internet was you know very small and confined to a few universities and the US Department of Defence and, and that sort of thing. By the time it came into being much- sort of in the UK in 2008 there had been a big shift in our consumption practice, how we buy things, how we, we, we find these sorts of things. 

 

It was really a massive shift between these two, two different types of, of where we, where we were and that has caused problems because you know, the way that we advertise to people now it's not the same as we advertised to people in 1992. And it’s, it’s really difficult to do these, do these sorts of things and manage to kind of hit this moving target and that means you've got to be agile right, in terms of being able to, to, to make changes to the law. 

 

One of the things that it pushes you towards potentially is having regulations that are kind of softer, that aren't like really hard statutes passed by Parliament etc. Things that are created by you know, expert bodies or, or by even by industry themselves you know self-regulatory codes which can be done much quicker. But often it's kind of as, as a mode of governance it's a different mode of governance because the people who are writing them are not you know, your typical legislators or your, your typical policymakers. 

 

They are kind of people who understand the area and perhaps people who are interested in the area and genuinely want it to, to, to go forward. And that in itself poses issues because you have to bring in you know, some other perspectives we have in the TASHub into, into that sort of thing about you know, this is good but is it the right thing we want to do? Is it the way we want to develop? And that's the kind of real challenge in dealing with these really fast moving areas of the law and fast moving areas of technology and ensuring that we get what we want to get out of it at the end of the day. 

 

[00:20:06]

 

So it's not you know, the easiest thing to do, and the final thing I'd say on that is look at the example of autonomous vehicles right? The government passed legislation, the Automated and Electric Vehicles Act, to, to kind of allow it to govern autonomous vehicles and, and self-driving cars and things like that. Within about two years they'd engaged the Law Commission, the, the big UK sort of law reform body, to do a massive project to figure out whether they'd got the law that they passed right. And basically the Law Commission have spent the last three years looking at this, this question of, “How exactly should we regulate autonomous vehicles?” Two years after the law had initially been passed.

 

Sean:                  Well it's, it's interesting you mentioned that because I, I was thinking about how these new laws- I'm, I'm seeing the law as a kind of jigsaw puzzle, and are these new pieces or are they adapting old pieces, and how much of that- you know, even if it is a new piece it has to, it has to, for want of a better word, segue with some of the stuff that's out there. So if you make a, a law about autonomous vehicles that has to fit in with the traffic laws that already exist, right?

 

Richard:            Yeah absolutely, and it's, it's a really interesting puzzle as to how, how you, you fit all these things in together. I mean one of the things that's been a real challenge around you know, the law and the autonomous vehicle side of things has been that a lot of our traffic laws are written to think about judgement, to allow us as humans to make decisions about what we're going to do. And the Law Commission gave like, a couple of examples of where it, it requires judgement and, and one of these is kind of a reasonableness standard. So you've got to drive in, in a reasonably careful way right? So the general standard expected of a, a car driver is that they're a reasonably careful and skilled driver, and that means that that reasonably careful and skilled driver sometimes is going to make mistakes that, that won't necessarily be something you wouldn't expect from that driver.

 

But they're just going to happen, accidents are going to happen, one of the questions in the current Law Commission consultation about these things is you know, “Should we expect that standard from the autonomous vehicle or should we expect more? Should we expect the autonomous vehicle to be more careful than we'd expect our standard driver to be? What about if we're a pedestrian, should we expect the autonomous vehicle to stop where perhaps it wouldn't be reasonable to expect a, a non-autonomous vehicle to stop?” And so I think that's a really difficult question of how you fit these autonomous systems into what's, what's out there. 

 

The other thing that they kind of brought up was the Highway Code right? So the Highway Code obviously codifies what you expect these reasonably careful and skilled drivers to do, and you can see that there's all these sorts of different standards in there. One of the things that it talks about in the Highway Code is you know, you shouldn't go on the pavement unless you're moving out of the way to kind of let emergency service vehicles through and things like that, and it's reasonable to do so. And you shouldn't edge through crowds where you've got lots of people in front of you, but you might choose to if that's safer than staying where you are.

 

But how do you write that for an autonomous vehicle? How do you write the Highway Code which is all about you know, judgement and reasonableness and taking these decisions and, and difficult sort of standards based judgments that humans do quite well? I mean not perfectly, I mean look at the, the road traffic accidents we have, but they don't do it perfectly, but we're conditioned to kind of do these sorts of things. And, and, and we all know right, that if we're driving a car and there's an ambulance coming in the other direction and it can't get past us, we're going to kind of edge over and, and get out of the way. 

 

How do you tell an autonomous vehicle to do that? How do you write or provide enough training data to allow an autonomous vehicle to learn that that's what it's supposed to do in the circumstances but only do it in those kind of really limited circumstances that we, we want to do? So it's really difficult to kind of fit it into this jigsaw of pre-existing law that. that we have and, and so it's yeah, it's, it, it's really challenging.

 

Sean:                  I, I did a little bit of research before we started talking and I looked at, I looked at a legal document- forgive me for forgetting the name of it- but one of the things it sort of made me remember is that when you look at anything that's legal, there's a huge amount of the document devoted to defining and definitions of phrases or words. So- I'm just going back to your example there- if reasonable has been defined in the legal document then there needs to be a whole set of kind of explanations as to what the definition of reasonable means. Which isn't massively different to computer code in some respects.

 

Richard:            Yes and, and this is a real challenge is, is, is, kind of in some ways there is a similarity between writing code and writing law. But the problem is that a lot of the, the kind of the syntax of law and the way that you build up over years- I suppose it's like compilers right, and you, you, you build up this stuff over years and years and years is very difficult to distil down into a single sort of command line that you're, you're potentially thinking about. And it, it's one of the, the things about, about laws as well is similar to you know, how you'd you know, go through a kind of Beta testing and Alpha testing process and you know kick out the bugs of your computer programmes.

 

A lot of what sometimes happens is when the law comes out you realise it perhaps doesn't work for the, for the purposes that it has, and the Law Commission having to go back and look at the autonomous vehicles law two years after it was done is, is kind of a good example of that. But it, it's difficult for lawyers to kind of think in that sort of structured and logical way, though it is something that we are, are trying to do. But part of the thing is a lot of it is about the build-up of the law that comes from cases, that comes from other statutes, that comes from deferring to kind of human judgement at times.

 

Over time putting that all into, into a system it is a complicated thing to do, it would be nice if we got to a, a point where all the law was machine readable to start off with and then we could kind of kind of go from there. But, but yeah, you're absolutely right there's a lot of genuinely interesting thought at the moment about how you codify some of, some of the law to kind of ensure that that our systems and potentially autonomous systems themselves can read and apply the law as it is. I mean one thing that I- even when I was in practice sort of 20 odd years ago I, I was desperate to do was to kind of find some way to, to get a, an autonomous system to do the, do the job that I was doing. Which was looking at advertisements and figuring out whether they were legal or not, whether we could put them out in the middle of Coronation Street on a Tuesday night or, or whether we, if that happened we were going to get slapped on the wrist. 

 

And so, so that's the, the kind of the thing that you might see in future in developments of kind of autonomous systems if you can get the law in a kind of machine applicable vein. It's not just in terms of building autonomous systems to, to, to work in a way like autonomous vehicles and obey the law but also to have autonomous systems that judge compliance with the law and judge whether that you are you are meeting your legal standards. And that starts to get potentially problematic when you, you start to think about you know AI judges and that sort of area.

 

Sean:                  Well it, it did make it made me wonder actually, that what- and this is kind of wondering from a kind of, not knowing the procedure or the process. So giving you a for instance before I even explain what I'm trying to say, if I do that thing where there's an ambulance so I creep onto the pavement and you know, technically break some rules and a policeman watches this happen, that police- sorry a police officer watches this happen. That police officer is, is probably unlikely to do anything about it from a kind of a legal point of view in, in terms of kicking me off or, or arresting me or prosecuting me or, or- sorry wrong terminology but you know in in terms of charging me for anything. 

 

However what happens when we take that to the next stage? If there are- and again I'm going to use that analogy of it being, the law being code. If there are bugs in the law or things that cause the law to crash, presumably the ultimate on that is it ends up in court because something goes wrong and then they judge- how do you, that, how does that feed then back into changing that law or does it?

 

Richard:            Well obviously I mean if you've got problems or you start to see problems in law there is this kind of feedback mechanism. If we're talking about criminal law and, and people start going to court and start being either wrongly convicted or you know, wrongly acquitted in the view of the government of the day, there will be this kind of feedback mechanism to kind of say, “The law should change.” Sometimes that can happen quite quickly, but particularly if we're talking about statute or parliamentary-passed law it's, it takes ages. It's like really, really difficult to, to pass statutory law without a significant amount of time being taken up on debate.

 

[00:31:37]

 

Now of course if you absolutely need to you can pass it in a day but it, it's likely to take a lot of time and that's why this kind of, this push towards kind of more agility to kind of hope to put things in guidance right? And to put things in kind of soft codes or standards and things like that that you hope your, your system will, will meet because those are, are easy to change, they're also easier to kind of write in a less formalistic manner. So think about if you're, if you have them expressed as design rules rather than kind of the sort of statute things that you have. 

 

So I mean the easiest example perhaps to think about is websites, right? Websites are designed in particular ways and there are certain things that you don't want people doing with their websites: dark patterns, things on the websites that force people to make bad decisions, or not ideal decisions I suppose. If you want to get to the designers, perhaps it's easier to create a set of design rules that say, “You should do this, your design should do this, it shouldn't do that.” Rather than having this kind of complex statutory framework and requiring them to interpret it. 
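Design rules of the sort Richard describes can even be made machine-checkable. Here is a purely illustrative sketch in Python; the rule names and the page model are invented, not any real standard:

```python
# Hypothetical sketch: design rules for a checkout page expressed as
# simple predicate checks, rather than as statutory language.

def no_preticked_boxes(page):
    # A common "dark pattern" rule: opt-ins must not be pre-selected.
    return not any(box["checked"] for box in page["optin_boxes"])

def cancel_as_easy_as_subscribe(page):
    # Cancelling should take no more steps than subscribing did.
    return page["cancel_steps"] <= page["subscribe_steps"]

DESIGN_RULES = [no_preticked_boxes, cancel_as_easy_as_subscribe]

def audit(page):
    """Return the names of any design rules the page breaks."""
    return [rule.__name__ for rule in DESIGN_RULES if not rule(page)]

page = {
    "optin_boxes": [{"checked": True}],   # a pre-ticked marketing opt-in
    "cancel_steps": 7,
    "subscribe_steps": 2,
}
print(audit(page))  # both rules fail for this page
```

The point of the sketch is only that a rule expressed as a predicate is trivial to change or extend, which is exactly the agility being contrasted with statute.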

 

And so one of the things that I think people who are, are somewhat legally savvy and somewhat tech savvy play an important role in kind of doing is translation right? Is, is finding this, this place where you can talk between the different communities of the people who are working in the kind of areas of, of designing these fantastic systems that the law perhaps hasn't thought about and who understand the law. And sort of think about how the, the law as it exists can apply to these things in the future and kind of translate the concepts that we're having into these sort of guidelines and, and, and softer ways of expressing it. 

 

And perhaps into a kind of thinking about how you express it in code or what sort of training data you'd need to provide to allow systems to kind of learn the law right? Because and, and that's the sort of thing that again, I think that the TASHub’s great about doing is, is sort of allowing that opportunity to have that exchange of views and, and just get involved in that sort of thing.

 

Sean:                  I think there's obviously a scale here, because something like the website example is great for explaining that kind of design code. When it comes to things that can have a lethal impact, for example, like drones, like autonomous vehicles, it's harder to understand how signing up to an industry code would keep on top of that. To step away from autonomous vehicles for a minute: I used to work at the BBC, in news, and we had a sort of idea that if something impacted your heart or your head it was newsworthy. I'd add to that: if it involves your stomach. So just to crowbar in your food direction- autonomy and food standards- how do we govern autonomous systems in the world of food?

 

Richard:            It's really interesting and really complicated. I mean, okay, there are some autonomous systems that do the kind of functions that you need in a big food manufacturing factory anyway, right? So you've got things like robot vacuum cleaners, for example, that are buzzing about in factories, potentially cleaning up, doing autonomous stuff. What's really interesting about those is that, as a food enforcement person going in to check whether the factory is sufficiently clean, if their autonomous vacuum cleaners are there you can pull all the data off them. You can look at where they've cleaned, you can see how often they've cleaned, and potentially you can see what sort of stuff they've pulled up from the factory, and you can go, “Okay, now I know a lot more about your cleaning practices than before.” 
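The kind of cross-check Richard describes, comparing claimed cleaning times against vacuum telemetry, can be sketched very simply. This is a purely illustrative Python example with an invented record format:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: cross-checking a factory's signed cleaning sheet
# against telemetry pulled from an autonomous vacuum cleaner.

signed_sheet = [datetime(2021, 3, 2, 14, 0)]   # "cleaned at 2pm on Tuesday"

vacuum_runs = [
    {"start": datetime(2021, 3, 2, 13, 55), "area_covered_pct": 38},
]

def verify(claims, runs, window=timedelta(hours=1), min_coverage=80):
    """For each claimed clean, was there a nearby run that covered enough floor?"""
    results = []
    for claim in claims:
        matching = [r for r in runs if abs(r["start"] - claim) <= window]
        ok = any(r["area_covered_pct"] >= min_coverage for r in matching)
        results.append((claim, ok))
    return results

for claim, ok in verify(signed_sheet, vacuum_runs):
    print(claim, "verified" if ok else "NOT verified by telemetry")
```

Here the telemetry contradicts the sheet: a run did happen near 2pm, but it covered too little of the floor to count as a clean, which is exactly the extra information an inspector would not have had from the signature alone.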

 

Because before, when you went in, you saw somebody had signed the sheet to say, “I have cleaned the factory at two p.m. on Tuesday,” and that is all that you saw. Now you've got this data from these autonomous systems that can verify that, and that, you know, might be a good thing. Whether you'd want to see that as a consumer, right, who was buying the food: how often has my, I don't know, baked bean factory been cleaned while they're making my beans? How often has my other piece-

 

Sean:                  Your Weetabix for instance.

 

Richard:            Yeah, yeah, how often are they cleaned, or is there likely to be any contamination here, because they'll have picked up all the contaminants. So that's going to be, you know, a really useful thing potentially. Similarly, lots of factories have got cameras in them now. If you're again thinking about how clean is my factory, or how are people working in my factory- is everybody wearing their gloves, are they- whatever, could those videos feed an autonomous system that can use computer vision to pick out some of those behaviours that you might find problematic? 

 

Now that might help you as a, a food business to say, “Hang on a minute there we've got somebody who is clearly not doing what they're meant to do.”, “Oh they're working on these, this line. We might have to do something about that, test it, check it whatever.” Or, or similarly again, if you're a consumer do you want to see that, do you want to see how the sausage is made in this case literally to check whether you think it’s, it’s appropriately hygienic or, or whatever? Similarly put something on a tap to sense when the tap's being turned on or, or whatever. 

 

So that's, that's kind of the safety side of things, similarly you can put kind of sensors in places to check that you think, what you're getting is what you think you're getting. So you can potentially build sensors relatively cheaply now that can detect whether this is horse meat or not in your lasagne or whatever and you can- rather than having massive, destructive testing that used to be in place you can put in place a system that will make sure that what you think you're buying for the price that you're paying for it is the sort of thing that you're buying. 

 

And then we go down to the smaller scale: robot chefs, right? You go to your favourite restaurant and there's not the human chef there anymore, there is the robot chef, and potentially, you know, it can do 6,000 recipes and it can cook eight things at a time, and all the sorts of advantages that you might have from a kind of embodied autonomous chef that sits in your kitchen. 

 

But how do you make sure that it's doing what it's meant to do? How does it follow the recipes and the hygiene laws that it's meant to be following, and things like that? It becomes a really difficult question of how much data you want as a regulator to see, to make sure that this autonomous chef is doing what it says it's meant to be doing and that it's complying with the hygiene laws.

 

[00:40:23]

 

Does it start to learn to take shortcuts, for example? Because the training data shows that people who are, you know, top pizza chefs or whatever perhaps don't always wash their hands between applying different ingredients to their pizza. And then do you have the skills to interrogate that data? If you are a regulator, are you the sort of person who can go through lines of data to understand whether this robot chef is complying with its requirements or not? 
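A regulator interrogating that kind of event log might be doing something as simple as the following Python sketch; the event names and log format are invented for illustration:

```python
# Hypothetical sketch: scanning a robot chef's event log to check that a
# wash step occurs between handling different ingredients.

log = [
    "handle:peanut", "wash", "handle:flour",
    "handle:shellfish", "handle:flour",      # shortcut learned: no wash here
]

def missing_washes(events):
    """Return indices where an ingredient change wasn't preceded by a wash."""
    violations = []
    last_ingredient, washed = None, True
    for i, event in enumerate(events):
        if event == "wash":
            washed = True
        elif event.startswith("handle:"):
            ingredient = event.split(":", 1)[1]
            if last_ingredient and ingredient != last_ingredient and not washed:
                violations.append(i)
            last_ingredient, washed = ingredient, False
    return violations

print(missing_washes(log))  # → [3, 4]
```

Even a check this crude makes the point: the question is less whether the data exists than whether the regulator has the skills and tooling to ask it questions like this.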

 

And generally I think there will be a shift in the kind of skills that regulators will need as these factories and restaurants etc. become more autonomous, in order to understand them. And if I were a food business I might be a little unsure whether I wanted as much data flowing out from my connected factory or my connected restaurant kitchen as there would be if you put an autonomous system in place to help you. Because it's a lot more transparent, and, you know, happening all the time, than it perhaps is at the moment. 

 

So I think that autonomous systems have the potential to change enforcement in food, but more broadly as well- I mean health and safety too. Think about health and safety enforcement: the Health and Safety Executive can only go in, you know, after an accident takes place, but if you can go virtually through the data from when the accident took place, that's a lot more information about what's happening there, and it potentially shifts what you do as an enforcer.

 

Sean:                  That was a, a roller coaster of positives going into negatives back to positives, there are two sides to every coin right?

 

Richard:            Yeah, no, absolutely, and I think it depends who it's a benefit for. I genuinely think that greater transparency about what's happening when our food is being made is a good thing, and whether we want that information is a different question. But the more we can shine a light on a process that has been relatively hidden for a fair amount of time, the better things are for us, particularly now that a lot of us are buying from restaurants for home eating. 

 

We can't necessarily see into that restaurant, we can't go and see the open kitchen at the front where we can see what people are doing. Perhaps it is better for us to have a view of what's happening through sensors and be able to make judgments on what people are doing. Obviously there are privacy concerns that arise out of that, but you have to balance them in some ways with, you know, the safety concerns and the health concerns and things like that. So I think possible benefits there.

 

Sean:                  And obviously some of these things are already happening. You know, rather than go and have a barista make you a coffee, there is a Costa automated machine that is supposed to be as good. I've not tried one so I wouldn't know. But we've taken our autonomous car to our autonomous burger joint; it's been a pleasure to talk to you Richard today and hopefully we might discuss some different topics on a future Living With AI. Thanks for coming.

 

Richard:            That's brilliant, thanks very much Sean.

 

Sean:                  Anyone fancy a robot-made burger to eat while you relax in your autonomous vehicle en route to a gig played by robot musicians, Paurav, Alan?

 

Paurav:              I really, in some sense, am excited about it, but at the other end, when Richard started talking about the regulations behind it and the complexities that are involved, it suddenly dawned on me how complex some of our simplest gestures could be and what kind of regulatory consequences they could lead to. Yesterday evening, funnily enough, I was watching this Hollywood movie called Passengers, and it is about two people- anyway, there is a person who wakes up during an interstellar journey. The passengers are not supposed to be awake for the next 90 years, and he gets to know it. And now he is the only person awake in a robotic world, wherein there is one robot who is, you know, serving drinks behind the bar, and then there are other robots who are just cleaning, and that's about it, that's really his interaction.

 

He wakes another person up and so on and so forth- interesting human complexities around it- but in that there is a restaurant wherein they serve Spanish food and Japanese food and Thai food and so on, and the language used by the robot changes. When that robot brings the food, at that point in time I did not think much about it, but now, after listening to Richard's interview, I'm like, what would have happened behind closed doors when that dish was prepared? Was the fish really a fish, or was it something else? It brings in so many different types of imagery, but at the same time shows how difficult and complex this area is in itself. 

 

One other example, if I may: very recently I came across this piece on the BBC that at-home food selling has increased dramatically in this, in a way, Covid environment. A lot of people are cooking at home and putting it in a WhatsApp group or some other app: “I am a good cook and I cook these things and, you know, if you want to have it or if you want to buy it please do so.” Neighbourhood communities sharing food has always been the case, but when you start thinking about some people starting a small business like that without knowing the rules and regulations around it, they could be in deep troubled waters because of this and, you know, the-

 

Sean:                  Yeah, there's a minefield isn't there, with ingredients, with allergens, with food hygiene standards in the kitchen that it's prepared in. I mean, you know, if you go into any catering establishment you'll find there are multiple wash basins for different tasks etc., and most homes don't have this.

 

Alan:                  No, it's a difficult thing, isn't it? I mean, I could invite you all round to my house and I could cook, you know, in the backyard with an old spoon and a tin can, and it might taste fantastic and none of us would get ill. But I certainly couldn't do that and sell it, and I possibly couldn't teach an AI system to cook it that way and make fantastic-tasting food. I mean, Richard's stuff is- I don't know, I find it sort of fascinating, scary, and some of the stuff I almost want to throw out the window and say, “You know what, how can you make a law that's going to govern the microchip or the battery?” Because the way that AI is talked about is so high level. 

 

And then when we look at this stuff, sometimes it feels a bit blunt to me, where lots of projects talk about- and we all do it- like autonomous vehicles. But then when we start to unpack what's part of driving, it's like, okay, Sean's parked on the pavement outside my house: it's my house, it's my pavement, he can do what he wants. If you do that on the main street, on the public highway, that's illegal, but if I ask you if it's illegal or not, you might not know, because you don't remember. And then if you've got an AI in your car that doesn't allow you to park on the pavement- well, the car has to understand what a pavement is. And I know there's kind of a uniform idea that we've all got as humans of what a dog is, what a chair is. 

 

But how on earth do you create algorithms to deal with legal situations about, is this tin of beans cooked in your front room for your friends or is it cooked to serve in a restaurant? Are you using the right sort of ingredients, did this person clean up with the red cloth instead of the green one? I don't know, it's sort of- you almost always want some kind of AI blockchain with humans involved in it that can govern the system.

 

Sean:                  There's, there's sort of two- well, there's more sides to this than two but there are two sides I'm thinking of when you, when you talk like that about that, and one side is sort of depends on who's making the, the rulings. So if there's a problem who's going to rule on that, because they need to know what data was put in to see what data came out. But the other side is we're just trying to keep everyone safe right?

 

[00:50:00]

 

Alan:                   Yeah.

 

Paurav:              Absolutely. 

 

Alan:                  Yeah I, I think as well, I mean it’s, it’s like we, we discussed blockchains earlier on didn't we before the meeting? And some of the things that we talk about in terms of laws they're kind of generalisations aren't they? And you know sometimes we, we take our driving test, we follow the rules, 20 years down the line there are some things that I possibly can't remember-

 

Sean:                  But generally.

 

Alan:                  But generally because I'm learning all the time and, and following the way that other drivers drive I’ve got a rough idea. I, I also know that I can park on the pavement and the police are not going to say anything, perhaps if I did it and there was a football match and cars couldn't get by I, I’d get-

 

Sean:                  Yes, yeah, well this is it, it comes down to those sorts of situations, doesn't it? But also, driving is a good example. There's one thing that always comes back to me, and that is that we had- in the UK anyway- a very generalised rule about driving with due care and attention. Which comes down again to the judgement of, say, a police officer, if they see something you've done that they think is outside due care and attention. 

 

But in recent years we've seen loads of very specific laws, like you can't touch your phone while you're driving, or you can't eat, or whatever it is, which previously would have been covered by this catch-all rule of due care and attention. It's sort of thought that if you are driving a car you shouldn't be, I don't know, checking your text messages, because you're not giving due care and attention to the road. Having said that, why has somebody thought it's necessary to bring in these very specific examples? And perhaps this is a little bit like teaching the AI: we can't give it that general advice so we have to give it all the specificity- is that a word?

 

Alan:                   Yeah.

 

Sean:                  Specifics. Anyway, Paurav, you're nodding, nodding at my specific- I can't say that again. The other one I struggle with is anthropomorphism, but I got through it that time. Anyway, sorry Paurav, what do you think about these, you know, these rule changes and specifics?

 

Paurav:              Some of the things, you know- in some sense law is by nature common sense, and it is more of a societal common sense, that's what it leads to, and what Richard was talking about is building some sort of proactive common sense into the law. And I find it amusing and also a little troubling at the same time, because how do I know, like what Alan said- what exactly do I mean by pavement parking? In some places pavement parking is allowed and in some places it is not allowed, you know?

 

Sean:                  Can- I'm just going to pause us there briefly, because if we have got listeners in the States it might be worth clarifying that in the UK the pavement is the same as the sidewalk, because I know that in the States you call the road the pavement. So over here, when we're talking about parking on the pavement, we're talking about parking at the side of the road, on the footpath. Okay, sorry, as you were Paurav, I just thought that was worth-

 

Paurav:              Fabulous, thank you very much. As a cross-cultural researcher I think, you know, this is what makes it- why research is so much fun. Because people, for example, understand the idea of luxury in very different ways, and the idea of data itself in a very different way in different countries, and what am I ready to share or what am I not ready to share is so fascinating. But yeah, coming back to the point: the law being the definitive word and at the same time not so definitive, like what Richard was talking about- you know, reasonable. What is reasonable and how do we decide reasonable? I still remember that fantastic movie 12 Angry Men about, you know, reasonable doubt, and that reasonable doubt is very, very difficult to define. 

 

And then there are legal ramifications not just around food, as you rightly said Sean; there are legal ramifications in terms of who was driving, if it was an automobile who was in charge, and so on and so forth, and if AI goes into an accident, you know, it's a minefield of its own kind. So personally speaking, you know, AI and law are going to have a very good relationship in things that can be bound into a certain boundary or scope. When that boundary goes fuzzy, in some ways, suddenly that relationship is going to become really acrimonious. That's how I see it.

 

Sean:                  And we didn't really get into liability. I mean that is, surely that's a series of podcasts in its own right when it comes to autonomous vehicles and liability.

 

Alan:                   Yeah.

 

Sean:                  But yeah, the idea of, you know, defining reasonable, and the idea of- not pre-thinking, but trying to put things into place that might cover situations we haven't even got to yet. I mean, laws are usually retroactive at best, aren't they Alan?

 

Alan:                  Yeah, well, I think with the legal stuff- I mean, what is legal is not always right, and what's right isn't always legal, so you've got ethics as a big area of philosophy, you know? But the thing I was fascinated by was thinking about what I might call luxury. So I might go out and buy a car, you know, and it can do 180 miles an hour, but if it's an intelligent car- we can't go over 70 miles an hour in the UK, so therefore it would come capped. So I could get a Ferrari or I could get, you know, an old Ford Focus: it's still only going to do 70 miles an hour. 

 

As soon as I went over that, there might be some detection that tells people that I've actually broken the law, and the AI is correcting me. So it might be more to do with the way that it can help us make decisions, if you want to see it like that. But at the same time, what on earth is going to happen when you give the car industry some creative tools that let you design the ultimate chameleon car? So you get this fantastic Ferrari, it comes in black, you drive down the road, it realises you're doing 75 miles an hour, reports you to the police, then changes colour to white. And then the number plates- 

 

Paurav:              Oh my God.

 

Alan:                  -you drive somewhere else and the number plates swap, because it realises you've gone over the border into a different state in the States and your licence plates change. And then, you know, when it comes to court it wants to know whether you were there or not: “No, I wasn't there.” And, “I will look at your telephone data to see where the nearest cell mast was.” And it switched your phone off, because it knows- from what is legal, and what people use in terms of evidence and data- how a conviction is created.

 

Sean:                  And suddenly the, the liability switches to the manufacturer of that vehicle though right? 

 

Alan:                   Yeah. 

 

Sean:                  For perverting the course of justice or I don't know-

 

Alan:                  Yeah or maybe Paurav’s in Southampton hacking it so it looks, so it looks like I’ve done it.

 

Sean:                  Well, on the previous podcast we talked about this kind of dystopian idea of, yeah, what happens if you have downloaded some software, or paid a developer to change the software, so that your car does save your skin, if you like?

 

Alan:                   Oh yeah. 

 

Sean:                  And these sorts of things, you know- unpicking that in any court would be very, very difficult indeed, to work out exactly where the problem lies and whose fault it was.

 

Alan:                   Oh that’s such a- 

 

Sean:                  Because that's what often these things come down to, who we're going to blame or whose fault was it?

 

Alan:                  I think that's a great point that I'll bat over to somebody else: how do you unpack all this data? If it's about food and provenance and the blockchain, you know, who's putting all this stuff in? How much is that going to cost if you hire a barrister with expertise in that area?

 

Sean:                  Yeah, and I'd be really wary of this blockchain thing being a catch-all solution, when actually it is proven to have its own difficulties. You know, the moment you start using blockchains for something they grow and grow in terms of the amount of data in them, and they become unwieldy. And sadly the word blockchain has just become a bit of a bingo word for- sorry to say this Paurav, but marketing purposes as much as anything.

 

Paurav:              Very much so. What was designed to create what we call the democratisation of technology has almost become a marketing buzzword nowadays in the way companies are using it. However, there are some good examples, some good things happening in terms of what Alan was talking about- luxury, particularly. We are seeing some very interesting technologies developing around establishing the provenance of a product using blockchain technology, so you can actually see where the product was made, how the leather was calibrated or even sourced, and from what, and so on and so forth. 

 

Similarly, even in FMCG, as we call it- the fast-moving consumer goods industry- we are also seeing some very interesting take-up of this blockchain technology. So Nestlé very recently decided to work with Carrefour and they have created a blockchain solution for a purée which is generally given to kids. As parents, whenever we are giving some food to kids we are generally very careful; we want to be extra sure that this food has not been tainted in any way. 

 

And so these purée boxes come with a barcode which, if you scan- and this is only available in France right now- but if you scan them, what it tells you is in what factory of Nestlé and on what date this purée was made, what ingredients were used, how it was transported to the Carrefour warehouse, how that then went into the Carrefour store, and when it was made available, and so on and so forth. So you can actually see the whole journey, which gives you some sort of confidence. 
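The journey Paurav describes is essentially a chain of records in which each step vouches for the one before it. A minimal, purely illustrative Python sketch of that idea, with invented field names (a real deployment would of course be far more involved):

```python
import hashlib
import json

# Hypothetical sketch of a provenance chain: each record carries the hash
# of the previous one, so tampering with any earlier step breaks every
# hash after it.

def add_record(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"data": data, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify_chain(chain):
    """Recompute every hash; return False if any record was altered."""
    prev = "genesis"
    for r in chain:
        expected = hashlib.sha256(
            json.dumps({"data": r["data"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if r["hash"] != expected or r["prev"] != prev:
            return False
        prev = r["hash"]
    return True

chain = []
add_record(chain, {"step": "made", "factory": "Factory A", "date": "2021-02-01"})
add_record(chain, {"step": "shipped", "to": "warehouse"})
add_record(chain, {"step": "stocked", "store": "Store 12"})

print(verify_chain(chain))                 # True: the chain is intact
chain[0]["data"]["factory"] = "Factory B"  # tamper with the first step
print(verify_chain(chain))                 # False: tampering is detected
```

This is what gives the scanned barcode its value: confidence that the recorded journey has not been quietly rewritten after the fact.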

 

[01:00:19]

 

So one thing this could do, from a marketing perspective, is give authenticity to the product in the consumer's mind, and that way possibly companies will charge a little more, because they would say, “Look, here is a thing where you know something.” How many times are we going to check that again and again? I doubt we will; we'll do it one time for the novelty of it. But after that, trust will come into the picture, and so the trust in that AI system, that blockchain technology, would say, “Oh yeah, I know it will be okay.”

 

Sean:                  Or at least I know I can look for it if I need to, but then who rings the customer careline in the same way? It does remind me of Richard when he said, “How much do you want to know about what's happening in a factory?”

 

Alan:                  Yeah, I think that, funnily enough, I worked on a project to do with IT use in rural places, and that was something where you immediately rubbed up against the way that you thought things were produced when you see, I don't know, Uncle Shaun's pork pies versus the way that they actually are produced. But what I love about this, and what I like about the TASHub, is that it allows you to think about these things in a much more experimental way. And I've never thought about it the way Paurav has just been saying: right, well actually, if we had these systems set up it engenders trust in the product you're buying. 

 

Which might be- I don't want to use the word dumb product, but, you know, it might be a tea bag, and I might have ethical concerns. I might want this tea because it's a luxury tea; it could honestly be costing me like £50 a packet. But I want to know it's from that region, I want to know how it's made, how it's produced, that it's safe, and that I'm getting that good experience that I'd get from that tea, which is a luxury item. 

 

So it creates a brand, and you trust that enough to realise it's an authentic thing, and then you spend money on it, and that's not directly the artificial intelligence, that's the way it's been applied and used. And I think the more that we look at those sorts of scenarios, the more people will start to understand, ah, that's the way that technology works. And that's why these projects are so important, because they bring different people together and, you know, let us buy expensive tea, hopefully.

 

Sean:                  Sounds good to me.

 

Paurav:              Hopefully, hopefully.

 

Sean:                  I think that's great. Yeah, and it moves you from AI being the marketing buzzword and, you know, thought of the hour, on to it being a useful thing- in the same way as our first iPhones showed us apps that could make you pretend you were drinking a pint of beer, and yet we've now moved on to ones that are actually useful for us.

 

Alan:                   Yeah.

 

Paurav:              That's very true, that's very true Sean. Some of the things that are happening around that, for example: very recently- only last month there was a report published, or I think it was this month- which said that the AI automation software market is going to be approximately 43 billion US dollars by 2027. So within a short span of time we are looking at a humongous increase in this area, and that's why Richard's point about, you know, regulation and proactive regulation is very, very important. But at the same time there are going to be so many nooks and crannies, and as law evolves, I think AI-related law will evolve in its own sense and its own fashion. But it will always be doing what Richard also said: it will be playing catch-up.

 

Alan:                  Yeah, it's fascinating to actually be there when these fields are starting to develop, even though you're not necessarily part of that academic field. It's like all these discussions that we're having. I mean, I've already started to think about AI policy, and how policies change- because of Covid you're getting policies that have changed. Like in Wales there's an active travel policy, which is all about funding cycle paths to go from home to work. 

 

Well, when people stop working at a workplace, or stop going to school, then those policies need to be changed quite quickly. And it makes you wonder whether there's a mechanism there, whether you could have something slightly intelligent that tells policymakers, “Actually, we can see an issue coming.” And it could relate to anything, you know: speeding, food. So as new technologies are being developed they almost need to inform policymakers very, very quickly.

 

Sean:                  Well policymakers need to be guided by some AI right?

 

Alan:                  Well, you know, data is quite important, isn't it? So providing the right sort of data, which is evidenced and maybe authenticated somehow, could be a good step to give you some basic tools.

 

Sean:                  Yeah, and there are all sorts of ways this could apply. For instance, I was thinking earlier about our street outside, where we have cars parked. I call it the long-term storage car park at the moment, because the cars out there haven't moved in weeks or months. I mean, the amount of times neighbours have asked to borrow jump leads or whatever because their car wouldn't start- and obviously this is a situation of Covid etc. 

 

But does that then lead to, well, do we need to have 20 cars on this street? Maybe we need five cars on the street that are pooled and that an AI divvies up, and you take one when you need it. I know things like this do exist, but perhaps they'll be much more widespread when the need for vehicles is much reduced. But that is a complete tangent, so I don't know how to pull that back in.

 

Alan:                  Well, it's not really, you just think it is. Again, it's like Paurav was saying, it's another trust thing, isn't it? Because you'll be sharing a vehicle with people that you don't really know. But maybe it's making us look at smaller communities where AI might be used, and that could be a club of Ferrari owners, or a club of people who want to share a car on their street.

 

Sean:                  Or it could be a variety of types of car, depending on your mood. You know, the estate car for the long journey and the Ferrari for a weekend away.

 

Alan:                   It's like my life.

 

Sean:                  It’s like Paurav’s luxury life.

 

Paurav:              I wish, I wish. But I think there is a very important point here, because in some sense we have changed our relationship with our communities. AI-driven technology has brought us Airbnb, AI-driven technologies have in some sense brought us Uber, and we have started trusting these sharing technologies in a way which we never thought we would. You know, we would never have allowed a stranger to stay overnight in our house, to let our house become a guest house. Suddenly regulation had to catch up: in London and other mega cities these became cheaper options than the hotel industry, it had a massive impact on the hospitality industry, and so the 90-day rule was brought in. So law is going to catch up that way.

 

But the food example we were talking about earlier, how do you regulate, you know, a little food business which is happening on WhatsApp? It's a nightmare at the same time that our-

 

Sean:                  That does connect quite nicely to Uber, because there have been problems with things like Uber, you know, unlicensed drivers, uninsured drivers, all sorts of problems. I mean, obviously the licence was revoked in London, wasn't it, for a period of time?

 

Paurav:              Yeah.

 

Sean:                  So yeah, maybe food can learn from what happened with Uber.

 

Paurav:              Absolutely, and also, for example, newer types of food-related AI systems are emerging. There are companies like Eatwith, or VizEat as it was called, wherein you can invite strangers into your house for dinner. Now, you are inviting them for a fascinating dinner conversation and food and all that, but you don't know what kind of allergies they have, what kind of health conditions they have, and so on. Even if you know it, you may not be able to cater for it. And so, in any such areas where there are fuzzy boundaries, where there are human elements involved, we are going to have a kind of frictional relationship between AI and law per se.

 

Alan:                  What I like is the fuzziness of it all, the messiness. I remember somebody doing a little study where they'd got a group of people who shared an apartment block. This is in the UK, so quite different from other places in Europe, and they'd asked what they'd share, and people said, "Oh, we'd share our car, we'd share our bike. You know, we might make food for each other." The thing that they wouldn't share? Their washing machines.

 

Sean:                  Oh.

 

Alan:                  And I thought, yeah, would you want people to wash their clothes in your machine if you didn't know them? So once you start scratching at what people are willing to do versus what a system might think they're willing to do, and then you apply it to a different culture. I've been in Scandinavia quite a lot, and if you go to an apartment block they've got shared washing machines. So what a difference between the UK and some-

 

Paurav:              Yeah.

 

Sean:                  But yeah, different people evolve in different ways to worry about different things, don't they?

 

Alan:                   Yeah. 

 

Paurav:              And that cultural aspect of law kicks in again: in different cultures there are different ways laws would be interpreted or made, and so on. You reminded me, Alan, you know, Scandinavia and sauna time. For example, I go to Finland twice a year, and now flat blocks have a common sauna room.

 

[01:10:18]

 

Alan:                   Yes.

 

Paurav:              And everyone has their timing, you know, every flat has a time block.

 

Alan:                  Yeah, you've got the key, you can lock it in on the thing, so yeah-

 

Paurav:              Yeah.

 

Alan:                  Funnily enough, so we're going off topic a little bit here, but it does relate to the intelligent car: when you're in the UK, have you noticed you can cross the road where you want to, people don't care? Now, I was once in Finland and Denmark with my wife, and she went to step across the road and somebody held her back: you cannot cross when the light is on red.

 

Sean:                  Absolutely.

 

Alan:                   You kind of have to wait for the little person walking over.

 

Sean:                  That's like in America, obviously, they have the jaywalking rules, which are similar, aren't they? And I suspect, I don't know the history of this, I'm just riffing here, but I suspect they evolved off the back of protecting the automotive industry in some respects.

 

Alan:                   Well maybe.

 

Paurav:              Possibly.

 

Alan:                  I just love the idea where it's like, "So what's the AI like in the UK?" "Oh well, you can get away with all kinds of stuff, don't worry about it."

 

Sean:                  Yeah, we talked about different ethics for different markets on an earlier podcast. Would there be these different kinds of standards in different markets? You buy your car in the UK and it has to be super careful because people step out left, right and centre. You buy your car in Denmark and it's fine, it's just going to crash through everybody, because nobody steps out in front of a red light. Anyway, you see where I'm going with this.

 

Paurav:              I also have a fascinating story, if you want to play this. So let me tell you the story, and then you may decide up or down on it. In 2018, if I'm not mistaken, a study was done by an AI firm in law called LawGeex. One of the most common contracts in law is the non-disclosure agreement, the NDA, right? So what these LawGeex people did was create five contracts and send them to 20 top lawyers who had studied at, you know, top US law schools.

 

They measured how much time these 20 people took to examine the contracts and find any flaws therein. The lawyers took 92 minutes on average and were 84% accurate; the AI system did it in 20 seconds, or 26 seconds to be precise, and was 94% correct. So AI and law certainly have a very fascinating relationship: where there is standardisation, like non-disclosure agreements and so on, we are going to see a tremendous amount of AI capability being used, but where there are those fuzzy aspects, you know, new things will emerge.

 

Alan:                  Paurav's thing made me think about some examples I was reading, of somebody doing sociological studies in law courts, I think in America in the 50s. One of the things he noticed, trying to understand how a jury came to a decision, because it's not only the judge who makes decisions. You've got the police there, you've got people representing the plaintiff or whatever, and this whole show goes on. And people started saying, "Well, of course the aeroplane was flying a little bit low," and after the judgments had been made he actually got these people, the witnesses, to show what they meant by low, and nobody could really describe it. So I just thought, hold on, how much of this stuff, when you present it in a court, is based on some kind of intersubjective impression: "Oh, it was really scary at the time." Or, "Yeah, he ran at-

 

Sean:                  Define scary. Is 20 metres scary, is 25 metres scary? Yeah.

 

Alan:                  "It flew at me in a rather dangerous way." And it's like Paurav was saying earlier on: what do people mean by reasonably careful?

 

Sean:                  Yeah, yeah, yeah, yeah well absolutely yeah.

 

Alan:                   It's like, it was sort of hot when I tasted it, but then it burnt my throat.

 

Sean:                  I think that just about wraps it up for us today. Unless we've got any legal guidance to the contrary, I will say thank you for joining us today, Paurav.

 

Paurav:              Thank you very much Sean.

 

Sean:                  And Alan.

 

Alan:                   Thanks a lot, bye.

 

Sean:                  And yeah we hope to see you again on another Living With AI Podcast. 

 

If you want to get in touch with us here at the Living With AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI Podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Ltd and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts, and we hope to see you again soon.

 

[01:15:32]