
Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 2, Episode: 5
Trusting AI With Defence – It’s Not About Killer Robots
This episode is all about AI & Defence. We spoke to Ministry of Defence scientist Steven Meers
Panel Guests:
Professor Jack Stilgoe - Professor of Science and Technology Policy, UCL
Nik Bhutani - Science and Technology Lead - Northrop Grumman UK and Europe
Hector Figueiredo - Head of Technology, UAS Business, QinetiQ
Professor Meers is speaking in a personal capacity. His views do not necessarily represent the views of the Ministry of Defence.
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: Welcome to Living With AI, a podcast where we get together to look at how artificial intelligence is changing our lives, altering society, changing personal freedom and the impact it has on our general wellbeing. Today we’re looking at an extremely controversial topic, AI and Defence. My name is Sean Riley and normally I’m hiding behind a camera, pointing it at somebody incredibly clever for videos for the YouTube channel Computerphile, but I’m sticking to audio for today’s podcast.
Shortly we’ll hear from Professor Steven Meers from DSTL, that’s the MoD’s Defence Science and Technology Laboratory. But before that I’ll introduce our panel, well, I’m going to get them to introduce themselves. So let’s start with Nik, just because of where he is on my screen in front of me. Nik, introduce yourself.
Nik: Thank you, Sean. My name is Nik Bhutani. I work for a company called Northrop Grumman and I head up their science and technology activities here in the UK.
Sean: Fantastic. Hector, can you introduce yourself, please.
Hector: Hello everybody. So my name is Hector Figueiredo. I work at QinetiQ. I am a capability lead responsible for autonomous systems. As part of my role I also have a delivery hat with a particular focus on [unclear 00:01:14].
Sean: Fantastic. Jack?
Jack: Thanks, Sean. Jack Stilgoe. I’m professor of science and technology policy at University College London. I’m a member of the Trustworthy Autonomous Systems Hub advisory group and I’m interested in all sorts of emerging technologies, how we get more of the good stuff that society wants and less of the bad stuff.
Sean: That sounds like the perfect balance to me. Just for absolute clarity, we’re recording this on the 23rd May 2022, so if you’re from the future, wherever you are, whenever you are listening to this, just bear that in mind. So thanks to the panel for joining us today. First let’s hear from Steve, our feature, and then perhaps we’ll have a chat afterwards. Time now to hear from Professor Steven Meers. Welcome to Living With AI, Steve.
Steven: Sean, really great to be here and thanks for having me today.
Sean: It’s wonderful to speak to you. Can you tell us a little bit about yourself and your job role. What’s the link with the TAS Hub? Fill us in.
Steven: So I lead the work at an organisation called the Defence Science and Technology Laboratory, which is the science and technology arm of the Ministry of Defence. Within DSTL, as we call it, I’m responsible for the AI and data science work. So I work across the whole of DSTL, pulling together all the different kinds of work that we do around AI and data science. As well as my role at DSTL, I’m also a visiting professor at the University of Southampton where I’m attached to the TAS Hub and in particular I co-chair the skills committee at the TAS Hub alongside Dame Wendy Hall.
Sean: Fantastic. I mean we’ve got to address the elephant in the room. Let’s get it out of the way. There’s this assumption when people think of AI autonomous systems, military applications, it’s all going to be war games or Terminator or even Short Circuit. I mean how far off the mark is that?
Steven: So as soon as you say the words AI and Defence in the same sentence, everybody’s minds instantly jump to killer robots, the film Terminator. For me, I find that really unfortunate because while autonomous weapon systems are a real concern and something that we take really seriously, and actually the UK government has been really clear that it is not developing, and has no intention of developing, weapon systems that can take a human life without appropriate human oversight.
For me there are just so many other applications of AI in Defence that aren’t getting the same amount of attention. I think they’re absolutely as important as weapon systems. Anyway, I can give you a few examples if you’re interested but yes, it is absolutely not all about killer robots.
Sean: Yes. I mean it’s very sexy for Hollywood to choose super computers and robots going wrong and all this but is there almost a hidden danger that we need to think of with Defence? We’ve heard of cyber attacks coming from nation states and things like that, are cyber defence systems and worrying about exploits more important almost than these physical Hollywood things?
Steven: Yes. So the reality of AI within Defence is that it is a very powerful but ubiquitous enabling technology. It has got the potential, like in lots of different industries and organisations, to transform almost everything that Defence does. I really like the analogy from Sundar Pichai, who’s the CEO of Google as I’m sure you know, that AI is a bit like electricity for the 21st century. It’s a technology that enables lots of other things. So when I stand back and look at the portfolio of research that we’re doing around AI and Defence, it’s about things like command and control.
We put our military commanders in some of the most demanding situations anywhere on the planet. They’ve got to make what can sometimes be life and death decisions in potentially quite fast timeframes and quite challenging circumstances. So how can we use AI to help them make better decisions? How can we help them sift through huge amounts of data? That’s one example. Another might be logistics. So if you’ve been following all the awful events in the Ukraine, you can’t have failed to notice how important logistics are to a military operation.
We’ve been looking at ways in which AI might be able to help us get supplies more efficiently to people. So those are just a couple of other examples and there’s lots and lots more.
Sean: Tell me about some of the research that you’ve been doing then.
Steven: Okay. So as I said, it’s a very ubiquitous technology area. So across DSTL we’ve got programmes of work that are looking at the technology of AI and autonomy in its own right and that’s really about understanding how different algorithms can be applied to different Defence problems, how we make sure that we’ve got the data that we need in order to get the correct level of performance from those algorithms, how we can make sure the algorithms are robust, that they’re not going to break either accidentally or when some intelligent adversary tries to defeat them.
We also put a lot of thought into ethics and how we ensure that these technologies are being used responsibly and ethically, and also safety, so how we can make sure that the systems that are used are safe. So there’s a whole body of work around developing what we might call the science of AI, and then alongside that there’s an awful lot of applied work where scientists and engineers at DSTL are trying to understand how they can use AI and machine learning to help their area.
So it might be people working for example on counter terrorism and seeing how these techniques could help the police and the security services deal with terrorism. It might be people looking at future ships for the Royal Navy. How could we use more autonomy in ships? So lots and lots of work in all the different areas that DSTL work in.
Sean: What are some of the areas where AI can make a difference? I mean some of these, you mentioned logistics before but I mean obviously there’s decades, if not centuries of experience of logistics and logistics handling and management in the military and in Defence. How will AI make that better?
Steven: So let me give you a few different examples. So you mentioned logistics. We’ve recently completed a large international research collaboration with the army research labs in the United States, looking at something called Coalition Assured Autonomous Resupply. Basically we were looking at how you could use AI and autonomy to get supplies to the front line. So it’s a very sad fact, but what’s called the last mile of resupply is by far the most dangerous part of any logistics operation, trying to get supplies to troops at the front line.
Very sadly in Afghanistan there were people that lost their lives when convoys were attacked with improvised explosive devices. So we’ve been looking at how you might be able to use things like delivery drones to get supplies to the people where they’re really needed, also looking at things like self-driving convoys, where you’ve got one human driver driving the first truck but then he’s being followed by a convoy of autonomous vehicles. That was a really successful trial.
We showed how British systems and American systems could work together and that’s now beginning to be acquired by the army as a real operational capability, not just a lab based experiment. So yes, there’s one example for you. I’ve got lots more but I hope that was interesting.
Sean: Yes, fantastic. I mean that idea, I mean okay, your sceptic would say well you just take out the lead vehicle and the whole lot goes, but I assume you have things in place to deal with that. What are some of the misconceptions around AI in Defence then? What sorts of things do people assume and actually they’re quite far off?
Steven: So as I’ve said, there is this point around the focus on autonomy in weapon systems. As I said earlier, it’s something we do take really seriously. We’ve had DSTL scientists that have testified at the UN. We have ethicists who work with us on trying to develop ethical frameworks. I guess what we’re trying to do is take some of the ethics of AI and intersect it with the ethics of warfare that have been built up over hundreds and hundreds of years and really understand how we can use AI responsibly and how we can use it to minimise the overall harm that’s caused through warfare, how we can reduce the damage that’s done and the lives that are lost.
We really do take this seriously. I feel really committed to trying to make sure that we get the best possible outcomes and also that we counter misuse of AI. I think a particular challenge for AI is that it’s a very democratised technology. Anybody can use it. It’s a very powerful technology. So while we’ve got very high ethical standards, we’re worried about other people, maybe terrorists or extremists who might use this in a very unethical way. We need to figure out how we keep people safe from those unethical uses.
Sean: Yes, because as with any technology from dynamite to SMS text messages, it’s been designed necessarily for one, or the idea was it’s designed for one thing and then they have these unintended consequences. Is that one of the biggest challenges you think you’re facing then?
Steven: Yes. Let me give you an example of that. One of the things we’re concerned about is the potential for misinformation, in particular algorithmically generated, AI-generated misinformation. So I’m sure you’re familiar with deepfakes. You’ve probably talked about them for other applications but they can have real military consequences as well. Every military conflict is conducted in a sea of misinformation with people putting different versions of the truth out. We’re really worried as to what might happen when people begin to falsify information.
I don’t know if you saw, there was a really worrying example around Ukraine where a deepfake of President Zelenskyy was released. I don’t know if you saw that. But basically it was a video telling all of his soldiers to put their weapons down and surrender. It wasn’t very convincing. His head was very still. It was quite unnatural but that technology area is accelerating very rapidly. So we’re interested in developing tools to help us spot when misinformation has been used in an inappropriate way like that. So maybe an example of an area you wouldn’t traditionally think about with Defence but it really does have a military impact.
Sean: Some way of authenticating those messages perhaps because I think the one famous example of deepfakes is this one of Tom Cruise that did the rounds a few years ago.
Steven: Yes, that’s right.
Sean: But if you look at the behind the scenes, the guy that they used for the deepfake, he looked quite similar to Tom Cruise, they’d done his hair in the same way and dressed him in the same way. At the moment it’s still an emerging technology, isn’t it, but yes, that is a potential real problem area as it gets better and better. What sort of other challenges are you facing in Defence and AI then? Have you got any other good examples for us?
Steven: So if you just step back and think about some of the very cross cutting challenges, some of the big things that we’re trying to explore are things like data and particularly labelled data. So I don’t know how much you know about AI and machine learning but a lot of the supervised learning methods rely on large amounts of labelled data. That’s perhaps a bit easier for some civilian applications. You can just search the web and find lots of pictures of whatever it is you’re interested in.
Some of the things we’re interested in, perhaps we haven’t got hundreds or thousands of examples of that data. So a big challenge for us is how we get enough data to train our machine learning models. Another one is explainability. So for us, a really important principle is that humans are accountable, particularly if a military commander is going to give orders, he needs to be responsible and accountable for those decisions. If he’s being supported by an AI, how do we make the output of that AI explainable and understandable to him or her, so that they can really own the decision that they’re making and understand the consequences of that decision?
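To make the labelled-data point concrete, here is a minimal illustrative sketch in Python, using scikit-learn and an entirely synthetic dataset (nothing to do with any Defence data): a supervised classifier is trained on progressively fewer labelled examples, and test accuracy typically falls away as labels become scarce.

```python
# Illustrative only: synthetic data standing in for a classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

# Supervised learning needs labelled examples; watch accuracy fall
# as the labelled training set shrinks.
for n_labelled in (2000, 200, 20):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:n_labelled], y_train[:n_labelled])
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{n_labelled:5d} labelled examples -> test accuracy {acc:.2f}")
```

The same pressure is what motivates the synthetic data generation discussed later in the episode.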
So having systems that are really intuitive and explainable is important. I’ve got two more. Another one is what I call AI at the edge. So a lot of commonplace applications of AI assume you’re connected to the cloud, you’ve got loads of processing power and you can have all the benefits of cloud computing, etc, etc. Often in a military context you may be operating in a very degraded environment. It might even be that your adversary is trying to jam you and trying to cut off your communications, so how can we have more AI at the edge, embedded processors with low size, weight and power? How can we do more training at the edge to help with that?
Then probably the one that’s most interesting for this audience is about trust. I think about trust in lots of different ways. I think about trust as a human commander. If I’m for example in command of a platoon of soldiers, an AI or an autonomous system is giving me some advice about what I should be doing, do I really trust that system? How do you build trust in a joint human machine team I suppose. A second way of thinking about trust might be in relation to safety. The Ministry of Defence puts an awful lot of effort into ensuring that its systems are safe to be used. So for example the recent Joint Strike Fighter that was procured has got millions of lines of computer code in the aeroplane.
We put a lot of effort into ensuring that that software is dependable and reliable and can be verified and certified as safe for use. When you introduce machine learning into that, suddenly you’ve got a real challenge because the system can learn and change its behaviour. So another really important question for trust is around safety and how do we sign off a military platform as safe for use when it’s got machine learning and autonomy in it. Then the third one is this point we’ve spoken about a couple of times but about public perception.
How do we help enrich the conversation around AI within Defence so that we don’t detract from any of the important discussions around weapon systems but we also help people understand all the broader applications of AI that I’ve mentioned.
Sean: Yes, because as you said, one of the key points of having an autonomous system is it needs to be able to deal with situations it perhaps hasn’t encountered before and therefore it has to make decisions that you may not have been able to anticipate. I mean what do you do with this, lots of modelling, lots of simulation or other techniques?
Steven: So there’s different approaches you can take to that but typically, what you find is that the state space, the permutations and combinations of what you’re encountering, just explode. We’ve kind of seen it with some of the self-driving cars where despite the hundreds of millions of pounds worth, or hundreds of millions of hours, of training data they’ve got, there will always be situations, whether it’s a white truck on a snowy background or something like that, that defeats the algorithm. So we’re interested in all sorts of different techniques.
I would say this is a long way from a solved problem and that’s one of the reasons it’s so great that we’ve got the TAS Hub helping us improve the science here. But things like synthetics, synthetic data generation, things like formal methods, things like building a safety case. So we think a lot about how you make the safety case, how you show the data that you’ve trained your AI on is a good representative data set, how you prove the safety chain I suppose. So I guess if I think about this in terms of a challenge to the TAS Hub community, I would say we don’t really understand how to engineer systems for trust at the moment.
I’d say it’s more of an art than a science. Trial and error, we see what works, we see whether people trust it or not. If the TAS Hub can help generate that kind of engineering mindset, engineering discipline that can help us know what makes a system trustworthy or not, that would be an amazing step forward.
Sean: One of the things we’ve talked about over the last 20 minutes or so is the issue around ethics and AI. Is that something we can talk more about?
Steven: We do really work hard on the ethics of AI. We work with organisations like the Alan Turing Institute, the Centre for Data Ethics and Innovation and Oxford, and we have been working with MoD to develop a set of ethical principles that could be adopted by the UK to guide how we develop AI in a military context. For me, I guess what’s really important is that the UK and the other western nations show leadership in what responsible AI means in a military context. There’s a guy called Bob Work.
Bob Work used to be the deputy secretary of defence in the Pentagon. He talked about it as being like a competition of ideas at the moment and how there are lots of different models of how to adopt AI that are emerging in different parts of the world. Some of them are more authoritarian. Some of them are more democratic. We really need to make sure that we are promoting the kind of AI that we think is a democratic model and providing a really viable model of adoption that isn’t about using it to repress people or to control but is about enabling humans to make better decisions. It’s about supporting human commanders. So yes, it’s a really interesting area and it’s something we do take very seriously.
Sean: I think there’s obviously connotations with Defence and AI but actually AI is becoming so ubiquitous as we mentioned earlier, that people might not even be thinking that AI is having a role in Defence when actually perhaps it has done for quite some time with even just a commander using sat nav to get to, I don’t know, the air base or whatever. How do we escape those obvious negative connotations of AI and Defence because people will be thinking of wider uses of it rather than what they see day to day as being just part of picking up a smart phone?
Steven: I should say, as much as we see lots of opportunities for AI in military operations, and I’ve spoken about a few examples of those, we also see huge opportunities just for modernising how the Ministry of Defence does its day to day office work. So believe it or not, MoD is a £44 billion a year endeavour. It’s a huge investment of taxpayers’ money, just a bit over 2% of our gross domestic product goes through Defence. So that is a very complex organisation in terms of things like HR and finance, all the functions that any organisation has.
So as well as all the operational battlefield applications of AI, how could we use it in the back office to help us be more efficient and to deliver more value to the taxpayer? So maybe not quite as exciting as some of the areas that we’ve been talking about so far but equally as important I think.
Sean: Definitely. Tell me a little bit more about DSTL then and what goes on.
Steven: So if you’ve not heard of DSTL, we’re part of the Ministry of Defence and we provide the science inside UK Defence and security. So I would say I think DSTL is one of the UK’s better kept secrets. We’re not quite as well-known as organisations like GCHQ or MI5 but we are a really important part of Defence and national security. You might have heard of Porton Down, for example, when the Novichok poisonings happened in Salisbury a couple of years ago, but chemical and biological defence and all of the support we provided to the cleanup operation in Salisbury is actually one small part of what DSTL does.
So there’s around 4,500 scientists and engineers that work at DSTL and we deliver about £700m worth of research a year and a lot of that is done through contracting with different universities and different industry providers. But for me as a scientist, I get a real buzz out of working with DSTL. We’re kind of in that middle ground of taking really exciting science that’s coming out of universities and getting to apply it to real world problems. It’s a really broad organisation. Although we’ve got 4,500 staff, we cover a huge range of different areas.
So that ranges from things like space systems where we’re looking at launching constellations of satellites. We’ve even got our own ground station where we can control satellites from a site we have near Portsmouth. We do work on cyber defence, so how we keep UK networks safe from cyber-attack. We’ve got all kinds of work on air platforms, land platforms, everything you would traditionally associate with Defence and also things like operational analysis, so helping MoD with some of the big policy decisions it needs to make and things.
So it’s a really varied organisation. It’s largely based in the South of England but we’ve just opened a new office up in Newcastle. It’s actually an AI and data science focused office. For me I feel like there’s a real responsibility for scientists and engineers to support our armed forces. We ask our armed forces to put themselves in harm’s way to keep us all safe. I feel like they deserve the best sort of support that we can give them. So I feel really committed and I hope lots of people on the call would see that we owe it to our armed forces to help them with the best science and technology.
Sean: Steve, many thanks for joining us on Living With AI today.
Steven: That’s great. Thanks, Sean, it’s been a real pleasure. I enjoyed talking to you.
Sean: Professor Steve Meers there. Well where do we start with this one? I mean there’s always talk of arms races with anything to do with AI but this is a literal arms race. Where are we going to start with this? Jack, what about his terminology there? He’s talking about AI quite a lot. What does he actually mean? Is he talking about machine learning? Is that an issue here?
Jack: I think it’s a huge issue. I think it’s a huge issue whenever we talk about AI, what is the thing that we’re actually talking about. There’s a danger that if we use terminology like AI, we have a very speculative sort of conversation about Terminator and killer robots when maybe actually what we’re talking about is increasingly sophisticated forms of data analysis that might be a bit more mundane.
So we have to be a bit clear, especially when we’re talking about something as controversial as the use of these technologies in war, and this is why we need scientists and engineers to help us be a bit concrete about that thing. Can I be a bit just critical though, Sean? So yes, there’s a lot of talk about an arms race. An arms race is not inevitable. It’s easy to think that oh well, these technologies, there’s a race to develop these technologies. If there is an arms race then what’s the point. I think we’d all hope, as researchers, that the future is there to be shaped rather than something that is going to be inevitable. As soon as we tell ourselves there’s an arms race, there’s a danger that we’ve already given up.
Sean: Okay. I can see your point there. I suppose I was responding in part to the point where I think Steve said something like okay, we’ve got these technologies so the terrorists may as well give up. In contrast to talking about it like it’s like electricity, i.e. everybody has access to it, it felt to me a little bit like those things were in conflict.
Hector: Thank you. I just wanted to pick up the particular point Steven made that this was not about killer drones and he’s absolutely right. The human, and I think he said it, the human will always be in the loop, on the loop, at the heart of this. A particular programme that I’m leading on behalf of DSTL, through DSTL funding, human autonomy teaming for adaptive systems, recognises these systems are being developed to support the human and the human will always be there making those critical decisions. It’s not about killer drones.
Nik: I think I picked up on the term there, democratisation of AI. I certainly think that the technology will be available to all, whereas in a military context, historically, governments have maybe had more advanced technologies in some of those areas. But what I think we will see is that it’s the application that will be the differentiator here, so how people can use it, who has access to data, who uses that data in the right way, who can see how it can improve their own systems in other areas. I think that’s where I think we’ll see traditional government and Defence moving with AI.
Sean: Jack?
Jack: Yes. So Hector makes a really interesting point about human machine teaming. As a social scientist, I would agree and say yes, there are always humans behind the scenes and there’s some really interesting research looking at quite how many humans it takes to keep for example an unmanned drone in the sky. But I think there is something really important that we need to engage with and Steven discussed it in his interview really nicely which is to do with the diminished role of human agency when some of these systems are being employed. So I’m particularly thinking about things like accountability.
There is a real danger that as automated systems get deployed in war, all of the rules that we have for trying to sort out what more humane behaviour looks like in war or less inhumane behaviour looks like in war, all of those rules of accountability and the ethics of war depend on there being humans. Yes, humans might be in the loop but they might be removed in some way which does change the conversation and it does raise some real questions about accountability that in engineering terms, as we know, get filtered into discussions about how we know what an AI system is doing and why it did what it did.
Sean: There’s a really good recent example of this which is the Boeing 737 Max isn’t there where you have the pilots being blamed for effectively a mistake in the software. I don’t expect us to discuss that now but these problems of accountability, they’re not going away. Hector?
Hector: Yes, I just wanted to come back on that. You are quite right, we are fortunate at QinetiQ to have been funded through DSTL for a couple of decades in this space. So consequently we have moved away from just looking at these things as being fully autonomous. We’ve moved it to looking at task authorisation. So underneath that yes, there are levels of autonomy but at the front, right at the forefront we’ve got task authorisation.
So the human is authorising the level of autonomy he might be comfortable with. So the human is always right at the heart of the system, making those decisions and consequently being accountable and responsible for the consequences of the decisions they make and the autonomous tasks they authorise. Of course right at the heart of this, and Steven discussed this, is the issue of trust. Can you trust that the system will do what you’re expecting it to do when you give it a certain level of authorisation?
As we know, those of us who are involved in the research space, science [unclear 00:34:37] and so consequently the whole research space of perception and understanding and decision making is a very complex one. Guys, we are a long way from [unclear 00:34:50] machine learning for AI systems to operate on their own without a human at the helm.
Sean: But it’s a question of the complication level though, isn’t it, because, to really simplify things with a very old fashioned simple revolver, you pull the trigger and you hope that the mechanics do a certain thing to cause the gun to fire whereas now you’re perhaps pressing a button in a similar way but so many things are going on under the hood that it’s very complicated. Maybe that’s been the case for decades but now, with adding in machine learning and devices that may be able to evolve and decide to do things differently depending on different circumstances or different environmental things or whatever, this is really, really quite complicated, isn’t it? Nik?
Nik: Fundamentally we are dealing with software here and there’s some really good principles and practices that I think are absolutely still key, whether it’s AI or other areas, so using DevSecOps or, let’s say, thinking about security at the heart of our systems is really key for us. I’ll just build on that slightly just to say that we’re also facing a number of quite specific challenges in Defence for AI as well, because not only have you got that developing of software in a good way, using good ethics, good security and others, but you’ve also actually got people trying to attack that system as well, so whether that’s the hardware, they’re actively maybe attacking that, trying to actively target your software, maybe actually trying to mess around with your data so it doesn’t work.
They’re using cyber techniques, they’re poisoning the model so it will do things that it wasn’t expected to do. Then fundamentally it’s attacking that trust vector, putting that doubt in people’s minds as well that it won’t do what it says on the tin if you like. That doubt can lead to people not using it and questioning decisions, making the wrong decisions. I think not only have we got to find good practices, think of our own methods of doing it, we’ve also got to be very, very cognisant that our opponents are actively trying to interfere with what we’re trying to achieve as well.
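As a toy illustration of the data-poisoning attack Nik describes, the sketch below (synthetic data and scikit-learn, not any real Defence pipeline) flips a fraction of the training labels, the way an adversary who could tamper with training data might, and measures the effect on test accuracy.

```python
# Toy label-flipping poisoning experiment on synthetic data; purely a sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

rng = np.random.default_rng(1)
for poison_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_tr.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # adversary flips these labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"poisoned {poison_rate:.0%} of labels -> test accuracy {acc:.2f}")
```

Even this crude attack degrades the model without any change to the code, which is exactly the kind of interference with the "trust vector" the panel is worried about.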
Jack: I think that’s such an important point that Nik makes and I think it’s something that we’ve seen in recent wars is that the technologies might not be very sophisticated but the blurring of some of the lines in hybrid war and cyber war does really change how we think about the rules of war and how we decide questions of right and wrong that used to be fairly conventional, if horrific. There’s an amazing paper by two of the academics that have been behind the campaign against killer robots, Lucy Suchman and Noel Sharkey. So the social scientist and the roboticist coming together.
They say that there are three things that we should really focus on when we’re looking at the role of AI in war. The first one is about distinction, can these systems identify the right target and target in the right way. That, in engineering terms is a question of precision but it’s about much more than precision as well. It’s about the second of these principles which is about proportionality. Are we able to programme systems that do the just thing at the right time and which a military commander would talk about in terms of combat instinct and having people there on the front line. The third one we’ve already talked about which is about accountability.
As Nik said, it’s about once you do blur the lines, once you are no longer sure who is doing what and who is accountable for what and once you can get Putin and his cronies to claim plausible deniability about for example the presence of troops in Crimea, that does change how we think about war. You can imagine AI being a force multiplier for some of those things, even if it isn’t just about the difficult question of whether a human is in the loop when a kill decision is made.
Sean: There’s a lot of nuance there, isn’t there. Hector, you wanted to say something.
Hector: Yes. A lot of good stuff just said there by Jack. I was just coming at it from the point of assurance of these types of systems and how we achieve that. I know Steven talked about the challenges of it. It is something that’s very close to all our hearts I think, how do we assure these types of systems. For what it’s worth, because I think partly or because of the research direction my particular company went down, we came away with a view that it’s going to be very, very hard particularly as these systems evolve.
So we took the view that well hold on a minute, it’s something we’re not going to focus on because we are much more interested in recognising the fact there are rules of engagement, there are ethical principles. Now they are quite explicit. We’ve signed up to specific conventions. At a more simpler level even there are rules of the air for example. That’s where we started from. We know the rules up in the air, the air operations and navigation orders etc. Consequently we can explicitly bound and evaluate and ensure those deterministic boundaries are not infringed.
If you take that view with ethics and rules of warfare etc and code them explicitly, it puts clear [unclear 00:40:55] ensure your machine learning AI system is in a tightly bound box. Now that’s the approach that we’re taking at QinetiQ and working with DSTL [unclear 00:41:09] which I think has a lot to offer [unclear 00:41:17] strict rules about what we do, how we do it, etc. I think it’s a good thing.
Sean: Is that not slightly naïve though? I mean I think I asked this to Steve but he didn’t really go into much detail, but I was thinking about cyber-attacks and the fact that, I mean we’ve seen before worms being released by alleged nation states, without getting into the details. But of course these things have unintended consequences and this is something that I did mention briefly. But sandboxing something and hoping that it’s only going to do that one thing, and then missing something, is a real risk, isn’t it?
Hector: I didn’t say it was only going to do one thing. We allow it to do what it wants as long as it doesn’t infringe the rules of engagement. So there is variability. We call this thing COMPACT, it’s got a very fanciful title, Configurable Operating Model Policies for Control of Tasks. So we allow the machine learning AI system to vary its responses, but only as long as it doesn’t exceed the box that we’ve architected to manage it.
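A rough sketch of the general "bounded box" idea Hector describes: a learnt policy proposes actions, and a separate, explicitly coded rule layer vets every one before execution. All class names, limits and rules below are invented for illustration and are not a description of QinetiQ's actual COMPACT architecture.

```python
# Illustrative sketch: hard, hand-written guard rails around a learning system.
from dataclasses import dataclass

@dataclass
class Action:
    altitude_m: float      # requested altitude
    speed_mps: float       # requested speed
    weapon_release: bool   # anything safety-critical is vetoed outright

class RuleGuard:
    """Deterministic limits checked before any proposed action is executed."""
    MAX_ALT_M = 120.0
    MAX_SPEED_MPS = 30.0

    def vet(self, action: Action) -> Action:
        # The learnt component can vary its behaviour freely, but it can
        # never step outside these explicitly coded boundaries.
        if action.weapon_release:
            raise PermissionError("requires explicit human authorisation")
        return Action(
            altitude_m=min(action.altitude_m, self.MAX_ALT_M),
            speed_mps=min(action.speed_mps, self.MAX_SPEED_MPS),
            weapon_release=False,
        )

def execute(proposed: Action, guard: RuleGuard) -> Action:
    return guard.vet(proposed)   # only vetted actions ever reach the platform

# Example: a hypothetical learnt policy proposes an out-of-bounds action.
print(execute(Action(altitude_m=500.0, speed_mps=45.0, weapon_release=False),
              RuleGuard()))
```

The design choice is that the rules are deterministic and auditable even when the behaviour inside the box is not.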
Sean: Mission parameters, okay. Jack?
Jack: Yes. So Hector’s point about rules is a really, really interesting one. It’s great to hear a company assert that it has made the decision to stay within those parameters and to engineer within those parameters, because it’s easy to say well, let’s innovate and actually those rules don’t apply to us, rather than make the decision to work within those guard rails.
I do think, and I think Hector acknowledges this, that we shouldn’t be naïve about the applicability of those rules and that there will be a need to update them, which is why the UN are taking autonomous weapon systems and saying let’s discuss these in the same way as we have discussed nuclear weapons, biological weapons, chemical weapons and even more prosaic technologies like landmines and boobytraps and things.
I’m particularly interested, because I come from a university, in what the role should be for a scientist working in this sort of area in informing those sorts of rules, because there is actually a very good history of scientists, particularly around nuclear weapons, being the ones that have rung the alarm bells and said let’s wake up to the reality of the possibility of these new systems, let’s get policy makers taking them seriously. Let’s organise social movements. Let’s raise the debate.
I mean the wonderful example like Leo Szilard, the man that patented the atomic bomb, who then persuaded his friend Albert Einstein to write a letter to President Roosevelt at the time, which is an extraordinary act of taking responsibility. I’d like to see more scientists get involved in that discussion proactively, saying what does the next set of rules need to be? How do we encourage the UN to develop the equivalent of the biological weapons convention but for autonomous weapons?
Sean: Nik?
Nik: It’s really encouraging that people are talking about ethics at the heart of this topic and it’s really encouraging that we’re seeing companies leading the way in that sense as well, where there might be some gaps in, say, governance on what we can and can’t do in this space because it’s rapidly evolving. I think Hector’s point about let’s use the existing frameworks and work within the boundaries of what’s the rule of law at the minute, but also there’s a fundamental question about behaving ethically, behaving against some guard rails, some principles, that I think are really important.
Certainly we look at five principles around responsibility, equity, traceability, reliability and governance in the way we do AI work, which is something that, conscious that might not be the rule of law and things, but if you put it at the heart of how you’re approaching it, that you are absolutely committed as an organisation to be looking at accounting for potential biases, that you are actually thinking this is appropriate for a human machine team and this is actually appropriate for a machine to do, and you’re questioning that. You’re looking at data provenance.
You’re looking at the trust. You’re questioning some of those issues and looking at reliability and some of the governance around that. I mean I spend a lot of time talking to senior lawyers in the organisation who are very tech savvy people, who really want to discuss the pros and cons of how we approach AI. I find that a really productive relationship to get different teams, the legal team, the ethical team, the other business teams working closely together to try and think about some of these problems. Certainly with government, I think that’s leading the way and supporting. I think that’s really important as well.
Sean: There’s one thing that Steve mentioned in the discussion where he said that it was difficult potentially to get huge amounts of data of the sort that they required to train some of the machine learning models. So we’ve all had to click on captchas to work out which of the boxes has a boat in or a traffic light or whatever. So we know that, I think most people realise, that’s to help with training data for things like autonomous vehicles. How do we get these data sets then that are needed for these kind of Defence applications?
Nik: I don’t mind jumping in again here. I think there is a matter of understanding and collecting the data that you already have so that it’s accessible, and in Defence the security classification is important so that the people who have the right to see it can see it. That’s absolutely key in making sure that’s all catalogued and agreed upfront, especially in terms of data ownership, who owns what and who potentially owns the IP in that, which is important.
But then I think what’s exciting to me is around some of these virtual worlds, synthetic data generation, opportunities with- we’ve certainly been working with a number of companies who are building fantastic virtual models that we can experiment with systems in. We’re using that to train and test our systems. Obviously virtual is not as good as real, but I think the scale, the processing that’s available, some of the technology that’s coming out of the gaming industry specifically, is allowing us to really use some of these systems in scenarios that we wouldn’t necessarily be able to do, or that would be too expensive to do, in the real world or collect the data from. So I think the virtual environment, synthetic environment is going to play an absolutely huge role.
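A minimal sketch of the synthetic-data workflow Nik outlines, and of his "virtual is not as good as real" caveat: train on cheap, perfectly labelled simulator output, then evaluate on data drawn from a slightly different "real-world" distribution. The toy Gaussian simulator here is an assumption purely for illustration.

```python
# Toy synthetic-data training and sim-to-real gap demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def simulate(n, offset=0.0):
    """Toy 'simulator': two classes of 5-D sensor-like returns."""
    X0 = rng.normal(loc=0.0 + offset, scale=1.0, size=(n, 5))
    X1 = rng.normal(loc=1.5 + offset, scale=1.0, size=(n, 5))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_syn_tr, y_syn_tr = simulate(2000)           # cheap, perfectly labelled synthetic data
X_syn_te, y_syn_te = simulate(500)            # held-out synthetic test set
X_real, y_real = simulate(500, offset=0.4)    # "real world" differs from the simulator

clf = RandomForestClassifier(random_state=0).fit(X_syn_tr, y_syn_tr)
print("accuracy on synthetic test data:", accuracy_score(y_syn_te, clf.predict(X_syn_te)))
print("accuracy on shifted 'real' data: ", accuracy_score(y_real, clf.predict(X_real)))
```

The gap between the two numbers is the sim-to-real problem that better simulators, and real data where it can be gathered, are meant to close.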
Sean: Modelling is really important, isn’t it. Go on, Hector?
Hector: Yes, I absolutely agree with Nik. I mean the current buzzword is all about digital twins and digitalisation. Everywhere you look it’s about digitalisation these days. I agree. To add to that, and I’d be interested in your thoughts, at the moment the civil world is leading on the development of these types of systems. [unclear 00:49:15] Defence has been a little late in picking these technologies up. Perhaps it’s not surprising when you look at the investment in the civil industry in this space.
There are lots of good things coming from it, and in the Defence world we need to recognise the fact we have been behind in capturing data, the right kind of data, and also respect the fact that actually, do you know what, even if we did capture data based on yesterday’s war, yesterday’s challenges, tomorrow the world will be completely different and so consequently a lot of that data is likely to be of less use. I wouldn’t say it would be completely useless because hopefully you can extrapolate from it and make good judgements. These are the challenges that we face.
Sean: I’ve seen citizen journalism take off in leaps and strides as social media and everything has gone on. Watching via Twitter some of the information coming back from Ukraine, people are able to use GPS data from photographs and video footage to actually pinpoint things like munitions, basically via citizen journalists. So I mean there’s going to be all sorts of new ways of gathering data I suspect. Jack, sorry, you were going to say something.
Jack: Yes. I mean maybe there is a way in which the debate on military uses of AI can learn something really profound from the civilian uses of AI, which relates to the controversies that we’ve already seen about extremely data hungry systems and the collection and provenance of that data, who gets represented, where the data comes from, and you can imagine all of those things being sharpened much more in the military domain. But it’s also tied to, as with all questions in public trust, it’s tied to the question of what these systems are for.
This is, I think, something where people in the Defence sector really do need to get ahead of this which is what it does to public trust with the deployment of these systems for uses that a lot of people really don’t like and a lot of people won’t agree with. So I think the really interesting canary in the coalmine that we’ve seen here that would worry me as a Defence AI person was the Google walkout around Project Maven, which genuinely took one of the world’s largest companies by surprise, the fact that a lot of its tech workforce objected to working on Defence projects.
So it’s not just that AI was AI and we’re just developing AI. They were specifically objecting to the use of AI on a Pentagon funded project. Now in the end it didn’t change much because other tech companies picked up that contract but I’d be really interested in either of your thoughts on the possible brewing controversies as AI, which has been used to improve advertising, surveillance, various other things, autonomous vehicles Sean already mentioned, gets turned more to these military applications. What that does to the trust conversation with the public.
Hector: I’m just thinking how best to respond to that. Look, once the genie is out of the bottle, what can you do about it? We’re all clever people, even in the Defence industry, and if we know there’s a solution out there, we will learn it ourselves eventually. That’s one thing to say. Historically in the past it was the Defence industry where all the innovation really happened and the civil industry benefited from that big time. A lot of our medical breakthroughs etc have come from the Defence industry, not absolutely the case at the moment. So there’s checks and balances or swings and roundabouts.
Sean: Nik?
Nik: Yes, I think absolutely there is an element of doing the right thing. I think there’s a little bit of an element of scaremongering in what AI can do. I think it was Jack who alluded to it at the beginning, what we’re doing at the moment is usually pretty advanced data analysis really that is helping, I think Steve mentioned it, about the logistics and improving, optimising maintenance schedules and others where we’ve seen actually really good benefits. Financially this is costing us less money to do things, but it’s probably not what people envision Defence is necessarily doing in AI.
I mean certainly some of the projects that we’re looking at are very much around connecting up the multidomain activities, helping with speed of decision making and other areas, where the thing is that the world is moving so fast, you need to be on top of things. You’ve seen threats moving a lot faster, so AI is actually helping decision makers understand what’s happening. I think that’s a pretty ambitious goal. It’s quite well within the capabilities of AI. However, it’s not as probably, as people do tend to get quite-
Sean: It doesn’t grab as many headlines.
Nik: -excited. I think that’s the point. You’ll probably see some quite mundane applications around people’s expenses and things like that where you’ll see that it benefits Defence because you save money and it optimises, it makes it more efficient, which absolutely means it gives you more capacity to invest in other exciting things. So I think that’s the sort of thing I would focus on, and then again, building on Hector again, it is about supporting some of the human machine teaming, human decision makers in that, and that’s what we’re talking about here. I think we’re not really talking about anything necessarily beyond that.
Swarming technology [unclear 00:55:29] is about helping maybe pilots do a little bit more with other aircraft, with other systems or allowing more information to come to them so they don’t get overloaded. I think that’s a real near term, I mean in the next couple of years we’ll see a real application in that. But absolutely Defence has got to be at the forefront of making sure it is, as I mentioned earlier, ethical, we’re using it in the right way and we’re putting our own systems in place to make sure that we’re doing these things right.
Sean: There was one thing Steve mentioned which was quite interesting which combined with thinking about using civil tech, well interests me anyway which was having this AI at the edge, so the idea of some of the tech effectively not requiring the cloud to work. So at the edge, as I understand it, and feel free to correct me, means that the computation is happening on the device that’s in the field if you like rather than what’s happening with a lot of these voice assistants, without wanting to activate a thousand devices, Siri, Alexa, etc, shipping off the information to the cloud to be processed and then giving you a response. So how does that work? What’s the deal there then with edge AI?
Hector: So look, you know about the challenges that we face in the Defence world. I mean Steve said there are several Cs. It’s cluttered, it’s congested, it’s contested, constrained, complex. There you go, there’s the five Cs I believe. But the consequence of all that is that you’re very unlikely to have persistent communications available to you. Sending data down the pipe to be processed somewhere else is quite expensive.
The challenge is [unclear 00:57:29] that data process and can you do right at the [unclear 00:57:32] and can this machine learning and AI solution, approaches that are coming out, can they help us with that so that rather than sending data down the pipe, you’re sending information that can be actioned a lot more quickly. That’s the benefit of AI at the edge, processing at the edge.
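A hedged sketch of the "send information, not data" argument Hector makes: run a lightweight model on the device and transmit only compact detection records instead of raw video. The detect_objects function and the byte counts are placeholders invented purely for illustration, not any real edge deployment.

```python
# Illustrative sketch of edge processing: ship detections, not raw frames.
import json

def detect_objects(frame):
    """Placeholder for an on-device detector (e.g. a small, low-power model)."""
    return [{"label": "vehicle", "confidence": 0.91, "bbox": [120, 80, 60, 40]}]

def process_at_edge(frames):
    messages = []
    for t, frame in enumerate(frames):
        detections = detect_objects(frame)
        if detections:                      # only transmit actionable results
            messages.append(json.dumps({"t": t, "detections": detections}))
    return messages

raw_frame_bytes = 1920 * 1080 * 3           # one uncompressed colour frame
frames = [bytes(16)] * 10                   # stand-in for captured frames
msgs = process_at_edge(frames)
sent_bytes = sum(len(m.encode()) for m in msgs)
print(f"raw video: ~{raw_frame_bytes * len(frames):,} bytes; "
      f"edge detections: {sent_bytes:,} bytes")
```

The orders-of-magnitude difference in what has to go down the pipe is the whole point when communications are degraded or contested.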
Nik: Hector’s absolutely right on this. I think this is quite a specific Defence challenge in some ways. But there are also some opportunities that come from that, because let’s say you need to run at the edge, you probably need lower power systems, so you’re looking at maybe semiconductors and things that take less power because they just don’t have the capacity to access the grid. But then those sorts of technologies suddenly go, well actually, can you not use that to help with our environmental concerns, because we know that AI is using a lot more energy than other approaches.
So even these edge technologies could potentially benefit these data centres and other areas that we’re looking at as well. So I think this is probably an area I can see Defence leading that maybe could help the commercial world as well.
Sean: Yes, full circle. Yes, go ahead, Hector.
Hector: [unclear 00:58:54] stick a sensor out on a glacier out in the Arctic or Antarctic, processing at the edge helps a lot. Or, I believe, even if you want to do a remote operation out in the field somewhere, having smart sensors that can assist you, those are the kinds of technologies we’re looking at, certainly in the civil space I believe, which we are monitoring quite closely in the hope that they will help us back in the Defence space.
Jack: So there was just one more thing that Steven mentioned that struck me as extremely interesting and that I thought was worth discussing, because we’ve been talking about the assurance of these systems, and that’s that AI enabled systems are often designed to learn over time, right, and to learn after deployment. I think there is something really interesting about trying to assure systems where, even if the system stays the same, the context may change around the system, such that the system is no longer within its design parameters.
Trying to assure those systems over time and make sure that ethical or engineering principles of accountability or whatever it is remain relevant, I mean Sean, you mentioned the 737 Max example right, which is one of these examples about technological, incremental innovation that then quickly becomes problematic because it isn’t seen as a brand new technology that needs re-regulating. I wonder whether, in the theatre of war, these systems might challenge their design parameters or might need to be updated and how that really does threaten our existing assurance regimes.
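One simple way to monitor for the kind of context drift Jack raises, sketched here under the assumption that you have kept the input data the system was originally assured against: a two-sample Kolmogorov-Smirnov test flags when live inputs no longer look like the assured distribution. This is an illustrative fragment, not an assurance regime.

```python
# Illustrative drift check: compare live inputs against design-time data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
assured_inputs = rng.normal(loc=0.0, scale=1.0, size=5000)   # data the system was assured on
live_inputs = rng.normal(loc=0.6, scale=1.2, size=1000)      # operating context has shifted

stat, p_value = ks_2samp(assured_inputs, live_inputs)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}): inputs no longer match "
          "the conditions the system was assured under.")
else:
    print("No significant drift detected.")
```

In practice this would only be one trigger among many for re-running test and evaluation, which is exactly the investment gap the panel goes on to discuss.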
Hector: I absolutely agree with you on that one. I wish programmes like the TAS Hub were doing more to understand these types of challenges, these assurance challenges, and what they actually mean, and the limitations of any approaches that are being developed with respect to test and evaluation, validation and verification. I don’t believe there’s enough of that happening.
As you were responding there, I was going to say something slightly negative but this isn’t Steven Meers [unclear 01:01:24]. I have concerns that we’re not investing enough in this whole space of test and evaluation of machine learning AI systems. We need to do an awful lot more. We need bigger programmes of record focused in this area, please.
Sean: Nik, any final thoughts?
Nik: I’ll echo that. I think Hector’s right, we do need a lot more investment, not just talk, in this area. There are some real things that the TAS Hub and others can make an absolute difference on, but it needs that long term investment view to be able to do that.
Sean: That’s just about all we’ve got time for on this episode of Living With AI so it just remains to say thank you to all of our panellists and obviously thank you to Steve. So thank you very much, Nik. Thank you very much, Hector.
Hector: Thank you, you’re very welcome. Thank you for the opportunity.
Sean: Thank you, Jack.
Jack: Thank you, it’s a pleasure.
Sean: If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited. Our theme music is Weekend in Tattoine by Unicorn Heads and it was presented by me, Sean Riley.
[01:02:52]