
Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 2, Episode: 5
Trusting Robots Day to Day
This 'Projects Episode' discusses a few TAShub projects grouped around the theme 'trusting robots day-to-day'
Project: Trustworthy human-robot teams
Dr Marise Galvez Trigo – Assistant Professor
Project: OPEN-TAS
Professor Tony Prescott, Lead Contact
Project: Trustworthy human-swarm partnerships in extreme environments
Dr Mohammad Divband Soorati – Project Lead, Assistant Professor, University of Southampton
Industry Partner:
Daniel Camilleri – Cyberselves ltd OPEN-TAS
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: Hi and welcome to Living With AI. AI is ubiquitous, from voice-controlled cooking timers to autonomous vehicles, but do we trust them? I’m Sean Riley, host of this, the Trustworthy Autonomous Systems Hub’s podcast. All our previous episodes are online if you want to have a listen, just search TAS Hub or Living With AI wherever you get your podcasts. We’re recording this on the 26th May 2022, so bear that in mind if you’re listening way off in the future. This episode is one of our projects episodes. We’ve grouped three projects together around the theme of trusting robots day to day. Our researchers today are Marise, Tony and Mohammad, and joining us from industry is Daniel. So I’ll ask them all to just briefly introduce themselves and the project they’ve been working on and then we’ll get a bit more detail shortly. So Marise, I’m going to start with you because you’re right at the top of my screen, so Marise, tell us about your project.
Marise: Hello. Thank you very much. So my name is Marise Galvez Trigo. I’m currently an assistant professor at the University of Lincoln, however I’m going to talk about the project Trustworthy Human-Robot Teams, which is a project that I worked on whilst I was a research fellow at the University of Nottingham. I’m going to focus a bit more on the aspects related to human robot interaction.
Tony: I’m Tony Prescott. I’m a professor of cognitive robotics at the University of Sheffield. I was leading a project looking at responsible innovation in TAS and how we can use robot telepresence to give people access to what happens inside our laboratories.
Mohammad: Thank you for inviting us. I’m Mohammad Soorati. I’m an assistant professor at the University of Southampton. My project was on Trustworthy human swarm partnership in extreme environments and I’d be happy to discuss this later.
Daniel: Hi everyone. My name’s Daniel. I’m the founder and CEO of Cyberselves. Together with Professor Tony Prescott, I was involved with the OPEN-TAS project, looking at providing VR remote access to different labs around the UK.
Sean: Superb. Thank you very much. So perhaps could we get a little bit more detail, Marise, on trustworthy human robot teams then, please?
Marise: Trustworthy Human-Robot Teams was a pretty big project. We had a very large team with members from the three universities of the TAS Hub, that’s Nottingham, Southampton and King’s College London. Whilst I worked on this project I was based at Nottingham. For this project, what we wanted to explore was the relationship between humans and robots when it came to them being part of a team, and when it came to people making use of whatever services or spaces these robots were in, teaming up with other people. We basically had two use cases, or two scenarios, where we wanted to explore this. The first one was robotic surgery. This case was mainly led by King’s College. We didn’t get very far with it because of ethics; robotic surgery involves going through NHS ethics, etc. But what we wanted, and will eventually explore within this scenario, is how surgical teams can work better with these kinds of surgical robots. To give a bit of background, we were working with a specific one, the da Vinci robot, which is a robot that surgeons can teleoperate to remotely carry out certain surgery types. We wanted to see how trusting these devices can affect the performance of surgical operations, also the trust that potential patients might have in these devices and the procedures carried out with them, and whether there were any barriers to trust, or any good and bad things that could affect the trust between the robots and the people within the teams, but also from the general public towards the teams using them. Then the use case we focused on, in Nottingham, was a [unclear 00:04:24] use case. In this use case we focused on the use of a UVC disinfection robot. Basically a UVC disinfection robot is a pretty big robot, sort of like a vacuum cleaner robot but much taller, that has a set of UVC lamps on it. What these lamps do is emit short-wavelength ultraviolet radiation that, if administered at the right dosage, can kill or deactivate viruses and bacteria. We wanted to explore how, if we used this robot to disinfect a specific space, people would feel about it. For that, what we did was run a series of user studies with two groups of people. On one side was the general public, which could be anybody accessing the space cleaned by this robot, and on the other side we had professional cleaners, people who might access the space disinfected by the robot but might also work with this robot at some point. We basically tried to explore how they felt about the prospect of working with these robots, and about the prospect of having these robots performing disinfection of spaces in different settings such as hospitals, lecture theatres, classrooms, laboratories, even schools or museums. We finalised the user studies not very long ago and we are currently analysing the data. However, we have already made some interesting findings. We thought that some people could be hesitant to work with this kind of robot, or that professional cleaners might feel threatened by their presence, worrying that the robots were taking over their jobs, things like that. However, this wasn’t the case. We observed that some people from the general public did mention such concerns, but we did not observe them from any professional cleaners, which is pretty promising.
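The disinfection mechanism Marise describes follows a standard relation: the UV dose delivered to a surface is irradiance multiplied by exposure time. Here is a minimal sketch of that arithmetic; the numbers are purely illustrative assumptions, since the project's actual dose targets and lamp outputs aren't given in this episode.

```python
# Minimal sketch of UVC dose arithmetic:
# dose (mJ/cm^2) = irradiance (mW/cm^2) x exposure time (s).
# Numbers below are illustrative assumptions, not figures from the project.

def exposure_time_s(target_dose_mj_cm2: float, irradiance_mw_cm2: float) -> float:
    """Seconds of exposure needed to deliver a target UVC dose at a given irradiance."""
    return target_dose_mj_cm2 / irradiance_mw_cm2

# Suppose a pathogen needs roughly 10 mJ/cm^2 for strong inactivation and the
# robot's lamps deliver about 0.5 mW/cm^2 at a surface: it must dwell ~20 s.
print(exposure_time_s(10.0, 0.5))  # 20.0
```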
Sean: Fantastic. Thank you, Marise. Tony, can you tell us a little bit about open TAS then?
Tony: Certainly. So the TAS programme overall has a strong emphasis on responsible innovation, which is the idea that we do research in a way that is mindful of how the wider public, stakeholders and society think they want the world to change as a result of introducing autonomous systems. Clearly autonomous systems research can be very controversial, so it’s important that we do it in that context, that we understand the potential impacts our research might have on the world, and that we consult people about what these impacts are going to be and whether they want them or not. Then on the other side of that is open science, where we try and report the work that we do in as clear a way as possible and we communicate it as widely as possible, as for example we’re doing in this podcast. We make those outputs easily accessible to people so they’re not behind paywalls and so on. But we’ve noticed that there’s a bit of a gap in the middle, which is the research we actually do and how we do that research, because that tends to happen behind closed doors in our laboratories. We’ve noticed, particularly over the last few years with Covid, how suspicions about research often focus on what is happening in the laboratory. For example, if you look at vaccine hesitancy, there are questions asked about what’s happening in these labs that are developing the vaccine and whether they’re doing that in an appropriate and ethical way. So we think there’s scope to do more in this space of opening up the laboratory so people can see what we’re doing, they can actually see our research, see what activities we’re doing, and see that what we’re doing, we’re doing in an ethical and open way. That’s what open labs is about: opening up the laboratory, allowing people to come in. Now there are problems with that, obviously, because our laboratories often aren’t very easily accessible to the public. It can be a long way for people to travel to visit your laboratory, there are issues around safety, and in the last two years of course there have been issues around disease and spreading Covid and so on. So it’s difficult to actually bring the public into your laboratory. We’ve got low capacity for this. We might have an open event every year or so, but that’s probably as far as it goes. So the idea is to use our TAS technology, specifically robotic telepresence, to allow people to make visits to labs via a robot. So we’re almost using the TAS technology to enable the visit. What that means in practical terms, and we worked really closely with Cyberselves, so Daniel can talk about this in a bit more detail, is that we’re using immersive telepresence, so people will put on a VR headset. If they don’t have a headset they can use a computer screen and a mouse, but they’re able then to project themselves into a mobile robot, usually a humanoid robot, and they can move that robot inside the lab. It’s two-way, so they can see what’s happening in the lab and they can hear, but they can also talk and move within the laboratory and go and look at what they want to. So this is obviously a very new idea. It hasn’t been tried many times before. We wanted to explore how it could be used to enable visits into laboratories. So we’ve been testing it over the last year. We had a couple of competitions where people would remotely control a small robot and run it through a maze. So that’s a fun thing to do, and it also allows us to test and improve our technology.
Then we’ve also had people, and that includes the general public and school children and so on, remote controlling these robots in the laboratory to see what we’re doing. They can move around inside the laboratory and see different robotic activities that we’re doing in the lab. The idea is that we are going to put the methodology that we’ve used into the public space so other people can try this out. We’ve also written a report explaining our motivation for this and how we think telepresence could be the next step in making laboratories more open. People in the past have put webcams inside laboratories, which is great, but it’s another degree of freedom and another degree of immersion if you can actually control a robot to move around inside the lab, a robot body with a head and eyes and head movement, maybe even with arms that can reach and touch things, and you’re remotely controlling that from perhaps a long distance away.
Sean: Fantastic. Daniel, we’ll come to talk to you in a moment about that, but first of all I’m just going to throw over to Mohammad to find out about the trustworthy human swarm partnerships in extreme environments, please.
Mohammad: Thank you. Yes, I want to talk about Trustworthy Human Swarm Partnerships in Extreme Environments, but first I want to give a quick introduction to what swarm robotics is and why we’re doing this. So swarm robotics is a field of study that focuses on simple autonomous agents forming a large group without access to global information. So what can we do with a large fleet of robots? We get robustness, adaptivity and scalability, because you have so much redundancy there, but it comes with its own challenges. Simple robots will not be able to make critical decisions, for instance, so on its own the field is far from delivering real-world use cases. There are also issues when you have so many robots. There’s the issue of explainability: how do you explain what’s going on in a swarm of a thousand robots, how do you get an idea of the performance of the swarm, and so on. One way to solve these issues is to create a team of human operators and the swarm. Human operators are very good at making critical decisions, so why don’t we let them take care of the decision making, with the swarm augmenting the vision and actuation of that operator? The human operator will then be able to deal with all the issues and get a view of an area that would otherwise not be accessible. So we take care of critical decision making, and we get real-world use cases, if we couple the swarm with human operators. But then there are other issues, such as the interface: how would you design an interface that human operators could easily use to access that information? The other issue is trust. If you look at the sophisticated devices out there, everything works, but do human operators, especially in disaster management, where their lives are in danger, trust these simple drones flying around? Still we have the issues of scalability and performance. So we gathered together more than 17 academic and research staff over a year, we worked with [unclear 00:14:10] to support us with some industry-driven use cases and requirements, and we worked on this project. I was the PI on this project. We were hoping to get funded for the next year as well, this time adding seven more researchers, from the US and other universities across the UK, coming and helping us to make that happen. What we did is we said, well, first of all we need a simulation platform where we can actually have all these circumstances, everything, simulated in there. Then, because we were working on designing a highly usable, user-centred interface, we went and asked experienced drone operators, people with years of experience operating drones: what is it that makes you lose your trust? How would you gain back that trust? What are the important factors for you if we have multiple drones? Of course this is still at the research stage, they haven’t worked with thousands of drones or thousands of robots yet, but where do they think there might be issues, and how do they think we can work on that? So we tried to create a taxonomy of all the things we need to take care of when we’re talking about trustworthy swarms. Then we created a use case that our industry partners said would be very useful: let’s focus on this problem and get it sorted. We worked with swarm experts and asked them what the important factors are that we need to take care of when we’re talking about explainability of the swarm.
We also worked on the case where communicating with every single robot in a large swarm is not possible, which is what’s most likely going to happen in real-world use cases where you don’t have access to the whole swarm: how can we make sure that critical decisions can still be made despite these issues? Obviously there is work in progress and there are future missions, so I’ll just quickly go over them. When you have a complex system, it could be very useful if we could use formal methods, formal [unclear 00:16:32] checking methods, to say that 95% of this swarm will be able to accomplish this mission successfully. That would be very helpful. We could always say, you have identified some of these states as risky; I assure you that there is a 0% likelihood that the swarm will take that risky action. We are also thinking about designing an interface which is not fixed but dynamic, depending on the situation that’s going on. Let me give you an example. If a vehicle is out of fuel somewhere and somebody is dying somewhere else, obviously you have to prioritise. You don’t want an interface flooded with all the information that you don’t really need while you are trying to prioritise. So we are trying to build an interface that adapts itself to the situation. We asked the operators: what do you want to see now, if this happens? We then tried to create a dynamic interface. We are also trying to get more sci-fi: we are designing brain electrodes that would measure [unclear 00:17:43] workload on the operators and then see how they react once something has happened in the swarm. Another important thing is that we want to make sure that at all times the operator is accountable if something goes wrong. It’s not that we say, okay, the swarm has done it, go talk to them. We’re ensuring that there’s always a certain level of responsibility assigned to the human operator. That is what we do, in a nutshell.
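One concrete piece of the dynamic interface idea is easy to sketch: rank incoming swarm events by severity so the operator sees the critical few rather than a flood. This is a hypothetical illustration; the event types, weights and names below are invented for the sketch, not taken from the project.

```python
# Hypothetical sketch of a "dynamic interface" filter: rank incoming swarm
# events by severity so the operator sees the critical few, not a flood.
from dataclasses import dataclass, field
import heapq

# Invented severity weights, for illustration only.
SEVERITY = {"casualty_detected": 3, "vehicle_low_fuel": 2, "waypoint_reached": 1}

@dataclass(order=True)
class Event:
    priority: int
    description: str = field(compare=False)

def top_alerts(events: list[Event], k: int = 2) -> list[Event]:
    """Return the k highest-priority events; everything else stays off-screen."""
    return heapq.nlargest(k, events)

events = [
    Event(SEVERITY["waypoint_reached"], "drone 12 reached waypoint"),
    Event(SEVERITY["vehicle_low_fuel"], "drone 4 low on fuel"),
    Event(SEVERITY["casualty_detected"], "possible casualty near grid B7"),
]
for e in top_alerts(events):
    print(e.priority, e.description)  # casualty first, low fuel second
```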
Sean: Fantastic. That’s great. Can we hear from you, Daniel? Obviously you’re an industry representative so tell us a little bit about what your organisation does and then about how you worked with Tony’s team.
Daniel: So my organisation, Cyberselves, we spun out of the University of Sheffield a couple of years ago now, two years ago, with the idea of making a universal way of controlling and programming robots. I myself am a roboticist; I worked at the University of Sheffield for four or five years before spinning out. My work started largely on a single robot, and I spent roughly six months learning how to control this robot, how to get the joints to move, the camera input, the sound, all the different bits and pieces. But then later on we needed to shift to perform essentially the same function on a different robot. I was surprised how difficult it was to still get into this new robot and understand how to control it, how to get access to all the different bits and pieces of information. That’s where the idea started: we needed a way to abstract all of these differences. That’s important because around the world there are a lot of researchers performing a lot of research, creating a lot of very interesting algorithms, but a lot of the time those algorithms are robot specific. The fact of the matter is you just cannot take that algorithm and run it on another robot if it’s running a different system, maybe it’s not [s/l ROS-based 00:20:00], maybe it’s got a different body layout. So that is the problem that we set out to solve. In the pursuit of solving that problem, we landed on telepresence as an excellent way to demonstrate this capability. What better way to show that you can control any robot than by substituting the brains of that robot with a human, and then having the human, with their fixed body interface, control a number of different robots, be they humanoid, non-humanoid, underwater robots. Then telepresence brought in a couple of extra challenges. We needed to make it fast. We needed to make it remote. And we needed to make it as easy as possible to learn, so that rather than spending months learning how to programme a robot, it should only take you a day to learn how to get all of the inputs and outputs and then you’re ready to go. Combine all those together and that’s what we’ve been doing at Cyberselves. To show these capabilities we’ve been participating in the ANA Avatar XPRIZE competition. It’s an international competition; we’re one of two teams from the UK competing there, and there’s a total of 20 going to the finals now from all around the world. That is specifically on the state of the art for telepresence, which is our first application, but all of that is geared towards validating the design and the implementation of [s/l Animus 00:21:38], which is what we call our robot-independent software. So we got involved in this project through Professor Tony Prescott and through Dr Michael Szollosy, who is also a founder of Cyberselves, as is Tony. We got involved in this project to provide this service of telepresence into different labs. That is both a test for the technology, it’s getting our technology into the hands of the general public, which we believe is really, really important, especially with robotics, and it got us to experiment with the user interface, with what’s the best way to deliver these sorts of events online where you’ve got multiple people, either within a classroom or even around the world. We had attendees from Turkey and India and China at one point who were connecting to a [unclear 00:22:39] robot in Sheffield and traversing this maze, which was really, really cool.
So we got involved in that to provide the service, because our software can run on any robot. We didn’t just do the [unclear 00:22:56] and later events, we had a Pepper robot available, and we had another robot which I’ve got with me here. It’s a small desktop robot, just a moving head, but it still provides that sense of presence in the labs. That’s how we got involved.
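Daniel’s “robot-independent software” point is essentially an abstraction-layer design. The sketch below is a hypothetical illustration of that idea, not Cyberselves’ actual API: one teleoperation-facing interface, with a thin adapter per robot, so the same headset input can drive a humanoid or a head-only desktop robot.

```python
# Illustrative sketch of a robot-abstraction layer: one teleoperation-facing
# interface, with a thin adapter per robot. Class and method names are
# hypothetical, not Cyberselves' actual API.
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """What the teleoperation layer needs, regardless of the robot underneath."""
    @abstractmethod
    def look(self, pan_deg: float, tilt_deg: float) -> None: ...
    @abstractmethod
    def drive(self, forward_m_s: float, turn_rad_s: float) -> None: ...

class HumanoidAdapter(RobotAdapter):
    def look(self, pan_deg, tilt_deg):
        print(f"Humanoid: set head joints to ({pan_deg}, {tilt_deg})")
    def drive(self, forward_m_s, turn_rad_s):
        print(f"Humanoid: base velocity ({forward_m_s}, {turn_rad_s})")

class DesktopHeadAdapter(RobotAdapter):
    """A head-only desktop robot: it can look, but a drive command is a no-op."""
    def look(self, pan_deg, tilt_deg):
        print(f"Desktop head: pan {pan_deg}, tilt {tilt_deg}")
    def drive(self, forward_m_s, turn_rad_s):
        pass  # no mobile base

def teleop_step(robot: RobotAdapter, headset_pan: float, headset_tilt: float):
    # The operator's head movement drives whichever robot is plugged in.
    robot.look(headset_pan, headset_tilt)

for robot in (HumanoidAdapter(), DesktopHeadAdapter()):
    teleop_step(robot, headset_pan=15.0, headset_tilt=-5.0)
```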
Sean: It all conjures, in fact all of these projects are conjuring, sci-fi imagery, from the swarm robotics through to the robotic surgery. And just thinking of what you’ve just been talking about, this idea, it’s a bit like the peripheral avatar, the idea that it doesn’t matter what’s at the other end, does it? I mean how do people, and this perhaps goes back to Marise, but how do people react when there are robots in their environment if they’re not primed for it, if they’re not ready to see this object coming up? What did you find, Marise?
Marise: Well in our project, the use case of the cleaning robot, let’s say, or disinfection robot, the interaction that people will have with this robot is a bit different, because, compared I suppose with other robots, people are not meant to be in the same room as the robot whilst the robot is in operation. So they can have some kind of interaction with it but it will be very basic. So it’s more like trust in the robot doing a good job whilst people cannot actually see if the robot is doing a good job or not. That also brings us to some concerns that some participants in our study had, because these robots disinfect spaces using UVC light. You cannot see the light, and you cannot see the bacteria or viruses present in the air either. So people were a bit hesitant; many people didn’t know how this technology worked. Some people were familiar with UVC disinfection, however, especially people working in hospitals, because apparently they don’t use these robots but they already use UVC disinfection devices there. But yes, some participants wanted to see bacteria and viruses actually disappear and be killed by the robot.
Sean: Like the CSI, you want to be able to shine a light on it. But they are quite dangerous as well, those machines, aren’t they?
Marise: Well it’s pretty big and pretty heavy. It’s also pretty slow, so it’s unlikely it will harm you if you get hit by it, just because it is slow. But yes, they are heavy devices. This particular robot doesn’t have any arms or anything that it can physically harm you with, although if it turns on whilst you are in the room with it, you might receive radiation from the lights and be harmed by it. Some people also had concerns about that. They asked about what kind of risk mitigation could happen, what kind of trials could happen before this kind of robot is deployed, whether they could receive a warning, or whether they could have a stop button to make the robot stop, or even sensors on the robot that could detect people. It actually does have them, it has sensors that detect people around it, so in this way it can avoid turning itself on and harming people. But yes, it was very interesting how people were concerned about not being able to see what the robot was actually doing and how that affected their trust. Also based on that, some people mentioned that they could trust the robot as long as the company or organisation deploying it was, as they saw it, a trustworthy one. So some people said, well, I trust the University of Nottingham, so I could trust the robot to have been thoroughly tested and checked before being deployed.
Sean: This is like when people have brand loyalty I guess, isn’t it?
Marise: Yes. It’s interesting.
Tony: We’ve done some research on people’s attitudes to robots and how they change with interacting with actual robots, because before people have met a robot their attitudes are largely informed by science fiction. Some science fiction robots are quite dystopian but some are quite utopian. So actually, on average, people have a relatively positive view of robots, but it might not be a very accurate view because they’ve only seen robots in science fiction, not real-world robots. When they encounter real-world robots, it’s interesting, because that’s quite often different from their expectations. One of the differences is that real-world robots are often a lot less capable than their science fiction equivalents. So there’s a learning experience there, to discover that actually these robots aren’t as smart and as versatile as you might imagine, and that there’s a real place for them operating alongside humans, sharing difficult jobs, with humans doing their part of it and robots doing the bits which are easy to automate and perhaps more repetitive and so on. So there’s an education piece, I think, to all of this human robot interaction work that we’re doing, to help people understand what robots actually are and how they could impact on our lives. One of the things we’ve found in our research, actually, is that if you or I meet a robot, or if one of your friends meets a robot and then describes that meeting to you, and they had a positive impression of the robot, that will rub off on you and give you a more positive impression of robots. This is actually a phenomenon we see in social psychology. If you or I meet maybe an out-group of people, people that we don’t know very well, that changes our attitude towards them. Then we speak to our family and friends and their attitudes will change as well. So there seem to be similar mechanisms happening in the way we change our attitudes to robots to those that happen when we change our attitudes towards other people.
Sean: In my experience of having worked over in Nottingham, I think when Marise was there actually, and having some interaction with the Boston Dynamics Spot robot, I can’t say I came away with the most positive view of it. From a technological point of view, yes, very impressive, but I was filming at the time and I tried to film some close-ups and actually it scared me, if I’m absolutely honest with you. Daniel, you wanted to speak.
Daniel: Yes. Going on from what Tony Prescott was saying, one of the first things that we did as a company, even before we had spun out, was to go to different tech events around the country and Europe, and we were also invited to some as well, where we’d take robots with us, take a headset with us, and it was this great exercise that we did. It was actually a research project, which was called Cyberselves, and that then spun out as the company. It was all around dispelling some of the fear around robotics, because most of the engagement, the exposure, that the general public has is through media, through videos. In almost all of them, we see robots as a replica of ourselves, dominating the world or doing something or other. So using telepresence was actually a great way to get people to spend a couple of minutes in the robot’s shoes, being the robot. That way they could understand the limitations of the robot, they could understand what it’s capable of, what it cannot do. That was a great way to introduce the topic.
Sean: I think anyone who’s watched the Doctor Who science fiction show in the UK and has tried to imagine Daleks climbing up stairs is probably starting to get an idea of some of the limitations that robots must face. Marise, you wanted to say something. I think you put your hand up, didn’t you?
Marise: Yes, I wanted to pick up on what Tony said about people having this science fiction idea of robots. That’s quite true, actually. For our study we explored participants’ attitudes towards robots before they had even seen the robot. Then for the user study, when we brought it into the room and they saw it, most of them, or pretty much all of them, had imagined a pretty different thing. Some of them described that they had been imagining a humanoid robot, and they definitely overestimated what the robot was capable of doing. I think actually then having seen it in operation helped them understand that the jobs of professional cleaners are currently not threatened, or shouldn’t be threatened, by the presence of this robot, and that instead they can see these robots as team members or tools, depending on what each person wants to call them, that can perform a more comprehensive version of the task that currently only the human team performs. We asked all participants, if they were to choose between a team of only robots, of only humans, or a team with humans and robots for cleaning and disinfecting spaces, what they would prefer, and all of them preferred the combination of both, because they saw how, despite the limitations of both humans and robots, they can complement each other. But yes, definitely there is a general overestimation of what robots can do that I think may come from science fiction.
Sean: Definitely. I’m going to throw over to Mohammad now. What was your experience in your project of these sorts of things? I mean, swarm is an unfortunate term, isn’t it, because if we called it, I don’t know, a flock of robots or something... when you hear about swarms you start thinking of wasps and all sorts of nasty creatures, when actually all it means is collaborating robots, right?
Mohammad: Yes, that’s true, but we actually got the inspiration from the insects that are probably scary. Actually, this is what I want to mention: we are not always trying to increase trust. What we want is calibrated trust, the right level of trust that the system deserves, because over-trust might be as bad as under-trust. If you have a surgical robot and you feel like, okay, I’m going to grab a coffee and let the robot do the surgery, that can be a catastrophe. So what we try to do is always measure the right level of trust, and then we try to create incentives for the human that is interacting with the robot, and almost say, hey, do you want to check this out? You seem to have been awfully silent over time and maybe there’s something wrong, so you should check. So I think it’s very important that when we say trust, we’re not always thinking about adding it, because the other extreme can be just as bad.
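The nudge Mohammad describes, prompting an operator who has gone quiet while anomalies accumulate, is straightforward to sketch. This is a toy illustration under invented names and thresholds, not the project’s actual trust model:

```python
# Toy sketch of a "calibrated trust" nudge: if the operator has been passive
# for too long while the swarm is reporting anomalies, prompt them to check
# in. Names and thresholds are invented for illustration.
import time

class TrustCalibrator:
    def __init__(self, silence_limit_s: float = 120.0):
        self.silence_limit_s = silence_limit_s
        self.last_operator_action = time.monotonic()

    def record_operator_action(self) -> None:
        """Call whenever the operator inspects or commands the swarm."""
        self.last_operator_action = time.monotonic()

    def maybe_nudge(self, open_anomalies: int) -> str | None:
        silent_for = time.monotonic() - self.last_operator_action
        # Over-trust signature: anomalies piling up while the operator is idle.
        if open_anomalies > 0 and silent_for > self.silence_limit_s:
            return f"{open_anomalies} unreviewed anomalies - do you want to check?"
        return None
```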
Sean: Fair enough. I just think it’s an unfortunate term swarm. I understand where it’s come from.
Mohammad: A lot of researchers have tried to call this a collective or a flock, but actually the more common terms are what they call multiagent systems or multi-robot systems. There’s actually a difference, though, and the difference is that in a swarm there’s a level of uncertainty, and there are unknown conditions you’re not aware of, and that’s why the term is used. The reason it’s called this way comes from the field of swarm intelligence, which is inspired by ants and basically social insects. But you’re right, maybe we should think about another term.
Sean: It probably wouldn’t catch on, would it? Swarm is just so established now, isn’t it. I mean, we have mentioned it before on the podcast, we’ve talked about swarming robots before, and I remember mentioning, because I’ve done some work with a scientist whose work is to do with termites and construction, the idea of agents that can just go off and do a job. The problem always seems to be, as you mentioned in your talk before, the communication, because how do you know what this group of robots is doing? There’s no foreman or foreperson saying, okay, you guys over there do that bit, you go there and do that bit. It’s like an emergent property, isn’t it? How do you deal with unintended consequences in that case?
Mohammad: Well, you’re touching on a very important question, which is what we call the micro/macro link: how you try to translate all your global objectives into [unclear 00:36:40] that can emerge such behaviour. The honest answer is we don’t know. So there might be emergent behaviours that we are not aware of. Actually there has been some nice work from different scientists reporting unexpected behaviours they observed in their research that they had no idea about. This also touches on the field of evolutionary robotics, where we evolve robots, for instance for gait learning. They were thinking, how do we create structures from multiple robots so they are able to move? What they were thinking was, okay, I’ll just give them some time, they can come up with some [unclear 00:37:27], but that’s not actually what happened. What happened in that particular case was that they climbed up on each other and they all dropped down on the other side. That’s how they got to the other side. There might be emergent behaviour [unclear 00:37:44].
Sean: But that’s research for you I guess. I was thinking back to the idea of the telepresence and opening up the lab actually, not necessarily talking about unintended consequences but the idea of allowing people to see what’s going on. Tony mentioned vaccine hesitancy and all these sorts of things. How do you, and maybe this is outside the scope of this but how do you make the lab understandable to somebody who is wandering around just watching from a telepresence robot? I mean is that to do with how it’s implemented?
Tony: Yes. I mean, you can’t just let a telepresence robot run around in the lab. There are safety issues, and obviously people are doing their work and you don’t necessarily want to distract them. So it does have to be orchestrated up to a point. One of the things is that we have to make sure that the lab is actually busy, because in robotics research you find that people spend a lot of time at their computers and the robots aren’t necessarily being used every day. You might be working on your programme in simulation for months and months before you try it on a robot. So robot labs aren’t necessarily the hive of activity that you might imagine they would be. So we tend to wait until there are a few people doing some demos, or we invite people to come along and do their work in the lab on the day that we’re going to get visitors in, and then there’s something to look at. So that’s the first thing. Then obviously the researchers need to be primed that there are going to be robots peering at them and seeing what they’re doing. So it’s not quite the fly-on-the-wall experience that you might imagine. There are other ways of giving that. Clearly you could have a camera that just watches what people do, but that also raises issues about whether it’s fair on the researchers to be filming them all the time. So there are lots of questions here about how to do this appropriately and how to get the best out of this opportunity that we have with telepresence, of a new way of visiting labs. So I think we’re early on this journey, and other people around the world are trying to do this, but I think there are examples from other disciplines. For example, research using animals has had an image problem for a long time, and obviously there’s a lot of societal concern about how animals might be used in live testing in laboratories. One of the ways that they’ve explored to try and address those concerns is to have cameras and easy ways of accessing what’s going on in the lab, so that at least some of the myths about those kinds of experiments can be dispelled. The level of, for example, mistreatment of animals can be shown to be much less, or completely minimal, compared to some of the stories that are told about it. So I think it’s not just robotics that can benefit from this but any area of research which is seen as controversial. If you give people the capacity to visit, and telepresence allows remote visits and more immersive visits, then people can start to see into the laboratory and see what we’re actually doing. More broadly, unlocking the life of the scientist might encourage more people to come into science and engineering, because they see how interesting and exciting the jobs can be. So from the point of view of education and bringing school children in, one of our goals was clearly to increase the inclusivity of [unclear 00:41:25] research. We have a gender problem, for instance, in terms of research; many more men do it than women. One way to address that gender imbalance is to bring children in who haven’t decided yet what they want to do when they grow up, to show them some of the exciting things that they can do with robots and to show them that actually there’s stuff happening in the robot lab that could be interesting for them. So that’s one of the real goals we have. Another aspect of inclusion is to look at people that might have a disability. One of the exciting things about telepresence is that you don’t have to be able-bodied to control a telepresence robot.
It’s ideal for somebody who perhaps has mobility problems. They can put on the headset and visit the lab, whereas if they wanted to come to the lab in a wheelchair, that might present a lot of logistical difficulties. So we’re really about opening up possibilities for people that might have trouble visiting the lab, or might not think that robotics or autonomous systems is for them, and they can have an experience of it which is quite low cost, you can do it from home or from school, and you can discover something new.
Sean: It’s quite meta, isn’t it, using a telepresence robot to investigate telepresence robot research. I’m going to come to Daniel now. Did you find yourself eating your own dog food in some way?
Daniel: You have to, and it’s the best way to do it really, it’s the best way to test the technology to the limit. Going back to what Tony was saying about how we provide this, we usually also have a guide, a chaperone, moving around with the robot, explaining things around the lab. I also have to say it’s usually much more fun if you don’t tell the researchers; that way they suddenly see a robot come to life and you get to experience their reaction, which is just a lot of fun.
Sean: That’s the subject of this whole podcast, isn’t it, how people react, how people are around robots and working around robots. So were the researchers generally positive though when they found robots wandering around asking them questions as it were?
Daniel: Yes. They’re usually very surprised, and confused for a couple of seconds as well as to what is happening. But then they catch on really quickly and they warm up to the idea. Of course the roboticists warm up pretty quickly anyway to robots, but even general-public-wise it’s been very good. We sometimes carry out this two-person act where we’re at a conference, there’s a robot, we’re going around with the robot, and for the first couple of questions we act like the robot is autonomous even though there’s someone in the robot. So we carry that out and then tell them that there’s actually a person there. You would be amazed how many times the person on the receiving end has said, surely not, that is actually an autonomous robot. They do believe the autonomy bit behind the robot, and it’s been a positive experience altogether as well.
Sean: So they trust that it’s autonomous? Fantastic.
Marise: That’s actually quite interesting, and it comes down to the fact that people tend to overestimate robots as well. I was involved in another project not long ago where we ran an initial study in which we had actors pretending to be a robot that sorted the laundry. Some participants, we didn’t tell them that it was an actor. Some participants [unclear 00:45:14] had a super realistic robot that looked human. So it’s incredible how people can think that such a robot, one that completely resembles a human in every way, exists. It’s very interesting.
Sean: It’s also problematic though, isn’t it, because people perhaps need a bit of training with this. If they’re making assumptions that these systems are more capable than they are, that could really lead to some problems down the line. I mean, I’ll confess to having done that with a voice assistant. When you hear that the voice assistant is going to learn about you, reading between the lines it’s learning what your voice sounds like, but you think, okay, I don’t have to say all the key words to get it every time because it’s learning about me. But no, it’s not learning that much about me.
Marise: I think it also shows that the open lab thing is a very good idea, because if people could see what actually happens in robotics labs and the kinds of things that robots can and cannot do, it could help people better regulate their expectations of what a robot can do. That could surely help, because I think that happens to some extent with many pieces of technology: we may buy something thinking it can do much more than it actually can, because we’ve seen it in a movie or the marketing has been amazing and made us believe it.
Sean: The power of advertising. Yes, absolutely. Daniel?
Daniel: It also goes back to one of the most difficult problems that we have as roboticists, and that is that robots don’t have common sense. We always have to fight against our intuition to break down a task we find easy into something which is actually much more complex for a robot to do. This is what you see as well when people interact with robots. They frequently assume they’re capable of much more than they actually are, because of that core thing: there are things that we find very easy that robots find exceptionally hard.
Sean: There are things that robots find easy that I find exceptionally hard as well though to be fair. Last question, I’ll just ask each of you for a couple of sentences. What’s the future looking like? Mohammad, let’s start with you. What do you expect to see in five to 10 years? Will people be more used to seeing robots and interacting with robots?
Mohammad: So to answer your question, and also to comment on a discussion we had earlier, it’s not always about what we as researchers are capable of doing, it’s also about what boundaries we need to put in place for the AI to respect privacy and so on. When you talk about Alexa or an assistant being able to learn, is that really what you want, for it to learn? Because it could be scary to know to what level that voice recognition [unclear 00:48:35]. There’s also research around what other elements we can learn from that voice; it could be an emotion or something you have no idea about. I guess I want to add more questions than answers. When we talk about Trustworthy Autonomous Systems, what we are asking ourselves is, obviously, whenever you talk about technology, you cannot do much if there’s not enough information. You need to feed in the information for the technology to be able to do fascinating things, but where do we draw the line? We don’t want to limit ourselves in terms of what’s waiting for us in the future, but we also need to make sure that AI is always there to help us, not to force us to provide the data it wants because it will shine there. We need to drive AI in a direction that will be helpful, and there have to be certain boundaries for this. I guess that’s what the TAS Hub is all about.
Tony: Yes. I think that the TAS programme, and our research group more generally, are moving towards this idea that it isn’t autonomy versus human action, that there are all these things in between. We talk about variable autonomy. So if you want to do a difficult task with a robot right now, the robot technology probably isn’t up to it for a lot of difficult tasks. It might be able to do 95% of it, but the last 5% is super hard. What you might want to do is say, well, right now we can’t do that with a robot, but perhaps a human remote controlling that robot could do that part of the task. That would really open up what we can do with robots in the world, if we could have more of this variable autonomy. So the robot does its thing, and if it gets stuck then a human can come in and remote control the robot. So if you have a robot tractor and it drives into a ditch and can’t get out of the ditch, a skilled tractor driver comes in via remote telepresence, manoeuvres the tractor out of the ditch, and then it can go on its way autonomously. So I think that’s something that we’re exploring and we’re excited about for the future, because I think that can make a real difference to where we can deploy robots in the here and now. It can also help address this question of robot safety, because rather than leaving the robots to do their own thing, where it might go wrong, we can develop ways of monitoring them, catching them before they do something that maybe they’re not very good at, and making sure we can intervene. So I think that’s an important part of the research: to understand the limits of autonomy and where we need to bring in human control.
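Tony’s handover idea reduces to a small state machine: run autonomously until the robot’s own confidence drops (it is stuck), hand control to a remote human, and resume autonomy once the human has recovered the situation. A minimal sketch, with hypothetical names and an invented confidence threshold:

```python
# Minimal sketch of a variable-autonomy handover loop: autonomous until
# confidence drops, teleoperated until the human recovers the situation.
# All names and thresholds are hypothetical.
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TELEOPERATED = auto()

def control_step(mode: Mode, confidence: float, human_done: bool,
                 handover_threshold: float = 0.3) -> Mode:
    if mode is Mode.AUTONOMOUS and confidence < handover_threshold:
        return Mode.TELEOPERATED   # e.g. the tractor is stuck in the ditch
    if mode is Mode.TELEOPERATED and human_done:
        return Mode.AUTONOMOUS     # human freed it; resume autonomy
    return mode

mode = Mode.AUTONOMOUS
mode = control_step(mode, confidence=0.1, human_done=False)  # -> TELEOPERATED
mode = control_step(mode, confidence=0.9, human_done=True)   # -> AUTONOMOUS
```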
Marise: So I think it’s an interesting question that you asked, about how we see things progressing in the next few years. I think we’ll see more robots being used in other areas, but it will be as you mentioned before: we are very good at some things that robots are not very good at, but some robots are very good at things we’re not very good at, or things that could cause us harm. So I can see some robots being introduced in situations or industries where it’s dangerous for humans to perform certain tasks. That’s happening already and has been happening for years. So I think that’s where we’ll probably see a big increase in the presence of robots. But then it also depends on how you define a robot. Some people count digital assistants as robots, some people think it’s just the humanoid ones, some people count robotic arms and some don’t. So yes, that’s another thing to consider when you ask somebody that question: people should maybe think about what it is they consider a robot to be.
Sean: That’s probably a whole different episode of this podcast. Daniel, final thoughts?
Daniel: I think it took us 10 years to go from the introduction of Android to the majority of users having smartphones with Android. So when you go five to 10 years out, I do think we’ll see many more robots becoming personal robots, moving towards a society where robots are almost as commonplace as the smartphone, being a third generation of personal device, as we like to call it at Cyberselves. That is the future that we’re trying to build with the robot-agnostic software that we’re building.
Sean: Well, we’ve run out of recording time for this episode. Hopefully we’ll be back to pick up some more on this, particularly as we’ve left a lot of open questions at the end of this one. Thanks to all our contributors this week for sparing us their time. So thank you, Marise.
Marise: Thank you very much.
Sean: Thank you, Tony. Thank you, Mohammad.
Mohammad: Thank you, Sean, thanks.
Sean: Thanks Daniel.
Daniel: Thank you. It’s been a pleasure.
Sean: Thanks to all you listeners out there. Maybe you can subscribe to the podcast and never miss an episode. If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited. Our theme music is Weekend in Tattoine by Unicorn Heads and it was presented by me, Sean Riley.
[00:54:38]