The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 3, Episode: 3
Telepresence – the human in the robot
From the only museum to be open at the beginning of the UK lockdown, to the decommissioning of nuclear facilities, telepresence can affect all sorts of sectors.
In this projects episode we're talking to Verity, Horia, Praminda and Ayse about telepresence, trust and robots:
Virtually There - Verity McIntosh
TAS ART - Horia Maior
Digital Twins for Human-Assistive Robot Teams - Praminda Caleb-Solly
CHAPTER - Cognitive HumAn in the looP TEleopeRations - Ayse Kucukyilmaz
Podcast production by boardie.com
Podcast Host: Sean Riley
Producers: Louise Male and Stacha Hicks
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: You’re listening to Living With AI, the podcast from the Trustworthy Autonomous Systems Hub. AI is changing our lives in all sorts of ways, from the smart speaker in our living room and characters in our computer games through to the decisions that banks make about mortgages, loans and trades on the stock market. But how far can we trust it? This podcast exists to discuss AI with respect to the idea of trust and trustworthiness.
This is season three of Living with AI so if this is the first episode you have heard there are a couple of seasons of episodes for you to go and discover, binge on, just search for TAS Hub or check out the show notes for links. If you are already an avid listener thank you very much for your support.
Today is the 24th of May as we record this, 2023 in case you are listening to this years in the future, let me check the clock on my computer, yeah 24th of May. And today is a projects episode so we are going to hear from a few researchers involved in some of the TAS Hub projects. Hear them discuss some of their findings and the challenges they encountered.
In a moment I will just ask them all to introduce themselves and our topic today is Telepresence or the Human in the Robot. And we are going to find out how that relates to autonomous systems and trust.
So I am going to go around the room in no particular order, literally the order on my screen, so top left for me is Praminda. Let us know who you are and what you do Praminda?
Praminda: Hello I am Praminda Caleb-Solly. I am a professor of Embodied Intelligence in the School of Computer Science in the University of Nottingham. And I am leading a research group called Cyber Physical Health and Assisted Robotics Technologies.
And we have been working on a project, Digital Twins for Human Assistive Robotic Teams. And we are exploring how we can address complex behaviours and systems of assistive robots working together in collaboration with people by being able to record data of interactions and changes over time. And be able to use AI and Machine Learning to improve the performance of the assistive robotics technologies.
Verity: Hi there, thank you for having me. My name is Verity McIntosh and I am a researcher and senior lecturer in Virtual and Extended Realities at the University of the West of England Bristol. I am in the College of Arts, Technology and Environment. And the project that I am speaking to you about today is called Virtually There. An outrageously long version of that is Virtually There: Exploring Presence, Ethics and Aesthetics in Immersive Semi-Autonomous Tele-operation for Hazardous Environments. And it doesn’t even have an acronym so we are just going to call it Virtually There for today.
Ayse: Hello my name is Ayse Kucukyilmaz. I am an assistant professor in the School of Computer Science at the University of Nottingham. And my research is in Robotics and Autonomous and Semi-Autonomous Systems where we investigate how humans and robots work together.
So the project I am going to talk about today is the CHAPTER Project, the Cognitive HumAn in the looP TEleopeRations and we work with hazardous extreme environments as well.
Horia: My name is Horia Maior. I am a transitional assistant professor and an early career researcher in Human Computer Interaction, Human Robot Interaction and Brain Computer Interfaces at the University of Nottingham. I am part of various groups but I also co-lead the Brain and Physiological Data Research group.
And the project I will be talking about, and the project we have been working on in the past year, is TAS ART, or TAS Augmented Robotic Telepresence, and the related projects that TAS ART broadly integrates with. We are exploring the potential of augmented robotic telepresence for social good, and the trustworthiness, inclusion, accessibility and independence that these mobile robotic telepresence systems offer to remote users. Thank you.
Sean: Praminda can you tell us a little bit about, well a little bit more about the Digital Twins project you have been working on please?
Praminda: Yes, so the concept of Digital Twins actually was put forward or the first phrase was used by Michael Grieves at the University of Michigan in 2002. And it emerged from an industrial setting where it is important to be able to predict what a piece of machinery is going to do for a period of time. And if you can replicate and model the different things that could possibly happen to it or go wrong with it before they happen, then you are in a good position to be able to pre-empt those problems arising through being able to monitor data in the real world.
So a digital twin is a virtual replica of a physical system and it takes data from the real world in real time to keep updating the model, try out different scenarios and then be able to pass this information back to the real system. So that it can perform more effectively and efficiently and pre-empt safety related issues and be able to change what it is doing.
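As a rough sketch of the loop Praminda describes, real-world data flowing into the virtual replica, candidate scenarios tried out on the model, and the result fed back to the physical system, here is a minimal illustration. All the class and method names (read_sensors, adjust_behaviour, the example rule) are hypothetical, invented for illustration rather than taken from the project:

```python
# Minimal digital-twin sync loop: mirror real-time sensor data into a
# virtual model, run "what could go wrong" scenarios on the model, and
# feed any warnings back to the physical system.
# All names and the example rule below are hypothetical.

import time


class DigitalTwin:
    def __init__(self):
        self.state = {}  # latest mirrored state of the physical system

    def update(self, sensor_reading: dict) -> None:
        """Keep the virtual replica in step with real-time data."""
        self.state.update(sensor_reading)

    def simulate_scenarios(self, scenarios: list[dict]) -> list[str]:
        """Check candidate edge cases (e.g. a rug in the robot's path)
        against the current mirrored state and report likely problems."""
        warnings = []
        for scenario in scenarios:
            if scenario.get("obstacle") and self.state.get("speed", 0) > 0.5:
                warnings.append(f"slow down before: {scenario['name']}")
        return warnings


def control_loop(robot, twin: DigitalTwin, scenarios: list[dict]) -> None:
    """Real world -> model -> real world, once per control tick."""
    while True:
        twin.update(robot.read_sensors())       # mirror the physical system
        for warning in twin.simulate_scenarios(scenarios):
            robot.adjust_behaviour(warning)     # pre-empt the issue
        time.sleep(0.1)
```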
And Michael Grieves and John Vickers clarified their idea in 2017, and their paper has a title which actually communicates the real reason why we have digital twins and are working on this research: Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems.
And as you can imagine using a robot with people who have assistive needs in their real home where there are cats and furniture and mats and rugs scattered all over the place, we need to be able to have systems where we can know what can go wrong. So pre-empt and anticipate the edge case scenarios. And the digital twin system allows us to create these models, have various versions of them to then know how the real system should respond should a particular situation arise.
Sean: Fantastic. Can we hear about your project please Verity, the Virtually There project?
Verity: Sure, thank you. So Virtually There came about really as the result of noticing that increasingly, in multiple industrial use cases, robots weren’t necessarily just sort of taking our jobs. We were effectively, so we are training but I am going to run with it, we had this idea that for robots to effectively operate in spatial environments, particularly in hazardous environments where humans weren’t likely to be able to safely participate, the robots weren’t necessarily in the best position to operate autonomously. That actually they would be better operated remotely by human operators.
So we are working specifically with Sellafield Nuclear, you know, decommissioning the Sellafield Nuclear facility and have a range of different autonomous robot projects currently underway that we are working with them on. And in this particular use case it became obvious that having a human operator was going to be necessary to continue to operate the robots at distance. But that using VR as a sort of mechanism for the operator to effectively inhabit the same space as the robot would improve things like their task performance, their ability to have spatial reasoning, their ability to interpret the scene.
So there was this kind of industrial use case firming up that the more we looked the more we found where humans were effectively embodying the same space as robots. And tele-operating those robots using a range of different tools and technologies. And it’s a very untested space about what it is like for the human to be in that space, how we can create robust and trustworthy systems that enable those robots to be kind of tele-operated at scale.
We were looking at teams of semi-autonomous robots, so multiple robots in the same space with a tele-operating operator who is not on site. And the dynamics of how you work with that are very untested from a design point of view, in terms of the human experience of feeling present in a space that they are not in, a particularly hazardous space. Very unexplored, underexplored in terms of the psychology of that and the implications for the working day.
If increasingly one of the jobs that we do in an industrial context is to tele-operate robots sometimes feeling as if we are those robots, there is all kinds of ramifications for what we need to understand about that dynamic.
So some of the things that we have been testing are creating a full scale virtual reality Digital twin, as Praminda was describing, of a specific decommissioning environment in Sellafield we have been testing with multiple different novice and expert users to be able to basically use our control systems to move about within the scene. To command the robots to move in different ways, to use the information that they get from those robots to work out where for example radiation might be found. Where temperatures are elevated and where hazardous flammable gases might be found.
And to use a number of different ways of understanding the scene, characterising the scene, tagging hazardous objects and taking action within that space all while sort of tele-operating these robots.
The way that we are particularly interested in is using sonification, so that’s the sort of understanding of data points through sound. So taking things like the levels of radiation and rather than having a little dial in front of you saying the radiation has gone up, having a sound that corresponds to that. So that you can understand the way that radiation is being discovered and labelled in that space.
So we have been working with some incredible sound designers, Joe Simmons and Alison Bone to design different types of sonification strategies to see first of all is it possible to kind of understand the scene sonically as well as visually and is it helpful to do so. And if so what are the kind of useful things that we can do.
And particularly from my point of view, what is the human experience of experiencing somewhere where you are not, as if you were in a very multi-sensory way. So using your visual and your sonic frequencies to kind of spread the cognitive load of what you are doing.
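As a toy illustration of the sonification idea Verity describes, the snippet below maps a radiation reading onto an audible pitch rather than a dial position. The value ranges and the linear mapping are assumptions made for the example, not the project’s actual sound design:

```python
# Toy sonification: map a dose-rate reading onto a pitch so an operator
# can hear rising radiation instead of watching a dial.
# The numeric ranges and the linear mapping are illustrative assumptions.

def radiation_to_frequency(reading: float,
                           min_reading: float = 0.0,
                           max_reading: float = 10.0,
                           low_hz: float = 220.0,
                           high_hz: float = 880.0) -> float:
    """Linearly map a sensor reading to a frequency in Hz."""
    clamped = max(min_reading, min(reading, max_reading))
    fraction = (clamped - min_reading) / (max_reading - min_reading)
    return low_hz + fraction * (high_hz - low_hz)


# A reading halfway up the scale sounds halfway up the pitch range.
print(radiation_to_frequency(5.0))  # 550.0
```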
[00:10:06]
Sean: That sounds really interesting, thank you very much for that. Ayse can you tell us about the project that you have been working on, the CHAPTER, which I will just use the acronym there, CHAPTER.
Ayse: Yes that’s better. So the CHAPTER project is actually focusing on a very similar use case to what Verity has explained. We also are looking into tele-operation in nuclear environments, which means we should probably talk more. So the technology we are interested in is co-developed with our partner RACE, which is Remote Applications in Challenging Environments, which is part of the UK Atomic Energy Authority.
And RACE hosts the JET simulator, so the JET is the Joint European Torus, which is the world’s largest and most powerful fusion research device. So it is not like a real fusion reactor but it’s a mock up that real operators work on. And they have been maintaining the JET reactor mainly through tele-operation for over a decade.
And currently they are taking the JET offline and there is a lot of opportunity for tele-operation in actually disassembling this whole reactor.
So tele-operation technology basically enables us, enables humans, to go to spaces that they can’t go to and control remote, powerful physical systems, typically a robot, through the mediation of a control interface. And RACE uses the MASCOT, devices which are mechanically identical to what is actually installed in the JET reactor. And the industry actually uses these tele-operated devices in complete tele-operation mode, which means a human does everything.
And in robotics typically you don’t call tele-operation a class of robotics, because we mostly define robotics as sensing and information but also some form of intelligence that is embedded onto the device. And so tele-operation is more like an extension of yourself, and ideally you try to be virtually there when you do some operations, to do them effectively.
So our aim in CHAPTER and some projects linked to CHAPTER is to make this robotic tele-operation more intelligent from multiple perspectives. So one of the perspectives that we try to achieve as a longer term goal is to create a dynamic autonomy system where we can actually make the robot more autonomous, as long as it can, wherever it can. And have it work as a team with the human to ease the operation for the human, reducing the human’s workload, making the performance of the task better, maybe even enabling the robots to give suggestions or corrections so as not to harm the remote environment, which is quite critical to keep safe.
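One common way to realise the dynamic autonomy Ayse describes is to blend the operator’s command with an autonomous controller’s suggestion, shifting the weighting as the robot is trusted to do more. The sketch below is a generic shared-control blend, not CHAPTER’s actual control law:

```python
# Generic shared-control blending: the commanded velocity is a weighted
# mix of what the human asks for and what the autonomous controller
# suggests. alpha = 0 is pure tele-operation, alpha = 1 is full autonomy;
# in a dynamic-autonomy system alpha changes over time.
# Illustrative only; not the project's actual control law.

def blend_commands(human_cmd: tuple[float, float],
                   auto_cmd: tuple[float, float],
                   alpha: float) -> tuple[float, float]:
    """Blend (linear, angular) velocity commands at autonomy level alpha."""
    alpha = max(0.0, min(alpha, 1.0))
    return tuple((1 - alpha) * h + alpha * a
                 for h, a in zip(human_cmd, auto_cmd))


# Operator pushes straight ahead, autonomy suggests veering away from a
# hazard; at alpha = 0.3 the robot mostly follows the human but is nudged.
print(blend_commands((0.5, 0.0), (0.2, 0.4), alpha=0.3))  # ~(0.41, 0.12)
```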
So in the CHAPTER project we actually did work on a continuation of a European project we had worked on previously which is called HEAP, which was a shared project. And in that project our thread was on creating shared autonomy mechanisms for tele-operation. And in the CHAPTER project we aimed at creating a test bed in the laboratory to actually mimic the operation scenario that is at RACE and to create a test bed for investigating human operators and their workload.
So the project aimed at collecting a lot of data from humans when they are operating on a few different manipulation tasks in the remote space, which included tasks like grasping but also tasks like following trajectories and so on. And we looked at kinematic information, how they are actually moving the robot, how they are actually connecting to the remote environment through the haptic channel, which is another channel that is very important to enable tele-existence. And we also looked at physiological sensing data from people to understand how they are operating throughout these tasks.
So the CHAPTER project has finished but we have another project ongoing right now which is called Dominoes, funded by the Connected Everything network. And in Dominoes we are now using the set up we had created, or finalised, in CHAPTER to look at how people respond to different workload situations. We are looking into physiological data like brain data using fNIRS sensors, as well as electrodermal activity and heart rate, as well as performance data, looking at how they are actually performing the operation.
Sean: Wow, plenty going on there. Yeah it sounds really good, really interesting stuff. Horia can you tell us about the project you have been working on, the TAS ART project?
Horia: Yeah. So compared to the robots that help us get into remote environments to do complex tasks, which perhaps Ayse and Verity mentioned, telepresence robots are a somewhat simpler version of that. In case you haven’t used one or seen one, telepresence robots are also nicknamed, and the nickname kind of gives it away, tablets on wheels. These robots have a tablet-like device mounted on a mobile robot that is typically on wheels, with a height ranging between one and one point seven metres.
And these robots are connected to the internet. And using a web interface or similar, remote users can call into these telepresence robots, located in different settings, not necessarily inside nuclear reactors; maybe they are cobot settings where perhaps there are other people, etcetera.
So it could be someone’s home, so you can imagine visiting relatives by calling into this robot and wandering inside their houses. You can visit an office or instead of joining a Teams call remotely you can perhaps join a hybrid meeting by calling one of these telepresence robots and being more interactive in the hybrid meeting, conference hall and case studies go on.
But these robots use cameras, microphones and speakers mounted on the platform to offer remote users this ability to interact, like in a video call environment, with other people co-located with the robot, with the extra that you feel way more present, and there are a lot of other benefits to it.
So looking at this, this may offer a more sustainable future where being co-present remotely will no longer be that problematic or that poor an experience, especially when it comes to conferences or similar events. And the other possibility it offers is in the context of accessibility to different public places, museums, remote visits where typically these were not possible, visiting patients who perhaps are vulnerable in different settings, again the list goes on.
But what are some of the current problems with these systems? I have been very positive so far, so what are some of the limitations that we are looking at? Even though the future use of this technology is very promising, it is really critical that we study the specific legal, ethical and social issues, taking into account the social and environmental factors that will facilitate the responsible development and deployment of this technology in different settings.
And I think our project TAS ART, integrates multiple ongoing work and case studies focused on particular settings and deploying telepresence robots to support us with remote work and remote visits in different particular settings.
How are we approaching this problem? Well first of all we have a multidisciplinary team that brings together social scientists, law experts, computer scientists and engineers from across different institutions including here, and from India and Asia and we are investigating as I mentioned different case studies.
For example our partners at Kings College, Sylvaine and Paul Luff, have investigated use cases and problems and specific requirements for augmented robotic telepresence in auction houses. So what would it mean for you to attend an auction remotely compared to an in-person event and what are the limitations and problems? Is it fair for remote users? Is it fairer for in-person users?
[00:20:03]
Similarly, local to Nottingham, Open All Senses and related projects are focusing on deploying telepresence robots in healthcare settings or museum settings. And in the Cobot Maker Space at present we are developing new research spaces, including a smart medical ward room and a smart museum room, in which we can initially deploy these systems and explore the potential problems before deploying them in the wild. With perhaps six, seven, eight cameras and sensors, these systems are constantly recording.
So perhaps we need to make some adjustments to where this data goes etcetera, if we wanted to, for example, deploy one of these robots in hospital settings where you cannot simply just record everything that happens.
So yes I will stop there but I am happy to follow up with any other questions and conversations.
Sean: Fantastic thank you very much Horia. I mean these projects obviously they are all telepresence related but the range is surprisingly broad isn’t it? We are going from maybe visiting an art gallery or a museum or an auction house through to care giving or dismantling fusion reactors or nuclear spaces. What are the trust implications here? I wonder how we can kind of approach that?
Because there are a couple of things that I have picked up from listening to all the projects. One is that to a large extent the human is in control, and therefore how does this connect to our AI and the autonomous side of things? But also what are those trust issues?
Praminda: I think one of the things that joins up all of these projects is the human in the loop, and for our Digital Twins for Human-Assistive Robot Teams it is very much also about being able to model and capture what the person is doing. So we are working with biomechanical experts, Professor Donal McNally for instance, in being able to monitor and capture how people move around, EMG data to see how much pressure they are exerting.
We have got cameras, as Horia mentioned, all these cameras monitoring and mapping motion, so there is a lot of personal data. And the reason we have these systems is to enable them to become more personalised and individualised and Ayse talked about reducing cognitive load. But that does mean we are modelling and capturing and storing very, if you like, intimate things about how you behave and respond in a particular situation, your own environment. These models who else would use them? What does it say about you and how you react and respond in different situations?
Could there be implications further down the line in terms of somebody coming round and saying we found you are not very good at handling situations and so does this have an impact on your future career, what you are going to do?
I think we have to be very careful about how we use AI to improve our tools. And that’s what they are, these are all tools for us to be able to work more effectively in dangerous environments. To manage risk, to pre-empt support that might be needed for somebody who is an assistive robotic worker but to me that is a big issue.
And also there are issues around bias of the data itself. So in capturing the data we are going to places or working with people maybe in certain circumstances and certain socioeconomic conditions and missing out on certain things. So it could be doing our experiments in particular environments which are, have got better equipment or better facilities or the people there have got better working conditions, are different to how the system might be used and deployed in another circumstance, in another culture, in another country.
And so being aware of the fact that by encapsulating all of this information in a particular context we risk alienating or forming assumptions and decisions and tools based on a certain group. To me that is something we need to be thinking about and there is lots of others now but I’ll leave the others to come up with some more.
Sean: It’s interesting that you have mentioned context because it’s a recurring thing that’s come up on the podcast that AI has a problem because it doesn’t have the full picture effectively a lot of the time. Is that also a problem in these kinds of scenarios then, the context? Ayse what do you think?
Ayse: Yes obviously, because generally I will say Machine Learning, statistical Machine Learning models, they do perform well but they are only as good as the data that you provide them. So the bias in learning I think has attracted a lot of attention in the last few years, such as computer vision algorithms not working well for several ethnic groups which they are not trained on, misclassifying people in quite serious situations as well.
And our project, some of our projects, are typically futuristic, so most of the things that I particularly work on are quite far away from getting into industry or anything. For example, especially in the nuclear industry, they are so conservative, they are so high risk, that any autonomous system, or possibly Verity would agree, any VR system that you would implement right now is just a case study for them to see that this is possible in the next decade, in a very optimistic way.
However I find that, I agree with Praminda that there is a lot of concern around how this data is used. What are people comfortable with, what should people be comfortable with? Also because people are very bad at imagining what could happen. And there are things that we have started to discuss around the community, which is very positive.
But I also see that all this autonomy talk and so on is actually an opportunity to think about the future of work right now, because we have the opportunity at this moment to transform how work will be in the future. And I don’t think we need to be very paranoid about AI always being the enemy of humans, being aware that there could be problems but also being open to the opportunities that it could provide.
And knowing the challenges having probably more interdisciplinary research, talking to different communities, sociologists and social science disciplines is quite necessary and I think we are at a better stage right now.
Sean: I think it’s interesting, yeah, that you say that we don’t necessarily need to worry about jobs going because of these things. I think these are tools that we can use. But just harking back a little bit to what you said before about the data that goes in and, you know, it is only as good as the data you put in.
If I just go back to Verity and we are talking about digital twins and we are in a conservative kind of system, not system, but a conservative sort of area of being worried about what happens with nuclear plants etcetera. I was just thinking about the digital twin how can you make sure that they have as much data as possible to work as well as you want them to?
Verity: Yeah that’s a really good question. I think with the particular scenario that we were developing for, we initially were giving, we need to give the robots a completely equivalent set of data inputs as they would have in the physical space. So they were able to kind of scan for things like radiation, proximity to a hazard, to be able to kind of recreate that digital twin in real time.
One of the things that we discovered in an early iteration through testing is that what mattered most to the operators is that the AI that the robots had that informed how they navigated needed to be more sophisticated. You know they were gathering this data, they were mapping, they were giving us the sense of the scene so that we could explore but the thing that was driving people crazy was that the robots kept putting themselves in harm’s way. They sort of didn’t protect themselves. And so the operators spent a lot of time trying to kind of like tease a robot out of an area that it had already discovered was bad for it and wouldn’t kind of act in its own interests.
So in a later iteration we actually kind of improved the sophistication of the AI for the robots so they were more risk averse and didn’t put themselves in harm’s way. And for the human operators that kind of reduced the stress level of having to constantly kind of monitor the health of the robot as well as look at the sort of risk within the environment.
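The risk-averse behaviour Verity mentions can be thought of as adding a hazard penalty to the robot’s path cost, so routes through known hot spots become unattractive even when they are shorter. A toy scoring function under that assumption (not the project’s actual planner):

```python
# Toy risk-averse path scoring: a route that passes close to known
# hazards accrues a penalty, so safer detours win even if longer.
# Weights, radii and the scoring rule are illustrative assumptions.

import math


def path_cost(path: list[tuple[float, float]],
              hazards: list[tuple[float, float]],
              risk_weight: float = 5.0,
              safe_radius: float = 2.0) -> float:
    """Path length plus a penalty for every hazard the path skirts."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    penalty = 0.0
    for point in path:
        for hazard in hazards:
            d = math.dist(point, hazard)
            if d < safe_radius:
                penalty += risk_weight * (safe_radius - d)
    return length + penalty


# A short route straight through a hazard scores worse than a longer detour.
hazards = [(1.0, 1.0)]
direct = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
detour = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
print(path_cost(direct, hazards) > path_cost(detour, hazards))  # True
```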
So sometimes the AI can kind of complement our own interventions in this space. And in a way that kind of freed up the operators to do their primary task which was to capture the scene. But people did immediately sort of feel quite responsible for the robots.
And in fact the more their movements made sense to people in terms of their kind of risk aversion for example, the more our operators kind of felt sort of, people often named the robots or kind of spoke about their voice. You know the robots would make noises to say that they were in peril or that they had found something and that was part of the task to kind of respond to the noises the robots made.
[00:30:11]
And so yeah the more the robots kind of vocalised or acted in a way that made sense to the operators, the more quickly people adopted the realism of the scene. They seemed to build trust relationships with the robots, they kind of created little narratives for them.
So there is a phenomenon that we understand quite well within sort of VR where the sense of presence that you have in these spaces very quickly makes you kind of have a psychological realism that isn’t really true. You are in a heavily mediated space, you are in a very different physical space to the scene that you are tele-operating.
But because it’s VR and it’s not sort of a series of monitors and dials and knobs as we are more used to in the nuclear decommissioning context, you very quickly imagine that you are effectively one of the team. You are sort of cast as the supervisor of the robots in this and people took that role very seriously and were kind of hyper-focused on their dual responsibility of enacting a task but also attending to the robots needs.
And that question of trust becomes quite relational in a way that we perhaps didn’t expect at the beginning. It’s interesting to see how we are kind of creating those kind of kinship relationships with robots when we feel hyper-present in the space.
Sean: And just going over to Horia just to think about how that, are there similarities there between your kind of iPad on wheels, I am calling it an iPad on wheels. I am supposed to call it a tablet on wheels am I? I don’t know. But effectively the whole telepresence kind of wheelie boy experience, is that quite as immersive as the kind of VR that we are just talking about there with the nuclear decommissioning projects and the simulations there?
Horia: Yeah. So I think that is one of the ongoing challenges that we are looking at, we are trying to create more immersive and multisensory experience for operators. Even though they don’t go inside a nuclear reactor, it is important to try and maximise the feel of presence. It is important to provide feedback to the operators where there is objects nearby, where there is people nearby. So we are actively thinking on how we can embed more experiences that will provide this idea of presence and provide a better user experience really for the remote users.
But I just wanted to go back to the point you mentioned earlier and also Praminda’s point. I think the human and the context is highly linked to trust when it comes to telepresence robots as well. We are just looking at the differences between the museum context where people are generally happy to be recorded. Typically people are in almost like a public setting already where you are kind of already used to certain technologies being present. And I think there is a very important point there because if you don’t look at context then you won’t be able to find out who is being left out. Or perhaps what are specific requirements needed for certain scenarios.
And the key here is talking to stakeholders and involving stakeholders and partners in the design process, in a way that is our aim. For the museum case study we worked with different museums nearby, we invited museum curators and interviewed museum curators on different case studies. We looked at, you know, how people would do perhaps a remote visit.
I even remember one of the articles back from Covid times, when all the museums of the world were closed except one, and I think Praminda was heavily involved with that work. And I remember they had a line like, when everybody else is closed you can actually visit the Hastings museum; they were kind of the first to have a telepresence robot and allow people to visit remotely. So it should be obvious that these systems can bring value and enhance our lives.
Now the real focus here is to make sure that we create an equitable world. A world in which perhaps people are not left out and actually people who need more support will find it in these systems.
Sean: I think that’s really interesting. I was thinking myself of the Covid times and the fact that we were reduced to everything being on screens for a while. What does telepresence add that, you know, compared to what we are doing now? Which is if listeners aren’t aware we are recording this podcast using a very popular piece of online conferencing and meeting software which I shan’t name, Teams.
What does the actual ability to move something in a different place, what does that add? I am going to ask Praminda this because of that museum thing that you just mentioned there, Horia. Is it different, then, from say seeing a scanned copy of a painting?
Praminda: So it’s, yeah, it’s about the autonomy that it gives you, the driver, to be able to move around at your own pace. It works better for some pieces of art, as we found from the Hastings Contemporary.
By the way I remember when they announced that we were going to lockdown and I hopped into the car and drove all the way across the country. I was working in Bristol at that time, all the way to the other end of the country to deliver that robot before lockdown and that was definitely worth it.
We have been doing a project with telepresence robots together with Age UK in Bristol and certain residential villages. And the experience was about getting volunteers to support people in their own homes encouraging them, being able to access support to talk them through doing exercises more often than they can through their physiotherapist. So the volunteer became the person that would be there three times a week and encourage you to do the exercise.
Now the experience for the person with the visitor coming into their house, especially when they couldn’t leave their home themselves given their particular situation, was more than what the tele-operator sitting at the screen driving that robot felt. So for the person in the home it felt like, ah, Maeve has come to visit me. And then Maeve can turn around and look at the cat or look at the plant and say oh, your garden is blooming, I see your daffodils are coming up now. And it is about that, which you don’t get from a screen. It’s very much about Being There, which is what the project was called in Hastings.
From the other side we had disabled people using the telepresence robot, and some people with certain types of conditions need, and continue to need, to stay isolated longer. And so it is a completely freeing experience to be able to go and visit the gallery when they wanted to.
One of the other things that they said, it was a wheelchair user particularly who made this statement, is that in their wheelchairs they are quite low, particularly when they visit these galleries, and most of the art is put at eye level, standing eye level. But with the robot they were at the level at which we are able to see it, and that made a tremendous difference to their experience, being able to see it from that perspective. So that was really interesting, we hadn’t thought about that.
Sean: And just thinking back to the kind of overarching idea of autonomous systems, because these, I am going to be kind of rude here in a way, but they are basically glorified remote controlled devices right? So where does the autonomy come in and how does that matter? Is that because you make the low level parts of it autonomous or?
Praminda: Yeah, so it’s about, I should be able to say more about that, it’s about shared control. So yes, one of the robots that we use you can just direct it where to go and it uses its camera to map the environment, sweep the environment, and be able to avoid obstacles and get there. So if you have a disability where you are unable to exercise fine motor control of the operating device, then you can still get to where you want to go, and then you can say turn left and right.
So that’s interesting and we are taking that forward further with being able to look at this shared control and exchange of when is the system able to take over and when the human can revert back to being in control themselves. And that is a really big opening in such an area.
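A bare-bones sketch of the kind of handover Praminda describes, where either the person or the system can hold control and hand it back and forth. The trigger conditions here (an explicit request from the user, an obstacle the system can handle) are invented purely for illustration:

```python
# Bare-bones control arbitration: either the human or the system holds
# control at any moment, and control is handed over on simple triggers.
# The triggers used here are illustrative only.

from dataclasses import dataclass


@dataclass
class ControlArbiter:
    holder: str = "human"  # "human" or "system"

    def step(self, human_requests_control: bool, obstacle_ahead: bool) -> str:
        if human_requests_control:
            # The person can always take control back.
            self.holder = "human"
        elif obstacle_ahead:
            # The system steps in for the low-level part it can handle,
            # e.g. steering around an obstacle the camera has mapped.
            self.holder = "system"
        return self.holder


arbiter = ControlArbiter()
print(arbiter.step(False, True))   # "system": takes over to avoid the obstacle
print(arbiter.step(True, False))   # "human": operator reverts to direct control
```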
Sean: So I suppose you can almost use some of these new large language models to ask in very plain terms for the system to do something for you and then that converts into the exact motor responses, right?
Praminda: Indeed.
Sean: Thinking back to that Hastings example then, was there one telepresence robot and obviously that is kind of one person or one group of people using that at a time? I mean how can this scale up, is that possible?
Praminda: Yeah so I think scaling up is a big challenge for us particularly given the infrastructure that’s needed. So with a remote operation, tele-operation if you are on the same site you can, you know, use wired connection but if you are remote you are reliant on the bandwidth and latency and that is actually really holding things back.
Similar to the research that we have been doing, people still don’t have good internet connectivity. It costs too much as well so they are concerned about the costs of it and there are patches, you know, in the country particularly older accommodation with thick walls etcetera. So these practical issues and the way we are using these spaces are something that people aren’t thinking about enough.
[00:40:21]
So hand in hand with developing this technology we also have got to think about these infrastructure delivery mechanisms as well, together with how people will change what they do. And Ayse talked about that, you know, the new roles that we adopt and how people prepare for those new roles.
Sean: And obviously the price band of telepresence equipment is absolutely huge isn’t it? I mean from Horia’s mentioning of the basic kind of telepresence right up to these maybe the Boston Dynamics kind of equipment at tens or hundreds of thousands of dollars.
Verity: The temptation is to always assume that scale is needed and positive. And I think one of the things that is coming up for us, and that builds on what Ayse was saying, is that we don’t know enough about the working conditions of people using these tele-operation systems. We are still in a very early moment with understanding like what it means to be asked for eight hours of your working day to be tele-operating robots. Particularly in hazardous environments and particularly in virtual reality, which is a very kind of immersive environment.
So if we are, as it seems, kind of building towards a situation where increasingly this will be something that people will be asked to do at work, what rights do the workers have to require breaks to reconnect with their own physicality? Or what liability do they have if they cause industrial accidents whilst tele-operating in virtual reality? What responsibilities do the employers have to ensure their wellbeing and the robustness of the systems that they are operating?
We are very far behind in terms of policy and regulation about understanding what this is as a sort of employment context. And I think that even just from the small experiments that we are doing a lot of people are really excited about the prospects but then when you ask them the question like what if you did this every day? They really like run for the hills.
So I think we need to have some sort of better information about what the responsibilities, the rights and the liabilities are, and how this applies to things like the Health and Safety Executive and employment rights, before we look at it as a scalable context. We are seeing it already used in things like disaster recovery, in nuclear decommissioning, in a number of different industrial use cases where we are just kind of going for scale without checking to see what this actually means for the employment experience.
And I think there is an anti-scale narrative that I would like to run up the flag pole for today just so that we can try and think about our human rights within these contexts before we scale up the technology.
Sean: Is this, is this are we looking at the future, you know a broad future here? I know Verity just said about not scaling it up but you know, looking at the side of my room here I have got an Oculus Quest sitting there on the side there. I haven’t used it very much but it’s there. Is that something we are all going to be looking towards or is this, you know, just for niche applications? Perhaps you could all have a few words on that, what’s the future looking like Ayse?
Ayse: Yeah, actually that’s probably a no. Some of these technologies are probably more interesting in that they have this novelty effect, they look very nice and everybody wants to try them, and then they don’t really stick.
In industrial settings I think especially in the ones that we are working, there is a genuine need because you simply cannot go there and stuff. But for a lot of the things I think there is benefit. For example in telepresence robots when people are constrained at home and that gives them a way out and then it becomes really very nice.
But we have to also know that these technologies lack one very major part of human connection, the physicality and the physical part of the intelligence. It is not only about code that is running in the brain; you are literally using your body to touch things, perceive things, learn about things, and that doesn’t seem to me to come across with many of these technologies.
Horia: I think I mentioned that I am quite positive looking forward. Where we need to go and see a specific person in the world, maybe the best doctor around, maybe we no longer need to travel. And maybe, that’s the last thing I will mention, maybe it is one of the future travel options, and depending on how other things go, sustainability, net zero, etcetera, then perhaps these will become an important part of our lives.
And if it is not going to be in everybody’s home, it doesn’t mean that it is not going to create a really good positive impact.
Verity: I do think that there is huge possibility within these immersive tools to change and kind of evolve our ways of communicating to go past these kind of flat, slightly cold rectangles of glass that we are becoming so familiar with and to think about more embodied and spatial ways that we can connect to each other.
For us some of the next stages for this Virtually There project is to think about, we have looked a lot about how you kind of spread the information that you receive across the visual and the auditory context. We are going to be looking more at haptics and we are going to be looking at different kind of sensory configurations so that we can better understand like what is a comfort level and what is useful and what is distracting.
But I think, you know, as others in the room have suggested, we are not, I think we need to remind ourselves periodically that we are not looking for a replacement for physical contact. We are very social. We are very tactile. I don’t think that’s, I personally don’t think that is at threat. I think that’s such a fundamental part of human behaviour that there is always going to be a pull to each other in physical ways that will be fundamental.
But I think it’s worth, it’s worth remembering that sometimes because occasionally we design to replace things that don’t need replacing. Whereas if we think about designing to augment or to extend what’s possible for different members of our communities and to, as you say, kind of bring, bring the eye line to people who don’t often get the eye line in museums, this is a really precious gift. So yeah thinking about additionality rather than replacement seems helpful to me.
Praminda: I think it’s about augmenting and enabling and allowing people to achieve their aspirations. So if you are sitting in your home waiting for your ten minute slot with your social care worker, of which there are a hundred and ten thousand vacant places at the moment, and your personal assistant robot can help you get dressed so you can go out, then that’s where the future is that I am looking to work towards.
And this project that we are working on and several others have come from an EPSRC healthcare technologies network called Emergence, tackling frailty and facilitating robots from labs into the real world. And by being able to use AI and Machine Learning we hope to be able to develop the technology that can work better in the real world.
We do not see, or I do not see, the human ever being taken out of that loop. It is our creativity and imagination that has come and brought these systems into fruition and it is exactly that which will continue to help us to address some of the big challenges we are facing today.
Sean: I would like to thank all of you for joining us today. Whether you listeners are fans of science fiction or not, and whether your vision of the future is William Gibson’s The Peripheral or Ready Player One’s Oasis, thank you so much for joining us today. Praminda.
Praminda: Thank you.
Sean: Ayse.
Ayse: Thank you.
Sean: Verity.
Verity: Thank you very much.
Sean: And Horia.
Horia: Thank you.
Sean: If you want to get in touch with us here at the Living with AI podcast you can visit the TAS website at www.TAS.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited. Our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.
[00:48:39]