
Living With AI Podcast: Challenges of Living with Artificial Intelligence
Who Cares about Robots in the Home?
We speak to Radu Calinescu about Assistive AI, and the problems of robots in a care role. Joining the panel are Christian Enemark and Paurav Shukla. Topics include 'Robots doing the dirty work' and that old saying "Autonomous lethal weapon systems don't kill people, people kill people"
Recorded 10th March 2021
00:30 Christian Enemark
00:42 Paurav Shukla
00:59 Sean Riley
01:15 Autonomous mobile robots doing the dirty work (zdnet)
03:30 Anti-MRSA silver fabric trialled (BBC)
05:15 Living With AI - Food Manufacture Blockchain
06:00 Passengers (Internet Movie Database)
07:25 “...it's okay to eat fish 'cause they don't have any feelings” Something In The Way - Nirvana (songfacts.com)
09:50 Executive Summary World Robotics 2020 Service Robots (ifr.org)
Meet Sophia, World's First AI Humanoid Robot | Tony Robbins (YouTube)
Artificial General Intelligence (Computerphile/YouTube)
Iain M Banks / Culture (Wikipedia)
Logan's Run (Internet Movie Database)
Matrix (Internet Movie Database)
Spot: Boston Dynamics condemns robot paintball rampage plan (BBC)
Klara and the Sun - Kazuo Ishiguro (Wikipedia)
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 1, Episode: 13
Episode Transcript:
Sean: Welcome to Living With AI where we discuss artificial intelligence and what impact it has on our lives. Today we're going to focus on robots in the home and specifically the context of care and assistive technologies. We'll be joined by Doctor Radu Calinescu, he's Reader of Computer Science at the University of York, but first let me introduce this week's panel on the Living With AI team. This week it's Paurav Shukla and Christian Enemark. Christian is Professor of International Relations in the Faculty of Social Sciences at the University of Southampton. He leads a project on ethics and drone violence, and enjoys running, reading and travel, so it's nice to have you back Christian.
Christian: Thank you.
Sean: And regular listeners will know Paurav is Professor of Marketing at University of Southampton's Business School. He specialises in luxury goods and it says here, “Insert gag about testing said luxury goods here.” Hi Paurav, how you doing?
Paurav: I’m very well, any luxury good gifts are always welcome.
Sean: Fantastic yeah, and I am Sean Riley, I'm a tech lover, an end user and host of the Living With AI podcast. We're recording this on the 10th of March 2021 so as ever, we open up with a little what's been going on this week. Paurav you sent me a link earlier, what's this all about then?
Paurav: Sean this is something very interesting, the pandemic is bringing some new industries to light, and particularly in robotics, because we have to remember that we all have suddenly been taught around the world how to wash our hands properly, that 20 minute, 20 second routine has now become quite a thing you know. There's so many memes have come about and all, and as we become more and more conscious, as germ conscious, new aspects in terms of our interaction in the marketplace will emerge and one of those is in retail supermarkets.
Right now London is in lockdown most of the time, but the retail outlets, some of them are open, and when we go into those outlets we are also a little bit cautious and conscious, and at that point in time we want to see the store being cleaned and so on and so forth. And human intervention in that regard is quite, quite a challenge because how many people can you keep to clean up a store?
And in that regard a number of technologies have existed for some time, in terms of what we call autonomous robots, but now they've come to the forefront and there are companies like Brain Corp and others who are now selling their, selling their autonomous mobile robots to big companies like Walmart. So all those things are happening now which is quite exciting.
Sean: Just to look at this there is an image and obviously I’ll put the link in the show notes if, for anyone who's interested. There's an image here of one of these robots and it looks like a, a giant buffing machine if, if you’ll pardon the kind of comparison. Is this just for floors or do you think these things, do they clean shelves and all sorts or-
Paurav: I assume this would be initially for the floors but over a period of time I, I believe that this would then move into different kinds of setup wherein possibly human touch is detected and then something's been done. There are companies which are also creating surfaces, so that is also happening as we know, that the companies are packaging their products in such a way that the surface remains decontaminated for certain touches and all that. So a number of new technologies are emerging because of the pandemic.
Sean: I remember a few years ago a news story about adding silver into the, the cloth of surgical gowns and things to repel. It's going to cost a lot of money this right?
Paurav: Oh certainly will do but, but at the same time what may happen is that if that becomes mainstream then that would also lead to you know, lowering of the costs and everything but it also depends on the precious metals and their availability and everything. So a lot of interacting variables.
Sean: Christian what do you think about this idea of having robots clean our, our retail environments?
Christian: I think it's a tremendous idea if it reduces the risk of infection both to cleaners and to customers. I think this is a straightforward case of how it makes a lot of sense for robots to do things that are dull, dirty and dangerous.
Sean: Sounds good to me. I know we talked a few weeks ago on the podcast about the idea of having kind of a, a blockchain, for want of a better word, paper trail of what's been cleaned and what hasn't in food factories etc. Do you think this will be the same kind of direction Paurav, what do you think? Is, is this going to be something where they publish, "The machine has cleaned X amount of times and you know found this much dirt"?
Paurav: Possibly it may happen if it provides companies with some sort of competitive advantage. So companies are going to look at it in terms of, does this add into the public trust, like what our Hub tries to talk about and trust in autonomous systems. And, and so in in that way we want to look into if, if this actually increases trust then I'm pretty sure like that Nestle example we talked about earlier in terms of blockchain, other companies would jump onto this.
I think initially companies would keep this data internal to see what kind of support this provides in terms of foot fall and if it does then I'm pretty sure companies would use it in different ways that, “Our stores are the cleanest.”
Sean: I know we've also talked in the past about these automated vehicles being vandalised and preyed upon by gangs of children. I'm wondering about you know how, how long before people are trying to fool these robots into cleaning the wrong area of the store or, or block them in with tins of beans or- I can see lots of fun being had with these things.
Christian: Can I-
Paurav: I cannot agree more I, I, I just remember that movie Passengers which I mentioned I think last time also, but in that movie a person wakes up you know, in the journey, interstellar journey and in the middle. And so now they have nothing much to do and suddenly the only moving things they see are four or five robots and so what do they do? Sometimes to just to attract them they actually throw things on the floor so the robots would come and pick it up and, and so those kind of new relationships may burst out of it.
Sean: They might become the butt of some jokes, a bit like we've discussed before about certain nameless virtual assistants.
Christian: There is an interesting philosophical discussion to be had about whether it's possible to be cruel to a robot. On the one hand you know, a non-living entity cannot be a, a victim of sort of physical interference of any kind but on the other hand that impulse, that human impulse to do what seems like teasing a seemingly living entity is potentially a vicious impulse. And that cruelty toward robots is something that is certainly not bringing out admirable qualities in the humans that do it.
Sean: Yeah, I would agree and obviously there's, there's a kind of, a parallel here to how far do you go with the, with the animal world and you know the, the old- I think it's a Nirvana lyric, "It's okay to eat fish because they don't have any feelings." Or something like this. Yeah, I don't think we've quoted Nirvana on the podcast before, but there's a first for everything.
Paurav: Yeah-
Sean: Yeah- go on?
Paurav: But also I think Christian you, you’ve brought in a very interesting point that we discussed in the, in that episode around assistive technologies, particularly those, those which cannot be named as we say. And they are, I have seen those examples in my house you know, kids when they discuss any of those things they are almost ordering that robot.
They are almost kind of you know, even there is this master servant type of a relationship and, and the way how that robot also speaks tends to show that they are almost like a slave you know? They are, they are almost saying, “Yes I'm ready to serve you.” Type of a tone, that feeling and so the mastery, that abuse as we, as you pointed out would actually become stronger when people start thinking, that I have a dominant relationship here with this technology. So you're absolutely right, I think it is worth an area of concern.
Christian: Yeah.
Sean: And it's also how that translates because obviously, obviously children often aren't able to work out the context difference between discussing things or, ordering a, a voice assistant or whatever to do something and the difference between perhaps interacting with other people, well people.
Today's feature is all about robots in our homes. I'd like to welcome Doctor Radu Calinescu, who is Reader of Computer Science at the University of York. His team are developing mathematical techniques and tools for the modelling, analysis, verification and engineering of autonomous systems. So welcome to Living With AI.
Radu: Thank you Sean it's a pleasure to be here.
Sean: So today we're talking about robots in our homes and my immediate thought was Roomba, you know this automatic robot Hoover or vacuum cleaner that moves around your home when you're asleep and, and you know hopefully cleans up for you. Is that, is that kind of like the starting point for home automation? Is that, is, it home automation, well robots in our house?
Radu: This is very much the starting point, but if you look at the recent figures for instance from a survey done in 2020, 18.6 million domestic and service robots have been sold last year, and many of these are as simple as the automatic hoovers that you mentioned. Others are helping with the kitchen tasks and meal preparation, but the vision is that these will expand much, much further and they will include autonomous robots that will help people continue to live independent lives in their homes as they grow older and maybe develop physical impairments, mental impairments and require additional help.
[00:10:31]
Sean: I think over the last few years we've got used to the idea of robots not being kind of the sci-fi android shape you know, and the idea of robots being I don't know, agents that help us. But do we think that these things will be a bit more like kind of a human, humanoid helper or are we looking at kind of very specific bespoke things for different tasks around the house do you think?
Radu: There, there is significant research at the moment looking into how robots can more easily integrate with humans and support them, through potentially appearing more, more social, through, through showing empathy towards the humans they help, and research has shown that using anthropomorphic forms of robotics could help in this direction. So I think we will see some of these humanoid forms of autonomous robots in the future, yes.
Sean: This is like having a smiley face on your yeah, your assistant isn't it?
Radu: That, that's indeed the case yes and studies have shown in recent years that it, it does make a significant difference in terms of acceptance of autonomous robots whether they appear to be empathetic towards their users and having a smiley user face as you mentioned Sean does help with that.
Sean: And what what's the current state-of-the-art then? Where are we at, at this moment? You mentioned 18.6 million kind of robot assistants have been sold what, you know if, if I needed assistance around the home, perhaps I’ve got some kind of impairment what what's the state-of-the-art? What could I get with, for my money?
Radu: In terms of robots that you can purchase it is not that much, but the future will, will, will accelerate the development of these systems so they can be deployed to help with daily tasks, including helping people get from a seat to a stand position if you want. Providing people with reminders, finding their items around the house and bringing the items to their users, helping with meal preparation, helping with dressing. Ensuring the safety of the people that they are looking after in order to help them continue to live these independent lives at home.
Sean: And how is it better than kind of more traditional types of caring for instance, you know having a carer go into the home and help? Is there a research to show that it's better to have autonomous agents doing this?
Radu: There is actually concern about some of these aspects, but there is also an awareness that the level of care that will be needed in the future will not be, will, will not be supported by our existing systems, and people may well prefer to continue to live independently at home for longer rather than being located in a care home with other people. So continuing to maybe live with their families will make a significant difference, so the final decision on this has not been reached, but there is an economic, if you want, incentive to develop these types of solutions, and there is an expectation that they will also be beneficial to their users.
Sean: And what's happening in your TAS HUB node in terms of research towards this? Is this something you're actively working on at the moment?
Radu: This is indeed something that we are working on, in terms of assisted care robotics if you want, but maybe more general autonomous systems. One can potentially think of three waves of research and advances. One would be to ensure that these systems have the appropriate functionality to perform their tasks, to be able to manipulate maybe small objects, to avoid obstacles, to move over uneven surfaces, and that is what one can maybe classify as functionality. And of course there is still a lot of additional research to be carried out in that area, but we do have these robots that are sold in large numbers. So many of the functional aspects are covered by research that is commercialised and in ongoing projects.
Now a second wave in this area is to ensure that autonomous systems can operate safely and securely, preserving our privacy if you want, and there is significant research as well, some of it ongoing, on safety, security and assurance of these systems. But on my particular node we are looking at resilience from a sociotechnical perspective, because that in our vision is a third wave that requires significant research and which is slightly behind the other two I mentioned. And this is to look at the social and ethical and empathetic aspects of robotics and autonomous systems, and there are significant issues in particular in this domain that we are discussing here Sean, on the, the use of robotics in assistive care.
There are issues of an ethical nature, what will- as you mentioned yourself, will the use of these systems perhaps lead to dehumanisation of care? There are aspects related to society's view on aged care, should it be delegated completely to these autonomous systems? There are also ethical issues associated with these systems starting to carry out tasks that are normally performed by humans, related to the dignity of the older adults, their autonomy. Are we going to start to objectify older adults by, by, by, by helping them in this way in the end?
There is also an element of deception by robots showing empathy whereas in fact this is all simulated and the relationships that are being created between the humans, some of them are less able maybe to, to understand the technology and the, the, the robots that may, may provide support to them.
Sean: Yeah, because I can see, I, see that ethical issue but also, well it, in, in support of it you know, I think that a lot of people who have a carer who come in that might be the only person they speak to in that day if it's a human carer. And therefore if they as you say, become attached to a robot, robot that is pretending to have some kind of feelings if you like, for want of a better word. The flip side of it of course is that you can have inconsistency with human care and you know, you might get somebody who isn't as experienced as somebody else whereas with a, a, a robot system hopefully the care will be consistent. And hopefully as you say, will retain a little bit of dignity if perhaps there are tasks that it has to do that you know, help the, the vulnerable person not feel that, not feel that they are you know, being degraded in front of another human I suppose for want of a better word.
The, the issue I'm wondering about is how you can trust these things, you know we all know that computers and robots and anything technological will need security updates and software changes and all this sort of stuff. How do I trust that my elderly relative for instance, hasn't got an older version of software on, on their piece of equipment for instance?
Radu: This is very much the topic, it is at the core of the research that, that our project and other projects that I and colleagues are involved in are trying to address. So we are looking at providing the required assurance evidence and audit trail if you want, at each step of the process that is carried out by an autonomous system helping a vulnerable person. So we, we are looking at gathering this assurance through the sort of mathematical reasoning that you mentioned earlier which is a key part of my team's research. We are also working on developing these solutions through co-designing them with actually the end users of the solutions and with the operators.
[00:20:23]
These systems have a broad range of stakeholders so it is not only the engineers that actually develop them, it is also the operators, the health and social care systems that are going to deploy and operate them. We also do not foresee that anywhere in the near future we will have a completely autonomous solution, I think we are looking at a blended type of care in which the care provided by the normal medical personnel and social care personnel will be supplemented by these assistive care robotics applications.
Sean: What do people think about this who you work with? How do they react, is there a kind of a revulsion about having a, a machine helping them or is it more yeah, bring it on? How, how do the older, elderly people for instance feel about this?
Radu: Often, studies that have been carried out, not by our own project but by other projects within the programme of research that I'm involved in, have found people quite, quite attracted by the idea of being supported in various ways by these systems. In many cases they, they see them as an extension of other types of automation that we have around our house, the sort of household appliances just becoming more intelligent and being able to provide support with an extended number of tasks.
Sean: What's the kind of medium term you know, medium to longer-term future on this? How, how far are we away from you know, having these sort of systems even in a blended fashion operating in the real world?
Radu: I would expect that we will see more of these systems in people's homes, in many cases starting with providing simple reminders. I think in many scenarios it would be helpful to just have one of these systems remind people with certain types of vulnerabilities where the items they need are. Providing reminders on when they should take their medication, even the types of automated hoovers that you mentioned earlier could be particularly useful. I think it will be a significant step forward when autonomous robots with manipulation capabilities will be wandering around our homes to help with other physical tasks, maybe preparing a meal or tidying up the kitchen after the meal was served. So that would require significant additional research to ensure both that those types of systems are safe to use and that those types of systems are not going to violate the, these ethical norms, these societal norms that we have in place and care so much about.
Sean: I think it's hard to imagine this kind of like cyborg you know, android type character being in- character there's me I’m anthropomorphising it straight away, being in someone’s home when the perceived cost certainly for me is that that would be a really, really expensive piece of equipment certainly at this time. Whereas care workers tend to be you know generalising here, but fairly low paid you know workers. Is it realistic to say that you know, all of these people outside the kind of super rich will potentially have this kind of technology in the home? Yes okay I can understand reminder systems and you know, virtual assistants helping remind you to take your tablets or whatever. But the reality is until costs go right down for anything kind of automated it, it doesn't feel like it's on the horizon to me.
Radu: Right yes I see what you mean Sean, but I think quite the opposite actually, using these types of systems in a blended mode could actually take the cost down quite significantly, because when we discuss the, the salary paid to social workers, which is very low indeed as you mentioned. And maybe that's a wrong thing in itself, but we perhaps ignore the costs of training these people and other costs that society is finding harder to, to afford, and that, that is not sustainable. So if you look at the types of budgets allocated to social care and healthcare especially in Western Europe, they are just going up and up and I think that there, there is a clear signal that support from other technologies will eventually be needed.
Sean: The sort of thing we think of when we think of assisting in the home is kind of robot butlers that we see in films, like Robin Williams was in Bicentennial Man which was about a robot butler, or Iron Man has this you know butler called Jarvis. Obviously lots of us nowadays have the Amazon device I won't name for fear of setting people's devices off, or the Google devices in our homes to help us. So is the real, is the realistic thing that we're going to have something between a virtual assistant and what we see in a film? So are we going to have something in the middle, you know perhaps something that can physically assist but also can remind people etc.?
Radu: I would imagine that we will have the whole spectrum from systems that are just providing reminders and we are starting to discuss about personalised healthcare, so perhaps there will be a personalised assistive robotics care as well with people offered the solutions that are, that are aligned with their needs and those will also be updated as potentially a medical condition progresses over time. So I think we may have more, more, more than one of these autonomous agents cooperating with, within a household potentially, to provide the required support to, to the person in need of such support.
Sean: One, one thing that I find using my personal assistant, I use it to set reminders etc. and because I don't have the best memory in the world I will set reminders for all sorts of things from it you know, bring the washing in in case it rains through to you know, remember to pay your tax bill. The thing I find for me is that I get the reminder and if I don't act upon it there and then I simply forget again. What I'm interested in is if you're trying to remind someone to take their medication what, what process could be in place that makes sure that that happens and isn't just a reminder that is you know audibly through a speaker in their house and then the, the machine ticks that off as having reminded them? How, how do you make that interface happen I suppose?
Radu: I think the concern in the scenario that you describe Sean is mostly to do with what, what the, the autonomous system should do if the person persistently refuses to take their medication, as it might well happen, and there is less risk I believe that, that the system will forget that the medication has not been taken. But in the case when the person is potentially feeling unwell or low and does refuse to take the medication, I think we are facing one of the ethical dilemmas that I described there. The only solution that comes to mind at this stage of our research project is that perhaps that is a point in time where human care needs to be engaged, and the, this blended solution would be very much applicable.
Sean: Yeah, I can see that happening. I suppose I was wondering about the mechanism as much as anything. How would the system know whether that, whether the human had been, had done the, the, the thing that they were being reminded to do? Is, would there have to be some kind of feedback loop like you know, some computer vision to watch them take tablets or some kind of system where they press a button to say, “Yes I’ve done this.”?
[00:30:02]
Radu: I think any of these solutions would work, as would a solution in which the, the, the, the medication would itself be RFID tagged, and the, the person taking the medication would therefore be observed having done so. But there is always the slight risk that the person might, I don't know, even hide the, the, the medication and bin it or something. So there is no perfect solution, but neither is one with a human nurse for instance observing the, the vulnerable person.
Sean: Yes we've all seen storylines where the, the patient is hiding the tablets under the, under the mattress or whatever there's, there's not a lot you can do about that. What's the holy grail of this, what's the holy grail of assistive technologies? Where, where would you see it if you know, we have the absolute perfect solution?
Radu: I think the, the perfect solution would be to continue to provide human-managed social care to, to people in their old age, but to, to blend it with these autonomous systems provided solutions. So people can continue to live independently at home for as long as they want and to, to preserve at the same time their human dignity, to achieve our- to make these blended solutions comply with our social, ethical, empathetic norms.
Sean: Radu I'd like to say thank you for joining us on Living With AI, it's been a pleasure having you on and really interesting to find out more about assistive technologies.
Radu: Thank you very much for inviting me Sean.
Sean: It was really interesting to talk to Radu there. What are your thoughts on it Christian and Paurav? Do you think we're, we're going to be heading towards a world- it's a bit like the cleaning robot right? Where you know, things that people don't really want to have to do will be, will be taken over by automated systems?
Christian: Well I would actually say that it's not like the cleaning robot in the sense that it's one thing to be wiping up a residue of virus, but it's something quite different to be looking after some very vulnerable members of our society by which I mean senior citizens. And there is something a little concerning about the, the direction of travel here I, I have to say, given that vulnerability. And I think we do have to constantly be wondering when we, when we hear claims that there's a social benefit to be achieved by switching to new technology and relying less upon the human performance of a caring function it's worth asking what kind of benefit really is being pursued and thought about? Is it an economic benefit in terms of the profitability of the aged care industry or is it a genuine improvement in the quality of social care being provided to these vulnerable members of society?
Sean: I think you're absolutely spot on and there, there's a small part of the feature where I mentioned that that a, a vulnerable person may only have that interaction with the carer that comes in a day, in a, in maybe once a day or twice a day and that that is their social interaction and, and they need that as, as you know, input into their lives. Paurav, what do you think about this?
Paurav: Yes in some ways Christian is absolutely right with regard to most of the time these benefits are seen from a larger corporation or a systemic perspective, and they are mostly seen from the perspective of economics rather than the society itself. Like what Radu also said, that they are thinking about empathetic robots, but when I'm looking out for empathetic robots what I also find is hardly anything. Recently I saw a very interesting interview Tony Robbins did with a, a robot called Sophia, which has been developed by a company in Hong Kong and it has become the first robot citizen of Saudi Arabia. So that is quite an exciting thing, so I was like, okay, let me have a look, and when I actually looked at it, what Sophia was, was nothing but- to me it felt like a glorified chat bot. You know, it was dressed nicely and it was a, it had some sort of facial expression and the eyebrows were moving up and the nose was moving a little bit here and there and the lips were moving.
And the answers were quite, you know, what I would expect from the assistive technology that cannot be named you know, you would hear, you could, you could ask, “What is the, you know answer to life?” and it will come up with the answer, “Forty two.” Those kind of things. And I mean these are gimmicks, these are- are these really empathetic things, like a human being, like a carer who goes inside the house and understands the person? You know, understands the situation and, and so there are, there are much deeper debates to be had before we just say that this is beneficial.
Sean: There is yeah, there is definitely an element of you know faking, faking things with a large database, perhaps some machine learned facial expressions linked into a set of stock answers. But the, the argument the other way around might be that somebody feels that's enough for them. I mean, maybe I'm being cruel here but there might be people out there who are happy with that you know, happy to have a conversation with Siri once a day or whatever, Christian?
Christian: I think there might be some people who are sad and lonely and desperate enough to convince themselves that this is enough. Just in reply to, to Paurav’s excellent observations, I take the view that there's actually no such thing as an empathetic robot, empathy is the sharing of feelings with others. And, and when I think about the idea that a, a vulnerable person in an aged care facility could, could be made to think that this, that this machine perhaps with a face slapped on it is showing empathy towards them I'm, I'm, I'm thinking that, well maybe that's because that's what that resident of the aged care facility wants to be thinking is going on, rather than that this is actually going on.
One of the interesting things that Radu said earlier was that, he said that there's an element of deception in robots showing empathy. It was interesting that he, he was very upfront about that, that deceptive quality of the technology here. And it's hugely relevant to, to what Paurav and I are, and others at our universities are researching because it goes to the very heart of the trustworthiness of autonomous systems and the very important distinction between trust and trustworthiness. So you can, you can play tricks, you can engage in deception, you can seduce a vulnerable member of society into thinking that this machine is showing empathy towards them but really it's just a trick, and that arguably undermines the claim that these systems are trustworthy, even though under the right circumstances they might be trusted.
Sean: I think that's really important and it's something we've come back to a few times in the podcast that there are these different levels of trust. There's the trust that you will expect your car to get you to your destination, there's then the trust that perhaps your car is you know, do you trust that it's not swiping all of your information and sharing location data with a bunch of advertisers? So those different levels of trust and that's- you're absolutely spot on that's the element of trustworthiness. And I think that the holy grail of AI intelligence or certainly one holy grail is to get to the idea of artificial general intelligence and perhaps that's the only thing that would ever get us anywhere near non deceptive empathy if you like.
Meanwhile it's just a yeah, machine learned database full of stock responses I think I mentioned earlier on. That's my, that's going to be my machine learned or human learned stock response to anyone saying that empathy is possible right now. Anyway Paurav-
Paurav: Yeah-
Sean: -is there anything good to come out of this?
Paurav: In some ways there is from, from, from one perspective when I think about it, both of your observations, one thing we realise is that like when we think about robots, in a way we, we sometimes make a comparison, compare them with some sort of a vehicle because in a way they are some sort of a machine. So if I take the autonomous vehicles as a, as a perspective and we say, “You know what, in autonomous vehicles we have level 1 to 5.” So level 1 is where the car, you are in control, the car is almost just providing you some assistance, in level 5 the car is in complete control, you are just sitting there.
And, and in some sense when you start thinking about robots in our homes probably there are five more levels required because in a way if I have a Roomba in my house, it is a round object which cannot go under my sofa. It cannot open my sofa’s arm so it can go under it or it cannot work out the curves which Steve Jobs likes so much you know? So when you think about it, it can only do work in a very specific set of circumstances and that means that when we are thinking about robotics and particularly in homes that just a pure autonomousness would not work.
[00:40:34]
There has to be a number of psychological underpinnings that will have to come into picture which Christian was earlier talking about. You know, the two aspects he mentioned that, that, that dominance versus you know, subordination type of a situation or for that matter the empathy side of it. All those will have to be built and that would be extra levels and these are extra, extra levels to any sort of robotics.
And, and personally speaking, when I'm thinking about it you know I, I, I searched a little bit in terms of how do we define robots and when, when it all starts and you know post-World War II we- anyway you know there is a talk about Devil's Mechanical Arm which was the first mechanical robot as it was a thing, [s/l talk 00:41:19]. Have we really moved from that mechanical arm, the question is, because most of the robots we are trying to engage with are mono dimensional single taskers. In a way, there is a machine in my house which can clean the floors or there is a machine in my house which can possibly mix a cake mix. But that cake mixer cannot clean it up and the cleaner robot cannot do cake mixing and so what I have is just a number of mono tasking machines and that's about it.
Sean: And this, this is the way things have been heading for some time though. The idea of okay- and again I mentioned in the feature the, the idea of robots from sci-fi which are humanoid looking and can do what we can do has been replaced with this idea of as you say, kind of single use or very specific area robots that are you know, specifically designed to do a task such as mow the lawn or, or hoover the carpet or as you say, mix a cake batter. But you know, does that mean that we'll have to go to what could be a very potentially expensive route of, of humanoid robots to try and get around our houses or are we going to return to this idea of will the houses adapt to be a space that robots can move around more easily? I, I, I'm intrigued by that.
Paurav: There is a lot of talk about these houses that are smart. So your refrigerator will connect to the internet and will tell the retailer that, “Hey you know” if you remember Amazon's dash buttons which have now been discontinued-
Sean: I'm very glad they've been discontinued.
Paurav: and-
Sean: I do not need to have a button that, that I press every time I need something. Sorry carry on Paurav?
Paurav: But exactly right? That, that, that was kind of a non-movable robot in, in some sense right? That it told something to happen in my house you know, it would facilitate my interaction in retail terms. Now if you take that a little forward and you can make a house as a box that is compatible to the robotness, but the humans are not robots. So you have a two year old child in your house, how are you going to make it robot friendly? Or would you want the house, the robot to be human friendly or humans to be robot friendly, and that's the question which, which would remain with us.
Sean: This, this also ties into something I was thinking when we were discussing the idea of empathy and the robot and I mentioned the kind of carer having an empathetic element to it. There may be just kind of a wider issue here which is that perhaps we're thinking of the robots as being replacements and instead they should be augmenting what happens right? Is that a fair point Christian?
Christian: Yes I mean augmentation is one of the ways in which these things are initially sold and, and the way is smoothed by, by, by precisely not talking about replacement and certainly in the context of aged care facilities the argument has been that, “Oh look it's not as if all those human staff are going to disappear and be replaced by humanoid robots. No, no it's simply going to be the case that when, when the staff don't have time to sit for several hours on end with one of the residents then we, we will send in Pepper the robot or, or, or something like that.” So augmentation is all very well in a sense, but the temptation having started along that path is to keep on augmenting, just you, you keep on doing it until there's less and less human contact in the, in the life of, of a resident of an aged care facility.
So you know there, there are systemic problems in play here whereby you know, society and governments decide to pay people employed in the aged care industry a pittance such that there's a staffing crisis. We have an ageing population, the solution or even the larger part of the solution doesn't have to be the, the partial replacement of, of, of humans with, with humanoid robots. We, we don't lose our responsibility to improve the standard of human care that is provided in these facilities as well. So as we race towards having sort of roboticised modes of care and reduction of loneliness we shouldn't give up on improving the human dimension as well.
Sean: Totally agree. I mean it's a bit of the kind of sort of thin end of the wedge kind of argument there but I’ve seen that happen in the broadcast industry. You know I, I went into the broadcasting industry about 25 years ago and worked for an automation company where we went in and installed automation systems to allow one person to do the job of three. So you would press a button and the tape would be played and the sound would be faded and you know, all of the things you can imagine in one of those rooms everybody’s seen pictures of with hundreds of TV's in front of someone. By the time I left that particular firm I was seeing one, two or three people managing dozens of channels. So it starts with, “Hey one person will be able to do the job of three.” Then it'll be three people will be able to do the job of 20 and, and these things do happen and, and often economy is the driver of that and perhaps as you say, this is the wrong space to be thinking about just in purely economical terms. Paurav?
Paurav: Quite right because when we start thinking about human beings as part of just an economics or a number we become less human as a society. We have seen some examples of robotics in the automobile industry in particular you know, automobile plants have now more robots than people. You have seen it in retail, there are now dark warehouses where there is nobody actually working in those warehouses, only computers and barcodes and strings of you know, kilometres worth of just lines that are transferring, transferring and transporting goods. And so while in some cases that has been beneficial economically, its societal costs are enormous because every society needs that societal support of different kinds. And especially when you start thinking about the care home and the care industry, when we start thinking about you know the word itself is care and somehow that is not the word we attach with robots at all. And so these are just contrasting in that, that, that sense itself.
Sean: If we assume that maybe a utopia happens where actually robots are doing menial tasks but people are also caring you know? So, somebody comes in to talk to the person and doesn't necessarily have to do the cleaning for the person or something like this will we still be able to trust these robots? I mean I'm trying to you know, really delve into this idea of trust you know, if they are doing things and not pretending to do things. So we're not in a, in a system where they are you know if you like, faking empathy we still have human carers for that.
Christian: Let's imagine the utopia. That, that's interesting I mean yes you can imagine a situation where there's all these labour saving robots deployed in our lives so that now I’ve got so much more time up my sleeve to go and visit granddad and just chat away with him for just hours during the day. And look for a full salary I only work three days a week and there's been all these social changes, partly assisted by more and more capable robots and, and so the future means that there's more and better human based aged care going on. Now the, the flaw in this imagined utopia of course is that these other social improvements will have happened, so that we really do have more time to care for each other rather than simply having more time to do different kinds of work.
Sean: This is it. I mean I was, I was, I was thinking, oh universal basic income you know, kind of like Iain M. Banks culture style kind of utopia and then the Vaseline started wiping off the lens and we came back to reality. Paurav?
[00:50:08]
Paurav: What wonderful food for thought in a way Sean. But an interesting idea came to my mind right now as, as Christian was describing that utopia, is that I have not generally seen in any utopian movie of sci-fi nature you know, wherein I have seen an older population. Isn't that something amazing or-
Sean: Well they, maybe they just get whisked away kind of what was the, what was the film I'm trying to think of? Logan's Run do, does- if anybody knows Logan’s Run you’ll know exactly what I’m talking about.
Paurav: Yeah.
Sean: Just a brief explanation for those who don't, I’m paraphrasing because I'm trying to remember it but each, each person has a, a crystal on their hand that when it changes a certain colour they have to go to let's say some magical fairy place i.e. are euthanased is that the word? I can’t remember this, euthanised? Yeah, they are they are basically spirited away and yeah I, I forget that there's a whole kind of overriding kind of mythos over the top of that film and the, the, the storyline of the film is the characters who realise that it doesn't have to be this way. But yeah, I think that you know plays into that idea of, hey everything's great, everyone's you know young and, and everything's wonderful. But yeah-
Paurav: My, my question is Sean you know when Neo grows old where does he go?
Sean: Yes well yeah and, and if we're talking Matrix you know, are the, are these human batteries- one of the most preposterous premises ever but still a wonderful film- are they still as efficient or you know, do they get lumbago and arthritis and-
Christian: We, we need to be careful in this space I mean Paurav quite rightly says that you know, maybe care is distinct and special and when we think about AI and robotics we shouldn't think about progress in exactly the same way as we think about say transport or, or cleaning or something like that. And I think to get over this it won't just be a case of technological advancement in terms of getting more and more lifelike humanoid robots out there and doing experiments inside aged care facilities and seeing what the residents think and taking some surveys and trying to establish evidence of acceptability and transitioning and so on and so forth.
There will be an important step being taken in terms of the social appetite for anthropomorphisation, and anthropomorphisation is the great deception, the great seduction that actually humans have been engaged in for thousands of years. It's just that we're now anthropomorphising humanoid robots as well, and that great deception of anthropomorphism is something we're going to have to be very careful about, especially when we're talking about trustworthiness. Because we don't want to be simply seducing ourselves into thinking that a system or a robot can be trusted, and we have to be very careful about not giving up too much of our innate sense of humanity and the uniqueness of humanity in that process. I’ll leave it there for now.
Sean: I think, I think anthropomorphising- which I usually have quite a trouble saying that word so I'm glad I’ve managed to get it out, I'm not going to say it again in the near future. You're absolutely right we do it all the time. We do it to animals, we do it to inanimate objects, we do it to robots. We do it to Siri and, and the like and I'm saying the names, Alexa, all the rest of them. We do that because it helps us kind of understand these things right? We, we do it to- I don't know, is that right? Do we do it to help understand or do we you know, make sense of things in, in a way that we understand? I mean it's often the animal world is a classic right?
Christian: Now that you've broken the seal you might want to think about whether or not to include the whole issue of sex robots or maybe that deserves a whole episode in its own right. But there's, it's hugely controversial, hugely potentially lucrative but the ethical problems are profound. Can I come back to your, your, your musings about anthropomorphisation though Sean which, which are sound I believe.
As you know I’m, my area is primarily international relations and when I'm researching military ethics and especially the, the intersection of AI and military ethics you come across something called the lethal autonomous weapon system. And in that, in that space in terms of policy making with regarding AI and weapon systems it's generally considered to be a big no no to anthropomorphise.
A group of government experts assembled in Geneva couple of years ago to come up with what they hoped was a set of guiding principles to give to various governments around the world who were developing AI for military purposes. And one of the guiding principles that they came up with was that when governments are crafting policy measures, emerging technologies in the area of lethal autonomous weapon systems should not be anthropomorphised.
And the reason why they included that as a principle was precisely because they wanted to avoid a situation in the future where the weapon system itself having been anthropomorphised would be made to take the blame for some kind of military wrongdoing. Anthropomorphisation is a pathway if you take it to its logical conclusion- if you want to, if you want to think about if you like, the unholy grail of AI, it is the thorough anthropomorphisation of the technology to the point where we become convinced that the machine, the system itself can bear moral responsibility for wrongdoing. Now this is very deleterious because of course a robot cannot be punished, it cannot suffer precisely because it is not alive.
Sean: So this is the ultimate autonomous lethal weapon systems don't kill people, people kill people.
Paurav: The company Boston Dynamics has developed this dog robot which has been highlighted in so many media outlets and in so many interviews and it, it, it seemed to be doing wonderful things like jumping and you know, walking and running and all those kind of things and it could, it has been shown to become a companion to people also. But at the same time it was developed initially with funding from the military and also what had happened I think in regards to you know, the company has always been trying to distance itself from those military foundations and trying to keep itself as a corporate entity of its own kind.
But artists in New York, if I'm not mistaken, actually put a paintball gun on the top of the robot and then made it an internet-wide sensation- you can actually go to that website and take control over that robot to shoot at objects in that gallery space. To just highlight that, like what Christian said, a robot doesn't have a soul. It doesn't have those emotions, it doesn't discriminate in, in different ways, but it means that it can do whatever it, it wishes to.
Sean: But Boston Dynamics didn't like this did they? On that-
Paurav: No.
Sean: -in that article it says, “They said they would give us another two robots for free if we took the paintball gun off. That just made us want to do it even more.”
Christian: A couple of weeks ago a new book was published by the Nobel Prize winner Kazuo Ishiguro and it's a book called, “Klara and The Sun” and it's fascinating because Klara, the subject of the book, is what's known as an artificial friend. And this artificial friend is assigned to a 14 year old girl named Josie and the story is narrated by Klara. And it’s, it’s a fascinating way of acquainting us, the reader, with the notion that you can, that we as humans who are capable of empathy can empathise with a robot whose role it is to provide care to humans.
It's kind of like the flip side of, of some of the things that Radu was talking about and it’s, it’s fascinating as a story. I, I’ve yet to finish the book but it goes to and touches upon this theme of if you like, mutual regard between humans and anthropomorphised robotic systems. It’s, it’s fascinating not only from a literary perspective but also from a philosophical perspective because of course, it occasions us to think quite deeply and maybe disturbingly about what it truly means to be human.
Sean: What it truly means to care. Yeah, fantastic that's really good. I think it's really interesting isn't it those are you know, it's the flip side like you say the, the other side of the coin. You, you know you've charged this robot with caring for someone but someone's actually caring for the robot I love it, it's brilliant. Christian I'd really like to thank you for joining us again on the Living With AI podcast.
Christian: A pleasure.
Sean: And thank you Paurav again, nice to see you again.
Paurav: Thanks, thank you.
[01:00:00]
Sean: If you want to get in touch with us here at the Living With AI podcast you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited and it was presented by me Sean Riley. Subscribe to us wherever you get your podcasts from and we hope to see you again soon.
[Audio ends: 01:00:31]