Living With AI Podcast: Challenges of Living with Artificial Intelligence
Mapping trustworthy systems for Robots and Autonomous Systems in social care (MAP-RAS)
Effective embedding of robotics and autonomous systems for social care will require alignment across many levels. This project seeks to move from the imaginary to the concrete by using an actual robot to help with dressing people, and imagining how this might integrate within the existing health-social care ecosystem in the UK.
Employing Lego Serious Play as a tool for creative exploration, the project wants to build a physical map of this system in order to identify agents and obstacles which may influence trust in, and therefore the integration of, this robot, and experiment with potential reconfigurations. The project is also conducting a series of interviews in the lab where the dressing robot is being developed, in order to understand how engineers make technical decisions.
Find out more information at the website: Mapping trustworthy systems for RAS in social care (MAP-RAS) – UKRI Trustworthy Autonomous Systems Hub (tas.ac.uk)
Podcast Guest: Stevienna de Saille, Lecturer in Sociology, University of Sheffield
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Stacha Hicks
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 4, Episode: 9
Episode Transcript:
Sean: Welcome to Living with AI from the Trustworthy Autonomous Systems Hub. This is a projects episode and we’re going to focus on the project MAP-RAS. I’m going to call it MAP-RAS, but it stands for Mapping trustworthy systems for RAS in social care; we’ll find out what that means in a moment from our guest, Stevienna.
As we’re recording this, it’s 26th of April 2024 and I’m your host, Sean Riley. And let’s meet Stevienna now. Welcome to the podcast, can you just give us an introduction?
Stevienna: Hi, so I’m Stevienna de Saille. I work at the University of Sheffield where I’m a lecturer in sociology. And I’m a researcher in our institute which is called the Institute for the Study of the Human, otherwise known as iHuman. And I lead the research team there on human futures.
So, I’m a science and technology studies scholar. So, I’m interested obviously in what technology does to society, but also in the way society and technology kind of co-produce each other. So, human futures in that sense is: how do we actually retain the human, and the things that make us human, the things that we love about being human and the things that we need as humans, in the face of all of this new technology. And that is particularly relevant, obviously, with robots and AI.
Sean: Absolutely. Yeah, yeah, yeah, definitely. So, let’s start with an idea of what the project was. I’ve just called it MAP-RAS, but what does the RAS stand for? Well, mapping is MAP, right. Tell us about the project.
Stevienna: Yeah, so we call it MAP-RAS as well. So, if I was to spell out the name, it is Mapping trustworthy systems for Robots and Autonomous Systems in social care, but because I hate repetition I just put RAS. Our funders knew what I meant by that, so, yeah. It’s a shortcut that afterwards I was like, oh, I probably should have put robots or something.
Sean: No, that’s absolutely fine. So, tell us what you were doing in the project and, you know, is it ongoing, tell us a bit about it?
Stevienna: Okay, it is ongoing, in fact we just collected- We had a wonderful workshop on Wednesday, which was our last big TAS Hub data collection. So, MAP-RAS is kind of the fourth in a series of projects that began from the very simple question of: what do engineers need to know about designing robots for social care that their disciplinary training does not really equip them to ask, and what kinds of things could social science do to help them with that? But from the engineers the objection was, yes, but all this values-based stuff, we don’t know what to do with that necessarily, it doesn’t really help us.
So, the idea was to collaborate very closely with engineers over a period of time to try to get what we call far-upstream knowledge. So, what really is the system into which these kinds of robots will be deployed, because that does affect, actually, technical decision making. And the further upstream you can do that, the more information the engineers can have to make their decisions, and the more ready, hopefully, the system can become to incorporate these machines when they do actually become available, at least that’s what we’re hoping.
We did one project in which we just tried to see what are better methodologies for including people with various disabilities in this kind of research, very early on, before proposals were developed. I’m a certified facilitator in Lego Serious Play, though I had a colleague of mine run that particular workshop, and we tested different methodologies to see what actually seemed to work best, and Lego won, I’m happy to say, or at least it won in terms of what I was trying to achieve. And so, with some of the same colleagues that I’ve been working with all along, it’s kind of developed into this very nice multidisciplinary team.
We had a project that was funded from TAS called Imagining Robotic Care; that was a couple of years ago, and I think I came on and talked about that one at that point. And that was looking at simply how different stakeholders were imagining robots for care. What is in their heads, what are the expectations, who are the potential users that they’re seeing, what are the scenarios, because we realised we had no idea. Actually, when you ask someone what do you think about robots for care, they give you a response but you have no idea what they’re basing that response upon.
Out of that came a dilemma, which I would call how the perfect robot fails. And how the perfect robot fails is that the system has no capacity to incorporate the robot in the form in which it’s being developed. So, what came out of the IRC project, Imagining Robotic Care, was that from a council point of view a robot could help the independence of somebody who is still living at home with comorbidities, with frailties, someone who needs help with basic daily living tasks. But that robot does not work under the time-and-task model, because there is not time for the carers to come to set up the robot for each client, to use it, to break it down, to bring it back. What happens if it goes wrong? If you still need two carers, the council is not saving money.
From the social worker point of view, it’s exceedingly difficult; their biggest problem with lifting people is the weight of the person and the injuries that causes. Well, if the robot is going to be just as heavy, it’s going to have similar potential, so that’s not really going to work. But overriding all of that is the user point of view. If that robot is not in their home, something that they can use so that they can get in and out of bed, and get to the toilet, and get dressed when they want, not when the carer can come, none of it has any use whatsoever. So, we thought, okay.
So, within the TAS Hub there is the Reason Project, and they were designing a robot for dressing. And there were a few tasks that had come out of IRC that seemed like, okay, this would be incredibly useful, being able to have a robot at least for things like standing, lifting, dressing. And so we’re working very closely with them, and the PI of that project is Sonya [unclear 00:06:36].
And we are trying to understand how that particular robot, which is at the very, very early stages of prototype, which is exactly where we want to be working, would fit into a social care system.
So, there are two strands to this particular project. One is we’re working with the engineers using a protocol called STIR, which is socio-technical integration research, and that’s kind of an ethnographic protocol. The idea of that is to try to understand, through observation and short interviews, and by being present in the lab, what kinds of decisions are being made at this very early phase of development. What are the information needs of the engineers, what are the obstacles they’re coming across, what are the knowledge needs that potentially we could fill, those kinds of questions.
So, we did this as we were preparing a demo of the robot which, at that point, had two arms, two [s/l Franka Emika 00:07:44] arms; these are kind of industrial arms that are about three or four feet in height, I guess, when they’re fully extended. So, they’re a little bit daunting but they’re not huge.
So, we were preparing the dressing demo, and then we brought that into an all-day workshop with a council that’s our partner, having them (a) experience the robot in the state in which it presently is, then imagine what it would need in order to be suitable for their particular system, and then map out that system and try to understand how the robot would integrate: what would be its impact, where are the obstacles, who are the people that you might need to get on board, and when you might need to get them on board.
And this is kind of following the framework of responsible research and innovation, in which, you know, there are sort of four parts: you try to anticipate what the impacts and the benefits might be, and for whom, reflect upon that, and then use that to take into your next step.
So, it’s a protocol that they call AREA; we call it ARIA. So: anticipate, reflect, include, rather than engage, and that to me is very important. Who do you include, what kind of knowledge do you need, at what point; it will differ over different points. But also, how do you integrate that in a meaningful way? So, you’re not just engaging with people, you’re really integrating people into the design of the robot. Because if we’re asking the councils to use this, and the councils are not familiar at an early stage with what it is and what it could be, and, you know, if they don’t feel that they have had any kind of input, really, apart from the most nominal, in this process, I think the robot is probably not going to be fit for the service. And the service itself may actually resist using the robot, because it doesn’t do for them what they need doing under the constraints under which they’re presently operating.
[00:10:02]
And all of that is done with Lego, that whole mapping is done with Lego, so it’s quite fun, you wind up with a 3D map of the system on the table in front of you which you can then manipulate.
Sean: Yeah.
Stevienna: And say well what if we brought this group of people closer to the manufacturer and you can lift the models and actually move them closer so that they’re all connected. You can see what it does to the rest of the system.
Sean: There’s multiple things that come out of that from listening to you explain it. You know, things like the multidisciplinary kind of approach, those languages that everybody has to speak. The one thing I was thinking and you kind of mentioned the size of the robot, I’m thinking retrofitting that in to the current system where carers go in to people’s homes, homes that are often not that large, you know, that people are having to shuffle around, you know, maybe medical type beds and things like this. I can’t see how- Maybe this is something that came up, but how does the robot sort of fit in to that scenario? I can imagine it being in a purpose built care home where there’s space for it to move around but, you know, is that something that you found?
Stevienna: Yes. Yeah, that comes up all the time. And it is a real possibility that the robot is only fit for certain purposes, yes, indeed. It is now one arm, and that is partly because the second arm broke. And so this is one of the things that I find quite interesting, when you’re embedded in the lab and doing this kind of ethnographic work is how does something like that then affect the decisions that are being made- So, in the workshop that we did on Wednesday with a different council, the demo was very different, but also the user was different.
Sean: Yeah.
Stevienna: Because the vision of what this robot could be now is much less something that dresses somebody who themselves cannot really engage that much reciprocally in that process, and much more something for someone who is possibly a stroke victim or has hemiparesis, someone who has one side that does not function particularly well. And the robot then, in effect, kind of takes the place of the arm that’s not functioning properly. And so it becomes much more of what they call cobotics.
Sean: Yeah.
Stevienna: So, it’s not just the carer but the person themselves is working with the robot to dress themselves.
Sean: Yes.
Stevienna: And, so everything about that changed because of just this serendipitous event which is that one of the arms broke down and could not be repaired.
Sean: Yeah, I mean, it’s interesting isn’t it because we often have tasks we do round the home where you go, you know- It’s slightly flippant for me to say but, you know, I wish I had three hands because I can’t hold this while I do this. But if you’re down to one hand that robot arm, you know, functioning as that other arm does sound like it can be really, really helpful, yeah.
Stevienna: But one of the values that keeps getting spoken of, you know, within this field is independence, helping people maintain or even recover independence. And that’s a very different imaginary now around this robot. So, now we’re seeing this as something that is integrated as an assistive device. It may be, depending upon the person’s condition, that there is another carer there, or it may be that the person is actually quite capable of just using the robot on their own. It’s one arm rather than two, which would be half the price. It would take up half the space, and as the technology evolves the hope would be that the arm itself could be made significantly smaller. There’s always a problem with the torque on the machine, you know, if somebody is pulling on it or leaning on it, how much the machine could actually take. So, those are the kinds of technical decisions that get weighed against how small it is, how light it is, those kinds of things.
And I found it really fascinating to be in on the very early development of the process and trying to understand more about- I wouldn’t say the engineering mindset, because there isn’t THE, one, but just as I’m trying to understand the constraints that the councils are operating under and how the adult social care system actually works, it’s also trying to understand the difference between, for example, academic robotics and a private robotics firm that’s developing something for money.
Academic robotics is constrained by things like money; you know, you can’t have the ideal gripper arm because it’s too expensive for the project, that kind of thing. You know, you didn’t foresee this when you wrote the budget and the budget will not now cover it. Those kinds of constraints actually affect the technical decision making, which I find very interesting.
Sean: Absolutely, absolutely. So, it sounds like- You know, you mentioned this serendipitous event of one arm breaking and then you sort of pointing the project down almost a different path, or certainly, you know, altering the direction. So, that’s something that’s gone well. What else have you found that, you know, people have been pleased with because often we look at AI and the first thing we talk about is the negatives or the problems. What’s been the opposite of that, where have people thought, actually this is great?
Stevienna: I think in general we were quite surprised, and I can now say it’s two councils, not just one, by how welcoming they are to the idea of robotics in general. There are a lot of caveats with that, you know: it has to provide value for money, it has to be something they can afford, it has to be something they can integrate within their services. But in general I was expecting to encounter a lot of resistance, and I did not. But there are caveats that people do not speak of that much, actually two. One is what it means to be a carer. Whether you are an unpaid family carer or somebody who does this for a living, there’s a joy in looking after people that we de-value as a society enormously.
And so one of the things that some of the social workers in particular, and some of the people who work more at the coal face, who work directly with the users were saying, was they wouldn’t want to compromise their ability to be there with the person, for two reasons. One because they get a joy out of it, this is the best part of their job but also this is the part of their job that the person gets the most out of. And there’s a whole range of activities of observation that happen during these encounters that help them understand how this person is doing today. You know, does the skin look sallow, do they seem depressed, does the body feel a bit more flaccid than normal as you’re helping the person dress. It’s like this wealth of information.
Sean: Yeah.
Stevienna: It’s possible that at some point in the future a robot could take in all that information, but it cannot give the person that human interaction. You know, if the pandemic taught us nothing else, it should have taught us the importance of that, that we go a bit spare when we don’t get to physically see and speak to and touch each other.
Sean: Yeah, because that is a real part- I mean some of these people who have carers, that carer might be the only person they talk to that day in person. And that empathy is something that I’m sure people are worrying about, you know, “rise of the robots.” But if the robots are used as a tool then hopefully it takes away the drudgery, the difficult tasks, you know, the repetitive tasks. And then helps the carer to do those things properly. I suppose that’s the way to approach this, is it?
Stevienna: I think I would like to argue against framing certain tasks as drudgery.
Sean: Yeah, no, I suppose by that, I’m not talking about the care itself, I’m talking about some of the things they might have to do, like carrying things and, you know, that side of it, because that is the problem with carers, as you said we de-value the role to the point where it’s a very, very low paid role even though it’s so important and can be so rewarding.
Stevienna: But the human capacity to care, and by that I mean care about each other, you know, to actually give a damn about each other, is one of the loveliest things that we have. And I find that the language that’s used in this field de-values that so enormously. And it’s quite gendered as well; I mean, people are not saying, oh well, you know, women do the rubbish work because they should, they’re not saying that outright, but care tasks are traditionally given to women. So, when you’re de-valuing the act of human care you are, effectively, de-valuing women, even though you’re not doing that deliberately. And so I think there are these huge social biases that we’re working against.
Making the robot useful is a different thing, I think. And what we might think of as a drudge task or a repetitive task might be something that actually really does need a human. One of the things that’s frequently mooted is, oh, it could go and give out medication; for whatever reason that is considered to be a drudge task, just repetitive and boring. And yet when you’re giving out the medication that’s also a moment of human interaction, and a moment of human observation of the person: how are they seeming today. So, there is a bit of a danger in relegating that task to mindless drudgery that can be performed by a machine, because now, basically, what you’re saying is that the human doing that work is effectively a mindless drudge.
Sean: Okay, yeah, yeah, absolutely, absolutely. I suppose- Yeah, I’m not trying to defend the use of the word drudgery, I suppose my point is across the centuries when automation has come in it’s been to replace those things which are repetitive-
[00:19:50]
Stevienna: Which are repetitive because that is what is easily automated.
Sean: Yeah, but not just as easily automated but-
Stevienna: [Overtalking - unclear 00:19:56].
Sean: -but they’re also the things that a human might get maybe bored doing and, therefore, make a mistake with, whereas a machine will do, usually, unless there’s a fault, will do the same thing every time.
Stevienna: But which humans. So, this is also a question.
Sean: That’s not in the care sector, but that’s the point, I suppose; my experience is coming from not necessarily the care sector. So, this is it, this is applying this in a new place, isn’t it. And as you say, certain parts of society have de-valued it enough to assume it’s the same, but maybe, you know, that’s the problem.
Stevienna: I’m minded of a time when I was volunteering in an archive. And one of the things that you do in the archive is the exceedingly boring task of removing staples from documents. It’s horribly boring, it’s horrible. But I had an autistic colleague, and what she said to me was, give me that stuff, because that’s the stuff that I like to do and that you hate. It’s perfect.
Sean: Okay.
Stevienna: So, when I say well for which people-
Sean: Yeah, fair enough, fair enough.
Stevienna: Because there are a lot of people, and particularly a lot of people with intellectual disabilities, that, you know, we’re trying to get into the workforce. These are tasks that not only can they perform well but that might suit who they are and what they do. And so, therefore, that’s a job that no longer exists, because we’ve now given it over to a robot. And in the meantime we’re pressuring this group of people to take jobs that are changeable and more difficult for them to deal with, because a job that is constantly different all the time is more difficult for them with their condition.
Sean: Yeah, understood.
Stevienna: Right, so this is sort of the holistic kind of thinking that I think we need in that sector. We don’t need to be experts in everything, but the bigger picture, exactly.
Sean: Absolutely.
Stevienna: To try to see what does the system look like. Who’s employed within the system. Who’s doing this kind of stuff. Volunteers used to be used in hospitals, and I don’t know if they still are in this country, because I grew up in a different one, as you can hear. I was a candy-striper when I was a teenager. And as a candy-striper, you were in the hospital and your job was basically to go around with the book cart and to talk to people and to make sure they were okay and to give them books. And it was a volunteer job that, as a teenager, helped breed compassion and empathy, but it also gave you experience to put on a CV when you started looking for your first real job. You know, these are all things that are important.
Sean: Yeah, definitely and particularly in kind of a multigenerational context because it gets missed quite a lot these days.
Stevienna: Exactly, yeah.
Sean: Just going back to the robots for a moment.
Stevienna: Yeah, so I was about to get to the robots; this is what I was about to say. So, tasks that are physically dangerous, these are a very good thing for automation. These are the things that are less ethically fraught, I think. We know that injury from lifting is one of the major problems in care, so let’s have a robot try to do that, to take the weight of the lifting; okay, actually that’s a good task for a robot to do.
Things like delivering large things in a hospital, sure, we’ve got robots already that are starting to do that, [s/l bathrooms 00:23:10] like, you know, it might be okay. But I think we have to weigh these things, and because of the English language, care stands for so much, right. It’s the emotive act between two people, it is a service that we provide, it is something that is purely transactional. You know, I could slap a cup of tea in front of you and I have technically performed an act of care, but I do not care about you or the tea or anything that’s going on, right. Other languages have different words for these things. So, we have a tendency to conflate these things when we talk about robots for care.
Sean: Yeah.
Stevienna: What are the transactional tasks of care to which a robot might be suited and how do they fit in a service which is supposed to provide both, transactional and emotive care, that’s the question.
Sean: It’s funny you said that because the first note I made on my notes researching for this was social care with robots, help with difficult tasks but what about empathy. That was literally the first sentence I wrote and you’ve kind of summed that up with that last paragraph.
One thing I will just ask about, coming back to the robots, which we just did: what’s autonomous? Are there any autonomous parts of these systems that you’ve been working with, or is this an exploration to see how autonomy might play a part?
Stevienna: So, between the first and the second workshops with the councils, the robot did become more autonomous, yes. I mean, autonomy is always a progressive process, because they have to write the software, they might have to adjust the hardware, things like that. So, initially it was done Wizard of Oz style, so you have a remote controller who is doing it. Which, you could actually dress somebody that way; somebody could be sat in a room somewhere, you know, in Bangalore, and do this as a remote task, which has a whole other world of difficulties that I won’t get into right now. But as the robot progressed over time they were able to automate it.
So, when we did the demo on Wednesday I was sat in the chair and I was actually the person being dressed. Yes, we had the approval for that. And what I was doing was basically helping the robot: I would put the garment on, and at this point I was trying to put on a coat, sorry, a dressing gown, rather than a hospital gown, which it was before. So, now it’s going on around the back rather than just something you stick your arms into, so that’s become more complex between the first and the second iteration. I would put the sleeve on my “bad” arm and then hold it up for the robot gripper to come, and then it’s automated from there. So, going from my wrist to my elbow was automated. At the moment our technician is the one saying yes to go to the next step, but the vision is that I would be able to do that verbally, to say, yes, go on. I’m okay, go on.
Sean: Yeah.
Stevienna: And then it goes to the shoulder and then it grabs it around the back and holds it out so that I, on my stool, can reach back and put my arm in, and then it pulls it up for me.
Sean: Yeah.
Stevienna: So, that’s how automated the process is now. The vision is that that will all be completely automated, with kind of break points, where the-
Sean: You can check whether it’s okay, yeah.
Stevienna: -yeah.
Sean: Understood.
Stevienna: And built in with that are automated mechanisms, we demonstrated this, for when the garment might snag on something and the robot goes back to a resting position and then I, or a carer, if I can’t, would have to readjust things to allow the robot to continue.
Sean: Yeah, yeah. Because, I suppose that’s it- You’ve answered my next question already which was about the trust side of that. I mean having those break points, having those reset ideas. Did you trust it?
Stevienna: I did, but I think partly also- So, this is the concept that I’ve been playing with, with my colleague, David Cameron: trusting the deployer. Because normally there’s the trust in the robot, there’s the user and the robot. But he had this idea, from a project that he was working on, that we’ve sort of taken forward through this, that actually trust in the deployer plays a significant role.
And this is the question that I was thinking about, because part of the ethnography is looking at myself, autoethnographically, because I am deeply immersed in these processes. And sitting in that chair I did feel complete trust in the robot, because I trusted our technician.
Sean: Yeah, yeah.
Stevienna: I trusted our technician, absolutely, to make sure that the robot would not hurt me or even touch me, which it didn’t.
Sean: So, I’m wondering if that’s, as you say, because of your knowledge of how it worked and who had worked on it and the process.
Stevienna: So, he was right there at that point, yes.
Sean: Yeah, exactly, so the difference of, you know, maybe somebody being trained in how to use it, turning up at somebody’s house for the first time and saying hey I’ve got this new robot let’s use it, I’m wondering how that would feel.
Stevienna: So, this is something that came up on Wednesday in the workshop as training. It’s not just the training for the user but training for the care workers that might need to be working with it, training for the people doing repair; you know, there’s got to be some form of maintenance and repair facility built into this whole system within which the robot is going to be embedded. But I could feel, sitting in the chair, that first tingling of: if I were somebody who was disabled on one side, I could see me becoming actually quite fond of this machine that is helping me get dressed. And I think that’s part of me, who I am, as somebody who likes tech. Right, that wouldn’t necessarily be the case for everyone. There would be some people for whom it would be a horror.
Sean: Yes, yeah.
Stevienna: But for me, I could see where the concept of cobotics- Actually, you know, I could viscerally feel that beginning to work, and that was a bit of a revelation to me as well. I wasn’t expecting to already start to feel kind of fond of the robot.
Sean: Yes, I mean, I have some friends who are decidedly untechnical and they have an automated robot mower, and he had a name, Monty the Mower. You know, so these things do gain names; yeah, you’re right, you become fond of them.
One of the other projects we’ve talked about on the podcast was the embodied dancing-with-robots one. And some of the dancers became attached to certain robot arms, which is kind of a similar thing again. Like you say, you become fond of a device.
[00:29:44]
Stevienna: It is our incredibly human tendency to anthropomorphise and to create relationships even with objects that we know are inanimate. We name our cars, some people name their phones. You know, God knows, we all clutch our phone like it’s our baby, our very best friend, whatever. This is innate in us, so we need to work with that rather than trying to work against it. But also not to abuse it, and not to create trust where it’s not deserved yet.
Sean: Absolutely, so thinking sort of forwards now, what happens next, you know, when does the project finish and what are you hoping that happens after that as an outcome?
Stevienna: Sadly, it’s finishing at the end of June. So, what will happen after that is a lot of data analysis and paper writing. I mean, I would be very happy if the dressing robot continues in development, you know, continues on that pathway towards something that is actually eventually going to be deployed. I would very much like to stick with it, and just stick with that part of the project, but it’s always dependent on, you know, what funding we can find as we go on, as these things are.
But, yeah, I mean, I started fairly sceptical, I have to say. It seemed like dressing was a task that was not really going to be conducive to automation, except for a very small group of users who perhaps were paraplegic or something but otherwise had upper body strength, that kind of thing. But as the robot has grown and changed, my opinion of it has definitely grown and changed. And I actually can see that it could be incredibly useful for a group of users that is hopefully large enough to bridge that gap, you know, that valley of death of innovation, and, you know, get to the market. Because if it can be made as we’re currently envisioning it, I think it would be very useful. And I think it is something that could be in people’s individual homes, which has often been the sticking point, yes.
Sean: That’s about all we have time for today. Stevienna, it’s been brilliant to speak to you, I’ve really enjoyed hearing about your fond friend the robot.
Stevienna: It has a name, actually, it’s called Frankie.
Sean: Thank you for telling us about Frankie, and thanks for joining us on the Living with AI Podcast.
Stevienna: Thank you so much, Sean, for inviting me, it was very fun to talk about it.
Sean: If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited and it was presented by me, Sean Riley.