Living With AI Podcast: Challenges of Living with Artificial Intelligence

TAS for Health and Social Care

Sean Riley Season 3 Episode 13


This projects episode features four TAS Hub projects associated with Health and Social Care:
 
DAISY - Tunde Ashaolu
Verifiably Human-Centric Robot Assisted Dressing - Yasmeen Rafiq
Empowering Future Care Workforces - Cian O’Donovan
Imaging predictors of Oesophageal Cancer MDT patient outcomes - Navamayooran Thavanesan

Podcast production by boardie.com

Podcast Host: Sean Riley

Producers: Louise Male and Stacha Hicks

If you want to get in touch with us here at the Living With AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub and the Living With AI Podcast.

 

The UKRI Trustworthy Autonomous Systems (TAS) Hub Website



Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 


 

Episode Transcript:

 

Sean:                  Welcome to the Living With AI podcast. AI, unless you’ve been under a rock, stands for artificial intelligence and we hear about it all over the place on a daily basis. It’s not an exaggeration to say AI is changing our lives in all kinds of ways and here we discuss different aspects of AI under the umbrella topic of trust. Today, we’re discussing how TAS, that’s Trustworthy Autonomous Systems, relates to health and social care. In a moment, we’ll meet several researchers who’ve been involved in projects in this field, but first just a bit of housekeeping. This is season three of Living With AI so there are a couple of seasons’ worth of episodes for you to discover. You can search TAS Hub or have a look in the show notes for the relevant links. We’re recording this on 22 May 2023, so now it’s time to meet today’s guests. Joining us today to discuss their projects are Tunde, Yasmeen, Cian and Nav. I’ll just go round the room in no particular order, and if you could let us know your name and the title of your project, we’ll get a bit more detail afterwards. I’m just going to go round in the order you are on my screen, so I’m going to start with Nav.

 

Nav:                   Hi everyone. My name’s Nav. I’m a [audio distortion 00:01:11] and also doing a part-time PhD and I’m looking at oesophageal cancer. That’s my interest. 

 

Cian:                  I’m Cian O’Donovan. I’m a social scientist at University College London and I’ve been leading a project called Empowering Future Care Workforces. The idea there is to understand how health and social care professionals can benefit from using assistive robotics on their terms.

 

Yasmeen:         My name’s Yasmeen Rafiq. I’m a research associate at the University of Sheffield and I’m leading the project Verifiably Human-Centric Robot Assisted Dressing, which is about a robot assisting stroke patients to get dressed. I’ll go into more detail about the project later on in the chat. So yeah, I look forward to the group chat. Thank you.

 

Tunde:               My name’s Tunde Ashaolu. I’m a doctor, a consultant in emergency medicine currently working at York and Scarborough Hospital. The project I coordinate is DAISY. DAISY stands for Diagnostic AI System for Robot Assisted Triage. It’s a project that we’re undertaking with the University of York and it is to help manage the overcrowding that we have at the front door of access to acute care.

 

Sean:                  Well it’s really fantastic to have all four of you here on the podcast today and thank you so much for your introductions. Tunde, can you tell us a bit more about the Diagnostic AI System project?

 

Tunde:               Thank you very much. For ease of communication, I’ll call her DAISY because it makes it easy. It’s common knowledge that we struggle in our country, and globally as well, to manage the front door of our acute services. The emergency department, GP surgeries, ambulances: people struggle to get the right care when they need to have care. And it’s a very difficult thing to automate; people have tried in the past and it hasn’t actually worked. What we’ve tried to do is rethink the entire process, look at the things that we do on a regular basis and see if we can put that into an automated form, perhaps add a bit of AI to it as well, and then get a robot to perform the tasks. You know, once you remove these tasks, what’s remaining for human beings is much less, so we’re not going to eliminate human beings. It’s impossible to do that. Which is why we called it robot assisted triage. If we get that triage right, we know that we’ll get treatment started as quickly as possible, and we know that the quicker you start a treatment, the better the results, the lower the mortality and the lower the morbidity. So it’s a very, very simple thing. If you can treat people as quickly as possible you save more lives. Treatment depends on how quickly you can triage them, so if we can get our triage better then we will save more lives. And you know that machines do some things better than human beings, but they can’t think. So the idea is for us to think for them and make them continue the thinking in our absence.

 

Sean:                  Fantastic. Thank you so much. Yasmeen, could you tell us a bit about the project you’ve been working on with stroke survivors? Thank you.

 

Yasmeen:         Yes, of course. So I’m looking at robot assisted dressing, which has the potential to support stroke survivors’ independence, something that is positively associated with quality of life. To facilitate safe human/robot interactions, we need to understand experts’ views on assisted dressing through focus groups, interviews with stroke survivors to see what they think of robot assisted dressing, and co-design workshops including stroke survivors, healthcare professionals, caregivers and also their family members, to take on board their views on robot assisted dressing. These will then help us develop computer models, specifications and algorithms. So we want to create robot assisted dressing which can be trusted by people, and we want to mimic the same human-to-human collaboration you see between a patient and a caregiver, so we can replicate that between a human patient and a robot assisted dressing caregiver.

 

Sean:                  Thank you. Nav, can I ask you about the imaging predictors, the cancer project you’ve been working on?

 

Nav:                   Yeah, so I’m working in the Walsh laboratory at Southampton and we’ve got an interest in oesophageal cancer. One of the things that we know is that every cancer currently managed in the UK is managed through a multidisciplinary team; that’s been around since the 1990s and it’s led to incredibly good outcomes. But in the meantime, we have a mix of domain experts in one room at one time who are under quite a lot of work pressures in terms of caseload, time for preparation, different voices in the room, etc. That can lead to a degree of variability in the decision making process. And much like Tunde mentioned earlier, we’re interested in that decision making process quite early in the patient’s care and how we can use data to try and predict outcomes but also to model the decisions themselves, try and improve automation perhaps in that workflow and, in doing so, try and improve the efficiency of the MDT process. And also, hopefully, reduce the variability that we see in that decision making, which in turn will hopefully lead to better equality of decision making in healthcare. One of the ways in which we want to do that is to look at predictors of outcome using imaging. So histopathology, but also radiology, as we’re acquiring a data-rich environment that the MDT is routinely used to analysing with the human eye, and we would like to see if we can extract important information, perhaps at the pixel level, which the human eye can’t necessarily pick up on but actually may speak to the patient’s long-term outcomes and processes.

 

Sean:                  So you mentioned MDTs there, that just stands for multidisciplinary team?

 

Nav:                   Absolutely right, yes. The multidisciplinary team. It’s a group of individuals with different speciality expertise: a surgeon, a medic- A physician really, an oncologist, a medical oncologist or clinical oncologist and, in our case, even a gastroenterologist, research nurses but also specialist cancer nurses, physiotherapists. The gamut is quite broad, but that’s exactly the reason why, because we bring all our expertise to one place at one time.

 

Sean:                  Thank you, fantastic. Cian, could you tell us about the Empowering Future Care Workforces project you’ve been working on please?

 

Cian:                  Absolutely. I guess our starting point was robotics in health and social care coming out of labs and being embedded in places like hospital wards, physiotherapy clinics, people’s private homes, and private and public practice. There’s a ton of work already in this sector looking at technical functionality, work from clinicians looking at clinical upside, or work by social scientists like me, but a gap in some of this research has been a focus on putting health and social care professionals themselves at the centre of the research. So that’s what we wanted to do. We had a 12-month scoping project to ask, okay, how can health and care professionals themselves, people like nurses, physios, occupational therapists, be empowered to use assistive technologies, to work with assistive technologies in ways that matter to them, to do their jobs in a way that matches their values, matches best practice, in ways from which they derive wellbeing and satisfaction? So we asked people like nurses and physios, you know, what is it about their jobs that they value today? What is it about the tasks? What kind of skills, what kind of capabilities, what kind of knowledge are needed to do those tasks? And together with about 100 different people like that we thought through the current developments of assistive technologies in these settings and the implications they would have for new skills, new tasks and new capabilities that focus on need.

 

[00:10:06]

 

                            What did people tell us? They told us first of all they want to keep doing the jobs they’re doing today. They told us there’s going to be a huge diversity of capabilities they’re going to need. Not just skills and capabilities for people on the front line, for manual handling, for ensuring that when there’s touch, and intimate touch, it’s respectful, for ensuring that assessing safety is done at a really high level, but also capabilities needed right through our organisations. So for the folks who are procuring robotics and assistive technologies, often at local authority level, maybe not even in a clinical setting. For the legal and governance people who are making sure the contracts are good. For folks who are ensuring that best practice is followed in regards to data and other technologies. So I guess we think that the framework we’ve developed there is going to be important for all those kinds of people and how we assess how these technologies are embedded in practice, and we think there’s something there for developers of these technologies to assess both the social and the technological implications of embedding technologies in new places.

 

Sean:                  All of these projects seem to be in a big way about that kind of interface between humans and technology. Humans and robots. Humans and AI. And I know that in the past we’ve had Prokar Dasgupta on the podcast talking about robot assisted surgery and his take was that people actually prefer and choose the robot assisted surgery where possible. Is that the same in all of these fields do you think or have we got a trust barrier with the patients, with the public, with people to get past?

 

Nav:                   Yeah, I would absolutely agree with that. I think certainly for my own project and our sort of interest, one of the things we wanted to focus on was actually introducing the public into the design process of this assisted decision tool, if you like, quite early on. So one of the things we’re going to be doing as part of our project is also focus groups, not just with the patients but also the clinicians. It’s effectively a two-way conversation that these sorts of things encompass, if you like, and they have massive implications for our patients, especially for a cancer patient, you know? So when we’re moving towards, as I suspect in future we will, more and more AI interaction with the decision making that we make in a clinical context, that’s going to have massive implications for the patient in terms of how they trust us to make those decisions, how transparent they are and how explainable they are. Which in turn will ask questions of the explainability of those models that we work on or the algorithms that we train as well. So I absolutely agree. My perspective certainly is that we have to engender trust with both these key stakeholders, the patients but also the clinicians, so there’s buy-in from both ends, and that makes for a slightly more seamless transition in the future.

 

Sean:                  And I suspect the clinicians also kind of have to sell this on to the patients as well. Cian, do you think this is something that will just come with experience as people realise how good these systems can be? Or is this just- Are there any other ways we can go about helping improve trust in these systems?

 

Cian:                  I think it’ll come with experience, absolutely, and what underlies that experience, maybe, is a process of learning, right? So how are folks throughout organisations going to learn? How they’re going to learn from early adopter organisations is a really important thing. What are the infrastructures we’re putting in place so that folks in one part of social care can learn from other people, maybe in the same integrated care system, where maybe there are barriers there? So I think what has to come with the implementation in one part of the health or care system is the infrastructure that will help transfer that knowledge and embed it contextually.

 

Sean:                  And just thinking about all these different situations, I was thinking, Tunde, your project deals with people in those absolute key moments of literal life or death, with potentially seconds meaning everything in decision making terms. Is that going to be more of a challenge or do you think it’s an easier sell?

 

Tunde:               Yeah, it’s more of a challenge. You’re very right. Most of the patients we’re catering to are coming to see us in their most vulnerable states. They’re scared, they’re petrified, they’re panicky, and that’s when they need to trust the system more. It is easier for human beings to trust technical systems to do technical things. The kind of skills that we need for such people in their vulnerability are non-technical skills that robots and AI may struggle to deliver. So right from the very beginning we’ve had this conversation about how we get our patients to trust these machines. Now we have a twofold thing whereby the patients have to trust the machines, but the clinicians who are accepting the machine into their midst also have to trust what the machine is telling them. So yes, we have spent a lot of time discussing trust. However, we have managed to build things into the product, into our project, to try and stimulate trust in people. The first one is that we believe that patients and clinicians trust things that are explained to them. So our machine will be coming out with answers but also with the reasoning behind those answers, so that people can actually see how it is thinking. It’s also not pure coincidence that we’ve chosen a humanoid machine, because we think that people tend to trust things with eyes and noses and mouths a bit more. The language in which we’ve written the communication from DAISY has also been reviewed by people who are experts in human factors, who are able to tell us how to phrase these statements, how to phrase sentences. From the initial tests that we’ve done, we’re very happy that people can trust the system, but it hasn’t come on a platter of gold. We’ve had to spend every minute trying to understand how to build trust into our system.

 

Yasmeen:         So yes, this is actually a very important question, because we had some discussions with stroke patients just to get their opinions on what they think of having a robot assistant. These robot systems would be assisting patients in their own homes, so it’s a technology entering the homes of these vulnerable patients, and we got mixed feelings. There were some patients who were really looking forward to this new technology, but we did have patients who said, actually, no, how can we trust a robot to do this? One of their main concerns was losing out on that human factor: if the robot replaces a human caregiver, they won’t really be interacting with a human any more, and how can they emotionally engage with a robot? So some were actually quite fearful of it. So again, one of the things we’re trying to do in our project is bring together the stroke patients, clinicians, caregivers and family members, take on board these concerns and really try to address them. So yeah.

 

Sean:                  Yeah, because often the care is kind of multi- Yeah, it’s not just physical help getting out of a chair or getting dressed, it’s also a social interaction, isn’t it? Talking of social, Cian?

 

Cian:                  You know, in terms of trustworthiness, and particularly in terms of decision making, which is something Nav mentioned there, a really important issue that came out in our analysis talking to folks was a deep uncertainty about where decisions about appropriate levels of autonomy are made. Who’s designated to make these decisions? And this is really important because these kinds of decisions will dictate not only how robotic systems are expected to work but also how and what health and care professionals are expected to do. So as we see these shifts in levels of autonomy, we might expect shifts in the roles and responsibilities of folks, and I think who’s going to mediate tensions between, on the one hand, robotic functionality and, on the other hand, human empowerment within care organisations or within innovation processes is a really, really important issue, and I think we need a wide range of professionals and developers around the table talking about it.

 

[00:19:39]

 

Tunde:               Yaz referred to people having mixed emotions, mixed feelings about this. It’s one of the things that we also found in our project. Some people were very suspicious of robots asking them personal questions, while a lot of people were actually happier being asked by robots. Some of the patients, for example, have things that they consider embarrassing and struggle to express to human beings, and they felt more at home just punching buttons into a robot, so they preferred the robot. So again, that is buttressing what Yaz said about different kinds of people. Then again, using the word responsibility, which is a great one for us. When you make decisions in medical practice, the big question has always been where does the responsibility lie? For those of us who work in a clinical environment, the consultant is where the responsibility lies. So I’m sitting here with you now; there are doctors on the shop floor doing work in my name and I’m responsible for what they’re doing, even though I’m not there. So responsibility is a big thing when we talk about trust. Now, the way to get people to accept AI in a clinical environment is to discuss with them who is going to be responsible. So DAISY, for example, is geared to work in a team whereby, even though DAISY is assisting with triage, the ultimate responsibility still rests with a human consultant.

 

Sean:                  I think that’s really important to hear isn’t it? Yasmeen, you had your hand up?

 

Yasmeen:         Yeah, I just wanted to reiterate that same comment: autonomy is good, but I think ultimately we do need to keep the human in the loop, and ultimately the responsibility should lie with a human expert. Again, our robot assisted dressing will really be assisting patients in their own homes, but if things go wrong, you know, the patients may not really have the expertise, they may not be IT savvy at all, they wouldn’t know how to fix anything, so whose responsibility would that be then? So we do want to make sure that an IT expert or a caregiver is able to operate the robot assistants remotely. That’s very important. So yes, ultimately the decision should lie with a human expert and we should always have a human expert in the loop.

 

Nav:                   I would echo that as well, and I think that’s the same for my project, because ultimately what you’ll probably find is that across all of our projects there is this common theme: where patients are involved, it’s hard to envisage a situation where the human is excluded entirely. And that’s probably sensible. And actually, when we’ve presented this work at conferences, the clinicians in this context have very similar concerns. Are we being replaced? How do we trust the result of the actual outcome? What kind of models are you using? How do you explain the models, the logic? As Tunde mentioned for his project as well, the explainability is so key, and we know from clinical studies where deep learning models have done astonishing things, but they’re black box models, which makes regulatory approval challenging. Not impossible, but challenging. Whereas an area of explainable AI, which will at least give us a better shot at explaining that logic, will help to build some of that trust as well. But if the clinicians don’t buy into it, it won’t get implemented. If the patients don’t buy into it, it won’t get implemented. And I think that’s why it’s so key to address both sides of that equation at all times.

 

Sean:                  I think it’s really important to mention that whole black box idea you just described, because quite often deep learning systems, neural networks, all of these different types of technologies have been trained on a dataset to produce outputs that are measured against annotated data from experts, but once they’re up and running on their own, like you say, collaboration is really key here. Is there any issue with bias in these systems? Do we have to worry about that when you’re using them in a clinical sense?

 

Yasmeen:         Yes, so that is one of the key challenges that we’re trying to address with the robot assistants. Again, you know, stroke patients come from diverse backgrounds: people with different accents, different skin tones, different facial expressions. When we train our neural networks, we want to make sure we include people from all these diverse backgrounds in the training, and that’s very important. And one of the difficulties that we have at the moment is that we need to look at different facial expressions. So what kind of expressions do people make if they, for example, are in pain, or fear? And especially with stroke patients, because they have diminished facial muscles, how they express fear may be very different from healthy patients, and at the moment we don’t really have a proper database of stroke patients’ facial expressions. So one of the things that we’re trying to do is run individual interviews with stroke patients. We have an occupational therapist who will ask certain questions to elicit certain emotions from the stroke patients so we can capture the different facial expressions associated with, like, pain, happy, sad emotions, and develop a dataset which we hope others can then build on. But training our neural networks at the moment is proving very difficult, because even for the interviews that we’re going to be conducting, it’s very difficult to recruit patients. You know, we sent out adverts to various organisations but we only really managed to recruit, I think, about four or five patients. So the other thing that we’re looking at is how we can perhaps generate synthetic data using the healthy patients’ datasets that are available, and then use deepfake techniques to put those emotional expressions onto the faces of people who have had a stroke. So that’s the other solution that we’re looking at. But then of course, you know, the robot needs to be robust to different accents as well. So if someone says stop, the robot needs to understand that stop means stop, regardless of what accent a patient has. So these are the kinds of challenges that we’re looking at in our project, and it’s very important.
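
To make the data-balancing idea concrete: one common way to stop a model over-fitting to over-represented groups is to reweight how often each sample is drawn during training. The sketch below uses PyTorch’s WeightedRandomSampler; the dataset, group labels and label names are invented for illustration and are not from the project Yasmeen describes.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical data: 16 features per sample, an expression label, and a
# coarse demographic group id (e.g. a skin-tone or accent grouping).
features = torch.randn(1000, 16)
labels = torch.randint(0, 4, (1000,))   # e.g. pain / fear / happy / sad
groups = torch.randint(0, 3, (1000,))   # heavily imbalanced in real data

dataset = TensorDataset(features, labels)

# Weight each sample inversely to its group's frequency so every
# demographic group is drawn roughly equally often per epoch.
group_counts = torch.bincount(groups).float()
sample_weights = 1.0 / group_counts[groups]

sampler = WeightedRandomSampler(weights=sample_weights,
                                num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for batch_features, batch_labels in loader:
    pass  # train the expression classifier on the balanced batches
```

Balanced sampling is only one option; synthetic augmentation of the kind Yasmeen mentions attacks the same imbalance from the data-generation side rather than the sampling side.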

 

Cian:                  Where bias came up for us in our project, where it was important, was around the issue of inequalities. And it was less about the inequalities in the data, but rather how new and emerging technologies might sustain, or make worse, inequalities that already exist and are pervasive across our society. One of the strongest things I heard in our workshops was one respondent saying, ‘Cian, if you want to understand how to empower staff in the future, you need to understand how technology disempowers them today.’ And that made us think about access. So who gets access to emerging technologies? Are there postcode lotteries? Are there some areas that can afford new robotics technologies and some that can’t? How do we distribute them across the country? And folks told us that if they could access one set of technologies, let’s say in Southampton, it would be really important that they could access the same stuff in Glasgow. But another area where this is important is access across patient pathways. Say, for folks who can access reablement technologies, getting physio in clinical settings: when they leave hospital, can they still get access to them at home in their community? Those are really important questions, I think, for procurement, for governance and so on.

 

Sean:                  How does AI cross those postcode boundaries? I don’t know. It’s down to different trusts though, right? That’s obviously a wider question.

 

Cian:                  Well, just a real quick one, Sean: one of the issues there, of course, is the kind of form that these technologies take. AI in particular requires centralised computing and centralised infrastructure, and that doesn’t necessarily lend itself to distributed technologies in community settings. So we may have tensions in the very ways in which huge resources are required to be pulled together to do things: does it pull resource from one CCG, one ICS, from neighbouring ones, and so on? So that’s future work to be done, I think.

 

[00:29:38]

 

Tunde:               The concept of bias in what we’re trying to do is one that we’ve handled rather delicately, because we actually want some bias in the systems that we’re building, and the reason is that we’re human beings: we generalise. In the acute phase, for example, I have little time to find out what is wrong with my patient. So even though we all preach against stereotyping, I have to resort to a bit of stereotyping to be able to manage my patients. If you come in screaming help, help, help, with your arm around your throat, I’m going to assume that you’re choking, and that’s the way medicine works. So we have had to build a bit of bias into our system, but we cannot allow that to run wild. We need to keep an eye on it to make sure it doesn’t become counterproductive to what we’re trying to do. However, we also know that some of the people who come seeking help in their acute phase are disadvantaged because they don’t speak the language, and we struggle to find people who can speak languages. We address that by presenting our interface in multiple languages. So in the old days, you’d go around looking for someone who speaks Polish or whatever, but now, because it’s AI, because it’s automated, you can choose your language and have your conversation in your language of choice. Yes, the result will be printed out in English for the clinician who’s going to look at it, but that disadvantage has been taken care of. So even though I said that we’ve deliberately built some bias into it, we’ve also built it in such a way that it eliminates some bias. Obviously medicine is a dynamic process. The boundaries that we’ve drawn for the data today may be wrong tomorrow and we’ll be prepared to change them, as long as we don’t lose our focus on where these boundaries are. When I say boundaries, I mean the bias boundaries. As long as we don’t lose focus and we have a plan to escape it, I think we’ll be quite happy.

 

Nav:                   One of the challenges is the size of the data. Taking histology as a nice example, because we’ve done a bit of work on that in our lab in the past, what we initially tried to do is scan an entire slide of a patient’s tissue biopsy, which, with our scanners, can produce about 2 GB of data quite quickly. So then we need to think about how we’re going to handle that data, so we break it up into patches and look at those patches at a much smaller scale and try and use that. And the same thing with scans, high density scans. Different hospitals have different protocols, different scanners, different software, different interfaces, which can generate a lot of different challenges in how we handle the data, especially when it comes to imaging and computer vision. So right now, one of the challenges we have to face is finding a way of downloading that securely and making sure that we can annotate it as well. Part of the process will be annotating those images if we’re using supervised learning, and we have some wonderful radiologists helping us with that, and then passing that through to whichever models we develop and train to handle that level of detail. And that could be pixel level, it could be simply histol- You know, what’s the word I’m looking for? Histograms. It could be textural changes. So that’s stuff that we’ve still got to figure out how to address, and that’s an active challenge and an active process that we’re working on at the moment. So I don’t have the readymade solution for you just yet, but we’re hoping to address that, you know, with lots of very clever people across the board, in TAS Hub itself but also in radiology departments, scientific computing and the hospital as well, to try and find a common language to work with so we can enable that pipeline.
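
For readers curious what the patching step Nav describes can look like in practice, here is a minimal sketch of tiling a whole-slide image into fixed-size patches with the openslide-python library. The file name, patch size and background filter are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch: tile a whole-slide histology image into patches.
# Assumes openslide-python is installed; the path and thresholds are
# illustrative, not taken from the project described above.
import openslide

PATCH = 512  # patch edge length in pixels at the chosen level

slide = openslide.OpenSlide("biopsy_slide.svs")  # hypothetical file
width, height = slide.dimensions                  # level-0 dimensions

patches = []
for y in range(0, height - PATCH, PATCH):
    for x in range(0, width - PATCH, PATCH):
        # read_region returns an RGBA PIL image at the given level
        tile = slide.read_region((x, y), 0, (PATCH, PATCH)).convert("RGB")
        # Crude tissue filter: keep a tile only if >10% of its pixels
        # are darker than near-white background
        if sum(tile.convert("L").histogram()[:200]) > 0.1 * PATCH * PATCH:
            patches.append(((x, y), tile))

slide.close()
print(f"kept {len(patches)} tissue patches for annotation/training")
```

In a real pipeline the patch size, magnification level and tissue filter would be tuned to the scanner and to the model being trained.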

 

Sean:                  Because I can imagine you need all that detail, which results in the huge file sizes, but then you need to simplify that to be able to train the AI models on it, yet simplifying it might lose the detail you need. It’s a catch-22, I guess.

 

Nav:                   Exactly. And one of the things we have to keep in mind as well is that it is wonderful to have access to high performance clusters, for example, and the university level resources that we might need for this project, but we have to think about the long-term implementation and the fact that ultimately our models will be deployed on NHS infrastructure, with potentially slightly older and slightly less powerful computers, and how we navigate that. If we have a wonderful model that we can’t actually use on a day-to-day basis, then we have to find a balance in terms of the physical infrastructure as well to be able to handle that process.

 

Yasmeen:         One of the other challenges that we’re looking at in our project is making sure that the robot assistant doesn’t hinder any rehabilitation therapy that the stroke patient is undergoing, because the aim really is to help the stroke patient recover from the stroke. So we don’t want a robot assistant that over-assists and hinders that progress. So again, we’re talking with clinicians and seeing how we could take certain rehabilitation techniques and adapt those into the robot assistant, so that, as the patient progresses with their therapy, the amount of assistance is reduced as well.

 

Sean:                  A potential problem with over-trust, I guess.

 

Yasmeen:         Exactly. So at the beginning, in the early phase of the stroke, the patient will probably give more of the autonomy to the robot assistant. But over time, as they start making progress, that shared autonomy should slowly go back to the human patient, you know; they should be able to regain their independence. So when we’re running these focus groups with clinicians, this is one of the things that we’re looking at: how we can build that into our models, and also the techniques a physiotherapist may use when they work with stroke patients, and how we can adapt those into the robot assistants as well.
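
To make the tapering idea concrete: one very simple way to encode shared autonomy that hands control back over time is to blend the robot’s and the patient’s control signals according to a recovery score. The sketch below is purely illustrative; the recovery measure and the linear blending rule are assumptions, not the project’s actual control scheme.

```python
def blended_command(robot_cmd: float, patient_cmd: float,
                    recovery: float) -> float:
    """Blend robot and patient control signals.

    recovery: 0.0 (early post-stroke, robot leads) to 1.0 (fully
    recovered, patient leads). Purely illustrative - a real system
    would use clinically validated recovery measures.
    """
    recovery = min(max(recovery, 0.0), 1.0)  # clamp to [0, 1]
    assistance = 1.0 - recovery              # robot's share tapers off
    return assistance * robot_cmd + recovery * patient_cmd

# Early in rehabilitation the robot's contribution dominates...
print(blended_command(robot_cmd=1.0, patient_cmd=0.2, recovery=0.1))
# ...and later the patient's own movement dominates.
print(blended_command(robot_cmd=1.0, patient_cmd=0.2, recovery=0.9))
```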

 

Sean:                  So the robot’s care- Well, assistance, kind of tapers off, and that becomes part of the plan for rehabilitation, I suppose.

 

Yasmeen:         Absolutely, absolutely, yes. 

 

Tunde:               Inevitably, the robot and the AI system will learn from the things they’re doing, because they can easily be programmed to learn. One of the things we have to be very careful with, certainly on our project, is to ensure that the learning by the AI system is not reintroduced into practice automatically. Just because a robot sees associations and connections between things doesn’t necessarily mean that it makes sense in the real world, in the real clinical world. There have been several examples in the past where AI has picked something up and run with it and it’s turned out to be entirely disastrous. So we’re very cognisant of that and we’re keeping track of it. However, we’re developing that learning because we think that that is how we’re going to generate research questions for clinicians. When the robot, when the AI system, can see associations, that’s fed back to humans, who will look through it and say whether it makes sense, and then we generate research questions for the clinicians.

 

Sean:                  That sort of feeds into something that’s popped up on the podcast before which is that the AI systems don’t always have the whole context, right? So they don’t necessarily see the wider picture and sometimes that’s great because you want them to have tunnel vision and do the job they’re doing but sometimes I know that certain clinicians have found additional problems because they’ve noticed something else while they’re doing something. 

 

Nav:                   Two things really. One is that we also want a little bit of bias on our project, going back there just for a second, because it helps to separate our classes, and we’re trying to predict treatment decisions, so actually that’s helpful to us. But also, one of the things that’s really helpful about our project is it’s giving us an insight into decision making in the first place. It’s also feeding back to give our clinicians a bit of an idea of what they might be doing that they’re not necessarily aware they’re doing. And that doesn’t necessarily mean that it’s wrong; of course a lot of this is experiential learning, and that’s how medicine evolves and that’s how we learn. So for example, one of the things that we’ve noticed in our unit, which we’ve focused our training data on in the first instance, is that age is playing a surprisingly large role. It has the highest importance in many of the models that we’ve trained so far with simple clinicopathological data. Age has historically been used for our patients as a surrogate marker of frailty and fitness, but in fact, over the years we’ve evolved new metrics for that. What it does suggest is that sometimes age may still play into a subconscious bias that perhaps we’re not always aware of, and sometimes being able to look at the feature importances of our models gives us an insight that we perhaps hadn’t considered to begin with. That can be really helpful for feeding back to our clinicians and saying, are you aware this is going on, and is that a problem for you or not? And that can also generate some research questions too.
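
For listeners who want to see what inspecting feature importances looks like in code, here is a minimal sketch using scikit-learn. The data is synthetic and the feature names are invented for illustration; it merely mimics the kind of age-dominated importance ranking Nav describes and is not the project’s model or dataset.

```python
# Minimal sketch of inspecting feature importances in a trained model.
# The data here is synthetic; in the project described above the inputs
# would be clinicopathological variables and the target a treatment decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
age = rng.integers(40, 90, n)
stage = rng.integers(1, 5, n)
frailty = rng.normal(0, 1, n)

# Synthetic treatment decision that (by construction) leans on age,
# mimicking the kind of hidden weighting described in the discussion.
decision = (0.05 * age + 0.3 * stage + rng.normal(0, 1, n) > 4.5).astype(int)

X = np.column_stack([age, stage, frailty])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, decision)

for name, imp in zip(["age", "stage", "frailty"], model.feature_importances_):
    print(f"{name:8s} importance: {imp:.2f}")
```

For tree ensembles the importances fall straight out of the fitted model; for black box deep learning models of the kind Nav contrasts, separate explainability techniques would be needed to get an equivalent view.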

 

Sean:                  I’d just like to say thank you to all of you for joining us today. It’s been just a great chance to talk to some extremely experienced people and hear what’s going on in different ways relating to health and AI systems. Thank you very much for joining us Tunde. 

 

Tunde:               Thank you, you’re welcome, thank you. 

 

Sean:                  Thank you Yasmeen.

 

Yasmeen:          Thank you. 

 

Sean:                  Thank you Cian.

 

Cian:                   Cheers Sean.

 

Sean:                  And thanks Nav.

 

Nav:                    Thank you very much.

 

 

Sean:                  If you want to get in touch with us here at the Living With AI Podcast, you can visit the TAS Hub website at tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Ltd, our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.