Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

Season: 1, Episode: 8

Living with Robotic Surgery

00:40 Paurav Shukla
00:43 Christine Evers
00:48 Age Chapman
01:04 Sean Riley

01:22 FTC settles with photo storage app that pivoted to facial recognition (The Verge)
05:21 Literatin Chrome Plugin to Rate Ts & Cs

10:00  Prokar Dasgupta
16:48 John Wickham (Wikipedia)
33:30 Telegraph Article with Professor Dasgupta

45:10 Rob Miles on Computerphile

Podcast production by boardie.com

Podcast Host: Sean Riley

Producer: Louise Male

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at
www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.


Episode Transcript:

 

Sean:                  This is Living With AI, where we look at artificial intelligence and the intersection with us, the non-artificial intelligence in most cases. Does AI improve our day-to-day lives? Can we trust it? Should we? Today we're looking at one of the most trusting situations anyone can find themselves in, surgery. We're talking about robotic surgery. We'll hear from Prokar Dasgupta, who became the first professor of robotic surgery at King's in 2009. As you'd expect, he's an expert in his field, a clinician and a scientist. But before that, I'll introduce our panel. 

 

Braving the panel today are regulars Paurav Shukla and Christine Evers, and for the first time on our panel, we welcome Age Chapman. Paurav is Professor of Marketing at Southampton Business School. Christine lectures in computer science at the University of Southampton and specialises in machine listening. And joining us for the first time is Age: she's Associate Professor of Computer Science in the Web and Internet Science Group at the University of Southampton and also runs a science fair for local primary schools. So finally, someone who's used to explaining things on my level.

 

If you're wondering who I am, my name is Sean Riley and if it moves, I usually point a video camera at it. But for this, I get to put the camera away, sit back and fiddle around with microphones. We're recording this on the 14th of January 2021. So just a general catch-up with everybody. Hope you all had a good break over Christmas. I noticed the story in The Verge earlier this week about a company that's been ordered to delete user data and any algorithms trained on it.

 

Christine:          I thought it was really interesting, not so much for the work that was lost, but from the perspective of the people, right? So a lot of this is people put their data into the system for a very specific reason and gave consent for its use for a very specific set of reasons that were obviously judged as violated. And more than that, even if it hadn't been violated originally, people's minds change over time, right? We learn new things and we're ever learning and evolving and changing our opinions. So how do we know that even if they were given consent the first time that that consent lasted as long as it was needed to? It's a hard problem in this world. 

 

Sean:                  It is. I mean, the company was called Ever and, by the sounds of it, they went through a name change over time, but they retained all this data. I mean, they've been ordered to delete photos and videos of users. By the sounds of it, they were a cloud storage provider who, I don't know if they specialised in photos, but people had a lot of photos up there. And then they trained algorithms using those photos, seemingly without the consent of the people. How many trust issues are in that? I mean, just so many. Paurav?

 

Paurav:              I would say this is really a challenge to all of us, not just at a personal, individual level, but also at a societal level. And this spans across the globe. We have to understand that, while sitting in the UK nowadays, when I'm thinking about different apps, for example, if you remember that app called FaceApp, which took your photograph and then showed you how you would look in 10 or 20 years’ time, and so on and so forth. Or many, many other apps; Microsoft also tried to do a ‘what is your age’ app. So when you think about all these different types of apps, which are using different kinds of AI mechanisms, the problem is: where do they originate from? And in that country, what kind of rules apply with regards to individual ethics, with regard to societal ethics and morality around it, as well as the legal framework that supports it?

 

So, for example, if I'm an app from an emerging market and, you know, in that market there is no legal ramification, I could do anything with that data. And so this becomes a nightmare come true for most of us, when you think about it at an individual level, because for us, it's one minute's worth of, you know, just checking it out and putting it on social media, but afterwards the ramifications last, you know, much longer.

 

Sean:                  Even if they did ask for consent, how many of us, put your hand up now, have ticked just ‘yes, I agree’ almost every time we're asked to agree to some terms and conditions, right? Has anyone gone through the 26 pages of the iTunes agreement, or however many it is? I just made that number up. How many pages is the iTunes agreement? Oh, people are putting their hands up now. Does that mean you have?

 

Paurav:              It's a funny one because, about two years ago, I was sitting with the legal counsels of Amazon, Facebook and, funnily enough, another big mega-corporation in the digital world, all together. And, funnily enough, I was moderating the panel. And so I asked them the same question, and you can imagine the silence coming from their side. So in that sense, when you think about it, you know, many times when legal experts devise these consent forms and all those consent-related aspects, then when they go to other websites, do they actually check everything? I don't think so. I think we all know the answer; we know what we do.

 

Sean:                  A few years ago, there was a University of Nottingham project to build a plugin for a browser. I think it was called Literatin. What it did was it compared the text in the terms and conditions and gave you an equivalent novel, okay? So for instance, the terms and conditions for Facebook might have equated to reading Moby Dick. Right. And therefore you could then sort of have a real idea of what level of education you needed to have attained to be able to understand the terms and conditions, which beggars belief really, doesn't it? Anyway, we are being massively sidetracked onto terms and conditions.
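[A minimal sketch of the idea behind a plugin like the one Sean describes: score a terms-and-conditions document for length and reading difficulty, then compare it to a familiar novel. This is purely illustrative and not the actual Literatin code; the novel word counts and the terms.txt file name are assumptions for the example.]

```python
import re

NOVEL_WORD_COUNTS = {          # rough, illustrative figures only
    "Animal Farm": 30_000,
    "Moby Dick": 210_000,
}

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic, good enough for a rough score."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch reading-ease formula (higher = easier to read)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

def compare_to_novels(terms_text: str) -> str:
    n_words = len(re.findall(r"[A-Za-z']+", terms_text))
    ease = flesch_reading_ease(terms_text)
    nearest = min(NOVEL_WORD_COUNTS, key=lambda k: abs(NOVEL_WORD_COUNTS[k] - n_words))
    return (f"{n_words} words, Flesch reading ease {ease:.0f}, "
            f"closest novel by length: {nearest}")

if __name__ == "__main__":
    with open("terms.txt") as f:   # hypothetical terms-and-conditions text
        print(compare_to_novels(f.read()))
```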

 

Age:                    I was going to come back to something that Paurav said about us uploading the data. It's not just that, though, is it? It's actually also our family or friends who take photos that contain us and where we can't possibly give consent unless we actually give consent to the photo being taken, which in many situations is actually impossible to enforce. So, how is data being handled where you're basically an opportunistic subject of a photo? And how can you possibly ensure that consent when photos aren't strictly limited to a single person who's actually in control of that photo? 

 

Sean:                  That's a problem that even before AI has been rearing its head, from photobombing people's holiday photos right through to any kind of publishing. This is a problem, isn't it? 

 

Christine:          Yeah, absolutely.

 

Sean:                  But to put that to one side now, because today's feature is all about robotic surgery, and in a moment we'll hear from our expert, Prokar Dasgupta. Before we get to that, what are your thoughts and memories of things to do with robotics? Let's start with Age. Tell me about your experience in robotics? 

 

Age:                    Actually, I come from the classical computer science side of this. My research itself is not in robotics. The best I can do with this is my own surgery. So I've had four surgeries in my life, all of which have been in some way assisted with a robot.

 

Paurav:              I am not particularly in the field of robotics, as the regular listeners to this will know. But funnily enough, I still have an interest. From childhood, you know, robots have always been fascinating in different ways. When we saw those robots from Sony walking up the step, my God, that captivated the globe's attention. And in that sense, when I was thinking about robotics and our podcast, I was reminded of old memories, school-day memories. And I found that in 1986, National Geographic did something around robotics and its applications, including surgery. Now, even in its September 2020 issue, it also picked up on robots. And so in that sense, you can see that there is a continuous dialogue, and the way the dialogue is moving is far more interesting.

 

Sean:                  And our resident roboticist, Christine, you obviously deal with robots a lot, perhaps not while you're sitting in your own home office working from home. But maybe I'm wrong. Tell me about your, there's one behind you, is it? It's blurred if it is. Tell us about your experience of robotics. 

 

Christine:          Well, the first memory that I really have of a robot, and how possibly this all was kickstarted, was when, during a family holiday, I was opportunistically sitting in Berlin with my parents. It was just at the start of companies exploring social robots. And I vividly remember that there was this tiny little pet-like dog robot from a particular company that was just roaming around, and I didn't understand anything. I was still in high school, didn't understand anything about it, but I instantly thought, this is so cool, if you could have machines almost coexisting with humans in order to help them and support them.

 

And that kind of triggered me onto the pathway that I suppose I got into, of studying electrical engineering and computer science, and then sort of exploring how machines actually think and how we can make or teach machines to think and make sense of this world that we're in, which is a really highly complex world. But how can you allow these machines to understand things in a manner that is sort of intuitive to us as humans, so that we can actually reconstruct how a machine got to a decision? So that's, I suppose, the path that eventually led me to where I am nowadays.

 

Sean:                   This week's feature is robotic surgery and our guest is Professor Prokar Dasgupta, a clinician scientist whose career started over 30 years ago. Professor Dasgupta was recently appointed as a Foundation Professor of Surgery for King's Health Partners and has been a consultant urologist at Guy's Hospital since 2002. He became the first Professor of Robotic Surgery at King's in 2009 and then chairman of the King's Vattikuti Institute of Robotic Surgery. Welcome, welcome Prokar. Thank you for joining us on Living with AI. 

 

Prokar:              Thank you very much, Sean. It's a pleasure to speak to you on this podcast. I'm very excited. 

 

Sean:                  It's amazing to have you here. I mean, one thing when we're talking about robotic surgery, I mean, first in my mind comes an old shot of Luke Skywalker in Star Wars with a robot leaning over him and then fixing his hand. How close are we to that sort of sci-fi kind of imagery? 

 

Prokar:              I love Star Wars myself. But I must say, in real life, particularly in surgery, that is pure science fiction and is to be left on our video screens and movie screens. So, I must say this particularly for the lay listener and for our patients who might be tuning into this. It is a mistake to think that I press a button, sip coffee and the robot just goes and does things on its own. That is just not reality today.

 

Sean:                  One of the things that you think of when you think of a surgeon is very steady hands. And something that I did wonder is how much of what a surgeon does is in the skill of how good their hands are and how much is in their brain as to what to do? 

 

Prokar:              Yeah, yeah, this is, so you've absolutely hit the nail on the head. This is called Robotic-Assisted Surgery, and that is the correct terminology, rather than just robotic surgery, which has its own connotations and is often mistaken. So what happens is that the robot is controlled by the surgeon. So the robot does nothing on its own without the surgeon directing it as to what to do. What do I mean by this? It's a very specialised form of doing keyhole surgery. So traditionally, what we would do is make a cut on the tummy or the chest. That would mean quite a bit of loss of blood, more pain.

 

And imagine now doing that through five or six small keyholes, which means the patient loses less blood, the vision is much better because the vision is magnified 10 times. So for example, I am a prostate surgeon, the prostate is the size of a chestnut. Here, the robot would magnify it to the size of a football. So the image is much more steady. The instruments are tiny, like the size of our fingernails, about seven millimetres in size. So it's a finer way of doing surgery. 

 

So of course, you would think, as you said, this must make surgery much more precise. Yes, it does. And therefore, you would say, well, that means everyone should just have robotic surgery, it has got nothing to do with the surgeon. Quite the opposite. After 20 years of this kind of surgery, I have one message and one message which is vital. A fool with a tool is still a fool. What do I mean by that? The robot doesn't have a judgment of its own. So the outcome for the patient is not just dependent on the technology, it depends on the skill of the surgeon. So I tell my patients, “If you have an experienced robotic surgeon, then choose that person to do the operation for you. But equally, if near you there is no experienced robotic surgeon, or someone who is less experienced, and you have a highly skilled, traditional open surgeon, then you are better off having open surgery.” So you're absolutely right. This depends more on the skill of the surgeon and not just on the technology.

 

Sean:                  And is this sort of technology available for any kind of operation? I mean, obviously, the body's a complex sort of system. Is it appropriate for any kind of surgery? Or is it very suited for more specific types? 

 

Prokar:              So the market leader in this kind of technology is the Da Vinci system, built by Intuitive Surgical. And that is mainly used for surgery inside the tummy or the abdomen, and in the chest, but it is also now being used in other parts of the body, such as through the mouth, inside the mouth or in the neck. But then there are other robots coming to the market, which is always good for reduction in costs. And there are other robots coming, for example, for microsurgery, as in plastic surgery, where you attach tiny little blood vessels under a microscope to skin grafts, for example, or even surgery on the spine, where clearly the existing systems are not good enough.

 

In neurosurgery for the brain, you need completely different robotic systems, so on and so forth. In orthopaedics, which is surgery on the bones and joints, again, there are new robotic systems which are completely different from the Da Vinci. So horses for courses, use different machines in different parts. But the main bulk of the literature on robotic surgery is either inside the tummy or in the chest.

 

Sean:                  And as I understand it, with most AI systems, there is an element of training the system. How do you go about training a robot surgeon or a robot to assist in surgery? 

 

Prokar:              So this is a very complex issue, because you mentioned AI. In the same way as you mentioned AI, one has to think of what the role of AI is in these robotic systems. So let me just give you a bit of history to put it into perspective. Nearly 30 years ago, one of my heroes, John Wickham, who passed away a couple of years ago, and who was a surgeon like myself, worked with Professor Brian Davies from Imperial and did the first robotic operations on the prostate. And can you believe it, this was a completely autonomous system called the Probot.

 

So essentially what he did was map out the prostate gland and the inside of it, like the core of an apple using ultrasound and then he would fit a robot inside, press a button and the robot would automatically vaporise the central part of the prostate gland. The prostate gland in a man sits just under the bladder so if it enlarges, then it causes difficulty in peeing as the man gets older and therefore vaporising a channel in the middle of the prostate clearly allows the man to pee better. This is a bread and butter operation that surgeons do. 

 

But that never really took off apart from a proof of principle, because it was such a common operation and it was much cheaper for a human being to train to do it than a robot to do it. But as a proof of principle, this first robotic operation in the world, which happened at Guy's, is a pretty significant step. And so you would say that machine must have had a considerable amount of intelligence to be autonomous. Now, today, that is not the case. So the autonomy is actually not visible because everything is controlled by the surgeon.

 

So in order to train to do this, we have a curriculum, particularly driven by the European Robotic Urology Section, ERUS, of which I'm a founding member. So the surgeon would initially do some online reading about the basics of such a system. Then they would go into a lab where they would do some dry training on models, then perhaps some wet training, which is not allowed in the UK because animal surgery is not allowed. So you typically have to go somewhere else in Europe.

 

And then you can do training on virtual reality models, and then finally on cadavers. The reason for doing all this is to make sure that you don't harm patients. Clearly, this is all about patient safety. And finally, when you have gone through such a programme, you would then train with an experienced surgeon, something that I call modular training, and that surgeon would mentor you in a fellowship programme till you become competent. And at that point in time, you would submit your surgical video, which would be assessed independently. And there are many AI platforms which are now actually able to automatically assess these videos for proficiency. And once you are proficient, you are given a certificate of completion, and then you go off and start operating on patients.

 

So it's a very complex system. But in the current systems where the robot is completely controlled by the surgeon, the level of AI and autonomy is not visible. There's a lot running in the background, but it is not visible to the human eye. What will happen in the next five years with AI and these systems is very exciting, and we can talk more about that. 

 

[00:20:05]

 

Sean:                  Fantastic. So at the present time, the role of AI is mainly in checking that, you know, that the training has been completed properly in a way? 

 

Prokar:              In a way, assessing those videos, but there is a lot of AI already in the existing systems. Now, one very exciting thing that is happening, and is going to be tested in a programme called the MASTERY study, M-A-S-T-E-R-Y, which is supported through the Royal College of Surgeons of England, is called Automated Performance Metrics, or APMs. So I just described to you how the surgeon learns to do the operation and is then certified, because an expert says, “Look, I think you're good enough.” And then the video is assessed either by an AI system or by other experts independently.

 

So let's say that that is not really the best way to talk about proficiency. So can we improve that? The answer is, yes, very likely. So what happens is, you take this surgical video, and you attach a black box to the back of the Da Vinci robot. Yep? So if you attach a black box, the black box then has a number of algorithms working within it, and it tracks the movements of the instruments which are made by the surgeon. And it also tracks the outcomes of the patient and then it produces automated performance metrics. 

 

So, very curiously, you would think that the best outcomes would be for the highest volume surgeons, i.e. the surgeons who do the most of these operations. That is what we have traditionally been told. The machine looks upon it differently. Some high volume surgeons are very, very good at getting excellent outcomes. But equally, there are certain lower volume surgeons who are naturally talented and have better automated performance metrics. How exciting is that? And we are going to, within this MASTERY study, across various surgical specialties, for example, prostate cancer, lung cancer, cancer of the mouth, test these automated performance metrics and see if we can develop performance-based training of the next generation of surgeons, rather than just relying on the human eye. So that is a role, I think, where AI will have a very exciting future in the world of robotics.
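[A minimal, illustrative sketch of the kind of automated performance metric described above: given instrument-tip positions logged by a recorder attached to the robot, derive simple kinematic measures. The metric names and the three-column position format are assumptions for illustration, not the MASTERY study's actual definitions.]

```python
import numpy as np

def kinematic_metrics(positions: np.ndarray, dt: float) -> dict:
    """positions: (N, 3) array of instrument-tip coordinates in mm,
    sampled every dt seconds."""
    steps = np.diff(positions, axis=0)               # per-sample displacement
    path_length = np.linalg.norm(steps, axis=1).sum()
    velocity = steps / dt
    accel = np.diff(velocity, axis=0) / dt
    jerk = np.diff(accel, axis=0) / dt               # lower jerk ~ smoother motion
    return {
        "path_length_mm": float(path_length),
        "mean_speed_mm_s": float(np.linalg.norm(velocity, axis=1).mean()),
        "rms_jerk": float(np.sqrt((jerk ** 2).sum(axis=1).mean())),
    }

# Example: 30 seconds of synthetic motion sampled at 50 Hz
rng = np.random.default_rng(0)
tip = np.cumsum(rng.normal(scale=0.2, size=(1500, 3)), axis=0)
print(kinematic_metrics(tip, dt=1 / 50))
```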

 

Sean:                  When we think of any kind of surgery, obviously, people who are, I mean, there's the phrase ‘going under the knife’, people are naturally concerned about, you know, outcomes, etc. Should people be more worried about robotic surgery? Are there any measures we can look at? I mean, you've mentioned the black box and how things, you know, how things are checked. But I know that, sorry, just to kind of tangent for a moment, things like image recognition, that robots are starting to outperform humans in some fields. Is that going to, is that the case at the moment with the assistive surgery? 

 

Prokar:              Yes, the proof of principle for image recognition has already been done. And increasingly, the next generation of robots, and even the Da Vinci system, are integrating image recognition. Let me again give you an example. In prostate cancer, before we operate on a patient, we do some very high quality MRI scans; MRI stands for Magnetic Resonance Imaging. These really give a unique and very, very detailed appearance of the organ, the cancer within it and the surrounding structures, such as, in a man, the sphincter muscle, which keeps the man dry, imagine removing the cancer and making the man wet, and the nerves for erections, which run on either side of the prostate. So, very concerning to patients.

 

So, until now, what's happened is that these images have remained on a computer screen and have not been integrated into the surgical field in real time during surgery. But a number of proofs of principle have been done. I myself, nearly five years ago, tested and published a system which was developed in collaboration with King's, called the translucent system, whereby you take these scans onto an iPad-like device, you literally place the device on the patient, and you can literally see, quote unquote, through the patient. That's why it's called translucent.

 

So, imagine putting an iPad over your tummy and being able to see the inside of the tummy, see the cancer, see where the organ is, see the other structures. So, this has already been tested, and it's been integrated into a new robot which will soon arrive on the market. There are other systems in Italy, for example, hyper-accurate visualisation, whereby you can re-impose or superimpose the image onto the organ of interest. Obviously, the challenge there is while we are operating, the organ will move, and can we actually move the image along with the movement of the surgeon? That is a serious challenge.

 

So, those are two examples. Another example is, rather than moving the image, you 3D print the organ with the cancer. So, you take the image, which is a 2D image, you volumise it to create a 3D image, and you press a button. It's not quite that simple, but you press a button and overnight a 3D printed organ comes out. So, you can then touch the patient's organ without putting your hand inside. You can touch the cancer, you take it into the operating room, and then this is called precision surgery, rather like precision medicine in cancer, whereby you design the operation according to the specific patient. So, these are some developments which are in the pipeline and I think will become more and more integrated into the surgical care of patients.
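[A minimal sketch of the 'volumise then print' step described above: stack 2D segmentation masks, one per imaging slice, into a 3D volume and extract a surface mesh suitable for 3D printing. The synthetic sphere, file name and voxel spacing are illustrative assumptions; a real pipeline would use properly registered scan data.]

```python
import numpy as np
from skimage import measure

# Stand-in segmentation: a spherical "organ" inside a 64-slice stack
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
volume = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2

# Marching cubes turns the binary volume into a triangle mesh;
# `spacing` carries the physical voxel size (mm) into the mesh coordinates.
verts, faces, normals, _ = measure.marching_cubes(
    volume.astype(np.float32), level=0.5, spacing=(1.0, 0.5, 0.5)
)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")

# Exporting for printing is then one call with a mesh library, e.g.
# trimesh.Trimesh(verts, faces).export("organ.stl")
```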

 

Sean:                  How do people, in your experience, how do people feel about robotic surgery? When you meet a patient and you discuss what's going to happen? What's the sort of feeling there? 

 

Prokar:              Well, today patients ask for it. So, nearly 20 years ago, when I started in prostate cancer surgery, 1% of radical prostatectomies, which means removing the prostate for cancer, 1%, were performed robotically. This is nearly 20 years ago. At that time, there were only two machines, one at St. Mary's and the other at Guy's. So, we were the pioneers in this. Today, the Royal College of Surgeons has just published results showing that that same figure is now 90%. 90%. So, we have completely, in under 20 years, transformed the way we perform this kind of surgery.

 

So, the patients are actually now very accepting. In fact, if you tell them that you're going to perform an open operation or straightforward keyhole operation, where you don't use a robot, then patients almost look at you in a somewhat bemused fashion because they think you either don't have the technology or don't have the training. This was not the case 20 years ago. 

 

Sean:                  And presumably, recovery times are much improved because you're not, you know, I don't know the technical terms, but you're not opening somebody up, right? 

 

Prokar:              Yeah. So, typically these patients, if all goes well, and again, this is 80 to 90% of patients, go home the next day, in the NHS. Compare this to between three days and a week after open surgery, which is what used to happen when I performed the operation with the traditional method. So, this is a complete change in the way patients recover.

 

Sean:                  So, we mentioned briefly going forward. So, what's this looking like? You know, how will it evolve? What are we looking at for the next, say, 5 or 10 years or hoping for? 

 

Prokar:              So, I say this, and have written about this quite extensively. There are three things which I think are important, particularly in the context of our Trustworthy Autonomous Systems Hub, which I am really excited and energised to be part of. So, the three things are cost, connectivity, and the role of AI and automation. Let me expand on them one at a time. We used to think that, with such expensive systems, just reducing the hospital stay would make them cost effective. That is not the case. In fact, you know, going from two robots nearly 20 years ago to now nearly 90 robots in the NHS has raised a few eyebrows, whereby people have said, “Look, this is a marketing exercise.” I must hasten to say I have not made any money myself. I have no conflicts of interest at all.

 

[00:29:46]

 

                             But people have often accused the monopoly leader of controlling the market and making it a business endeavour rather than anything else. So, we have only this year realised that robotics is cost effective. So, I am very relieved to say that. Why am I relieved? Because earlier this year, in collaboration with Harvard and Brigham, I published a paper whereby we looked at what happens not just with shorter hospital stay, less pain and quicker recovery, but what happens with out-of-pocket costs, i.e., what are the societal costs of robotics?

 

                             So, if a patient goes back to work quicker and spends less time needing painkillers, what happens to society after the hospital? And that assessment of large databases shows, not just for prostate cancer surgery but also for female cancers and for bowel cancer, that robotics is cost effective. And we published this in JAMA Network Open, which is a very well-read international journal. And I must say that I am extremely relieved, because cost to me was a major consideration.

 

                            Secondly, with more robots coming onto the market, so not just a single market leader, costs will inevitably go down, and in fact are already going down. So, that's the first thing for the future, cost. Second is connectivity, and I know we have a number of experts within our hub on the Internet of Things. We at King's call it the Internet of Skills, i.e. can we transport a surgeon remotely across any part of the world, not just the expertise, but also the sense of touch? And we have done this by a number of methods. We have used a haptic glove called the NeuroDigital, whereby we can feel inside the patients without actually making a cut on their tummy.

 

For vision, we have used various augmented reality and mixed reality platforms, like HoloLens, for example. We have a 3D printed gripper on the tip of a robotic tool, with which we can feel dangerous structures while we are operating, making surgery safer. And then we have used network slicing in between all this, using low-latency 5G networks; already we are thinking of 6G. So rather than the Internet of Things, we have called it the Internet of Skills.

 

So, I think connectivity is very exciting, and we have a number of augmented reality platforms. One example is Proximie, which again is led by one of my colleagues, a plastic surgeon at St. Thomas's within King's Health Partners, whereby you can literally transport and democratise surgical expertise across 5,000 miles. So, an expert surgeon in London could be in Seattle, in India or anywhere on earth, just from their laptop or iPad.

 

The third, I think, is the issue of AI and autonomy, and I have already mentioned this. And I think the issue of autonomy is very, very exciting. Recently, The Telegraph got hold of this and did an article which I would urge you to look at, whereby they said, “Professor, nearly 28 million operations have been cancelled worldwide due to Covid. This is not good for patients. Do you envisage a time when you are operating in one room and a robot or a set of robots are automatically or autonomously doing things in neighbouring rooms, just building capacity?” And I said, “Look, this is purely science fiction at this point in time.”

 

Why do I say that? There are such robots: published four years ago in Science Translational Medicine was a machine called the STAR robot, S-T-A-R, which can actually stitch bowel in an autonomous fashion much better than a human being. So, there are certain parts of an operation, like stitching, which a robot can do much better than a human being. I think that is where autonomy will go. But to take away human judgment and think that you can programme a machine to do complex surgery completely in an autonomous fashion, I think, today, is too far-fetched a thought process.

 

But then comes the issue of trust. If you come into my operating room, you see, you will see that the definition of trust is a number of different things. First is technical. It is the surgeon trusting the robot because although the autonomy is not visible to the human eye, I can tell you that there are a number of things such as movements of little wrists, which we take for granted, which the robot is running in the background in an autonomous fashion. So, it is me trusting the machine not to go crazy and to do what I tell it to do. 

 

But then doing this kind of surgery means I'm not standing or sitting beside the patient. I'm remote from the patient. I could be in that room, I could actually be anywhere else. But generally, we are in one corner of the room at a computer console, our head is buried in that console. So, we are remote from the scrub team and the assistant. So, the trust also is between the surgeon and the scrub team and the assistant and likewise between the assistant, scrub team and the surgeon.

 

Then there is the issue of trust with all this data being stored as videos. Can we trust the connection? Can we trust the integrity of such a connection? Are there security issues? Are there regulatory issues of trust? Do our manufacturing regulators trust autonomy? And then, are the hospital administrators going to trust this? And finally, are our patients, the most important end users, going to trust this? And within the TAS Hub, I think we are going to test this in a number of ways. This can be tested by interviews and focus groups, we are going to bring in social scientists and ethnography, but also through video analysis, not just of what the surgeon is doing, but of the whole, entire robotic ecosystem, and Paul Luff and I are going to come together on this.

 

We have just put in an agile project where part of it is precisely that, robotic surgery, and the other part is a cleaning robot. So, in healthcare, of course, the stakes are very high and the definition of trust is very, very different. So, I think I'm very excited about where we are going with this whole story. And I must say, it's a real pleasure to be part of the TAS Hub. The other bit of work that we are doing is with surgical data science. What I mean is that when we are operating on these patients, we are gathering a large amount of ethically approved, GDPR- and HIPAA-compliant data, but no one really knows what to do with these large sums of data.

 

So, you know, one, you can predict outcomes, of course, you can predict early warning signs by designing models and training these models and then testing them. But also with video labelling of these surgical scenes, I think there is an opportunity to transform the way we perform surgery, make it more accurate for patients, but also make it more accurate for training the next generation of robotic surgeons. And we have a MedTech Hub, which my colleague, Sebastien Ourselin, is leading at St. Thomas'. This is very much part of the King's Health Partners Surgical Academy, which I lead.

 

And this will bring multiple surgeons across multiple disciplines throughout our five institutions within King's Health Partners and bring surgical data science through the MedTech Hub and surgical and interventional engineering. So I think watch this space. The next five years will be very exciting for patient care. 

 

Sean:                  That's fantastic. And as we know, the more data we get for these things, often the better predictions we can make and the better learning we get from it, of course. I've really enjoyed talking to you, Professor. Professor Prokar Dasgupta, thank you for joining us on the Living with AI podcast. We really appreciate you sparing us your precious time. Well, back to our panel now, and I have to say I learnt a lot from that chat with the professor, but one of my favourite parts was when he said the phrase, ‘a fool with a tool is still a fool’. It reminds me of my DIY efforts. This phrase also applies elsewhere in AI, doesn't it, Age?

 

Age:                    It does. And in fact, it's one of the most beautiful and scary parts of AI, in that we are building these really powerful tools that give us huge insight and huge capabilities. And yet, if we're not careful in how we use them, how we apply them, and how we check that they're being applied appropriately, we can do real damage with them.

 

Sean:                  What's the worst thing that can go wrong here? I'm not talking about the surgery. I mean, you know, we're the fool with the tool with a robot. I mean, there's power in these robots, right?

 

[00:39:58]

 

Christine:          Well, there is. It was pointed out in the interview that most of the AI, I suppose, goes into the sort of back-end processing, not necessarily the machine that's operating on a human: how can we use artificial intelligence in order to evaluate how competent a surgeon is, for example, perform evaluation measures more objectively, and therefore come to perhaps a more efficient way of evaluating and training surgeons overall. I think what was also very, very clearly highlighted during that interview was that those machines that are operating on a human are very much guided by a human surgeon, an expert surgeon who knows exactly what they're doing.

 

Because the biggest risk that I see is that there's perhaps a misunderstanding between what the machine is autonomously able to do, which it is not at this current stage, and what the surgeon as an expert user is aware of. And if you have that conflict of a machine trying to make autonomous decisions versus an expert trying to counteract those decisions, I think there were very recent events outside of surgery that have highlighted what fatal consequences such a situation can have. So I think it's very positive to say that, that was highlighted in the interview, that there is no such conflict at this stage in robotic-assisted surgery. 

 

Sean:                  Paurav, what's the perception here? I mean, from what Professor Dasgupta said, people prefer it, they will be choosing to have robotic-assisted surgery over open surgery. I mean, there's all sorts of benefits we've heard about, you know, that it's a faster recovery time, etc, etc. But, you know, I was quite surprised by that, having never been in a situation where I've had to choose.

 

Paurav:              Yeah, it's a very interesting aspect, in that it traverses the boundaries of a patient's acceptance as an individual versus, or rather, I would put it, and/or, societal acceptance. So what Professor Dasgupta was particularly focusing on was that the patients were quite ready to go ahead with robotic surgery, because they assume that robots are precise machines and surgery is an imprecise thing; you know, when you think about it, you could go a millimetre here or a millimetre there, but a robot would be very precise.

 

So they are thinking of it from a precision perspective. And that's why you see this particular set of patients saying, you know, that they would welcome this. However, we have very little understanding of it as a society. So when you think about it, what Professor Dasgupta said straight away in his first comment was that these were not robotic surgeries, these were robot-assisted surgeries. And I think that is very important to understand, because society still thinks it's robotic surgery, a robot is doing it. And it's not a robot doing it. It's actually a person who is managing that robot, an expert who is doing it in reality.

 

So there is this aspect, and we still haven't seen any sort of horror story around it. Medical science, when you look at the medical journals, is already pointing out that it is not exactly, you know, an elixir of surgery. There are lots of new articles emerging saying that we need to approach this with caution, and so on and so forth, and doctors are certainly leading that charge. But at the same time, it has not come out in the public domain. Like what we had, if you remember our earlier discussion on automated driving and those fully automated cars: in 2010 and thereabouts, it was said, you know, “By 2020, we will see these cars,” and so on and so forth.

 

Then the realism hits. You know, suddenly the media turns in favour of you, sometimes it turns against you, and so on and so forth. That has not happened yet. So the field is still cocooned in its own way and only the patients know about it. So once that societal acceptance challenge comes in, we are going to see some new dynamics emerge in this field.

 

Christine:          I think there's also a very interesting perspective about the trust issue here. Why is it that as a human, I automatically trust a surgeon, but as a human who was faced potentially in the future with the choice of, do you want a robot or a surgeon to operate on you? And this is assuming that at some point in the future, robotic assisted surgery has evolved, I want to say here, which might be slightly controversial, but has evolved to a purely robotic surgery. What is it that makes me as a human trust a stranger over a machine where I know as a matter of fact, that machine is highly precise, and it's very accurate in making decisions that are actually free of emotions, free of stress, and free of emotional factors that affect human surgeons? What is that? And I'm just trying to understand what the dynamics there really are. 

 

Sean:                  I think that comes down to a certain amount of nurture rather than nature. I've done a few videos with a chap called Robert Miles on the Computerphile channel, where we've talked about AI safety and AI risk. And often the conversation devolves into a conversation about science fiction, I'm afraid to say, because it pervades our lives. Even if you're not into science fiction or fantasy, you can't have not been aware of the Terminator or any number of robots on Star Wars or Star Trek. It pervades our culture and our lives. And I think it really does bias your thinking on things. You know that in a lot of circumstances, a robot is going to do an efficient and consistent job at something, where a human might be, as you say, they might have a bad day, they might have a good day. There are inconsistencies aren’t there?

 

Christine:          I suppose it's also the same factors that actually affect adoption of robotics as a whole in our society. And I suppose it's not really a secret that in Western society, we're less adoptive, we're less accepting of robotics and autonomous systems than in other cultures and societies. So, it would just be extremely interesting to work out what the influencing factors here are, and how we can perhaps transform society, or the beliefs in society, that prevent us, A, from adopting robotics, and, B, from actually driving developments forward that would allow a deeper incorporation of robotics in order to assist humans, rather than to replace humans.

 

Sean:                  And part of this comes down to experience as well, doesn't it Paurav? 

 

Paurav:              Very much so. I would like to offer two very interesting examples, with a point around what Christine particularly talked about. So when it comes to just precision, I think we would support robotics more than humans. But when that precision is attached to some sort of ethical or moral dimension, we need that human intervention. So let me give you those two examples from that perspective. When you think about defence and robotics, there has been a long-term relationship between defence and robotics. And the first type of drone that was developed by defence and military contractors was called the Firebee; that was in the 1950s. And that was fully automated.

 

And it is still probably the most used military drone ever built. But then, as drones developed, came the Lightning Bug, and then the Predator, and so on and so forth, as we know in defence. But what happened was that the fully automated actually became remotely operated, because the precision now had a morality dimension. So it was not just precision.

 

But then a personal example would be: I went through a knee surgery a few years ago, and when I woke up out of the anaesthesia and all that, the doctor, the surgeon, came to me and said, “Look Paurav, what happened was we did the surgery. But at the same time, we found that on the inner side of your knee, there was some sort of high level of arthritis. So we have tried to cure that by doing something. But we only got to know about it when we opened the knee.”

 

Now, at that point in time, if there was a robot, probably the robot would have done only the knee surgery, while the surgeon had my other health concerns at heart and could take that decision, and in doing so they solved the precision as well as the morality and patient health problem. I might have had to go through two operations. Instead, I went through one and recovered from it. So I think that, you know, having this human-machine symbiosis is an extremely important thing for robotic surgery, because it's a patient and morality issue.

 

Sean:                  There's the experience there of the surgeon being able to spot that something else is going on, because his eyes were open to more than just the tunnel vision of the task at hand. So that's obviously a plus. I think the other side of this is obviously the ‘you can't argue with the machine’ side of it, and my point here is perhaps not specifically targeted at surgery. But when you mention things like military drones, and again, I'm going to quote sci-fi, but we've seen things like Robocop, where the machine has been told that it has to do X, Y and Z and there's no arguing with it. These are the things where humans want humans in charge, because they can reason with them, right?

 

[00:49:50]

 

Age:                    It's actually taking what we're discussing now in terms of robotics and AI and our health, and going outside of robotics. So right now, there's a proliferation of tools that are using AI and being directly marketed towards patients, right? So you can download on your phone apps like Face2Gene, or Cancer Detection: is this a skin cancer or not? I'm not sure. And it's actually doing exactly the opposite of what we're talking about today, in that it's effectively removing that experienced clinician from the care. On the positive side, they're trying to empower the patients, but on the negative side, they're removing that expertise, they're removing that greater knowledge of the patient's health and of knowing what really is problematic, how that information should be used and how a patient should go forward with any possible diagnoses, as opposed to, here's an automated answer.

 

Sean:                  I've had direct experience of a doctor telling me not to Google something. And for those exact reasons, because they have the context, they have the experience, they have, when I'm saying they here, I'm talking about doctors, obviously, doctors, even general practitioners have that ability to put something in context and look at the probabilities and to work out whether something is likely to be, as you say, kind of a dangerous problem or not. 

 

Christine:          I suppose the part about automatic diagnosis of diseases, though, is if you have a healthcare system that may be stressed to its absolute limits, of course, it doesn't replace an expert diagnosis, but at least if done correctly and regulated correctly, it may alleviate perhaps visits to healthcare experts that would otherwise have taken time out of the healthcare system. But at the same time, direct people who need urgent assistance to actually go to see a GP or a specialist when there's actually also time available. 

 

So I do see some benefits to it, but I also see the risk that, if it's not done properly, and the app has perhaps been trained on the wrong data, or the data isn't regulated in an appropriate manner, then, of course, there are a lot of risks and dangers involved there as well.

 

Age:                    So, actually carrying on from that point, here in the UK the institution NICE, and in the US the FDA, have both recently released guidance about how they are going to regulate AI systems to take all of these concerns into account. And it's wonderful to know that our regulators are indeed thinking through these problems so seriously. One of the worries, though, is that it's very easy to launch an app through Google Play or one of these other stores, and it's like the vitamin situation in the US, which is not regulated by our governmental regulatory agencies. Vitamins are abused, and people die yearly from contaminants in them and everything else. And you worry that the same situation can play out here if we are irresponsible in our development and use of AI in our medical systems.

 

Sean:                  I think also the problem with any kind of app like that is that, okay, we can think of the kind of happy downside, which is too many people go to seek advice when actually they don't need it. Okay, that's not so bad. It's the one that misses the glaring problem, because it wasn't, I don't know, trained on that type of skin or whatever it happens to be. And that person says, “No, the app says I'm fine.” And off they go, and it develops into some horrible stage four problem.

 

One thing that opens up lots of possibilities with this robotic-assisted surgery, and I don't know, this may already be going on, is the idea of having a patient in one locality and a surgeon in another. And I know this happens in engineering already: Rolls-Royce can deploy a robot near an aircraft on the other side of the world, while their expert is in the headquarters in Derby in the UK, and they will guide the robot to fix the problem. Does this happen with robotic surgery? And if so, does it start to cause problems with knowing that the person operating your device is properly qualified?

 

Paurav:              Sean, this is such a quagmire in its own way when you think about it. Prokar did talk about a very specific aspect in terms of the training regime a surgeon has to go through before they get certified. However, there doesn't exist right now a global recognition or certification body in this area. What that means is that somebody in another part of the world may be trained less or more, and then may be performing this surgery in some other part of the world. Now, when we talked earlier about patient acceptance and societal acceptance, this would start creating those challenges thereafter, wherein we may start seeing the negative sides of it, or our own stereotypes would come into the picture.

 

So if I'm sitting in the UK, and somebody is going to do a surgery from an emerging market somewhere in a different part of the world, do I feel the same level of comfort and trust towards that doctor who is actually doing that surgery as I would towards having the surgery here? But then if I'm given a choice: you know, your surgery by that other surgeon can be done tomorrow, or you are on a long waiting list and you can get it in a year's time, what am I going to do? So those are the kinds of questions we will need to address with robotic surgery, because it would increase efficiency, but at the same time, it increases the chances of things going wrong also.

 

Sean:                  I think you’re right, and the other thing about it, of course, and we sort of experienced it a little bit a moment ago, is that the internet does drop out. So how comfortable am I going to be that the surgeon is just going in with the tool at the moment that we have an internet lag problem, and his screen goes to blocky mosaic form? There's a liability question here, right, which is: you're a hospital, you're a robot provider or manufacturer, you're a clinician, you're a patient, there are all sorts of different parties involved in this. How does it play out? We talked about this with the automotive idea of whose liability it is, and I'm not expecting there to necessarily be an answer, but what's the problem there?

 

Christine:          I mean, first of all, of course, I'm not a lawyer, so I can't give any expert advice on this. My personal opinion is that there's actually a very deep connection between all the different stakeholders in the system. So, there's, in the first instance, the manufacturer of the robot, who, of course, knows best how to actually operate their machine. In other industries, where machines are equipped with some sort of AI or expert systems, what sometimes happens, or hopefully always happens, is that the manufacturer will actually devise training programmes for the people who will operate that machine, so that those people are fully aware of every feature, but also every shortcoming, of that particular machine and how to overcome it. So that's the first aspect, which is more from the industrial side.

 

There are, of course, the surgeons themselves, who need to be trained in a particular way, which is where more expertise from other surgeons, from more senior, more experienced surgeons, comes in. Coming back to the topic of a remotely operated machine, I think at that point, and also with any automated decision making in healthcare, it's important that this always happens not as a purely automated system, but that there's always an expert who oversees that decision.

 

If a machine makes a diagnosis, a healthcare diagnosis, there needs to be an expert who can oversee that diagnosis and confirm that that is the right decision. Otherwise, you will end up in very problematic scenarios in the same situation with remotely operated surgery. There needs to be in the room with the robot, in my opinion, at least, an expert surgeon who, if things go wrong, can take over and help that patient.

 

Sean:                  How long before one clinician is overseeing two robots, three robots, four robots, ten robots, and then, you know? 

 

Paurav:              One of the things which triggered a thought in my mind, when Christine talked about these training and expertise aspects, is a question of market competition; being a business school professor, that is one of the things that comes to mind. And what you see right now in the robotic surgery, or robot-assisted surgery, field is that predominantly there are one or two players who are very specifically focused on one particular area, and they monopolise that area.

 

Now, what happens if a better system arrives on the market, but a hospital has put so much effort into training, as well as into equipment, that it is not able to switch to that better system? And so, we are stuck with a suboptimal system, with every stakeholder knowing that it is a suboptimal system, but we are unable to change because of the resource constraints around it. And so, I would see the need for some sort of common standards within the industry. I think that would be a very important way forward for the industry itself.

 

Sean:                  Is this something that's happened in other areas that we can draw a parallel to? I mean, I suppose an easy example that isn't really AI is the emergence of the car itself, you know; in the early days of cars, they all had different controls, and then emerged this standard placing of where the accelerator is, where the brake is, etc, etc.

 

[00:59:56]

 

Paurav:              But a simple technology, which doesn't necessarily affect us as people, or our health, but is a very important technology, is power cables and chargers. If you remember, in the 1980s and 90s and 2000s, you know, every company had its own charger system, and so on and so forth, and think of the number of plugs and wires and all that we needed. Slowly but steadily, the industry has gone towards a specific set of standards. And by doing so, it has made our lives as consumers or users much easier. It has made the supply chains more effective globally, and it has made the whole industry more effective and more competitive. So, I think the arrival of common standards makes a massive difference.

 

Sean:                  Yeah, there's always still an Apple in every marketplace trying to make their own connectors for things. But yeah, you're absolutely right that things have got easier. And actually, if you go back far enough in the UK, devices didn't come with plugs on, you had to put your own plug onto the cable in the early days. So even that is impressive. I can see that might be news to Christine and Age, who didn't grow up in this country. But perhaps not, I don't know.

 

Age:                    When we were in the example for cars, I was going to say, do you know how often I've shifted the door handle? Because it's the wrong side. 

 

Christine:          Yeah, knock against the door.

 

Sean:                  I've done that. I'm wiping right now. Well, that's just about all we have time for today on the Living With AI podcast. I'd like to say thanks once again to Age Chapman.

 

Age:                    Thank you so much.

 

Sean:                  To Christine Evers. 

 

Christine:          Thank you very much for having me today. It was a pleasure. 

 

Sean:                  And to Paurav Shukla for joining us. 

 

Paurav:              Thank you and stay healthy. I would say. 

 

Sean:                  Yes, and hope to see you very soon on another Living With AI podcast. If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts from and we hope to see you again soon.

 

[01:02:19]