Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 

Season: 3, Episode: 9

AI & Choices (Projects Episode)

Can machines make good choices? We're asking several researchers about their TAS Hub projects related to choices.
 
This episode was recorded on May 10th 2023
 
Featured projects:
Asieh Salehi Fathabadi - Verifiably Safe and Trusted Human-AI Systems (VESTAS)
Sachini Weerawardhana - Leap of Faith
Benedict Legastelois - Methodological perspectives on the ethics of trust and responsibility of autonomous systems

 
Podcast production by boardie.com
 
Podcast Host: Sean Riley

Producers: Louise Male and Stacha Hicks

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at
www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.

 

Episode Transcript:

 

Sean:                  Welcome to Living with AI where we get various contributors together to look at how Artificial Intelligence affects us and what impact it has on our wellbeing for instance.

 

                            Today we are looking at whether machines can make good choices. We are on season three of the podcast, so one good choice could be to look up our previous episodes; links are in the show notes. Or if you search for TAS Hub you will find us, I am sure.

 

                            As we record this it's the 10th of May 2023. I am saying that because AI moves so quickly. This episode is a projects episode, where we group a few TAS Hub projects around a theme, and as I have said, today's theme is whether machines can make good choices.

 

                            If you have not heard of the TAS Hub, it is the Trustworthy Autonomous Systems Hub, and there are plenty of details about it if you look up the TAS Hub website.

 

                            Joining our podcast today to discuss their projects are Benedict, Sachini and Asieh. I will just ask each of you to sort of introduce yourselves and then once we have met everyone we will go round the projects one by one and we will hear a bit more about what you have been doing. And once we have heard about those projects we can have a little discussion.

                            

                            So let’s start with Asieh.

 

Asieh:                Thank you and hi everyone. I am Asieh Salehi, a senior researcher in law and computer science at the University of Southampton. And I am PI of a TAS Hub project called VESTAS, Verifiably Safe and Trusted Autonomous Systems, which proposes a multidisciplinary approach, bringing together social science and computer science, aiming to impact the trust levels of stakeholders in autonomous systems.

 

Sean:                  Sachini?

 

Sachini:             Hi Sean, it is a pleasure to be here, and welcome to this episode everybody who is listening out there. So I am Sachini, I am based at King's College London. I am a post-doc researcher in the TAS Hub itself. And I am the PI of a TAS Hub integrated project which we call Leap of Faith. It is a bit of an artsy name, but we are very much focused on answering this question: there are autonomous machines, robots, drones, what have you, and can they be designed to persuade a human being to do something based on trust?

 

                            It sounds like a very out-there kind of topic, but we have made a lot of progress on it and I am looking forward to talking to you all about it in the next hour or so.

 

Benedict:          Hi, thank you. I am Benedict and I am a post-doc research associate at King's College, and I work on governance within the TAS Hub. My work is basically focused on explainable AI, especially for medical applications, and I am leading a project about the ethics of trust, on which I can give more details later.

 

Sean:                  Fantastic. It's great to meet you all and thank you all for taking the time to join us on the podcast. As I said, we will get a bit more detail now. So we will go to Asieh: could you tell us a little bit more about VESTAS please?

 

Asieh:                Yes, absolutely. When we are talking about AI replacing human decision making with machine decision making, it can result in big challenges associated with stakeholders' trust in autonomous systems. We all know that trust itself has a challenging interpretation as well. The meaning of trust can be very different for different categories of stakeholders, different users and even in different kinds of domains.

 

                            In VESTAS we believe it is a key point to understand how different stakeholders perceive autonomous systems as trusted. I will give you a very small, simple example: imagine an autonomous vehicle. An autonomous vehicle can be seen as trusted by a specific group of users; for example, when you are talking about delivering groceries it is easier to trust the autonomous vehicle. But when you are talking about transportation, for example, the interpretation of trust may be completely different and more difficult to reach.

 

                            These different categories of stakeholders and different domains are something that we are investigating in the VESTAS project. And in VESTAS we are proposing a multidisciplinary approach to assess the trust level: if you consider trust as a social science notion and the safety techniques as the computer science notion, then we can talk about socio-technical approaches.

 

                            A lot of work has been done in the literature, in the state of the art, on technical solutions for safety. But in VESTAS we are exploring how these technical solutions can impact the levels of trust of the stakeholders. I will just give you one opening sentence here, and later in the podcast we can talk about it: we are looking at how raising awareness of the existing technical solutions among the stakeholders can impact the level of trust. Yes, that's it for now.

 

Sean:                  Brilliant. Could I also then ask Sachini just to take us through the Leap of Faith or send us across the Leap of Faith, I am not sure.

 

Sachini:             Well yes, so in Leap of Faith we are specifically interested in human-robot interactions. As you know, we have already got autonomous systems, robots, drones, integrated into our day to day lives. We have got autonomous delivery services where this little car comes to you, fully autonomous, with groceries; I think that's already deployed in certain parts of the UK. And we have got rescue drones that are being used by search and rescue units, first responders and even the London Fire Brigade. So all these autonomous machines, very capable, very smart machines, are being increasingly co-located with human beings.

 

                            And what we are looking at is what happens when these machines come into contact with human beings who may never have had any relationship with, or even seen, the machine and might be hesitant to interact with it; people with different capabilities, different tendencies towards technology, their own biases. When these systems come into contact with these different types of people, the interaction that occurs between the machine and the human requires some sort of trust.

 

                            What we are exploring in our project is whether we can design these complex machines to generate that trust instantaneously. How can we embed these systems with capabilities: the messaging, the physical characteristics, the physical appearance and things like that? What is the right blend of these capabilities, coupled with what the machine says during the interaction? How does that correct blend impact the creation of trust in the interaction between the human and the machine?

 

                            As far as the project is concerned, we are for the time being restricted to simulating these interactions in hypothetical scenarios like evacuations and office-assistance type of stuff. But moving forward we are looking into developing a few real-life case studies that involve emergency responders and even medical professionals, which I will be happy to talk about in a little bit.

 

Sean:                  Fantastic, thank you very much. And Benedict can you tell us about methodological perspectives on the ethics of trust and responsibility of autonomous systems please?

 

Benedict:          Yeah. So it is a project between two nodes within the TAS Hub: the awareness node, which is where I am working, and the functionality node located at Bristol University. The main goal, basically, is to have each node learning from the other and to understand how their approaches differ or can merge into some similar frameworks around trust and trustworthiness.

                            

                            So it is a lot of discussion and also some analysis of case studies that we are doing just now. In more detail, we explore theoretical and empirical approaches to understanding the ethics of trustworthiness in autonomous systems. The awareness approach is more theoretical: it is based on conceptual frameworks that relate to social trust and legal responsibility, and to how trust is built between the actors of the technology and the users themselves.

 

                            And functionality takes a more empirical approach, which involves undertaking research with the coders to inform how trust is built and how trustworthiness works among all the actors of the technology and the users.

 

[00:10:03]

 

                            So our hypothesis is that theoretical and empirical approaches to ethics have different limitations and different strengths, but they can inform and complement each other. And what we are doing in this project is discussing these different approaches, trying to find how they fill different gaps and how we can merge them.

 

                            What we want to do in the long term is to test this approach to trust, both its empirical and theoretical sides and their differences, in case studies, especially medical applications. We look mostly at automated diagnosis and how AI-assisted diagnosis interferes with the trustworthiness dynamics in the medical relationship between patients and clinicians.

 

                            And we mostly have discussions, we jointly publish papers about this, and we are organising some panels and workshops next year to open the discussion to other nodes within the TAS Hub.

 

Sean:                  Fantastic. I know AI has played a big role in medical diagnosis, being able to screen through things at extremely high speed and making that process better for the clinicians, so that they don't have to wade through every single, I don't know, say X-ray or whatever. But obviously the implications of a false positive or a false negative are quite serious, aren't they? Is that the sort of thing you are going to be talking about?

 

Benedict:          Yes. So it is kind of the limit case that we need to look at, but the implication behind this is that we would not put out to the general public an assisted tool that would make such mistakes, because it is too important. So we assume that it should not happen, and we consider what follows in case it does.

 

                            The main purpose of this discussion is more about determining how it affects trustworthiness, and also how a responsibility framework can help in assessing the quality of the tool.

 

Sean:                  You perhaps need a leap of faith to get there, I don't know, that is a terrible segue to go over to Sachini. Because again it's life and limb, you know, when you are in rescue situations we are talking about potential life and death situations, aren't we. I mean we know that when people get to trusting these systems they are often more successful than kind of manual methods. It's just there is a bit of a, I don't know, there is a hump to get over in terms of the kind of graph of trust, if that is an extremely terrible way of putting it. What do you think about that?

 

Sachini:             Yes, I agree with that idea actually. I am beginning to realise that, in a given situation, building up trust in the machine that comes to help you, and how trustworthy that machine seems, that connection between the machine and the human, depends a lot on whether or not the human has a choice in the matter given the situation.

 

                            For example, take a rescue situation: say you are on a sinking ship and an autonomous rescue boat comes to rescue you. In that instance the only options you have are to get on the boat or be killed, so you would go into self-preservation mode and kind of say, okay, this is the help I have and I will take this help anyway, regardless of whether I trust it or not.

                            

                            So I think in certain situations, when we talk about trust and taking a leap of faith towards interacting with a machine in a trustworthy context, it is a discussion we as researchers should have: whether or not the scenario actually builds, or gives an opportunity for, the person to create trust, rather than working in a binary mode where either you do it and something good happens, or you don't do it and you are in a perilous situation, that sort of thing.

 

Sean:                  I know we have talked on a podcast before about something much less perilous where you are given the option to, I don't know, I am going to use Uber as an example. You can take an Uber now which, you know, is autonomous and will come in five minutes, or you can wait fifteen minutes for one which is not autonomous. How desperate are you to get to the place you are going, and how deeply do you feel one way or the other about autonomous vehicles?

 

                            And as you say sometimes the fact that you don’t have much choice means that the trust is kind of built automatically almost. Asieh you wanted to talk?

 

Asieh:                Yeah, thank you, yeah. I take Sachini's example, the rescue example, as introducing a new dimension to what we are doing in this task. What we are proposing in this task is to consider the different domains of application and the different categories of stakeholders, which will impact the level of trust and the meaning of trust to the stakeholders.

 

                            But what I get from the rescue example, which concerns safety-critical systems, is also the new dimension of situation: in the situation where you are sinking in water, perhaps we can reach a higher level of trust more easily. So I will take this as a new dimension to explore with the project board.

 

Sean:                  Yeah, perhaps we don't set those situations up, but then you could simulate this potentially, right, in gaming or some kind of simulation. I suppose you could put somebody in a position where in the game they lose their life if they don't take that choice, although it's not quite as perilous? Go on.

 

Sachini:             Yeah, I would say imagine you have the same category of stakeholder using the same domain but in a different situation, as you said, for example gaming versus another situation. So this is keeping the other two factors the same, but even then the situation is affecting the level of trust; that's what I am talking about here.

 

Sean:                  It's an interesting one because they are all different ways of building trust, aren't they? It's just we have got some very extreme methods on the spectrum going on here.

 

                            And this then comes back to the overarching idea of this particular episode, the idea of choices. What have we found the machines doing in these scenarios, has this been simulated, have the choices been good, I suppose I am wondering? Has anyone got any cast-iron examples of good or bad choices?

 

Sachini:             Well I will answer that question with another question, good for whom?

 

Sean:                  Well, there is subjective and objective, and lots of your projects approach these problems or dilemmas. But yes, good for whom? Hopefully the person interacting with the system is the person you want a good outcome for. But yes, there is an overarching question where that might be a bad decision for a wider group, though we may be getting into extreme ethics here.

 

Sachini:             Oh yes, that is always fun I think.

 

Sean:                  Absolutely.

 

Asieh:                I think when you talk about good choices it really depends on what we mean by good. I take the interpretation of good in the VESTAS project as trusted choices, because we are concerned about the level of trust and how the technical solutions can affect it. But for good in other projects, I would like to ask Sachini and Benedict how they interpret good in their projects?

 

Sean:                  Shall we start with, yeah we will start with you Benedict?

 

Benedict:          Yeah, so I think from my perspective and my project it's more about how the decision will impact the people using the machine. If the impact is mostly positive, I guess it's a good choice. And also it's a long-term process, because one decision doesn't have just one impact; especially in medical applications it is a long-term process, and each decision has long-term consequences.

 

                            So it's hard to determine if one instant, one decision, is good or not. What the positive impacts are, and how positive they are in the long term, is probably the question I would ask.

 

Sean:                  Yes, there is so much nuance, isn't there, particularly with anything medical. You know, you fix, I am holding air quotes up here, you fix one problem with somebody's health and it may well lead to side effects or other things, things that may not even be noticed for years to come.

 

[00:20:07]

 

                            I can't help but think of the animated film 'Big Hero 6'. If you haven't seen it, it has an automated medic that asks you at the end of your session to let him know, and there's me anthropomorphising, whether you are satisfied with your care. Which, you know, I don't know, is that the sort of thing you need to do at the end of each interaction, were you satisfied with this? But you don't know what happens six months down the line, do you? I don't know.

 

                            It's interesting, just briefly coming back to Asieh with VESTAS, because the project covers different categories of stakeholders but also different domains of application, so what sorts of things were you looking at? What sorts of areas did you look at? It is nice and vague in the description.

 

Asieh:                It is, yeah. The purpose is that we are aiming to cover a range of stakeholders and different domains, but what we didn't expect was the complexity of the ethics approval. All I can say is that it took us half of our project time, and we didn't have a full-time researcher, so we are a little bit behind in delivering things.

 

                            But it is just the start, with some nominated stakeholders from the STN. It is not necessarily a military application, but what is being provided comes from the BSL industrial partner. So we are going to approach the stakeholders with different kinds of tools, from surveys to interviews and focus groups. And finally we are aiming to have a joint workshop to see how awareness of the technical solutions can impact the stakeholders' level of trust.

 

                            And this is something I will take the opportunity to talk about now; in the introduction I said I would leave some of the details until later. This is the most interesting part of our project to me, and it sounds quite novel to us: lots of work has been done in terms of verification and safety techniques to make autonomous systems safe, but what we are doing in VESTAS is looking at how raising awareness of existing technical safety techniques can affect the level of trust of the stakeholders.

 

                            In VESTAS we are exploring different approaches to raising awareness, and one of them is public engagement. With public engagement we are aiming to involve the stakeholders at different levels of engagement. One of the identified challenges in our project is effective communication with non-expert stakeholders, and we are planning to design the public engagement activity so that stakeholders with any level of knowledge can engage with it. And the activities aim to raise awareness about the techniques.

 

                            And then we will redo all of the assessments, the interviews and surveys with the stakeholders, to see whether this new piece of knowledge we provided to them affected their level of trust or not. So yeah, that is the core of what I can talk about.

 

Sean:                  It is so difficult to quantify, isn't it, the idea of trust? I mean, you can do surveys, you can ask people to rate things on a scale, but everybody's scale is different. Benedict, your project was to do with perhaps having workshops and things like that. Is this something you are challenged with as well, rating the level of trust?

 

Benedict:          So I guess we have a different definition of trust as well, which is another issue. And why we are having this project is that everyone's approach to trust, and to the factors of trust, is very different depending on people's expertise. For example, the people I am working with are from an ethics background, a philosophical background, so they will have a different approach than the computer scientists, which is what we are in my node.

 

                            So because of this we try to compare and merge the ideas we have in common about what we think trust is. For example, one of our perspectives on this is that trust in a machine is a very simple vision of it; there are different things that people trust when they are using an autonomous system. And one big factor is the people behind the autonomous system, more than the tool itself: who owns the system, who is proposing it to them and who they are using it with. It's the interactions of the people around them that will impact trust more than trust in the tool.

 

                            But there are other factors that are inherently related to how efficient the tool is, how transparent it is, how people understand it and how they interact with it. So there are all these different categories, all these different types of trust, I would say: one is more related to the dynamics around the system, and one is more about the reliability of the system and how confident people feel using it.

 

                            There are many, many works around this, and trust factors are still studied a lot. For example, I have studied the link between explainability, transparency and trust, and it is not that simple. Sometimes just the fact that a system is accurate and working well makes people trust it more than if they understood it or it was transparent; they just want something that works well. So it also depends on the context.

 

Sean:                  It is a strange one. I mean, my go-to example of this is I buy a second-hand car and the more journeys it completes the more I trust it. When actually, counter-intuitively, the more journeys it completes the more likely it is to go wrong at some point as parts wear out or whatever.

 

                            I mean the other thing is you kind of think, you touched on this idea of trusting something because you have good experience of it or trusting something that is not going to be doing something, I don’t know, behind your back or doing things you don’t understand. It is such a tricky thing to kind of put into a box almost isn’t it?

 

                            Back to the idea of kind of choices, that again can be a difficult thing because you know we have got the idea of choosing yes or no. Or then more nuanced questions like, you know, multiple choice through to things that we are seeing, you know, these big projects like ChatGPT which you can ask it something and it will choose to write an entire essay for you if you like.

 

                            How do we look at choices and, what am I trying to say, how do we rate a machine's ability to make choices? Is there something you can build into your projects about whether the choices are the correct choices at the right time? How do you do that?

 

Sachini:             As far as the project is concerned, we have constrained ourselves: the scenarios that we are going to simulate in our experiments in the Leap of Faith project are all founded on the machine being able to make a good choice. So we are running on an assumption which is wrong, and very restrictive, when it comes to deploying these machines in the real world.

 

                            Our hypothesis for the second project is that the decision the machine is going to make when interacting with the human will always be the right decision or the good choice. And as we talked about, as Benedict brought this up a little while ago, there is always this temporal aspect of a good choice. 

 

                            As far as the Leap of Faith project is concerned, we are looking at making instantaneous good choices, in the sense that the human-robot interactions we are trying to simulate are very short in duration. The robot approaches you, tells you to do something and you do it, and like I said, the assumption in that scenario is that what the robot is telling you to do is the right thing.

 

                            And based on that assumption we are further looking into expanding our project to the ethical implications of such choices. Because imagine that at the end of this project we will know the right blend of capabilities a machine can have to instil trust in a person, which will result in the person doing something the machine asks them to do.

 

                            And we talked about this a little while ago: there is a set of measurements of trust that we can capture when an interaction takes place, besides the subjective ratings we get from surveys and other qualitative measurements. And compliance is one such objective measurement: when a machine tells people to do something, do they do it.

 

[00:30:14]

 

                            And it has been a long-established objective measurement in the trust literature that compliance equates to trust, and as we have found out in our simulations and experiments, that in and of itself is a very muddled concept. Because given the context, given the scenario, your compliance may not come strictly from trust. There could be a hundred different, a million kinds of things, intrinsic to your personality, intrinsic to the situation, intrinsic to the robot itself that is trying to convince you, that will cause you to comply with something.

                            

                            So even measuring what is referred to as an objective measurement in a trusted interaction is challenging, and it is kind of what keeps me going in this project. I want to investigate further to see how we can isolate the specific traits that point us towards trust and take us away from all the noise that surrounds it. I think that's an interesting thing.

 

Sean:                  I am also imagining, particularly in scenarios like the one where the autonomous boat has turned up to try and save you, the sailor thinking, well, I will give it a few more minutes, I will give it a few more minutes, I am not sure yet; okay, the water is rising, the water is rising, now I am going to comply, now I will get on because I don't have a choice, right. So yeah, sometimes the nuance is stripped away by the data collection method, right?

 

Sachini:             Yes that’s right. That’s absolutely right.

 

Sean:                  If we are talking about, you know, responsible research and innovation, is that built into all the projects here? Tell me about it. Shall we just go around the room, as it were?

 

Asieh:                Yeah, so because in VESTAS we are working with stakeholders, we are cautious about following the RI rules in terms of being inclusive, to cover different categories of stakeholders, and also being contextual, if you like, in covering different applications. To be honest, the RI aspect is covered more by our social science researcher than by the computer science side of the project, but I have heard that they have a framework for approaching the stakeholders.

 

Sean:                  Benedict how is RI being addressed in your project?

 

Benedict:          So the people working on my project are from the responsibility framework project; they work on making responsibility frameworks. So their contribution to the project is to see and study how responsibility frameworks can impact trustworthiness frameworks and how they can be intertwined. Especially in terms of medical applications, responsibility is a massive question, probably one of the most important, and because of the regulation around it, it is very important.

 

                            And because we are looking especially at this kind of application, we do consider responsibility as a factor of trust and something that should impact every approach that we are thinking of.

 

Sean:                  Thank you. And Sachini tell us about the RRI in your project, Leap of Faith?

 

Sachini:             RRI matters specifically because of the nature of the project: at the end of the day, the products of this research will inform the research communities and the general public about how trust can be created with the right blend of physical attributes of an autonomous machine, and that's a scary thing.

 

                            Because of this very issue, we have taken into consideration how we design our experiments for our research purposes, as well as getting other stakeholders involved in our project tasks early on and making sure that they are involved throughout the lifecycle of the project.

 

                            And we found the RI tools that the Trustworthy Autonomous Systems Hub produces, specifically the prompt and practice cards designed for facilitating RRI conversations within a team, to be very useful in shaping our research and in being mindful about where the risks are in the research process itself, as well as moving forward when the research eventually goes out to the public.

 

Sean:                  I appreciate you had the choice as to whether to join us today and appreciate the listeners of this podcast have the choice to listen, so thanks for listening this far.

 

                            Hopefully this has been interesting, well, it's definitely been an interesting topic for me, to look at choice and AI, and these projects have really been some good ways to illustrate it. So it only remains for me to say thank you for joining us today on the Living with AI podcast. Thank you, Benedict.

 

Benedict:          Thank you very much, that was a very interesting topic to talk about.

 

Sean:                  Thank you. Thanks Sachini.

 

Sachini:             Thank you for having us, and I want to leave a short message for everybody out there: we are presenting our research at the King's AI Festival on the 26th of May and the 7th of June. If you are around King's College London, visit us and talk to us and we will be happy to receive you.

 

Sean:                  Take a leap of faith and go and find out more. Thank you very much also Asieh for joining us today.

 

Asieh:                Thank you for providing this opportunity, I really enjoyed that. And also I think I can take the new dimension of trust to our project board.

 

Sean:                  If you want to get in touch with us here at the Living with AI podcast you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited. Our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.