Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 3, Episode: 6
Equality and Autonomous Systems
This is a projects episode where we discuss four TAS Hub projects related to equality:
Guests featured on this podcast:
Trustworthy Accessible Robots for Inclusive Cultural experienceS (TARICS) - Marise Galvez Trigo
Reimagining TAS with Disabled Young People - Lauren White
Intersectional Approaches to Design and Deployment of Trustworthy Autonomous Systems - Efpraxia Zamani
TAS Benchmarks Library and Critical Review - Peta Masters
Podcast production by boardie.com
Podcast Host: Sean Riley
Producers: Louise Male and Stacha Hicks
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: This is Living With AI from the Trustworthy Autonomous Systems Hub. AI is everywhere right now and only seems to be growing in application and abilities, but how far can we trust it? This podcast is here to discuss AI and our subject of trust and trustworthiness. It is season three of the podcast, so there are a couple of seasons of episodes for you to binge on; search TAS Hub or check out the show notes for links. If you are a regular, thank you for your support.
We are recording this on the 25th of May 2023. And today it is a projects episode which means that we get to hear from some researchers on some of the TAS Hub projects and hear them discuss some of their findings and the challenges they came across.
Each of them is going to introduce themselves in a moment. And we will hear after that about the projects they have been working on and discuss how AI is levelling things up, hopefully, with our overarching theme of TAS and equality.
So please welcome to the room Peta, Marise, Lauren and Efpraxia. Just because you are on the left of my screen Peta I am going to come to you first. What’s your name and what do you do?
Peta: Yes I am indeed Peta. So I’m Peta Masters and I am a research associate at the TAS Hub, so I am at King’s College London. And what I am working on at the moment is this benchmarks library for the whole of TAS. And the idea is that we put together all the kind of use cases that people are working on across the network, and I am working on some of the other projects at TAS too.
I don’t know what else you want to know about me. I came to this quite late. I got my doctorate in Melbourne at RMIT University. And I used to be an actor and a writer. And I could see that AI was going to be pretty important and it seemed to be, the decisions about it all seemed to be being made by young men. And I thought it is time for a few older women to get in on the act so here I am.
Lauren: Hi and thanks for having me. So yeah, my name is Doctor Lauren White and I am a lecturer in social research methods based in the Sheffield Methods Institute at the University of Sheffield. But I was the lead researcher for a TAS Hub funded project which was entitled Reimagining Trustworthy Autonomous Systems with Disabled Young People.
Efpraxia: Thank you very much for having me. My name is Efpraxia Zamani. I am a senior lecturer at the Information School at the University of Sheffield. And I am here today I suppose because I am one of the co-investigators for the TAS funded project called Intersectional Approaches to the Design and Deployment of Autonomous Systems.
Marise: Thank you for inviting me. My name is Marise Galvez Trigo. I am currently lecturing at the School of Computer Science and Informatics at Cardiff University, and until very recently I was at the University of Lincoln. I have been involved in TAS Hub activities since the beginning really, having been a researcher at the University of Nottingham. And all this time I have been focussing on human robot interaction, accessibility and trust around it. And today I will focus a bit more on a project that has just ended where I was the lead, and the project was called TARICS, Trustworthy Accessible Robots for Inclusive Cultural experienceS.
Sean: Excellent, thank you very much, thank you all of you for joining us today. So we are going to get a bit more detail on all the projects and in fact we will stick with you Marise. So you can tell us a bit more about TARICS then? What is the, yeah what was the challenge and how did you go about it?
Marise: So the main idea for the project came about because this robot called Lindsey had been deployed at the Museum of Lincoln for a number of years, I think since 2018 actually. And it had been used by many different people, but we found that it wasn’t really very accessible, especially if we are talking about potential visitors to the museum that were autistic or that had learning disabilities or learning difficulties.
What we wanted to do is co-design with end users that were autistic or had learning difficulties or disabilities what these robots that were going to be deployed in cultural environments should look like, how they should behave, and to see if they would change anything from the current robot. We presented them with two different robots, Lindsey, which I have mentioned already, and a telepresence robot, and really that was what we tried to do.
I can talk a bit more about what we found when you ask later.
Sean: Okay fantastic. And so maybe we will just carry on in reverse order as we were then, Efpraxia can you tell us about the intersectional approaches to design and deployment of trustworthy autonomous systems please?
Efpraxia: Ah yes of course. So in our project, I will call it ITAS for short, with this project what we were trying to do was essentially look into how we could translate and operationalise intersectionality into the design and deployment of trustworthy autonomous systems. And we focused specifically on the maritime sector, where these systems are actively used and considered but inequalities are not always meaningfully addressed.
And I can say a few more things about intersectionality. I suppose, using intersectionality as our framework let’s say, we went into the field with a view to uncovering how institutional inequalities, say experiences of discrimination and potential disadvantages, arise based on how multiple aspects of a person’s identity, so basically their gender, their ethnicity, disability, neurodivergencies and so on, all come together at one time, at one place and create inequalities for that person, that intersectional identity.
Sean: Great thank you for that. Lauren tell us about reimagining TAS with disabled young people please?
Lauren: Okay, so I guess basically the sort of premise of the project came from the fact that, you know, TAS is becoming sort of more and more prominent, more and more talked about. And you know they are often being positioned as designed for people’s futures, whether it’s in education, employment or people’s social lives. But often these technologies-, and I think it’s important to kind of sit with the definition of TAS as well, because I think as a project we have kind of tried to find ways to understand that in an accessible way as well.
But you know these technologies don’t often include the people that they are creating them for in the conversations and the process and this is especially important for disabled young people.
So our project basically wanted to kind of celebrate and kind of bring in disability as the kind of driving force of the conversation and the research agenda. So we wanted to position disabled young people not as end users but as kind of co-researchers and co-designers of technology and of TAS.
So we have got an interdisciplinary team for this project so we have got social researchers, so that’s myself as a sociologist, Professor Dan Goodley who is in critical disability studies, Kirsty Linyard who is also in disability studies as well. And then we have got computer scientists, law, various different members of the team from the University of York.
And then we have got our partners, Green Acre School, they are a special school, and we have nine student co-researchers who form part of the project. We also worked with the Sheffield educational team Making Futures, who hopefully I will be able to talk about in a bit more detail later on, and the Advanced Manufacturing Research Centre and Sheffield Robotics.
So we have had a lot of moving parts of the project, but mainly I think the important thing to take away is, you know, our nine young co-researchers who kind of very much led and steered the project. And I guess I am here to kind of centralise their voice today as kind of contributors and designers and makers for this project.
Sean: Thank you that’s excellent and great to hear about that integration of those young people in the process. Peta can you tell us a little about your role and what’s been going on with the projects you have been involved in?
Peta: Sure. So the project that I am working on, that I am PI for, is this benchmarks library. And the idea of the library is basically it’s to support the rest of TAS in a lot of ways but it has got two aspects to it now that I have realised. It started because when I joined TAS I wanted to start to do research, like everybody else, and we wanted to look at what would, we had certain ideas about what would make an autonomous system more or less trusted or trustworthy. And so we thought we need a use case, we have got these ideas for what we can do but where is a use case that we can try it out on and then test it with people?
[00:10:08]
And we started to ask around you know, where does TAS, where are all the use cases? What is everybody using? And the answer came back well you will have to look in the papers because there is no repository, you just need to ask, just find out or make one up. And we realised that on the whole that’s what everybody does is you have an idea for something and you want to test it so you make up a way to test it.
Which is fine, but the idea of the library is, no instead of doing that let’s put them all somewhere and then the good ones, the ones that are transferrable we can share and different people can use them. And then they acquire the status of benchmarks by being reusable in that way.
So what that means, and this is the sort of, so it’s for researchers essentially, the library is for researchers and for them to share their use cases and to critique them. There is the opportunity to critique them as well, first of all in the process of being built, and that is where a lot of my energy has gone, into trying to make that happen.
But once they are there, what you get in there then is this little story, the story of the robot did this and the person did that and the consequence was this. So you then have something that’s quite interesting, I have realised or I hope, for people outside research, outside AI, the general public, to be able to look and see, oh here’s this, this is what cutting edge research in AI and trustworthy systems is using to do its testing.
And it’s very accessible because it’s non-technical, so you haven’t got to wade through a scientific publication to find out what they did; there it is, this is what they did, anybody can understand it. And so I have realised that there is a big piece here that we can then promote to the general public, to, you know, come and take advantage of the transparency of the resource.
I have got two angles in coming on this podcast. One is across TAS and people researching trustworthy systems: please, once this library is out there, please contribute your use cases. And the other is to the general public, hoping that we reach them: please take a look and see what you find.
Sean: And what form does that take? Is that a website or how will people interact?
Peta: So yes it will be a website. And it will be completely open access from the point of view of reading and searching but then you will register to upload use cases or to add commentary to cases.
Sean: And one thing I would ask about that is what happens when TAS finishes? How does that, is it going to be a lasting repository? How will that work? Sometimes when projects or research projects come up with an idea and they fund a website and then the moment the project finishes it seems to disappear or, you know. Will that continue?
Peta: Yes, what a good question. So yeah, that is something that I have to work out during this coming year, and part of my programme will be to try and find that. So I’m not on faculty, I am a research associate. I am employed by TAS Hub for the duration of TAS, so I need to, there are a couple of possibilities.
I need to find somebody who is on faculty, who is interested enough to take it over, somebody within the network who is interested enough to take it over. Or find a way, as we develop community, for the community to begin to take it over themselves. So I imagine there will be quite a lot of curation of what comes in, because everything that comes in will have to be moderated. But it may be that, if we can get enough of a sense of community, we can find people to take that on. But the plan is definitely that it should exist well beyond TAS.
Sean: Excellent stuff. Just one thing that kind of as an open question, when Lauren was talking about her project we were talking about how the young people were integrated in the process and part of it. And I know that in research obviously there is a concept of responsible research and innovation which TAS has always tried to embed in the beginnings of projects and not just bolt on afterwards.
How do we approach the kind of inequality, you know, inequalities can come from neurodiversity to physical difficulties through to ethnicity, you know, there are so many things that can cause inequalities. How can we ensure that those are all built in? This is an open question but I am going to go to Peta first because you look like you might have a bit of an answer, but I would like to hear everyone’s thoughts on that.
Peta: Well for our project I suppose it begins with the team, so you aim to have a bit of diversity within your team. You can’t always control that because you are not necessarily hiring and firing, so there is that. Even if it’s not quite there you can usually find advisors to add in for that.
And then we’ve had a lot of, we have done a lot of work to try and get input from stakeholders. From a wide range of stakeholders which are largely initially our fellow researchers within TAS but then as we go out to the general public, we hope to get that feedback. I can see a lot of people have got a lot of things to say, I will shut up.
Marise: From our perspective I suppose we found out that we had to be very adaptable and open minded and reflect a lot throughout the project. Because it may be that you thought that, oh, we can do things this way, and then you need to trash everything and start over because the way you envisaged things couldn’t work for those people you want to work with. And you want them to be able to access that technology in the end and contribute to the co-design and everything.
So that is the takeaway that we have, that you have to be very reflective and open minded.
Sean: Thank you Marise. Efpraxia?
Efpraxia: Thanks. So in our case what we found was that it was very important to give voice to the stakeholders that we had previously identified with the help of our external partner. And I suppose this opportunity, this, well it’s not really an opportunity it was our responsibility to give voice to the stakeholders.
This was really something that they considered really valuable because not only were they able to express their concerns, the issues they had faced in the past for example, but also, and more importantly for us as researchers as well, giving voice to stakeholders was an opportunity to go back to the basics of our project, our co-design process.
And co-design, or co-producing or co-creating anyway, of project outputs but also of research designs, is for me at least the crux of including people who were previously, not necessarily excluded, but not actively included in research and similar projects.
So I think being able, from the start of the project, to talk about what people want to say, what the stakeholders want to see, what they want to use or what they need, is a very direct way of making sure that they are included and that the project outputs, or the outputs more broadly, whatever these are, will be of value to them.
Sean: Thank you. Lauren?
Lauren: Yeah I mean I think just sort of echo much of that. And I think for us, you know what is essential is, you know, not thinking about end users or engaging with stakeholders like at a particular point in the process but actually co-production as sort of very much front and centre to our project and to kind of inclusive research more generally. And by that I mean kind of involving, in our case, nine young co-researchers at all stages of the research process.
And the special school that we worked with they were involved in the actual bid writing as well and the project design. And then thinking about the methods and the kind of, and picking up, you know, being flexible with what you do and letting co-researchers lead and shape that. And thinking about dissemination so every single stage of the research process has co-researchers involved in it and they shape that. It’s not, you know, it’s not the academic researchers who are, you know, setting that agenda necessarily.
So I think it is always working to kind of redistribute sort of power relations and think quite critically about how we can make sure that everyone is involved and included at all stages of the research process.
[00:20:06]
And I think as well one of the things that I wanted to highlight as well is working with our nine co-researchers, our young co-researchers, you know, the activities that we sort of worked through together, you know they very much kind of offered really philosophical perspectives on trust. We asked them what they thought trust was, and maybe we will come on to that a bit later about that question of trust, but they also fed into the design and thought about design.
And then with the team Making Futures, they made stuff, and I think it is actually about being involved in thinking through and imagining what these concepts that we talk about actually mean and how people might perceive them differently. So thinking through the kind of perspectives and philosophy of what we talk about, thinking about the things that we design and the production of it and the making, you know, it’s important that we include people in all stages of all of those different things, so yeah.
Marise: I just wanted to echo the importance of what Lauren said about involving people that should be benefiting from the research, and from whatever it is that you are doing in your project, from the very beginning. Even from when you are designing the bid, the idea. Because that doesn’t happen in many projects and I think it’s vital.
Peta: Marise talked about being flexible and open minded, and our project looks like it is just, oh, it’s just software development, what can possibly go wrong? But we did actually, as soon as you open up the question of what actually is a trust related use case scenario with the potential to be a benchmark, we realised that what we arrived at was actually different from our stakeholders, which are the research community. And we also found we are having to do quite a lot of thinking on our feet to make sure that we really do meet those needs.
Sean: Did you encounter any concerns or worries from people about trust in this kind of area? I mean there is quite a spread of different projects here, so I appreciate that the idea of equality is difficult to kind of quantify, but what worried people about kind of trust in this AI sort of space in the projects that you have worked on?
Lauren: I mean I don’t know if this is directly answering your question but we had to really break down what TAS is as a way of introducing the research project. And one of the ways that we did that is we asked what is trust and do we trust technology? And actually you know we had some really insightful and kind of really meaningful contributions on what we actually perceive trust to be as humans.
But one of the things that, well I guess there are two sort of key points that we talked about, and it sort of really centred away from do we trust the technology and do we trust TAS and how do we make it trustworthy. So there were two kinds of things really that young co-researchers highlighted, and the first was them being trusted to have a say in the process. To be involved in the research and to be kind of autonomous beings who have something to say or might feed into designs and have ideas about what these technologies might look like in the future.
So that really flips that on its head, about what trust is, you know, as a definition. So I think it’s not only just thinking about do we trust this technology, but who are we trusting and involving in the process when we have these conversations in the first place?
But then also another thing that they talked about was trust and how that comes about through meeting the designers and meeting the people that are involved in the creation of these technologies. And again emphasising that kind of, I guess, those personal relationships really of, you know, we want to know who is making them and how we relate to them. So I think they are really important sort of contributions when we think about trust and technology.
And I guess another thing that I would say is, you know, some of the things that we encountered in the project. So we went on a few sort of field trips with our young co-researchers and often the technology didn’t actually work or didn’t pick up our young people’s voices and things like that.
So I think it is worth thinking about kind of what if they don’t work or what if they go wrong and how does that feed into trust, especially when they might be designed without disability in mind as well. So I think it is important to think about that and to consider what might be missing in design and how that might feed into trust, especially if technology goes wrong or lets you down.
Sean: One thing I was thinking is that people with health conditions or impairments are often using a lot more technology than somebody who doesn’t have those considerations. So perhaps, yeah, more experience in a way with these things. I am aware some other people want to speak though, so Efpraxia do you want to?
Efpraxia: Yeah thanks. So in our work we didn’t focus exclusively or let’s say explicitly on trust. We focused more on how we can make, how we can design TAS systems specifically for the maritime sector in a way that is inclusive and so on.
And what we found out when we sort of entered the field and entered the conversations with stakeholders and participants and the maritime agency was that trust for the maritime sector is something different to how we perceive, how we would generally perceive trust. So for them trust is more about safety and security, making sure that the technology will work in a pre-defined expected way.
And obviously you mentioned earlier that the technology may break down or it may breach the expectations of, let’s say, the end user, with user being in air quotes. And they have ways to solve this issue, they build in redundancies essentially, they have redundancies for every system, for every person working in this system, so they sort of solve this issue like that.
But then when we started unpacking the trust concept on the basis of being trustworthy, being a system that is fair for all involved, inclusive, respecting diversity, then we started seeing that there are, not tensions in the sense of conflict, but tensions between how these things are understood from the perspective of the designer, from the perspective of the operator, from the perspective of the regulator and so on.
And in many cases, trustworthiness, or perceptions of it, and diversity and all these concepts we were working on on the basis of our intersectionality approach, tended to be included in the design but it was a very silent approach. Like a tick box exercise, let’s say. So there were designers, for instance, thinking okay, but I have made this collar to be smooth or whatever so I am respecting neurodivergencies, I am following the guidelines of my sector. But then there was no consideration of who the operator of that system would actually be.
There were assumptions there that it would be an able bodied person, probably male, probably white, without any disabilities and so on. And that caused discrepancies in how the system would actually be used, right. And that left it open for the system to not be trustworthy anymore if we changed the operator of the system.
Sean: Yeah it’s a classic problem with siloing isn’t it where, well there are multiple problems, you know. You have got terminology problems where some people use the same words for different meanings or in different contexts. And then you have got, as you said, you know that idea of designing something with yourself in mind effectively often rather than the bigger picture. Marise?
Marise: Yeah, so I guess we did explore trust. It came across in different ways, so we found out that one important factor was contextualising, so we ran several workshops and one of them was at the special school with not much context. We brought in a telepresence robot and we ran a workshop there with some students from the school, and we told them that the robot was supposed to be deployed in a museum. But because when they were interacting with the robot they didn’t know exactly what to expect, what the robot would do, because they weren’t in a museum, some of them expressed that they didn’t trust it very much.
Whereas in the other workshop that we ran in the museum, where they were interacting with the robot in the museum and the robot was moving around and was enabling somebody else to access the museum remotely, we observed that they were much more comfortable when interacting with the robot. They started to talk a bit more, and maybe all of them wanted it to have a humanistic voice, they didn’t like it having a robotic voice. Most of them said that it should be dressed for the context, depending on the museum it should be dressed or have stickers or something based on the context, and that it would help them trust it more.
[00:30:21]
But also then we followed a bit of a conventional approach during the co-design workshops. We tried to take a questionnaire that is used very often in robotics called the Negative Attitudes towards Robots Scale questionnaire. And we had to adapt the questions to use with the people that we were working with because they are overly complex, I think. The people that worked with us, who were either teachers at the school or people who knew them very well, also agreed, so that really helped because they helped us adapt them.
But basically these questionnaires explore the attitudes that people have towards robots, and we used it to guide the co-design of what the experience of interacting with the robots should look like, etcetera. We observed that they wanted the robots to be friendly. They wanted the robots to be happy for them to trust them. They didn’t want them to really feel or be able to feel any bad emotions, only a few of them wanted that.
And it was in those cases where they were also feeling those emotions, and they definitely didn’t want them to be angry or be able to get angry because they didn’t feel that was something good to have in a robot.
And there is a very interesting question in that questionnaire that is about the robot dominating society. ‘So I feel in the future society will be dominated by robots’. And what we noticed is that whilst most of them were pretty comfortable with a robot making decisions on its own, they weren’t comfortable at all if the robot tried to enforce decisions that it had made that affected them directly and they didn’t agree with. Like for instance you can’t eat chocolate or you can’t go out with your friends or you cannot do this specific thing.
So that was very interesting, because suddenly some participants that were really open to having a robot as a companion, as a friend, as a tool at work, started to say, well, if the robot can make decisions and can lead me to the place or enforce them, I couldn’t trust it. I wouldn’t want it next to me or I wouldn’t want to have it as a friend, a companion or even as a tool at work. So it is very interesting how the way we design the robots, or what they are capable of doing, has a direct effect on how they are perceived, how they are trusted.
But I mean that’s the main takeaway I think. And they were all people that were either autistic or had some sort of learning difficulty or disability. In terms of the age range, we were working with teenagers but also with adults, and it was the same across all of them. So I found that quite interesting to be honest.
Sean: It’s almost like you know, this is a terrible analogy, but if you use a dictionary for getting information or a reference book for getting information you will trust it, but if it started hitting you over the head if you didn’t do what it told you then you perhaps wouldn’t trust it so much anymore. Lauren you wanted to speak?
Lauren: Yeah, I mean I think this is maybe a separate point in some ways and going in a particular direction, but I think one of the things as well that I wanted to highlight is building trust and relationships as the research team as well. And I think such a huge part of our research project is building trust and relationships with our community partners and co-researchers.
And I don’t think that should really be sort of like understated really as part of, you know, co-designing, co-researching that you have got to take time to build these relationships and to build trust. Especially if you are having conversations around these things that are potentially going to be really important in shaping society for the future and that affects people’s lives.
So I think, yeah not forgetting the kind of trusting relationships we have. And being trusted as university researchers is something that I think is also important and we mustn’t forget as well. I don’t know if that is a slight tangent but I just kind of wanted to raise it really.
Sean: It’s all trust. And then trust, the people building the systems need to be trusted as well as the people who are, and the systems themselves and the people who are running the systems.
Marise: I think it was very helpful that we had, joining all the workshops, people who knew these young people that were autistic or had disabilities very well, they had known them for many years. I have known most of them for many years because I had worked with them on other projects, and I think that really helped and it helped them open up a bit more. Criticising the bad as well as mentioning the good, because in previous projects I observed that sometimes people coming to a workshop etcetera try to please you as a researcher. Maybe they don’t trust you or they don’t trust the technology or whatever, but they don’t say that.
So building that trust between the research team, technology designers and during the co-design process is very important.
Sean: It’s almost ironic really, isn’t it, that people will try to tell you what they think you want to hear to build a trust or a connection, which is obviously the opposite of what you want in those situations.
Marise: Yes it is, but then those are products or research outputs that are less useful.
Sean: Absolutely, absolutely.
Efpraxia: We had similar issues in terms of trust and trusting relationships between participants and researchers and the research team. And like in the other projects that we have been listening to right now, in our case our participants were, you know, people who were either actively or passively being marginalised in the maritime sector in some way or another. So it was really important that we gave them space to talk about their experiences in a way that wouldn’t further jeopardise their position, let’s say, in the sector, right.
And we were talking about difficult themes like exclusion and issues they had experienced in the past, such as discrimination against them by others. So the approach we took was that we brought them together around a serious game workshop essentially, so what we told them was, well actually, okay, we are doing research but we are just going to play a game.
And they could use this environment as a low stake environment let’s say, as a low stake opportunity to relate their personal experiences to that of other people participating in the same workshop who were similarly disadvantaged or discriminated against. And that sort of empowered them to develop, so we developed together with them the solutions to the issues they were experiencing but within the context of a board game essentially so it was easier for them to talk and open up.
Sean: That raises something I wrote down quite early on in this conversation which is context, okay, so context is so, so important for the systems but also for the research. And disarming people by making them play a game and yet actually getting some interesting stuff out of it is a good bit of context to use I suppose.
Peta: I am rolling back a little bit to your question about problems with trust just more generally. Because the big issue that I see, and it didn’t necessarily come from outside, is this thing of, when Marise was talking about that particular questionnaire and all those findings about what it is that persuades people to trust a system.
There is this great danger, when we do this research, that we find ways to persuade people to trust systems which then aren’t necessarily trustworthy. And we are trying to sort of lock that in, you know, raise that issue within our library to make people, or persuade people, to think about that, but it is always going to be a problem.
And my background is in deception and deceptive AI so I am very aware of this idea that you, if you can persuade people to trust you or something, then that can be the first step to persuading them to part with their money or you know whatever the next step is.
So there is always that danger, and yet. And this finding that we don’t like being told what to do, that your participants were happy to trust as long as they weren’t being told what to do, relates to another project I am working on, a leap of faith project, which is all to do with instantaneous trust. And it is exactly on that point, how do you persuade people to trust a system that they have never encountered before when it does need to tell you to do something? For example, there is a fire, this is the way out, follow me, so there is a problem there.
[00:40:00]
We are looking at how we persuade people to trust without knowing anything, and yet if you don’t know anything and we persuaded you to trust it, well then you could be in big trouble.
Sean: I suppose equally the context is key there. If you have got choices of there appears to be a fire and there is something here offering me salvation then fight or flight kicks in doesn’t it?
Lauren: And I guess one of the things, so a huge part of that project was working with a team called Making Futures and they sort of run an educational programme that encourages children and young people to sort of build and create and develop skills that sort of bring arts and kind of science together.
And so we did these sort of make space workshops to sort of think about how we might create our own TAS or our own sort of robotics. And I think it was just such a crucial part of the process of co-producing knowledge together and co-designing, but also for our young co-researchers to imagine themselves as users and makers and designers.
So I think that kind of creative process and the work that Making Futures do, I just want to kind of promote their work really. It is just such an amazing way of thinking through these kinds of conversations on technology and how you might get young people involved in design and making.
So yeah I just wanted to give those a bit of a plug if that’s okay?
Sean: That’s fine, yeah. Some of these organisations, I mean I went to a conference once and they were using this Lego learning system. Some of these ways of getting people to just use their brains in a different way just by making and doing is always really good stuff to see. Peta?
Peta: In a way that’s, when I first kind of introduced myself, I said I saw all these people working in AI and felt that I needed to be involved in this. And it’s the same, for young people it’s similar, there are all kinds of marginalised groups I suppose that can be involved. I think of all those groups of people that, you know, if you use the Moral Machine, that automobile thing, all those groups of people that you are encouraged to knock over, they have a stake in all this too.
Sean: That’s just about all we have time for today on Living with AI so many thanks for joining us today, Marise.
Marise: Thank you very much for inviting me. I also wanted to say thank you very much to the NICE group, which is a group of researchers in autism and learning disabilities and it’s the group that worked with us during the project, Alford School in Nottingham, and members of the University of Nottingham and of course the University of Lincoln.
Sean: I feel like I am at an awards ceremony, fantastic. Efpraxia?
Efpraxia: Thank you very much, it has been a very interesting conversation. From my side I would like to thank obviously the Maritime and Coastguard Agency, our partners in this project, but also the participants across the numerous workshops we had, who were eager, willing and able to share their own experiences, which was very important for us and also illuminating, let’s say.
Sean: Thank you. Thank you Lauren.
Lauren: Thank you and thanks for having me involved. And yeah, I just want to kind of say thank you to our nine co-researchers, young co-researchers at Green Acres Special School, our academic team both at the University of Sheffield and the University of York, Sheffield Robotics, the Advanced Manufacturing Research Centre and Making Futures. And if I can plug our project animation as well, it’s available to view online. I can send a link, so please do go and watch it because it summarises the project in a fun and creative way, so yeah, I would like to direct people to that, so yeah, thank you.
Sean: I will put the link in the show notes. And thanks last but not least Peta, thank you.
Peta: Yes thank you for having me and I found it really interesting this whole conversation. And of course I would like to thank every researcher in TAS, the whole community and beyond, especially all of those who will be contributing to the benchmarks library.
Sean: If you want to get in touch with us here at the Living with AI podcast you can visit the TAS website at www.TAS.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited. Our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.
[00:45:10]