Living With AI Podcast: Challenges of Living with Artificial Intelligence
Early Career Researchers in the TAS Hub
We're rounding off this season of Living with AI with a bonus episode! We'll meet three Early Career Researchers (ECRs) who have benefitted from a TAS Hub grant. We'll find out what they did with the grants, how the money benefitted them and what legacy TAS leaves.
ECRs:
Andriana Boudouraki specialised in telepresence
Balint Gyevnar specialised in explainable AI in applications such as Autonomous Vehicles
Eike Schneiders focussed on the role of responsible AI in interpersonal relationships
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Stacha Hicks
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 4, Episode: 13
Episode Transcript:
Sean: Welcome to Living with AI from the Trustworthy Autonomous Systems Hub, or TAS Hub. Today we have a bonus episode, where we’re going to meet a few of our early career researchers, or ECRs. My name’s Sean Riley, and we’re recording this on August the 6th 2024.
This is, as I said, a bonus episode to showcase our ECRs, and what we’re going to do is get a brief introduction from each of them and then we can have a conversation about what TAS has meant to each of them. We’ve got Eike, Andriana and Balint here. So, Eike, tell us what’s your name, what do you do?
Eike: Hi, Sean, my name is Eike Schneiders, I’m a transitional assistant professor at the Mixed Reality Lab at the University of Nottingham. And broadly speaking I investigate human computer and human robot interaction, which, of course, also brings me to the Trustworthy Autonomous Systems Hub.
Andriana: Hello, hi, yes, my name is Andriana Boudouraki. I am currently a research fellow, also at the Mixed Reality Lab at the University of Nottingham. And I’m also a human computer interaction, human robot interaction researcher, focusing also on telepresence technologies and technologies for communication.
Balint: Thanks Sean for this podcast. Yes, I’m Balint Gyevnar, I’m from the University of Edinburgh, I’m a PhD student there. I’m working broadly at the intersection of Multi-Agent Systems, natural language processing and cognitive science to develop explainable AI systems for autonomous agents.
Sean: Excellent stuff, brilliant. Thank you, all of you for joining us today. We’re going to start then with Eike, can you tell us about what brought you to TAS. I’ve got notes about a grant, can you tell me about your grant and what was going on there. What was the idea?
Eike: Absolutely. So, when is it, last June, I guess, so June 2023, and this of course goes for all of us, we’ve had the honour to be awarded the UKRI TAS Hub Early Career Researcher Award, which came with some money that we could spend more or less however we saw fit. In my grant application, I suggested that I would like to use the money in order to widen my academic research network, i.e., I would like to visit labs, give lab talks and start basically future collaborations. And that is broadly speaking what I’ve done with the money in the last year or so.
Specifically I used it to travel to the Human Robot Interaction Conference in Boulder, Colorado. And during this trip I’ve met new collaborators, which has now resulted in one paper under submission with academics from all over the world that I’ve never worked with before, so that’s great.
Furthermore, a follow up trip to that was a trip to Austin, again, to engage in conversations with researchers from all over the world, focusing on robot encounters in the public space. So, another crowd of human robot interaction people if you will.
And then a few months later, which was this May, I had a trip to Boston in order to give a lab talk, an invited talk at MIT in the interactive robotics group, which now also has resulted in follow up publications that will hopefully be submitted for publication later this year, and yeah.
Some other stuff happened with the money as well, so some of it was used for participant recruitment, for studies, but the majority of it was absolutely spent for networking and striking up new collaborations.
Sean: Fantastic. Andriana can we just talk about what was your introduction to TAS and tell me about your grant monies?
Andriana: Yeah, so as I mentioned I mostly have been studying technologies for communication. So, one of these that I’ve been focusing on is robotic telepresence. So, this is basically video-conferencing but also utilising robotics to give people more capabilities. And, so within this field automation is slowly becoming more of a thing. Like, automating the things people do when they communicate through technology. So, that’s how I started thinking about TAS and that I should kind of become part of that community as well and learn more about TAS.
And, actually quite similar to Eike, I used my award to go to conferences and just connect with more people. Yeah, build collaborations, keep discussing things that I had already been doing in my research, but bringing autonomous systems more as part of that.
Sean: So, you physically went rather than using the telepresence to the conferences?
Andriana: I did, yeah. Unfortunately, it’s still- You know, meeting people in person and having discussions is, yeah, it’s very valuable.
Sean: Definitely, definitely. Balint, tell us about your introduction to TAS and the grant that you had?
Balint: Yeah, so being a PhD student I am really at the very start of my career. I, you know, started out like exploring what I was interested in. And I came into TAS through self-driving cars, so I started out with that. And, sort of, we realised early on that, you know, the motion planning in these vehicles is really sort of inscrutable, and we’re hoping that if you could explain how and why they do the actions that they do it could make them more trustworthy and more accountable.
And so my supervisor sent the TAS application my way and, you know, I thought it was like a really great opportunity to try to get some extra funding to do some actual testing with people. And, so I applied and that’s mostly what I’ve done.
So, I spent half of the grant award on recruiting participants and testing different explanations with them. And how we can calibrate that trust with explanations. And then I used the other half on travelling to New Zealand to the Autonomous Agents and Multi-Agent Systems Conference, where we have a full conference paper published on a system that we are proposing. And also a workshop paper on sort of the human subjects study results.
And it was an amazing experience, I’ve actually met another fellow PhD student from the University of Southampton and we are now trying to organise an explainable reinforcement learning competition based on that to try to actually create a baseline for these systems, because everyone is sort of going in circles not really building on each other. So, that’s sort of my experience so far with the TAS.
Sean: I looked at some of the details and one of the projects you’ve been working on, Cars that Explain, I like the idea of it, something explaining why it’s made a certain decision. I think that is really important.
Balint: Yeah, thanks. I mean we are really hoping that if we can do this right then people would be better able to judge whether the decisions of a system is good or not. And, you know, rely on that system accordingly.
Sean: It’s got to foster some trust, rather than going why is this thing doing this, which is kind of like a refrain you hear from people who are frustrated with technology. Why is this happening. I mean I had some problems with my tech just joining this conference. I had to turn the computer off and on again. If something had said it’s because of this, I would have been less frustrated, right?
Balint: Yeah, hopefully. But also we don’t just want to focus on sort of improving trust but also making sure that if the system misbehaves that people don’t just blindly trust it, right. Like, especially with these new chat applications, everything like that, I think it’s going to be especially important to make sure that we know when it’s not giving the expected results.
Sean: One thing that’s come up just from that initial conversation with all three of you, has been that you’ve obviously, managed to use some of the money for travel and for networking. And it strikes me that this is really important. Have you found this a real benefit to your careers? We’ll just go round the room and, you know, who wants to start with that one? Eike, just because we started with you, let’s go to you. Tell me about the benefits there of going and networking and meeting all these different people?
Eike: Sure, sure. So, I mean in general one of the, if you ask me, major strengths of an interdisciplinary hub to some extent, such as the TAS Hub is really that you are exposed to people who think differently than you. People who come from different backgrounds, such as your people from medicine, your people from psychology and anything else basically.
And being able to travel and having the autonomy of where you travel with that pot of money that we’ve each been given, has absolutely strengthened interdisciplinary collaboration because often you run the risk, at least that when investigating autonomous systems or AI based systems, that you end up in silos, think tanks if you will where everything happens within computer science and everyone else- AI is just something that happens to them. But this interdisciplinary nature, meeting new people from departments and from parts of the world that you might not otherwise engage with has been tremendously valuable, I think, to broaden and diversify the way we think about these systems and who they impact at the end, which I think is quite important, yeah.
Sean: And one of the things that sometimes gets brought up by computer scientists, you know mentioned computer science then, is that they are sometimes brought in just to make the software for the research, right. Is there anything new that you’re learning through these collaborations as a computer scientist or is that the case, you know, are you like oh well you can just do the software and we’ll do our research in whatever, in psychology in whatever the other disciplines are?
[00:09:27]
Eike: Yeah, yeah, yeah. Not at all. So, just to give one specific example on TAS Hub from the project that I’m currently on, which is the REGALS project. It’s a project at the intersection between generative AI, so traditional computer science, and a law department, and several law departments around Europe. And it’s very much a symbiosis if you will. We can’t investigate what we want to investigate from a computer science point of view without the legal expertise on board. But the same goes the other way around. So, we’re not just there to implement the study and crunch the numbers, we’re there to investigate what’s the impact of, in this particular instance, generative AI-generated legal advice, especially large language models. And they, of course, are interested in how does generative AI change the legal landscape. So, I think it’s very much hand in hand, and a true collaboration more than we need you to implement something, go.
Sean: Good, good, well that’s promising then isn’t it. I’m going to go Andriana next, and ask about some of the benefits but it strikes me that one of the things you’ve just mentioned there is this interdisciplinary kind of aspect of TAS and some of these collaborations you’ve been all working on. But your project specifically is communications and telecommunications. How is the communication? You know, there’s sometimes terminology differences or similarities, the same terms mean different things in different disciplines. How did you find that?
Andriana: It’s tricky. So, for example, I went to the HRI conference to speak as- The kind of technology I use also is robotic, but, there, yeah, people are more focused on robotics both from a technical aspect but also, yeah, interactions with robots, not so much interactions between humans through robotics. So, there was definitely, for me, the challenge of how to get people interested and show that- It’s also about between people, what do people do with one another. What technology is around them, in front of them, between them. So, yeah, there is always that thinking, not just where can I publish but where can I have the right conversations, or if I end up going somewhere how can I adapt the things I want to talk about to make sense to people. Because there is, for example, the CSCW conference where everyone knows about communication and it’s a different conversation there. I would have to make robotics relevant to that audience.
Sean: And are you having to kind of then translate all of that in to something that, you know, for publishing purposes or for your grant, you know, to do the work you want to do? You know, bring in all these different types of conversations and put them in to- I don’t know, I don’t know what the right phrase is, Rosetta Stone, something that is, you know, sits in the middle.
Andriana: Yeah, and kind of speak to the right audience or give them something from your resource that is relevant to them. But, it’s also I mean, it’s a challenge but I think it’s also good because it pushes you to also engage with different ideas. And expand your work. So, going to HRI for me this year, I attended a lot of workshops and did like shorter position papers instead of a full paper that comes from my own data. So, I reflected on previous work I had done to make it relevant to topics relevant to HRI. So, I enjoyed that because it meant I wasn’t just finished with my past work, I could continue to expand on it and bring it to a different audience. And for me to also see a different point of view, yeah.
Sean: Yeah and integrate it in to some of the newer stuff and some of the things other people- I mean it sounds like- Yeah, we mentioned a couple of challenges, but obviously there’s been a lot of benefit then, you know, to doing this and to going to these places?
Andriana: Yeah, yeah, absolutely. I mean with having that extra money where I don’t have to be so, like, selective in where I go, I also was able to go to a conference specifically on accessibility, which is not my area of focus but I think all of us should consider it.
Sean: Yes, yeah.
Andriana: But because I had the chance to go to that event, I just learned so much that otherwise I wouldn’t have from people from a different discipline.
Sean: Yeah, excellent, good stuff. And Balint, tell us about some of the benefits that the grant has brought for you, you know, what is it you perhaps wouldn’t have done without this?
Balint: I really have to echo everyone else’s opinion here about interdisciplinary work. I think it’s super-inspiring, and all too often computer scientists are just holed up in their ivory tower thinking about stuff, not thinking about the broader consequences. And I found that really, really exciting and engaging, especially in such a wildly fast-moving world.
I mean for me the ability to run these participation studies with people has been really interesting. I’ve got some comments on them and how different people thought about just, you know, interacting with these explanations and the [unclear 00:14:44] cause, and some of them expressed concern. Some of them expressed excitement.
And so I think this is one of the crucial points that maybe has not been mentioned that much yet, which is that, for me, personally, engaging with the lay-public as well is really important to me. And I started at a very personal level. I mean I talked to my parents, I talked to my close friends who are not experts in this field. And they always ask me, you know, about what do you think about generative AI, what do you think about all these autonomous things going around changing our lives. And you really have to think about how you phrase your words. How you express your ideas, because a single word can change their opinion in a very interesting way. And, so I’m always slightly stressed about how I talk about this. But I enjoy this sort of public engagement and interaction with the lay-public, a lot really.
And I think- You know, part of going to the conferences, and part of the money that I spent on going to conferences, has really shown me that there are people who are real experts in their fields but may not be as keen to talk to the lay-public, and I think that’s something I’m really trying to push. It’s like, talk to your friends, talk to your relatives, talk to anyone on the street if they ask you about AI, and try to educate them about these issues and about these questions, rather than leaving it to what someone on X, formerly known as Twitter, would say about it in the comment section. So, to me that sort of public engagement is really important.
Sean: I think that’s fabulous, I mean that was- Coming on to the next questions was going to be some of the outputs and some of the outcomes that you’ve found. And it feels like that’s a nice segue into that, because that is an outcome isn’t it, that is an output. It’s outreach, you know, get the word out as many times as I can there, but, you know. Have you found that the- I’ll stick with Balint here, but, you know, off the back of that did you find that that changed maybe how you published anything or, you know, the way you talk to people, did that have an impact on how you talked about it even within the discipline?
Balint: Yeah. Within the discipline whenever, for example, I go to a poster to talk about someone’s work, at the end I always like to ask, so how do you intend to communicate this to the public. And it’s really interesting to hear other people’s opinions. And there’s been a lot of discussion about that.
In terms of how I publish, I try to write in a clear language that perhaps has made- I wouldn’t expect, you know, people to, in their free time, download a paper from online and just like read it, of course, that’s not the idea here. But I try to- When I’m writing a paper I already try to think about the sort of clarity of language and the ideas that I put in to them and how I can then translate that later on in to like say a talk on trust in autonomous systems or just a conversation with my friends, for example.
Sean: Eike, how about you, what about outputs and outcomes, tell me about some of those?
Eike: Yeah, absolutely. So, in terms of engagement if you will, these are primarily engaging the academic community, as we do so much of. I gave a conference talk on this as well as an invited lab talk. In terms of publications, I think, not to reach the general public, the publication strategy hasn’t necessarily changed, but definitely since joining the TAS Hub in 2022, I have started thinking more about how can I have societal impact, such as the [s/l LM 00:18:31] project that I mentioned earlier, it’s ultimately trying to inform policy, which I’m aware it’s not directly- It can have a long term impact, if you will, that reaches the general public.
Then in addition to that, most of my projects now they include the general public from the very early stages. So, in this particular case we’re looking at how the general public uses large language models. So, being part of TAS definitely has changed the way I’m thinking about how to involve lay-people, how to involve non-experts because, as mentioned earlier they are ultimately the ones that have to live in a world where these systems exist. And, therefore, I think we should absolutely involve them during the conception of these systems.
Sean: Of course, and then they are the majority of course, you know, there are far more people who are not experts in this field than there are experts in this field.
Eike: Absolutely.
Sean: Andriana, what about outcomes and outputs that, you know, you’ve had as a result of the grants?
[00:19:35]
Andriana: Yeah, so it’s mostly been for me, also kind of small publications, so posters, position papers. So, I just finished my PhD, so I didn’t have new data to publish, it was just kind of reflecting on old work or presenting early work. I think Balint made a very good point that we need to engage with the public more and that is definitely something I need to take forward.
So, what I try to do is, at least, in my publications that address the academic audience, talk about people in a very kind of humanising way and just- Yeah, use straightforward, clear language and then just- Yeah, talk about people, describe things people do in just kind of plain terms and make that part of how we think about what happens when technology is used. But, yeah, going forward, I think it’s, as Balint, said important to engage with the public.
One way that I saw this being done recently was when I went to the accessibility conference that I mentioned. So, that was- It was actually the ACM summer school on, I believe, accessible and inclusive technologies. And they had, not just academics but a lot of attendees from local charities and kind of cultural venues, schools. So, people that don’t just do research on this technology but also use it or actually bring it to the public. And that was really nice to see them also present their work and their approach. And also it was a very good way to meet those people and build relationships so that we can collaborate in the future as well.
Sean: And I think often people who, you know- You mention accessibility, they’re often heavy users of technology right. They’re often using more technology perhaps than, you know, most of the general public. Are there any pleasant surprises, Andriana, you know, things that you weren’t expecting, things that you noticed that popped up that perhaps, you know, surprised you?
Andriana: I always find it a pleasant surprise when people are interested to talk to me about my work, whether it’s they see a poster or, you know, I present something at a workshop and then they come up to me. I find it very inspiring and it’s a very good reminder that we don’t work in isolation. And, fortunately, one of the big benefits of going to in-person events is that you have those kinds of interactions, whereas if you’re just reading a paper in your own free time, you might like it, but not have the thought of reaching out to the author.
Sean: Yeah, it’s difficult to bump in to someone at the water cooler with a telepresence robot, isn’t it, I suppose.
Andriana: Yeah, it’s so true. You would think just giving people movements through robotics, this is going to solve the problem but it’s so much more complicated than that.
Sean: Absolutely, a lot more sense is in play, I suppose. Balint, what about you, any kind of pleasant surprises or unexpected things that happened during the grant?
Balint: One of the things that was a pleasant surprise to me was when I was running the participation studies. People actually wrote in the comments section, or like in the comments of the study that, you know, they really enjoyed this survey. And that was to me really gratifying because, you know, you put in hundreds of hours in to designing these things and someone says hey this is cool and I’m like, “yes”. So, that was a pleasant surprise.
I suppose in terms of personal interactions, and what has been said so far, of course, someone coming up to my poster and talking to me is always really cool. And I always really find it humbling when people come up with new ideas or have some constructive criticism. Yeah, I suppose that’s the sort of pleasant surprise that I could think of right now, not much else.
Sean: That’s great, Eike, you know this question is coming to you now. Hopefully you’ve had a chance to prepare.
Eike: Yeah, absolutely, absolutely. Yeah, so because this award came with quite a lot of flexibility, as we’ve all mentioned, in how we use it. I found it very surprising how much impact in the academic world relatively small grants can have, how much that amount of money can set in motion. I mean we’ve had access to that money for now, around a year, I guess. And how many collaborations have come out of it that I’m still working on. All the follow up things that one trip to a lab somewhere started off. Where you think, okay, I’m going there to give a talk or something. I’m going there to meet person X, and now suddenly it’s 10 months later and you’re still in talks. You’re still collaborating on the next thing. So, I was really impressed by and surprised by how much you can set in motion with the freedom to spend a little money how you see fit, if you will. That was great.
Sean: Yeah, just a small amount is an accelerant isn’t it. And like you say you walk in to a lab, there are 10 people there and one of them happens to have a spark in the same area as you. Just talking wider on the idea of TAS. I mean TAS is coming to an end now, but why are programmes like TAS so important, you know, let’s just, again, we’ll carry on round- I’ll go to Andriana if that’s okay. Why do you think TAS is so important?
Andriana: I think TAS is important because I think it sends a message that this is an issue that matters and that we need to be looking into. Especially for issues like Trustworthy Autonomous Systems, it has a lot of ethical implications and a big impact on everyone’s lives and society. It kind of shows that, yeah, this is something we need to research and talk about, so I think that’s very good. And, I think, obviously, it brings a lot of people together, it’s a multidisciplinary space. And I think that’s very- I think especially for computer scientists and people from a more technical background, it may be more challenging for people from that background to engage with these ideas, so this is a good opportunity to create that multidisciplinarity.
Sean: Breaking down those silos, I think is essential, you know, to move forward with any kind of broader subject, it’s got to be hasn’t it, it’s got to be. Is there anything unique about TAS from your perspective, just while we’re on the subject of TAS? Anything specific that, you know, you’ve found that’s unique about it?
Andriana: Oh, I don’t know, that’s a good question. I mean for me I enjoyed TAS because it felt like a small event or like a community that is just beginning. And for me as an early career researcher, that has been very exciting to see because I read papers from conferences that have been going for many years now, for decades, and some of the other earlier papers that just initially made the field what it is. I always read them and think oh wow, you know, it must have been really cool to be one of those people that wrote the first papers for this conference and so on. And, so it’s quite exciting to be part of TAS as it’s becoming a thing.
Sean: Yeah, okay, yeah, cool, makes sense. And some of these questions I think run together, you know, I’m going to pass over to Balint now and ask him about why he thinks programmes like TAS are important to the sector but, again, the uniqueness part perhaps plays into that as well. But, what do you think Balint, why are programmes like the Trustworthy Autonomous Systems Hub so important?
Balint: So, listening to Andriana, we’ve been mentioning multidisciplinary interactions and talking to people, and I think I agree with all of that, I mean it’s really, really important. The other thing is that this gives a really solid and reasonable foundation for future work I think. There’s been a lot of talk about AI safety nowadays from various different groups and various different interests. And I think having something like TAS really makes sure that we bring in that sort of knowledge base and experts and interactions that are really needed to put this on a reasonable foundation rather than based on, for example, hype, and so I think that is really, really important.
And being a PhD student I’m sort of not familiar with the overall scale of TAS, I know it has various different funding programmes and branches. I know in Edinburgh there is a new PhD programme starting specifically for responsible natural language processing. And I think there’s going to be a lot of off shoots coming from this as well which I’m hoping will bring more really cool work that supports trust with the autonomous systems.
Sean: Good stuff, good stuff. And a uniqueness, what’s been unique from your perspective?
Balint: Yeah, that is a good question. I think it’s one of a kind. Like if I think about it I haven’t seen anything else like this in the UK. So, having that exposure to the trustworthy- Like having that sort of really wide exposure to Trustworthy Autonomous Systems is really unique. I don’t think another programme has attempted this before. And I think that’s really cool.
Sean: Great stuff, great stuff, and Eike same questions to you then, tell me why TAS is important?
Eike: Sure, absolutely, yeah, I mean as the last speaker on this question, I’m basically echoing what everyone-
Sean: Sorry, yeah, yeah, I try and change who’s the last person to answer every question for that reason.
[00:29:35]
Eike: Yeah, absolutely, absolutely, that is perfectly fair. No, I mean, absolutely, I think the interdisciplinarity that’s been mentioned by everyone is quite important. For me, in terms of your next question, which is the uniqueness, to jump to that, it is the focus on public engagement using the creative pillar of the TAS Hub. So, the TAS Hub specifically worked with creative ambassadors, such as artists, in order to create conversation in the general public around these issues, and around these, not just issues but opportunities as well. And thereby engages the public through, for instance, art, in a different way than the research papers that we all write. Because, as Balint said earlier, we don’t expect lay people to go on Scholar and download and read our papers. But we might be able to reach them through artistic installations and art galleries that they choose to visit of their own desire to learn something about this. And this has, for me- So the collaboration with the creative ambassadors as part of TAS Hub, specifically on Cat Royale, has absolutely accelerated my career, but also the platform, if you will, that I have in order to engage the general public on questions surrounding autonomous systems, which makes the TAS Hub quite unique from that point of view.
Sean: Absolutely, that bridge away from just pure tech over to art as well is really important. I’m glad you mentioned Cat Royale, so maybe we just need a couple of sentences on what Cat Royale is for people who are listening who’ve not heard of Cat Royale, because I think you can’t guess what Cat Royale is.
Eike: Absolutely, absolutely, happy to do that. So Cat Royale is a project between the TAS Hub as well as the artist collective Blast Theory down in Brighton. And it’s an artistic installation, if you will, that aims at getting the general public to reflect on caretaking situations of autonomous systems, or through autonomous systems. And the project does that by, through a lengthy process, recruiting three cats, Ghostbuster, Pumpkin and Clover, and putting them for 12 days into a bespoke environment that has been designed with a cat-centric lens, together with a robotic arm at the centre of this environment. And the robotic arm, using computer vision, detects where each cat is and what they are currently engaged in. And then in 10 minute intervals suggests play activities in order to increase their happiness. And that, of course- I’ve spoken with quite a lot of people who have seen the movie that was produced, an eight-hour movie at the Science Gallery at King’s College. And that, of course, sparks quite a lot of reflection. Well, first of all, can AI in the first place detect how happy we are? Can we even, if I asked you how happy are you on a scale from zero to 100, well, what does that even mean? Is it even reasonable that AI tries to do that for cats, and would we, in the long run, be willing to let a robot take care of us, and what does that mean? So, it provoked quite a lot of reflections on our relationship with AI and with embodied AI, in this particular instance a robotic arm. So, that’s Cat Royale in a nutshell.
Sean: Well that leads really nicely on to the sort of final question that I’m going to pass round to each of you, which is about what does the future hold. Now, I know, as I’ve mentioned, the TAS Hub is coming to a close, the sort of reins are being picked up by RAI UK, Responsible AI UK. But, you know, taking what you’ve, you know, got from this grant, where do we go from here? I’m going to stick with Eike just for the moment and then somebody else gets to be the last one to say what they said. Go for it.
Eike: So, as you mentioned, RAI UK has, well, not taken over, but it is kind of the succession, if you will, and I’m also part of RAI UK. And, for me as part of this programme, public engagement is absolutely one of the key pillars, public engagement through art as well. And currently, well, I’m awaiting the result of a funding bid that looks at public engagement through art, so fingers crossed on that. And, secondly, I will absolutely continue to focus on making sure that all the research that I do goes beyond computer science, includes computer science but is interdisciplinary and involves other stakeholders, preferably, of course, where appropriate, always members of the general public, but also other disciplines, some of which I’ve mentioned earlier.
And then, lastly, I guess from an academic point of view, I feel I’m at the stage of my career now where I’m shifting a little bit from purely publication mode to publication and getting external funding mode. So, that is something where I also think that programmes such as the TAS Hub as well as RAI UK have quite a nice infrastructure in place in order to support early career researchers in achieving this goal and getting experience in applying for grants. And, like, the ECR grant that we all received definitely helps with that.
Sean: Brilliant stuff, Andriana, tell me, you know, what does the future hold?
Andriana: Yeah, so for me, as I mentioned, with my research background in technologies for communication, and with autonomous systems becoming more and more prevalent in these sorts of technologies, I think it’s important to build a clear sort of foundation for what research at the intersection of these fields should look like in the future, considering not just how do we automate communication, but what does that mean for users, the ethical implications, accessibility implications, and so on.
So, for me, future goals are to build collaborations that look at this from multiple perspectives, and maybe research that looks at frameworks and kind of a foundation for having a clear image of what research needs to look like to properly tackle this area. And I think, yeah, RAI UK is a great space to continue doing that.
Sean: Great stuff, and Balint, sorry, last again.
Balint: Can I just say first that I hadn’t heard of Cat Royale and it sounds amazing, I really have to look up more about it, it is really a cool idea. Yeah, what does the future hold? Okay, being a PhD student I am not thinking of grant applications just yet. So, I think my answer might be at a slightly higher level than that. You know, I recently read an MIT Technology Review article about addictive intelligence. And I thought, you know, a lot of these systems that we are building might lead to over-reliance and, you know, we are taking away people’s autonomy little by little, if we don’t take care. And so I think to me it is really important to keep the public engagement going and make sure people are aware of these changes that are happening really subtly under the hood. And also to try to then build systems that, you know, actively try to educate people about the importance of Trustworthy Autonomous Systems and make sure that they are really aware of these things. And I’m sure that will come in many different ways, we will all have our different approaches. Some people will call for regulations, I’m sure, as well. I think, for me, I’ll keep going with explainability for a while, see where it leads.
I have to say that Andriana’s insight on accessibility is something that hadn’t crossed my mind, regrettably. I really think that the accessibility of explanations is something that we have to think about a lot more. And, you know, just generally making sure that everyone is aware that they are interacting with these systems I think is very important.
Sean: Brilliant stuff, well, thank you for that. I mean, I think this is a conversation that could go on for several hours, including things like, like you say, regulations and how things are, you know, put down in law. But I think today, you know, we’ve had a really good chat, and there’s one thing I will say to all of you, which is that you’ve missed something from your outcomes and outputs, and that is a wonderful episode of the Living with AI Podcast. So, thank you all for joining me on the podcast today, and it just remains for me to say thank you. So, thank you Eike.
Eike: Thank you Sean.
Sean: Thank you Balint.
Balint: Thank you Sean.
Sean: And thank you Andriana.
Andriana: Thank you, thank you it was a great discussion.
Sean: If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited and it was presented by me, Sean Riley.