Living With AI Podcast: Challenges of Living with Artificial Intelligence
InterNET ZERO: Towards Resource Responsible Trustworthy Autonomous Systems
As the Internet expands through paradigms like the Internet of Things (IoT), massively multiplayer online (MMO) games, videotelephony and the Metaverse, Autonomous Systems (AS) are increasingly used to mediate society’s dataflows. These systems are often promoted as resource-efficient and used to mitigate the impact of the Internet’s expanding data-driven ecosystem.
Because of their ubiquity and scale, the environmental impacts of hyper-scale autonomous systems are intensifying and their sustainable trustworthiness is frequently undermined.
Using methods from Design, Human-Computer Interaction and Science & Technology Studies research, this project collaborated with technologists, policy-makers and citizen end-users. The idea was to rethink current AS infrastructures and anticipate resilient and efficient digital energy transition pathways for Resource Responsible Trustworthy Autonomous Systems design.
Project website: https://imagination.lancaster.ac.uk/project/internet-zero/
Joining the podcast are:
· Dr Michael Stead – Lecturer in Sustainable Design Futures – Imagination Design Research Lab – Lancaster University
· Professor Paul Coulton – Chair in Speculative and Game Design – Imagination Design Research Lab – Lancaster University
· Dr Neelima Sailaja – Assistant Professor – Horizon Digital Economy Research Centre – University of Nottingham
· Dr Ola Michalec – Research Fellow – School of Computer Science – University of Bristol
Also involved in the project:
· Dr Nuri Kwon – Senior Research Associate – Imagination Design Research Lab – Lancaster University
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Stacha Hicks
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 4, Episode: 12
Episode Transcript:
Sean: Welcome to the Living with AI Podcast from the Trustworthy Autonomous Systems Hub. This episode we’re going to talk about one of our TAS projects, this is an episode about InterNET ZERO: Towards Resource Responsible Trustworthy Autonomous Systems. I’m Sean Riley, and we’re recording this on the 24th of April 2024.
So, our guests today are Michael, Neelima, Paul and Ola. Welcome to all of you. So, we’re just going to go round the virtual room, well it’s an actual room where you’re sitting but I’m virtually connected. So, I’ll go round the room and just introduce each of you and I’ll just start on the left with Michael.
Michael: Thanks, Sean. Yes, I’m Dr Michael Stead. I’m a lecturer in Sustainable Design Futures at the School of Design and Imagination Design Research Lab at Lancaster University.
Paul: Yeah, I’m Paul Coulton, I’m a Professor of Speculative Design within the School of Design at Lancaster.
Ola: Hi, Ola Michalec, I’m a Research Fellow at the University of Bristol.
Neelima: Hi, my name is Neelima and I’m a Transitional Assistant Professor based in the School of Computer Science and Horizon Digital Economy Research Hub at the University of Nottingham.
Sean: Excellent, well it’s brilliant to have all of you on and thank you for sparing the time to be on the podcast. Michael, could you just give us an overview of what the project was about, you know, what were you hoping for and, as I understand it’s still running isn’t it. So, tell us about the project?
Michael: Yeah, it is, Sean, yeah. So, it’s called InterNET ZERO: Towards Resource Responsible Trustworthy Autonomous Systems. So, it’s an interdisciplinary project between Lancaster, Bristol and Nottingham combining design, computing and science and technology studies methods. And we’re looking at the sustainability and trustworthiness of autonomous systems, particularly artificial intelligence.
So, the kind of focus has been this double-edged sword of how digital systems are, you know, they have a potential to help sustainability but they are also having unsustainable impacts. So, it’s kind of looking at this kind of moral agency of digital systems and how they can help us transition towards Net Zero effectively.
Sean: Fantastic, well that’s going to be fabulous to talk about on the podcast. I did a bit of research before starting this episode. I noticed a report from the World Economic Forum from 2021, saying what would it take to make AI greener. So, this isn’t a brand new topic, but even back then they were saying we need to change the focus from bigger is better, you know, to something else. So, is that what you’ve found in your research?
Michael: Yeah, so I think the scale, the hyper-scale of these systems is a real problem. Obviously, in the last maybe year and a half, two years generative AI has come in, in a big way, you know, things like ChatGPT, all the image processors, they’re having a huge impact. And Cloud computing especially, you know, all this data that’s being generated and the transmission of this data. So, I think the scale is the problem. So, with this project we’ve kind of been looking at the notion of scale and thinking about like, you know, could there be hyper-local, decentralised opportunities for AI. And how does that impact in terms of like energy consumption. Also things like water usage, you know, because- There’s all sorts of utilities and resources that are used to power these systems. So, it’s kind of cutting across a lot of those kind of issues.
Sean: And so there are obviously a lot of challenges here, I mean not least getting the data out of big tech for instance to find out some of this data and find out what these numbers are. How have you approached that then?
Michael: So, it’s been kind of exploratory and a design led kind of process. So, we’re actually looking at kind of the systemic nature of this space. So, we’ve been using some speculative design methods, what we call Provotyping, so, it’s a mixture of prototyping and provocation. And it’s actually using Gen AI, so it’s kind of like this, kind of ironic kind of thing to say, you know, we’re using Gen AI to sort of envision these futures for these technologies but also getting people to question it. So, it’s kind of a reflective process in that way.
Then we’ve been trying to map the system and see what participants feel where, you know, the sort of barriers are, the risks are, all those kinds of things to actually be more sustainable, more trustworthy. And then we’ve been looking at timelines as well. So, thinking about the future and where do people want these systems to go. So, it’s all been kind of workshop, kind of processes.
Paul: And in some ways it’s kind of provoking a conversation between the kind of mythology of AI that we’re told is going to solve all our problems. It’s going to keep us healthier, it’s going to make the world more sustainable. And the reality of what most of the generative AI is doing, which is actually generating a lot of stuff with very little purpose. So, there’s a kind of disconnect between the mythos that we’re being told and sold as a future and the practicalities of the [s/l emerging 00:05:41] reality. And I think what we want to have is a kind of open and honest conversation about what are the practical challenges [unclear 00:05:50], because what we find is that things like cyber security and trust of systems is one aspect that people- But they will often offload it, but actually, sustainability is something that people care about more substantially.
So, I think it’s interesting to have that conversation, is, yes, you can use all of these digital services in this mythical Cloud that sounds lovely and fluffy and nice, but that actually comes with a material cost, and are you prepared to accept it?
Ola: I can add to it as well. I think the key thing with this project is that the, kind of, the unit of analysis, let’s say, is the future, and humans, whether they’re experts or they’re lay-publics, are really notoriously bad at predicting futures. If you look at any past, you know, predictions of what the kind of superhuman intelligence will be like, we’re very, very bad at it and the reason for that is because we usually try to envision just the technology itself but forget about the rest of the world surrounding it.
So, what we tried to do in this project, we tried to kind of turn it upside down. So, ask about all the surrounding infrastructures, relationships, politics around AI technologies rather than just predict technologies. And we’re using design methods which are not so much relying on the actual numbers, you know, data from big tech as you suggested, but they’re relying on human imagination, human speculation. We can open people’s minds up a little bit more to think about the lateral connection between technology and the rest of the world, let’s say.
Sean: That sounds like a good approach, yeah. I mean I was thinking, because we’ve all kind of been sold this- I mean marketing is a lot to blame, isn’t it, we’ve been sold this Utopian future where the AI will just make everything better and more efficient. And, you know, everything is sold as being AI powered these days. I mean, is that something we need to get away from?
Paul: I mean for me it’s a kind of techno-solutionism, it’s solving problems that don’t really exist. And we’ve seen that probably as well in the Internet of Things, you know. All sorts of apps that will automate various things that probably don’t need to be automated in the first place. So, it’s asking that critical question, you know, are these things valid or is it part of this move towards a kind of networkification of our lives. And I think that’s something that comes out in some of the conversations we have. Particularly, as we’re looking at sustainability and long term repair and maintenance, the more technology you put in something, often the shorter the lifespan becomes and E-waste is a growing problem.
So, when we talk about responsibility, do we also not have a responsibility to the world and the biosphere along with humans within this process. So, I think it just needs more- I think expand it.
And one of the things we think about is business models, which often are never talked about in this conversation of, you know- The horrible phrase that was knocking about years ago of data being the new oil.
Ola: Oil-
Paul: Yeah, you know, maybe it is as equally bad for the environment as oil was, which- So, I think it is kind of letting people honestly explore those narratives that we’re often kind of told on what the practical reality on their lives might be in terms of energy usage, water usage and how much of it will provide practical help.
[00:09:39]
Neelima: And I think it’s important to kind of highlight that because that’s something that the project does. It has all of these big questions, it brings together stakeholders, you know, experts through the workshops. But then we’re also developing an experience which will go back to the public and they get to have their say. So, they get to, you know, kind of represent their voices. They get to say what they think about these big questions and what their take is on, you know, what the experts have to say. And kind of their vision of the future, because usually when these big questions about the future and things like that come in, especially when you bring in multidisciplinary concepts, sustainability, digital, energy all of that together, it’s not the people that get to talk. But through this project we can actually kind of bring that in as well. And see how these different stakeholders’ views kind of play about within this ecosystem.
Paul: Yeah, it gets to explore the messy reality of their lives as well, because often the future presented through corporations is always white-walled and chromium plated, where there’s no mess anywhere. But the reality of life is we’ve got all these legacy systems that we need to take in. All infrastructure to build, proper governance to put in place. But we only talk about this kind of shiny bit at the top.
Ola: Yeah.
Michael: Yeah.
Paul: And, actually all the stuff underneath that we actually would be better having an honest conversation about [s/l fore-grounding 00:11:11] where’s it all going to come from, where’s it all going to go. It’s something that we feel is often ignored.
Sean: And you touched upon something there that I was going to ask about, which is that obviously, part of this is- Any project the sort of thing that you’re approaching there is, it’s got to be multidisciplinary. I mean is that something that you’ve found, did you find it challenging to kind of- You mentioned governance, obviously, we’ve talked about computer science. There’s all sorts of things, you know, the social element of it, everything coming in to play in to one place. How did you find that working on this project?
Michael: You know, I think it’s really important and vital for research to be interdisciplinary these days, you know, you can’t work in silos. And, you know, to work across different approaches and methods is really important. It’s not always easy because everyone’s got their thing, but I think what we try to do is bring them all together. I mean bigging up design, design as a discipline is quite good at kind of bringing other things in to play because it’s quite a good convenor sort of discipline. It likes to pull in different things anyway, it always has. So, yeah, it’s been okay I think. But I think it’s really important to cut across the different boundaries that you see in research, in industry, in innovation, so.
Neelima: Yeah. And I think it comes to the question of, you know, how practically applicable, you know, the findings are, the outcomes are, and for that multidisciplinarity becomes very central. Again, you need to know your governance and your constraints there. You need to know how far you can extend it through the science. You need to know how you can apply through computing. So, you’ve got to bring all of these together to make sure that, you know, what we find is worthy enough to, you know, be reflected in the future, in our everyday life that Paul is talking about, the messy everyday life.
Michael: Definitely, and I think because, you know, we’re looking at these systems and systemic issues- We need the different viewpoints and we’re talking to different participants from different kinds of disciplines and industries. So, it’s having that in the team, these kind of, you know, different approaches is really, really important I think.
Paul: It’s important to realise that we’re often in interdependent relationships with business, with business models, with governments, with regulations but we all have actually independent perspectives on what that problem is. And actually understanding that there’s multiple perspectives going on. And how you balance and judge and who has the power and who doesn’t have the power in those kind of relationships is important to pick apart because, yes, we’re interdependent on each other, we’re all grouped together, but actually we don’t all have the same perception of what we’re doing in this world.
Ola: Yeah, that’s true.
Michael: Yeah, it’s like alternative realities effectively, isn’t it, of the system and that’s what we’re trying to do is like- Some of the questions we’ve been asking around like positionality, you know, where do you see yourself in the system. Your expertise and your knowledge, and you know what you do. And also how that then impacts on like power dynamics and both sustainability and trustworthiness across the system, so that’s a really important point, I think.
Ola: And I guess the reasons for the [s/l disciplinarity 00:14:53], that’s only normal, it’s not only that, oh, we shouldn’t be working this way, but also, I think interdisciplinarity in a way is really old. And there’s this certain kind of myth about AI, that it’s this product of lone geniuses, you know, usually mathematicians, men who are slightly maladjusted, and they just come up with these great models. And it’s never been like this so, sorry to spoil it for some audiences, listeners. But it’s always been a collaborative endeavour. And I guess it’s only becoming more collaborative, more, you know, disciplines that are at the edges of their kind of spectrum are coming together as new, unique partners. But in a way I don’t think interdisciplinarity is that new, it’s just the state of the art for a few techniques now at least.
Sean: That does concern me anyway. We have enough problems with kind of bias in systems and things. But one individual, you know, deciding how it all works is just a recipe for disaster. Just thinking about the technical side of it briefly, there’s a sort of centralised model of AI, and I know that you’ve been looking into things like Edge computing. But I mean presumably there’s not one single kind of panacea for this because a centralised model would work in one sphere, whereas Edge or, you know, kind of decentralised model might work in another sphere. Have you found that or am I kind of just over-simplifying it massively?
Michael: No, I think there’s tensions across, you know, what we’re seeing. And there’s barriers and things like that. And, yeah, I don’t think we’re trying to like come up with a one size fits all with this project, it’s not about one size fits all kind of system or anything like that, it’s much more- It’s kind of exploratory and, you know, we are seeing different perspectives of these systems and that’s what is kind of the outcome of the project to understand that there is this complexity and those sort of expert narratives or the rhetoric of kind of business around AI is often, you know, not the case. There’s lots of different things going on. So, it’s trying to expose the issues that are underlying really.
Sean: And have you found anything that’s surprised you with this? Because I mean I think when people realise how much energy or, as you said before, water is used in these systems, they’re often quite surprised. Have you been surprised by anything you’ve discovered or have you found other people or colleagues surprised?
Paul: I think people are naturally surprised, you know, when we talk about it, the Cloud seems to have no materiality. It doesn’t seem to have things- But also the way we use things is interesting as well. Like energy, we turn on lights, we don’t think about energy as a finite resource necessarily. Perhaps more recently with the energy crisis, we’re starting to do that. But, you know, it’s always something there that we turn on. So, we don’t necessarily see that this comes with this big cost. And I think that is starting to change. And it is interesting, things like electric cars are helping with that because when you drive an electric car you see the battery going down and you know it’s a finite resource. And you have to recharge it.
And so, I think, within these systems it is understanding the finality of the resources that we’re using and that the planet has finite resources which we’re stripping the limit out of. So, it becomes a question of, yes we can build it but do we really need to build it and what is the cost of doing it that way. And does the cost outweigh the benefit. And I think we need to have more honest conversations, particularly around whether our responsibility is just to the economic value, but also what the effect on the climate and future lives is going to be, which I think there’s often a kind of short-termism. As Ola said, we’re very bad at thinking about the future.
Sean: Absolutely, I’m still awaiting my hover car.
Paul: Yeah, McLuhan said we march backwards into the future, or we look at the future through a rearview mirror. It’s always based on what we currently understand, not what we can- We’re actually really poor at imagining something different. And often what we’re talking about is we’re going to need systemic change, not just product change. So, actually you need to take a systems view if you’re going to make fundamental changes to the way things operate and live. And I think- About energy, about water and about thinking those are- There’s more resources that you also have to consider.
[00:19:54]
Neelima: I think also from kind of like a digital perspective through the workshops that we’ve done. We’ve seen often in research people say that if you give more agency to users they don’t want to get engaged, you know, with digital systems and data management things like that. But within the workshop what we saw is people were tending more towards like the Utopian side when they were given an alternative where they could, you know, manage their energy and their data, which I found surprising, again, purely from a digital perspective, I’m saying. I don’t know how it works in energy research, Ola, what you’ve seen there. But from data and numbers and things like that, I was pleasantly surprised to see that maybe it’s the context of energy. It’s more vital, and maybe that’s why people want to have, you know, more awareness and agency. And be able to do things, but I think that was a pleasant surprise. It shows promise as well, yeah.
Michael: Yeah, positivity there. Yeah, it’s not all bad, despite what you might, you know, do scroll on the internet and the News and stuff. But, yeah there was a lot of positivity as well as, you know, the criticality across the data that we found.
Sean: My background is I used to be in BBC News and we were always told that people are interested when it hits their heart or their wallets, those are the things that people get interested in.
So, I mean when we look at some of these, you know, AI has a lot of promise for, and is already showing promise for reducing costs in certain areas, but it’s those hidden costs, isn’t it. Those environmental costs and that’s what you’ve been looking at with the project.
I mean the project is still ongoing, what are you hoping for as kind of conclusions or what’s the end goal for the project?
Michael: Well, we’ve got a couple more workshops to do to collate some more data. So, we’ve been sort of finessing our workshop activities, so we’re going to run those again. We’re actually- So Neelima has mentioned the experience, which is- From Paul’s previous research project, we’ve got a caravan which is kind of a smart home of the future inside, isn’t it?
Paul: Yes.
Michael: So, it’s kind of like a test bed, how would you describe it, Paul?
Paul: Yeah, yeah, so it provides a kind of experiential future in that you’re sat in something that’s recognisable. It looks like a living room as we imagine but it has an AI that controls and you negotiate your control over a new energy system through the AI and it displays the consequences of your decisions, which are never perfect. You know, there’s always a balance to be made.
And, you know, if you suddenly want more energy in your system it’s most likely going to come from coal power resources. But if you want to balance your load then you have to decide who gets the power and who doesn’t get the power. So, who do you prioritise and who is willing to turn things off or mitigate or say, you can’t have your streaming service today because that’s putting a load on our system.
So, it’s highlighting those kind of consequences that you have to make a balanced decision. So, whether you make them very selfishly or you make them as a kind of more pro-social thing, you know, thinking about the wider benefits, which I think we’ve become very bad at.
Sean: Yeah, absolutely, but I mean we’re also- The modern day kind of world is full of smart chargers, smart meters, all of these devices that are supposed to be- And we’ve talked about this on the podcast before but they’re supposed to be kind of, not necessarily negotiating but certainly say charging the car when it’s most efficient or the load is less on the grid. From what I’ve seen, and this is purely anecdotal, they’re not very smart at all, they’re just connected. It’s the system that has to be smart right?
Michael: Yeah, I think that smartification thing, you know, it’s a rhetoric, you know a narrative we’ve been sold. We’ve just seen in the News this week about smart motorways, you know the whole thing around smart meters. The government roll out wasn’t particularly successful. So, you know, with the project, you know, that’s what we’re trying to challenge and talk to people about. You know, is smart energy and smart digital systems- You know, they’re going to use AI as part of the management, you know, how smart actually are they if they are, as Paul said, having these implicit consequences for the environment. And we know that, you know, with all the- We’re hitting planetary boundaries now with the IPCC assessments over the last couple of years. You know, the earth’s heating up, so it’s kind of- We need to make some proper decisions. So, that’s what the aim of the research has been to unpick all that, I think.
Ola: And I guess the importance of communication cannot be overstated. So, you know, for a long time this narrative, you know, AI will optimise, I don’t know, energy production, for example, has been sold to us, but I think we have reached this point of the stage and, say, like public [s/l electricity 00:25:27] where people actually [unclear 00:25:30] optimise your energy production, how actually it’s happening and- For example, recently there has been a consultation in Ofgem about consumers essentially sharing personal data from meters, from their smart energy products and so on. And from this consultation I was really, really surprised how all the experts in the industry were responding to it, they just saw it as a given. They were all, of course it’s a good thing, of course people are going to love it. Like why would they not agree, because it’s good for the planet. But I haven’t ever seen this proposition being communicated to the public, and actually evidence that this is going to happen. So, I’m hoping that activities like the Carbon Experience actually will be able to ask these questions. You know, is it any good, you know, is it actually better for the environment. You know, can we see the evidence for that, because without that there is no public engagement. And there will never be a society-wide consent for sharing your data. So, it will be quite bad.
Paul: Yeah, I think, as well, whether the infrastructure is actually in place to support these visions. We’re often told, like, oh if we all get electric cars all of those batteries can then be used as a storage facility, but there is no infrastructure that would enable that. And the actual- How do you sell people the notion that investing in infrastructure is a vital thing? We tend to commodify the services on top, but actually we’ve become really bad at infrastructure. And we’re seeing it across energy, across water, across drains.
The infrastructure is the way that you deliver these really bold visions. But actually we tend to present the kind of shiny bit on the top. But I think we need to kind of move people towards the notion that infrastructure is a community value that we should all see the benefit of investing in rather than it’s going to make your energy bill slightly cheaper if you do this. Actually there’s a benefit- If we invest in the infrastructure that will apply to everybody but without the infrastructure it’s not going to happen.
Sean: Yeah.
Michael: It’s about [unclear 00:27:44] isn’t it really and, you know, the broader system or communities we all sit within effectively in the ecosystem, which gets lost I think when you drill it down, often it’s like focused on individual consumers and, you know, yeah, the economic costs for you as an individual or as a family or something, rather than the actual, you know, we need this system and it will make it fairer for everyone.
Sean: It’s hard to sell infrastructure. It’s not shiny, it’s in the background. I mean, how often, you know- I don’t know if you’ve had to do any renovations to a house, but when you do something like, I don’t know, the wiring, you don’t see any benefit to it. Yeah, you might be very lucky that your house now doesn’t burn down. But generally you don’t see it and it feels like a cost you don’t want to have to bear. So, yeah, interesting, interesting.
What are you doing towards the end of the project and- You mentioned the caravan, how will that work and what’s going on there?
Michael: So, the idea is to take it out to events. So, previous projects we’ve used it where we’ve taken it out to Science and Technology festivals, places like Bluedot Festival in Cheshire. It’s been to the V&A Digital Design weekend. Other places-
Ola: [S/l Wes 00:29:02] Fest.
Michael: Yeah, [Wes 00:29:03] Fest.
Paul: And we took it out on the streets in Manchester as well. And the idea as well with it being a kind of portable thing is that you can take it out to communities who might not necessarily get talked to. So, the caravan itself has been out on the west coast of Cumbria, which is the bit of the Lake District that nobody ever really goes to, you know, everybody goes to Windermere, Ambleside. But there’s a whole side of it, there’s a whole kind of port and mining communities on the west. So, there’s a kind of part of this larger discussion.
So, it does provide that kind of unique way of taking stuff out and letting people talk and experience it in places where they probably would not get access. It was lovely doing stuff at the V&A in London, but you know what the kind of audience you’re going to get in certain respects. Whereas actually it’s quite nice to take it out to smaller, slightly odder places that might not have the high profile but are actually really interested because you’re engaging people-
Sean: Well you’re getting a better spread of the community, aren’t you, that’s a fantastic outreach, yeah.
Well, look we’ll put in the show notes details to connect to you guys and find out about any of those events that you’ve got going. We’re kind of running out of time, so I’m just going to say thank you to all of you for joining us. So, thank you, Michael.
Michael: Thank you, Sean.
Sean: Thanks, Paul.
Paul: Thank you.
Sean: Thanks Neelima.
Neelima: Thank you, Sean.
Sean: And thank you Ola.
Ola: Thank you.
Sean: If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited and it was presented by me, Sean Riley.