Living With AI Podcast: Challenges of Living with Artificial Intelligence

AI and the Arts (Featuring Blast Theory)

Sean Riley Season 3 Episode 8


We're discussing AI and the arts, featuring Blast Theory with a discussion of their latest work 'Cat Royale' where an autonomous system attempts to create a cat utopia.
 
 Feature interview with Matt & Ju from Blast Theory recorded on May 11th 2023
 
 Panel Discussion recorded on May 26th 2023
 
On the panel:

Tim Smith-Laing, Programme Director of the Executive Masters in Cultural Leadership at the Royal Academy of Arts
Ali Hossaini, a TAS artist in residence
Steve Benford, TAS Creative Lead, Founder of the Mixed Reality Lab at the University of Nottingham.

ChatGPT summary when prompted to talk about the lionisation of art:
"In summary, AI's role in making art more accessible is indeed a positive development. By removing barriers and providing opportunities for engagement, AI can contribute to a more inclusive and diverse appreciation of art, empowering individuals to explore and connect with the artistic world in ways they may have thought were unattainable."

A few notes:

55mins Ali: Bach music mentions:

Pablo Casals - Wikipedia 

Yo-Yo Ma - Wikipedia 

 1hr 8mins Tim: Lotus Eaters:

Lotus-eaters - Wikipedia 


 
Podcast production by boardie.com

Podcast Host: Sean Riley

Producers: Louise Male and Stacha Hicks

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at
www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.



 


The UKRI Trustworthy Autonomous Systems (TAS) Hub Website



Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 


Episode Transcript:

 

Sean:                  Welcome to Living With AI, the podcast where we look at what impact artificial intelligence is having on all of us. Today we’re looking at AI and the arts. We’re recording this on 26 May 2023- I had to quickly check the clock on the computer there just to make sure I got the year right. Days I’m okay with, years are getting trickier. Today, we have a wonderful pair of guests joining us from Blast Theory, who are a group of artists who’ve been making interactive works to explore social and political questions for over 30 years. I had the privilege of chatting to Matt and Ju, and you’ll hear that conversation shortly. But before that, I’ll just introduce our panel. So joining me today are Tim Smith-Laing, Ali Hossaini and Steve Benford. It’s better if I ask them to introduce themselves, so I’ll start with Ali. Ali, what’s your name and where do you come from?

 

Ali:                      I’m Ali Hossaini, I’m an artist. I’m also a senior research fellow in engineering at King’s College London and the co-founder of National Gallery X.

 

Tim:                    Hi, I’m Tim Smith-Laing, I’m programme director of the Executive Masters in Cultural Leadership at the Royal Academy of Arts and I’m also a generalist art and cultural critic.

 

Steve:                Hi, yeah, I’m Steve, a professor in computer science in the Mixed Reality Lab at Nottingham, where I’ve been working with artists and interactive technologies for 20 or 30 years, including with Blast Theory dating back to round about 1996.

 

Sean:                  I’d like to introduce now Matt and Ju from Blast Theory. Welcome to Living With AI both of you.

 

Ju:                       Lovely to be here, thank you. 

 

Matt:                 Yeah, thanks Sean.

 

Sean:                  Thanks. Just a quick note for our listeners, particularly in the world of AI, this is obviously a field which moves very quickly. This chat we’re having now is being recorded- The date today is 11 May 2023. So where do we start with this one? So Blast Theory, perhaps you can just give us a kind of, between you, perhaps a few lines on what it is and what you do and, you know, we can go from there.

 

Ju:                       Well, yeah, very hard to describe actually what we do, but we’ve been making work together for over 30 years. There has always been a live participatory element in our work. There has always been technology, from low to high to experimental to everyday. We’ve always made work about different issues going on in the world currently, and the work has been games, it’s appeared in galleries and festivals and theatre festivals. It cuts across lots of different artistic and sort of science and research sectors really. So it’s quite hard to describe. We’ve got an app in the App Store, we’ve got a piece in the Red Cross Museum in Geneva. We worked at the World Health Organisation, where we were the first artists in residence. We’ve done a parade about pandemics with sound. Lots and lots of things, online, on the street, interactive feature films. Hard to describe, but somewhere we are in there.

 

Sean:                  It’s clear that technology has featured quite heavily in the work you’ve just mentioned but I’ve had previous experience of seeing some of your work because I’ve worked with Nottingham Computer Science for a while and what’s the big idea of kind of being so heavily involved in technology? Is that something where you lead and technology answers a question? Or is that you use the tech and that is a springboard? Or something in between?

 

Matt:                 I think we’ve always been very interested in whether art can create change and how you can think about art as something that engages with the wider world with social and political questions and so we came to technology right from the beginning because that is the language we are all speaking today. And so we started making work in the 90s and became interested in how the internet might change how we all talk to one another and it turns out that was quite a big deal. So we’re always thinking about what is going on in terms of how culture is made and how we talk to one another, and we believe that technology just has to be central to that. So there are really rich and interesting questions. It’s one of the ways that we came to start working with Nottingham with the Mixed Reality lab in the late 90s is because we felt like there was a series of really interesting questions there and then we realised that those are also research questions and they are scientific questions. So there are ways in which we as artists can contribute to a research agenda and be part of that. So there’s been a really rich sort of dialogue backwards and forwards between us and researchers at Nottingham and elsewhere. 

 

Sean:                  And you’ve been, if my facts are correct, you did some video streaming work in the late 90s which predates YouTube by five, six years right? I mean, how does that feel now to look at how- I mean, we’re having a chat now on a video conferencing system and seeing each other across different ends of the UK.

 

Ju:                       I don’t feel like we’ve had any predictive abilities about these things. But I think we have, like Matt said, always been interested in what is going on in technology and in the wider world and how people every day are using these things. So I think it just feels like- It almost feels like a trend. There’s evolutions. It’s like- I don’t know as much about video streaming, but with VR for example, we were doing VR in the late 90s and there was a sort of big VR excitement about 10 years ago again, or less than 10 years, as if it had just happened. So the excitement about it and where it’s actually at is always shifting I think, and video streaming is a big spectrum of what does that actually mean. Right now, we’re on Teams or Zoom- Teams, but we could be streaming on all sorts of platforms. We could be streaming to a cinema. We could be streaming from an NHS bed. Where it is and what shape it is and what the context is are very vast and various, so I don’t feel like-

 

Matt:                 I think it’s a combination of two things. I think one, it’s sort of paying attention to where things are developing. Obviously we were nowhere near as pioneering in terms of looking at video streaming in the mid-90s as some people, but then you know, the second thing is, as artists, we’re able to work slightly intuitively. Sometimes you just get a hunch that something’s interesting or are curious about something and we can just go after something and try and pull on that thread and see where it leads, and sometimes that leads us to do things that then become more widespread or- You know, you can see there’s something in the ether that actually lots of people are starting to think about things in the same way. The app that we have in the App Store is an interactive game called Karen and it’s about data scraping and data collection and manipulation. We started working on that in 2012 and by the time it came out in 2015 it was already a thing that was quite widespread and then, of course, since then, that issue has blown up massively. But it was- It’s obvious from the trends that this issue about how big corporations collect and use data is a serious question, it’s just, you know, it’s- I suppose our challenge is, can we take those things from marginal places into mainstream conversations or can we shed light on those things in some way?

 

Sean:                  And that brings us nicely actually to the overarching TAS Hub goal which is to do with trust and hopefully we’ll chat in a moment about autonomous systems. But things like data scraping and use of that data, obviously there’s a huge potential trust issue. Is that something you seek to highlight in the work or is that just a byproduct of what you’re doing that that is a kind of question that comes up?

 

Matt:                 For me, it’s always been central to all of the technology work that we’ve done. We’re always interested in what are the kind of hidden costs that come with these new shiny toys that promise so much? It’s not that long ago that Mark Zuckerberg or others could stand up with a straight face and suggest that their products are improving the human race, you know? That they are increasing the happiness of the human race. And I think even now- Now they would struggle to do that in quite the same way without some sort of caveats, and they understand that looks slightly ridiculous. So the question is, how do we, in the broadest sense, we as a culture, we as a civic society, look at these things and wrestle with them and roll them around and look at them from different perspectives and ask ourselves is this the future we want? And can we contribute to building a slightly better version of that future, because there is a war going on all the time around these things. I mean I know that sounds kind of hyperbolic but I really think it is contested as to what is data? Is it something that belongs to me? Is it my data? Or is it data that is an exhaust? Or is it the new oil or, you know, these metaphors that we use when we think about new technology. Is it really a cloud? Why do we say it’s a cloud? Those things, they’re very powerful metaphors and those are there for a reason, so it’s important that we have tools to try and explore those, unpick those a little bit and just have a conversation together about what actually is going on inside.

 

[00:10:12]

 

Sean:                  I think it’s really important, as you say, you know, that these, what are sometimes pithy marketing buzz phrases like the cloud do offer these kind of threads for you to pull at, but I suspect there’ll be people out there who, perhaps they’re more on the techy side of things, and they’re listening to this podcast and they’re thinking what’s art got to do with trust and autonomous systems? So maybe this is a chance to talk about Cat Royale, one of your more recent projects. How does that integrate? Perhaps you could give us a kind of- Maybe Ju could give us an idea of what Cat Royale is for those who’ve never heard of it and, yeah, maybe take us to somewhere new.

 

Ju:                       Hopefully I can do that, I don’t know, that’s quite tricky. So Cat Royale is a piece of work that we’ve made recently but been developing over the past several years as part of TAS, and we are cultural ambassadors of TAS which is very lovely, where we have created- Attempted to create a utopia for real cats, where their play and prey task needs are served up to them by a robot arm with an AI system which, through learning, through computer vision, can see what the cats like more or less. So we wanted to create something to try to look at maximising happiness, to expose the black box hidden nature of AI and what does that actually mean. What is going on there? Can we see it? Do we trust it? Do we trust what we’re even seeing ourselves with our own eyes? And with cats, you know, obviously they’re quite emotive domestic animals that we have a very strong immediate reaction to, and we think perhaps we understand them and we think perhaps we know what’s going on inside them, but both AI and cats have a similar mystery and lack of transparency, and we wanted to try and propose some of these questions around what you see and what you think you see, what you know to be true or not true. We wanted to raise those questions out into the public domain through this environment, and so we created Cat Royale as an installation where there were eight cameras in the space which filmed the cats for 72 hours, in two lots of three hours a day, so they weren’t in there continuously, and the footage was made into a composite film which was sent over to the World Science Festival in Brisbane, where people could see what was going on.
But at the same time, we were making assessments as to the cats’ happiness, and so that data was laid over the top of the film that was seen in Brisbane, and that was done in a few ways and we were looking generally at their welfare, that’s a whole other discussion perhaps, and every day we created highlight videos for people who weren’t in Brisbane, so people around the world would get a snapshot of what was going on. But it very much includes the film with the data overlaid, a sense of what that narrative is, for people to make up their own minds but with us giving, I suppose, an opinion or a position on that.

 

Sean:                  It’s an interesting thing, because of course- This is going to sound slightly throwaway but of course cat videos, cat pictures is the kind of running joke of what people use this powerful internet for. I mean, was that part of what you were doing with this? Was that kind of a bit of an in joke?

 

Matt:                 Not necessarily an in joke but knowing that people care about cats a lot and that cats are famously opinionated and have their own minds, their own autonomy, and that cats, also- It’s a meme online that cats interact with robots, they’re sitting on robot hoovers all the time, robot vacuums, so you know, we already know that cats can find playful aspects of robots. But then when you say to someone we’re going to put a cat in a room with a robot, some people find that disquieting and a bit unnerving. So that question of what does it actually feel like to watch a cat and a robot interacting, we were really interested to try and lean into that and to explore what that might actually feel like.

 

Sean:                  And there’s also kind of another kind of- I don’t know quite the way to describe it, but just thinking of this as a- You set out to make a cat utopia, and if you then put humans rather than cats into that system, with multiple cameras and watched all the time, and attempt to- You’re into the sort of George Orwell realms aren’t you? It’s kind of, the opposite might be said to be true. So one person’s utopia, one creature’s utopia might be another person or creature’s- What’s the word, the opposite.

 

Matt:                 I mean I think technology changes these norms all the time, doesn’t it? Because if I’d said to you 20 years ago, I do want you to carry a device that’ll track your location every single place that you go, including to the toilet, and it will be reporting that in real time to a corporation in California, you perhaps would have said no thanks. We’re all now doing that. So the trade-off that we’ve made is there, you know? It’s the same with cameras. If we all had CCTV cameras, or big box cameras on a plinth outside our house, I think we would all have found that really unnerving and unacceptable, but Ring cameras that are doorbells that happen to have a camera built in, suddenly that is the new norm. So this question of where- What trade-offs we’re willing to make for certain technological advantages, it is something that’s shifting isn’t it? You know? It is fluid.

 

Sean:                  And I don’t know if this is the time to think about this as well, but obviously you’re using AI and autonomous systems in that work- Isn’t there a threat from AI and autonomous systems outside of pieces like this? You know, where people can type some words into a box and then suddenly ‘art appears’? Is that our- That perhaps is not the question, but you know, how does that feel as artists that people are able to do things like that?

 

Matt:                 To me, this is very similar to the conversation that was had when Photoshop first arrived in the late 80s and early 90s, which was, oh, photography is dead, you’ll never know what’s a real photograph ever again because now you can put someone’s head on a different body, and photographers will all be out of work because now, you know, they’re going to make it in Photoshop. Well, who uses Photoshop most? It’s photographers, you know? It’s actually a tool that photographers are well-placed to use and I think it’s the same with AI. Artists will find amazing uses for this, new kinds of art forms will be derived from it. There are, of course, issues around large companies hoovering up people’s work and then enabling you to create a painting in the style of David Hockney or something, you know? I think there are issues. It’s not to suggest that it’s all fine, but I think it’s a little bit overblown to suggest that because you can type a set of prompts into an engine, that’s the same as making work. I think it’s more complex than that.

 

Ju:                       Also you know, we do things, we do things with art all the time which maybe are more acceptable. Like we put a Monet on a mug in a museum and it’s like, is that okay? Or a mouse mat or a- You know, what’s that? Are we worried about that? I don’t- You know, and it’s like is that some sort of bastardisation of art? You know? Why are we not worried about that? Is that because there’s a buck involved for somebody?

 

Sean:                  Well often these questions will come down to the mighty dollar won’t they? When actually that’s perhaps not the point is it? Yeah? That’s probably not the point at all. 

 

Ju:                       An artist’s work has always been used and transformed by other people, even- Yeah, maybe it’s down to money again, but like in advertising, and it just moves fast, ideas move fast, and how do you copyright ideas?

 

Matt:                 As artists, we’ve had our work stolen countless times by people who are bigger or more powerful than us and essentially, we can’t afford to do anything about it. We have occasionally taken legal action against people who have really flagrantly stolen our work, but most of the time you just have to accept it and let it go because we don’t have the resources to defend it. So I think that process is ongoing anyway. 

 

Sean:                  What are the kind of broader questions about ethics with AI and art? What do we need to think about with that?

[00:19:44]

 

Matt:                 We spoke earlier about the kind of metaphors that we use and the language that we use around things, and when we were working on Cat Royale and doing the research for it, I was really struck by a comment of Kate Crawford, the researcher into AI and the author of Atlas of AI, about the fact that there is a kind of motivational reason for the use of the term ethics around AI, because ethics suggests that these are thorny, slippery problems that are to do with judgement and philosophy, and it’s a very anti-regulation way of framing the problem, and her contention is, any time someone’s talking about ethics, they’re not talking about regulation, and regulation is what we need. I think it’s an interesting and important point to recognise that there are some aspects of this that are about laws and regulations, and then there is also a field that might be about ethical problems that sits within that. So you know, there is clearly a lot of discussion and debate about the ethics of AI and what that might mean. I personally believe it is possible that AIs will enable us to go beyond human judgement and actually to go beyond some of the inbuilt biases that we’re all so concerned about in terms of AI systems, but of course, all of that is debatable and contested anyway, as to exactly what is a non-racially biased AI? What does that look like and who gets to decide? And at the moment, we know that those things are decided by a very small cohort of people who have a lot of power and a very particular worldview. So at some point, we need a broader set of voices who are helping to frame what AI is when it deals with your mortgage approval, or your parole hearing, or your sentencing. You know, it’s already being used in these critical domains. So part of that is about who gets to actually decide what is an ethical approach.

 

Sean:                  And obviously ‘ethics’ varies from territory to territory. People believe different things. As you say, a small group of powerful people, probably most on the west coast of America are making a lot of decisions about how the tech is implemented at least and lots of these things come down to the data that it’s trained on which can also be flawed as we know. How can art kind of shine a light on some of these things to kind of open this up for people?

 

Matt:                 I think it really can, because one of the reasons that AI is an unusual piece of technology is that it is a black box, even to the people who have created it, and this is- The system of sort of backpropagation and these kinds of techniques where AI has become super powerful while at the same time being opaque as to how exactly the flow of information is working inside it, that puts it in a brand new category, and it can behave in ways that are unpredictable even to its designers. That’s a systemic change, and I think that almost everyone is in the same boat as me, which is not really being able to define what AI actually is, not really knowing what the difference is between, say, AI and machine learning, for example, and not in any way able to grasp the nature of this technology and what exactly it’s doing. When we talk about it being intelligent, where and how? And that- Even within the field, the idea of intelligence is shifting all the time. Once upon a time, AI researchers said if a computer can play chess, that’s intelligence. Well, the moment computers could play chess people went, yeah well, that’s not really intelligence though is it? That’s just moving pieces on a board, which is true. But the question that remains is, what do we mean by intelligence? So as artists, I think what we can try to do is make these things visible in some way. Is make an invitation for people to come towards some of these questions and start to be able to get a handle on them. We try not to sort of oversimplify it or to serve it up in bite-sized chunks. We try to do things where it’s inviting and open but also reflects the complexity of the question. That’s always a real challenge to achieve. But I think the best artworks that deal with technology really do do this, and the best artists working in digital media can really create a platform for discussion and also, sometimes, enable us to see these problems in fresh ways.
I hope in Cat Royale, people who watch Cat Royale, they watch a robot playing with a cat and the feeling that you have in your stomach as you watch that tells you something about what your response is to what’s going on there. Is it okay that the robot and the cat are playing happily together? Or is that a dystopian nightmare that needs stopping? That- Something in there enables you to kind of think about AI in a new way.

 

Ju:                       I think also for us, what’s maybe unique about our work in some ways is that that invitation is directly to the audiences to engage with the work. It is a work about conversation, and even though we set up a proposition or a frame in a way, throughout the whole of Cat Royale we had an audience advisory panel working with Nottingham as well, to step by step have a diverse, almost a slice of audience, opinion and view about what the questions are, how we’re describing it, what they want to see more of or know more about. Do we need to put that in? Do we need to hold that back? For us- So yeah, we did a call out for a diverse group of people. There were 16 people. Different backgrounds, ethnicities, ages, economic backgrounds, geography etc, and they- About every six weeks we had a specific theme that we wanted to talk to other people about, that wasn’t just the artists. It wasn’t just the scientists and the researchers, it’s where it lands with the public. So we had a very systematic process of going through the environment. What does utopia mean? What is cat happiness? What is the language we’re using? What are the questions or the kind of data you want to see onscreen? Does that make sense? Is the language that you’re using for that data obscure, interesting? And so for us, a part of our work is those conversations, as well as what the work itself automatically hopefully suggests, and also as a part of the work there’s a massive aspect which is how we disseminate the work through comms and social media, and again, thousands of people discussing it was a part of the work for us. How do they engage? And with a lot of the developments with AI systems, there is no public anywhere near them. So as artists, we’re able to get the public closer to the ideas, actually closer, you know? Who is in the seat of driverless cars doing that research? Well obviously there’s safety, but when does that actually land with the public? Is it when it’s in the showroom?
Or is it a questionnaire in a studio? What is that? So for us, it’s really important because we don’t know the answers, we are trying to learn more, and that’s a part of our process as artists to learn more, and that’s why we work with audiences. That’s why we work with researchers. It’s like we don’t know, we’re not experts, we all maybe have our skillsets and experiences and our passions but none of us are experts and it’s an acceptance of that as our position as people in the world and how do we all meet on that?

 

Matt:                 It’s quite unusual in the field of technology for anyone to express any degree of ignorance.

 

Sean:                  I think what you’re saying there is really interesting that you’re not just, forgive the kind of pithy- You’ve not just made a cat video factory. You’ve used that as a springboard to open a wider conversation and that halo, if I’ve understood you right, is still part of your work. That- The conversations that are happening beyond the first interaction or video view or whatever are actually part of the work and that continues and continues. Will that feed into something new for you or is that project self-contained and you’ll do something new now? How does that work?

 

Matt:                 The next thing we’re doing is making an eight-hour long film from the footage that we shot that will then be presented in galleries. It will be presented at Science Gallery London as part of their show about AI and care that opens in June 2023 and will be there for six months. It will go to the Wales Millennium Centre in Cardiff later in the year and then be touring through 2024 as well. So yeah, the idea is that that film will then become a touring work that we can present in different galleries and museums. And there’s research around that as well. We’re working with- Kate Devlin from King’s College London is going to do research around the film about public attitudes to AI. So the team at Nottingham have done an enormous amount of research and study. We’ll start going through the data that was captured. We’ve worked with Professor Daniel Mills from the University of Lincoln, who’s an expert in animal behaviour, and with Professor Clara Mancini at the Open University, who specialises in animal-computer interaction, so these dialogues with researchers are really rich for us. They enable us to think about our work and take our work further. But we also like the way that we’re able to feed the research agenda and give the researchers hooks to think about- To study the work that we make and produce knowledge from it that is usable more widely by the sector.

 

[00:30:33]

 

Sean:                  I think it’s great that you’re integrated in the tech because it’s too easy to say okay, very simplistically, I’m an artist, I’m going to take this off the shelf piece of kit and I’m going to do x, y and z with it. Well this is more back and forth isn’t it?

 

Ju:                       Yeah. We’ve never done that. We’ve never just taken from the shelf and-

 

Sean:                  You’re putting it on the shelf as much as anything.

 

Ju:                       Well, the very first piece of work we made in ’91, somebody came to see us and they said I’ve made this thing, and they’d made a giant video projector that was the size of a Mini car, and they were like, what do you think? Can we do something with this? We’ve always had a sort of- We’ve always gone oh my God, that’s amazing, and we’ve always- We’ve just always been engaged with it. Of course we all use tools and we all take from tools, that’s kind of honest and authentic to say that. But it- But we have to find a way that we connect with anything that we use, whether that’s subject matter, tools, technology. There has to be a connection and there have to be bridges and jumps and reasons and handles and, you know, I like to think that we’re not people that just take things willy-nilly to use. For us, the history, the personality, the programming of the agenda of anything we look at is really important.

 

Matt:                 I think we’re unusual in working very closely with software but without being coders. So there are artists who are coders themselves and are deeply immersed in the making and the writing of the code. What’s distinctive about us is that we are- We work very closely with software and developers but cannot code ourselves, but we try and have a very tight integration with the design and development of the software. So we believe that the software is the material with which we’re working, but we have a perspective, as outsiders, and that sense of being both outside and very close I think is a really key part of our practice. It enables us to kind of ask those critical questions but also take responsibility for the implications of those critical questions, you know, which is actually these things are really hard and to code effectively with these technologies is a real challenge. It’s a tremendous skill and there are always trade-offs, you know? There is no neat way to resolve lots of these questions.

 

Ju:                       I think also as well, I’m not quite sure what I’m saying here but like, I think there’s always- There’s also a thing of because we might work with ideas all the time, that doesn’t mean that we necessarily do it the best, and if we work with technology all the time, it doesn’t mean that we necessarily use it well. So I think there’s an acceptance from us that- And with working with researchers in the MRL, there’s a sort of mutual respect of we have our areas of expertise but it’s not total and it’s not impermeable. That it is not inviolable in many ways. And I think maybe as artists, we’re used to flowing across boundaries and taking risks and maybe that’s an important thing when looking at new ideas and new ways that technology can or is changing society. We’re less fearful of that. But I don’t feel like anyone should be in worship of artists for sitting around and coming up with nice ideas, because that’s not exactly what we do, or of technologists because, being deeply embedded in code, they are the complete experts of code and all its possible outcomes. That’s also, you know, those things are true as far as I’m concerned and that’s no disrespect to artists or technologists or anyone in between. It’s actually acknowledging that we learn together, things blend together, they’re cultural, they’re technological, they’re societal, they’re communal, they’re individual, you know? It’s acknowledging the kind of flow between things. Otherwise we get stuck. And that’s where we get stuck with divisions between, we can’t trust this and animals are this, and you know, things become very boxed and it’s like well, we can’t do anything with that then. If we can’t flow and move between, and that’s not- You know, yeah, it’s scary, yeah it’s unknown, yeah we none of us know, you know?
But we can turn and look around at what everyone- Each other are doing, what people are doing in the world with their tools, what they’re doing with their cultural entertainment endeavours and kind of go okay, what have we got here? What can we do with this? And it feels important to us.

 

Sean:                  That kind of feeds into something I was thinking which is this idea of silos of technology lives here, art lives here, you know, mathematics lives here, whatever it might be is really naïve and completely out of date these days. And not only are you getting something from this technology, I can almost guarantee, particularly having known a few of the people you’ve worked with from the tech side of things, that they will have got a lot out of the projects they’ve worked on with you as well.

 

Matt:                 I think it’s important to say that there’s an innovation agenda here which is that interdisciplinary teams make better projects and can think in different ways and precisely in the gaps between our knowledge, new knowledge can be created and that, you know, the UK has a really strong tradition of interdisciplinarity and we feel very fortunate to be able to sit in a room with a bunch of computer scientists who will listen with interest to what we have to say and also help educate us about things. So yeah, I think, you know, it’s important to say that we’re fully signed up to the idea that we are helping to make better technology and better research through interdisciplinary practice.

 

Sean:                  This has been a really fantastic and interesting discussion and I’m really glad to have had you both on the podcast. In my videography world there’s this saying never work with children and animals. I’m just wondering if it should be updated for never work with AI, children or animals. Anyway, I’ll leave you with that thought. Thank you so much for joining us today. Thank you Matt.

 

Matt:                 Thanks a lot.

 

Sean:                  And thanks Ju.

 

Ju:                       Thanks Sean, thanks so much. 

 

Sean:                  Amazing to hear from Blast Theory there, but I just have to quickly get in that I’m so annoyed with myself I couldn’t pull the word dystopia out of the back of my brain and kept saying opposite of utopia, opposite of utopia. Art can be a really contentious subject. The old adage, I don’t know art but I know what I like, and all that sort of stuff. Perhaps we can kick this conversation off with Steve. Steve, you’ve worked with Blast Theory many times over quite a long period of time, off and on. Where do we go with this? Art and technology and AI and trustworthiness?

 

Steve:                Speaking from a computer scientist point of view, why do we work with Blast Theory and other artists? There’s a bunch of reasons, I think. One, they are super creative in where they take technologies. They have ideas, surprising ideas of what you might do with technology that we just would not have, and so it pushes the boundaries technically and it’s great to work with them. But at the same time, you know, many artists do pose societal questions and it’s a great way of confronting the broader issues. It’s not just about tech development. It’s about the experience and meaning of tech. Touring and public deployment is a fantastic lab. I mean if you want to try out a new piece of tech with real people, getting something medical deployed takes, I don’t know, a decade or so. Getting something deployed in an arts context, you can develop it in three months and involve people in the loop. And you know, the final reason is art and culture are really important, and computer science should devote a good chunk of its effort to understanding the requirements and needs of artists to make exciting work, and I try and champion that view in our discipline.

 

[00:39:28]

 

Sean:                  Just flipping over to Tim there briefly, I was going to use- There’s a contentious phrase which is, you know, there’s this kind of cultural kind of highbrow view that people sometimes have of the word art, right? Can we break down those barriers with technology? Is that one of the ways we can kind of get into art a bit more?

 

Tim:                    Art definitely is simply an off-putting term to people. We have- It’s a kind of a holy grail now where it’s quite separate from everyday life and of course, historically, it wasn’t separate from everyday life in the way it is now, put in museums and held up as something that only certain people can understand. So there is that issue. As soon as it crosses over into art it leaves the realm of the ordinary, which is difficult for people. On the other hand, any way that you can push stuff into the public arena in ways that people are ready to engage with it, you know, if there’s one thing that’s more off-putting to most people, it’s technology. We use it every day, but the thought of actually understanding it, whether it’s grappling with what even cryptocurrency is and how it might work, how your computer’s doing what it does, all of this stuff is- Everyone is very quick to say I don’t really understand it. Even quicker than they are to say they don’t understand art. So funnily enough, I think there’s a little nexus between the two where they can act as introductions to each other.

 

Steve:                I agree and I think, you know, some of the works that I like best are the ones that really sit at that boundary and at first sight are a simple and really understandable proposition. Okay, I get what’s going on there. But as soon as you begin to engage with it, you’re off thinking about no, there are lots and lots of questions raised by that. 

 

Sean:                  There’s an interesting point you make about the black box nature of some of these things, you know, the fact that you might not know what’s going on in your computer and do you know what’s going on in art and it’s, yeah- Just thinking of Blast Theory’s Cat Royale project, for instance, I notice Steve has got a Schrödinger’s cat t-shirt on, which obviously on the podcast doesn’t translate unless I explain it. But that whole concept of having a utopia for cats that is served by an AI overlord, it’s a really interesting concept and is it kind of smacking a bit of what our own futures might look like, or somebody’s idea of that? I don’t know. 

 

Steve:                I mean the benign overlord is kind of something that people have always been longing for, the idea that you can have, you know, the philosopher king and the well distributed government of the people by the people for the people. We have difficulty enough and have had difficulty enough putting that in the hands of humans over the years, let alone humans working within systems, you know? And that’s the kind of fear that Kafka brings up, systems that you can’t understand any more. It’s the fear that Orwell brings out, systems that deliberately defy understanding. When you get to systems that are explicitly designed with, as it were, benevolence in mind, but which have, as yet, no actual emotion or desire to be benevolent behind them or within them, you have, I think, that combined thing of well, it can be wonderful because it’s neutral, it’s not going to be corrupted. It’s terrible because it knows no notion of what kindness is. And also, my God, it’s a system and I don’t understand it. So yeah, I think the cats are quite a good microscope to put it under because you do think well, what if the robot arm goes crazy and crushes the cat? What if it strokes it too hard? Because cats love to be stroked, you know? All these things you do begin to wonder. And yeah, I can see those fears breeding quite neatly in the context of that piece.

 

Sean:                  It’s also a kind of great thing- We mentioned in the conversation about Orwellian kind of futures and the idea of cameras everywhere and all this sort of stuff and then Matt quite- Pointed out the obvious, if you pardon me saying it that way, that we all carry around smartphones, we’re being tracked, we’re sending all our data off, and the fact that you can have these art pieces which basically expose that to people, it is really good isn’t it Steve?

 

Steve:                Yeah, yes, well I- When the word utopia first came up I remember thinking it’s a really ambiguous and interesting framing for the project that does sit in the middle of the debate. So on the one hand, you kind of- You can imagine, quite rightly, how robots could look after people and help people in their homes and in all sorts of places and you could think, well, why shouldn’t that extend to companion animals and, indeed, companion animals are going to encounter robots that we put in our homes anyway, so why shouldn’t we actually think about how to design for them rather than letting that be random. I think there are lots- You can imagine how people who are sufficiently frail that they’re now struggling to play with their cats could find new ways of doing that. On the other hand, framing it as a utopia of course immediately raises the challenge to everyone who hears that statement, in what ways is this not a utopia? And the project, you know, begins to reveal some thinking there. There are some moments where the cats get feisty. They take control. There’s a bit of a Jurassic Park moment where nature will have its way, and I think that comes in the project. And there’s this whole kind of question about AI learning what’s good for the cats. Do the cats just want to eat treats? Can we let the cats just eat treats? Will the system self-regulate eventually? These are all questions that started to unfold as the thing rolled, you know? 

 

Sean:                  And certainly, anyway in literature and fiction, utopia is very contentious in its own right. Almost always in fiction, one character’s utopia is obviously somebody else’s dystopia, or somebody is often gaining at somebody else’s expense. I’m not saying this is necessarily the case here, but how long before we hear the pleas for robot rights? Ali, what’s your take on this kind of Cat Royale project?

                            

Ali:                      I think Cat Royale is a really good way of testing the deployment of autonomous systems because the cats serve as accelerated proxies for human interactions with autonomous systems. Cats, they’re not very thoughtful. They quickly reveal the primal emotions which really drive us. Each cat has a pretty unique personality and they’re not inclined to go with the herd. So they will have individualistic responses, and in that way I think, they’re a microcosm of human society where we have a very diverse set of responses. So I think cats were a good choice in the sense that one cat can be quite happy, the other one may be unhappy. And this mirrors what happens in society. I also think the engagement with the notion of utopia is quite important because I came up in Silicon Valley in the 90s, tech utopianism was really powerful then and we were motivated by extremely naïve ideas about what would happen, say, when you democratise media channels. We’ve now got social media. That used to be confined to just a few television channels. So I think Cat Royale is actually a good model for what happens when you deploy systems of gratification, instant gratification, in human society.

 

Sean:                  I did a bit of research before having this chat, and one of the things I did was a Google search of AI and art. And once I got past the sponsored links, which I’ll mention in a moment anyway, one of the first things that popped up was a Guardian article, ‘When AI can make art, what does it mean for creativity?’, which I thought was interesting. The next thing that came up was the Alan Turing Institute talking about their AI and arts interest group, a multidisciplinary effort rooted within the Turing Institute. But the sponsored links were quite interesting. There were two from universities offering courses in AI, and there was one from an advertising agency. And obviously, the now-ubiquitous AI art generator that was being advertised. There was a brief discussion about whether AI can actually create art, which is a massive topic and perhaps a podcast in its own right. But what impact is AI going to have on the art world? Perhaps we could go round the room and see what people think on that? Steve, do you want to start as a computer scientist in the room?

 

Steve:                Okay, well I’m not going to attempt to set out all the implications it might have because I think there are many, and some of those could be about job dislocation, particularly at the more factory ends of the industry. But I think that one of the things that really interests me is there’s going to be a glitch moment and I’m not sure we’ve discovered it yet. Think vinyl, and what it was originally intended for. Think about when somebody scratched it, it made a sound that nobody thought was music, think what happened next. That’s the moment I’m not sure we’ve found yet with AI, you know? We’re busy reproducing stuff, making stuff that looks like stuff. Yada, yada, yada, that’s not really that interesting is it? What’s interesting is once we find its distinctive glitch, its fingerprint, its material quality and mess with that, and somebody does something that initially everyone goes whoa, no that’s not art, that’s not a thing, that’s not allowed. And at that point the game is afoot I think. 

 

Sean:                  That probably is art. That’ll be the USP maybe?

 

Steve:                Yeah, yeah. Then that’s art right here.

 

[00:49:51]

 

Tim:                    I mean I agree with Steve. I think there’s sort of a very basic thing which might be a slightly Luddite point to make, but the everyday production of art in the small-a sense will be, I think, completely revolutionised, meaning people in the short term, medium term, long term, will be out of jobs. You know, illustrators, designers, that’s going to really affect things. I tend to think that it’s a little bit like the same moment with chess. There was this fear that no one would bother playing chess once you had Deep Blue and actually it turned out that all those strategies which are now- That we have the processing power in our mobile phones to play that well, have improved top-level championship chess. So people are learning from AI and they’re creating new strategies from it. Artists, we tend to think of them as producers but they are receivers. They’re kind of data processing facilities for culture. They take stuff in. And any tool that helps them take stuff in, whether it’s the camera obscura back in the day, mirrors and things for Vermeer, photography in the 20th century. AI will be another tool for them to take things in and play with. And I think Steve’s absolutely right. There will be this wonderful moment where the tool turns against itself. You know, scratching a vinyl record, creating new noises, thinking in terms of new methods of production that are accidental and can be turned to purpose is really exciting. But I’m tempted to say that overall, we’ve got to a point in history where the definition of art is so flexible that what it comes down to is something where human intervention has turned itself up, and it’s the human intervention that’s key, and I think there’ll probably be as little appetite for AI art with a capital A as there is for paintings by elephants and monkeys. 
It’s a nice trick and it’s momentarily interesting to think that something aesthetically engaging can be produced that way, but there’s no contact with that human intervention to make us think: what’s going on? Maybe I’m wrong, you know? Maybe we’ll get to the point where we begin to see these things as minds that are having their own capacity to intervene in ways we want to engage with. But I think for the moment, aside from generative strategies designed and re-employed by humans, I doubt it’ll change very much. 

 

Sean:                  I think part of that problem there perhaps is the fact that it’s so easily able to copy existing things, isn’t it? That actually, it’s not a monkey or an elephant doing a painting. But yeah, I do totally take your point there. 

 

Tim:                    Yeah, a bit like a photocopier.

 

Sean:                  Yeah, a photocopier. Everything’s derivative right? We chatted about that in the conversation, you know? Everything’s derivative, you know, whether we talk about Oasis and The Beatles or pick your own other example of that.

 

Tim:                    I mean don’t get me started on Oasis. But we- The derivative thing is very true and is a kind of fact of creativity. We don’t, as art critics, for instance, particularly stop to look at the guy who’s doing a really amazing copy of the Mona Lisa in chalk on the pavement. I mean, it’s incredible. Maybe this is stupid that we’re not stopping and thinking about this. But the capacity to copy is not that interesting aesthetically, even if it produces things which have that kind of, oh it’s pretty, it’s impressive. 

 

Sean:                  It’s technique rather than art, right? It’s like illuminating the manuscripts and copying out the bible however many times it was done. Ali, what are your thoughts on this?

 

Ali:                      I’d like to echo a couple of points that Steve and Tim made, first about the glitch and also about the fact that a lot of art is recombinant. And the thing about the AI system, if you look at how it works, is that it’s recombining and learning particular styles, but it’s always that glitch that gives it the tincture of humanity, and it’s knowing which glitch to pick that makes something great. I’ll give an example from music. I really like Bach’s cello suites and a lot of people have played them, Yo-Yo Ma’s played them, I’ve had the pleasure of hearing them. My very favourite rendition is Pablo Casals’ where it’s completely- It’s the same notes, completely different to the other versions, because it has the weight of the world in it, and it has a weight of experience, and he actually hangs and moves the- Changes the tempo in a way that Bach didn’t anticipate. So I don’t think AI is going to be able to do this and I think there are mathematical reasons, relating to the evolution of biological organisms versus mechanisms, that explain this because, you know, a mechanism always has to work within a very predictable space and in this case, AI art is working on its training set. We call that a vector space actually, in mathematics, and in the vector space, every position, or in this case every pixel and colour depth, is in some sense predictable. Biology doesn’t work that way. It works on the glitch. So if you think about how fish had swim bladders, you know, to maintain their buoyancy, then some fish discovered- Well, there was a glitch in evolution where that swim bladder could function as a lung and all of a sudden you had creatures moving out of the sea and crawling around on land. So life is unpredictable and that’s what creativity is. It’s this unpredictability. It’s based on a glitch or some unknown function that wasn’t predictable in that machine-learned vector space. 
So we see- In artistic terms we see an even stronger separation of the craft from what we consider art and that creative spark, and as a photographer who grew up in the optics era, it was very hard to get a blue sky or even that kind of shade of grey. It took quite a bit of knowledge with filters and whatnot in photography. Now, a mobile phone can do it. But there are still photographers and there are still people working in the profession, even though you have algorithms that can fix a lot of the things that once took craft. So we’ll see. I think AI will become part of the artistic toolkit and it’s going to democratise illustration in a way that is going to be unwelcome for many illustrators, but it’s still going to leave plenty of room for innovation and creativity because ultimately, even if it does produce that glitch that Steve’s talking about, someone has to recognise which glitches are viable, powerful or emotional and which are not, and then everything will move in a new direction. 

 

Sean:                  But to play devil’s advocate, you mention this vector space and the idea of things staying within that space, but the glitch is exactly that, though, isn’t it? The idea of it moving beyond that, and if you think about human experience, we are brought up with experiences and we use those to build on and that’s where art comes from, isn’t it? What goes on in your head. The glitches maybe, maybe it’s an evolutionary thing. But you take what you’ve known, what you’ve seen, what you’ve heard, and you- Something comes out. I don’t know what happens in an artist’s brain. I mean, technically we’re all artists on some level, right?

 

Ali:                      I think we all are. We are all artists on some level and I think it’s probably because humans are glitchy. But knowing which are good glitches, we’re arguing for them and pushing them through, it could be will to power or it could be aesthetics and this level of judgement is something which requires an evolutionary mechanism as opposed to a developmental mechanism.

 

Sean:                  But what’s the gatekeeper for those glitches right at the moment? Is it democracy as in do we- You know, the more people that like something that becomes a thing, you know? For instance maybe hip hop music or whatever it was that first- The scratching first developed. Is that because people thought hey this sounds great or is it like in some parts of the art world where you have these gatekeepers who decide that an artist is the next big thing and go and push him to a gallery or her to a gallery. I don’t know, is that how it works in some levels?

 

Ali:                      It’s the sociology of art. All these productions are ascribed in different hierarchies. So there’s popular acclaim, and a lot of times just because it’s popular there’ll be a group of people who’ll say this is rubbish. It’s not avant garde. And then there’s also the market, and the market is going to take certain- Give accolades to certain forms and privilege certain forms and not other ones. I think we’ll see this happening with AI art too. And in fact, AI doesn’t produce according to traditional media. I mean, I guess Microsoft did a quasi-Rembrandt that had some sort of one half level printing but it’s still not oil paint. And maybe they can learn to oil paint. But the older media won’t go away. We’re going to have a new media. And I think, as an artist myself, I find AI to be a wonderful new part of my toolkit because on one hand I can save a lot of money. On the other hand there are things you can do with it that you can’t do otherwise. 

 

[00:59:51]

 

But in art, you make your own luck and I think Tim made a really strong point about, you know, I guess it was Duchamp really who said art is what we decide it is. And it’s still going to be humans deciding it and there’ll be a market and there’ll be individual aesthetics and there’ll be gatekeepers. And none of this is going away with, I’d say, the democratisation of certain forms of artistic illustration, in the same way that these mobile phones with extraordinary algorithms have made it harder to make an original or beautiful photograph but there are still photographers out there innovating.

 

Sean:                  Absolutely. And just thinking about what you said about the oil paintings, it just reminded me, Steve’s been on the podcast before and we talked about a robot that goes onstage and plays instruments. I mean, is there a parallel here? I mean, you know, art isn’t just the visual media is it?

 

Steve:                Well for sure. I mean yeah, you’ve talked to me on the podcast already about music. We talked about- We should probably talk about Cat Royale, which is certainly not just a visual experience. It’s about robots and autonomous systems, and you know, what I feel may be missing from the conversation so far is a little bit of a discussion of physicality. There were several points at which you touched upon it. An oil painting has got oil on it. That’s really important. I’m not sure that a robot- A computer can play chess until it comes around my house with a box and can get the pieces out and put them on the table and move the pieces and we can do all that other stuff that happens around the playing, because that’s for me what playing chess is about, it’s not just an intellectual puzzle. So I think, you know, maybe we need to think a bit more about the physicality of what’s going on here, which is where the whole TAS programme comes in, because it’s not really just an AI programme, it’s very much an embodied AI programme. 

 

Sean:                  That’s really important actually and obviously we’ve gone into a lot of conversations about AI and art, but the one thing that Blast Theory have been doing with this stuff is shining a light on some of these areas and on trust. I mean, it’s perhaps not their primary motivation but they certainly have been, yeah, helping us to understand what it is to kind of see a robot catering for an animal’s needs and, you know, what that means. And maybe- I don’t know how that feels. Ali?

 

Ali:                      Yeah I think the Cat Royale project is brilliant because it’s a sociological microcosm of the interaction of autonomous machines with bodies. With bodies that have desires and material needs. So this in fact is a great way to study, in a kind of accelerated way, because cats are so visceral and they’re not going to show off or try to follow certain social scripts that may confuse the relationship between the deployment of these autonomous systems and their own reactions to it. And a big part of what I admire about Blast Theory’s work in general is that it looks at the physicality, which means materiality, but ultimately the fact that there’s a physiology and sociology interacting with all of these techno-artistic systems, and each Blast Theory artwork, and Cat Royale I think is the latest in a series of really groundbreaking ones, gives us insight into the sociology of technology and how it’s really going to behave when we interact with it. And I think this is quite important because we’ve made some technological deployments where we haven’t adequately studied what’s really going to happen, and the Blast Theory approach I think is one that needs to be taken as predeployment testing, and that’s what Cat Royale is. It’s a way of looking at, in an accelerated way, what happens when these autonomous systems interact with physiological systems. 

 

Tim:                    Well I was wondering, I mean, the physicality and the sociology of all of this stuff is fascinating and I wonder if it comes back on some level, which brings us round to utopias, to pleasing, you know? Art is kind of- Has moved a long way away from that which we find aesthetically pleasing. But what we’ve really done I guess is widened our definition of the pleasing. To an art critic, pleasing things tend to be very thinky. Art pleases me when I can write lots about it and think a lot about it. This is not necessarily the same as what pleases other people. And you’ve got with Cat Royale, this system that is trying to, as Blast Theory said, maximise happiness. A great, utilitarian concept that we, I think probably, philosophically, should all be quite uncomfortable with because we don’t have very good ways of maximising or measuring happiness. Now a cat in the context of humanity, you know, is a very simple system and it’s still a lot more complicated than most other areas we could deploy this, you know? This cat likes to have its tummy scratched. This cat will like it for the 30 seconds before it absolutely savages you, as everyone knows at a certain point you come across a cat rolling over and doing that. So I think the pleasingness of things is an important thing to think about. What pleases viewers in terms of- Or it might be a bit better to say audiences, people who are interacting with this stuff, but also the capacity of AI to produce different forms of pleasure. And that is such a massive and sociologically defined realm of enquiry, what it is that is pleasing to us in these different ways and I think it’s great that Blast Theory are testing this out and I- It just opens a massive can of worms for me. 

 

Steve:                Yeah, part of the concept of pleasing makes me think about play and playfulness and that’s one of the things that’s really involved in Cat Royale and if you do watch at least the video highlights, or head along and take a look at the real thing, you may get to see Clover’s growing relationship with the robot which gets- On day 10, seriously, seriously feisty. She gets the robot in a sort of death grip where she’s pulling on the toy and won’t let go and is really, really stressing the technology sort of physically, and the operators are trying to decide should we let go of this toy? If we do, will it spring around the room and hit a cat? What’s going to happen? And between her and the robot in this moment there’s a real negotiation of pleasure and play going on, and they do let go and she carries it off, and I’m not an expert on cat behaviour but she looks pretty pleased with herself I think at that point and savages the bit she’s got in the corner of the room. So there’s something- It’s visceral, it’s playful, it’s somehow you’re observing that kind of relationship. It has this nature of kind of turning back, which I think goes back to what happens when it’s really kind of physical, and I think it’s really interesting.

 

Sean:                  There’s a kind of like- Maybe this is a bit overblown, but playing god here isn’t there, in this? You know, this idea of right okay, we can put this system in and we know what these cats will need or would like or- And I know if that were a group of humans, even if you were catering to their every whim, there would come a point where someone would get fed up and would want something different. Tim?

 

Tim:                    I mean it’s the lotus eaters. It’s Greek myth and Tennyson, it’s actually, you know, it comes back round to utopia again. People don’t- A, there is this idea that utopias are all dystopias, you know, the original Utopia is told to you by someone whose name means rubbish, it’s narrated by one of Thomas More’s characters, you know, and it’s definitely a bad place, really. It might be better than Renaissance Britain, but people get fed up, you know? We- The life of ease doesn’t actually suit human beings and I think there’s probably a quite good evolutionary reason for this, you know, surprise minimisation in a sense that the other shoe’s going to drop at some point, but pleasure stops being pleasurable, and yeah, playing god or otherwise, if god could make us, you know, he could put us in heaven, I think we’d be quite bored.

 

Sean:                  It’s the Christmas every day kind of parallel isn’t it, you know? It wouldn’t be Christmas any more if it was every day. Add wonderful day of choice, delete where applicable. I asked ChatGPT to say a few words on this just because, you know, it’s a gimmick I can easily dial up on the internet and it did spew out several paragraphs when I asked it about AI and art, because I was thinking about this idea, and we mentioned just a moment ago about what’s maybe popular and what’s kind of considered and you know, at these different levels, and the idea of lionising art and having it on a kind of pedestal. Anyway, ChatGPT said, “In summary, AI’s role in making art more accessible is indeed a positive development” - I mean, it’s like its own best salesperson here - “by removing barriers and providing opportunities for engagement, AI can contribute to a more inclusive and diverse appreciation of art, empowering individuals to explore and connect with the artistic world in ways that they may have thought were unattainable.”

 

[01:09:57]

 

Ali:                      Well I tend to agree with ChatGPT there. I don’t feel particularly competitive with AI. I feel like it’s a partner. I also feel like, well, it’s nice for people to have this kind of partnership. I think the dangerous thing, getting back to Tim’s point about having every day be Christmas, you know? We could have a loss of social cohesion because if people start to rely on these external mechanisms, it may lower social interactions, you may have an avatar that maintains your friendships and ultimately become more and more isolated. So I think the frightening thing I find about a lot of technologies, not just AI but information and communication technologies, and I’ve explored this in some of my artworks, like Groupthink, is the fact that we may think we’re interacting but be delegating more and more of these material, physical interactions that Steve is referencing and just delegating them to AI and other entities and in fact start to lose our own capacity to interact. Somebody said to me, actually I was at a festival and they said to me as I was onstage, that the reason pilots keep landing planes, even though autopilot can do it perfectly well, is because they need to stay in practice. Autopilot is 97% of the flight but the pilots need to keep practising. So the thing to bear in mind is, when we can get pleasing results all the time, sometimes we need to remember how to create those ourselves and that stands for any kind of situation, whether it’s making an artwork or writing a nice letter to a friend.

 

Sean:                  Absolutely. My examples are significantly more low brow than yours but I’m remembering the animated movie from a few years ago, WALL-E, and if you haven’t seen WALL-E there is a robot or a set of systems keeping the humans alive while they fix the earth and the humans are just blobs on flying hover wheelchair things watching screens and they have no idea what’s going on, and that’s something we need to be wary of I suspect.

 

Ali:                      I think WALL-E is a great parable for the emerging human condition. When they’re- You know, a lot of the time I think our fears are misplaced, but that is spot on. 

 

Steve:                One of the angles that the TAS audience might find interesting is, you know, because a lot of the conversation has been about what does AI, and in particular autonomous systems, bring to art, but also I think art illuminates things for TAS and one of the reasons I think it’s got an artistic programme as a kind of research thing is to get new insights. So I mean, some of the things that it celebrates I think are, you know, art celebrates failure and then improvisation and it perhaps puts interpretation above explainability. And sometimes it puts surrender above control and playfulness above being safe and these are all important because the way a lot of folks are thinking in autonomous systems research is that things should be explainable, dependable, controllable and safe and that’s not what you get from art. So I think one of the important things is, it’s a provocation back to people making technology to really think about some of the things that are often, I feel, trotted out a little bit, perhaps I’m being a bit unfair, but you know, the number of times I hear AI or an AS should be explainable. And I think well, why should they be? I mean art is interpretable. That’s the fun. You have the debate, you make your interpretation, it’s ambiguous. It’s not self-explanatory by any means. And so I think these are important lessons for the technical research community that need to kind of come back. 

 

Ali:                      I agree with Steve’s point and at National Gallery X which was one of the- Which was part of the TAS Hub founding group, we really looked at explainability and art, and it’s full of interpretation, so we formed a thing called the AI gallery which we called the British Council for AI, for listeners in other countries, the British Council was always trying to explain British culture to other countries and that’s- I remember all through life just having exchange programmes with countries, you know, I’m American, that we were nominally or maybe truly enemies with and art builds trust. And there may be a point where actually AI outstrips our cognitive capacity. I don’t think cognition is the definition of the human. I think it’s agency. And agency is really bound up with creativity and the kind of new things that we- Innovation that we come up with. But art can also build understanding. So actually I think AI art productions could be part of the explainability. It could both show us- I don’t want to attribute intention to art, to machines, but it could show us both their capacity and their limitations in a way that no amount of words or even a ChatGPT output could. So it’s an exchange programme. We need exchange programmes with the realm of machines. 

 

Tim:                    I like the idea of being able to go on a machine exchange or having a 16-year-old AI coming and stealing your girlfriend as they used to do from France. Got sidetracked onto that, the 16-year-old bearded Frenchman. 

 

Steve:                A little bit too much there. We can talk about this afterwards.

 

Tim:                    Yeah, I can talk about it with my therapist. Safety, I thought safety is such an interesting term and much as I hate to admit it, although we have outrage around art, it’s kind of commonplace and has been commonplace probably since the 19th century for lots of different reasons. Actually, art in the 21st century is the safest of safe things. I believe art needs to and can change things, point to problems, raise issues, but actually, my sensation of being around art is one of total safety that I don’t experience anywhere else because it is safe. It might worry me a little bit, the issues it’s talking about, but the object itself is safe and that’s the condition that we give to art these days. Whereas the stakes are quite high for autonomous systems. We are talking about things that, if mistrained, if misreleased, if misregulated, if misused, have a huge potential to have direct human impact in the world. And from that point of view- This is the first point of disagreement maybe that I’ve had, I really want them to be explainable. It might be a dream to think that laymen like me can understand what’s going on in the black box, but I want to know how those decisions are being made. If I’m driving- If I’m walking around the streets of London and I have self-driving cars, I want to know what version of the trolley problem the car is working through that says it’ll run over me rather than a mother and a toddler. Because that’s meaningful to me. And so I do think there is a safety issue that art is a great training ground because it’s play. Play is safe. Play brings up danger, it evokes danger and invokes danger and gets us used to danger in safe ways. Art does the same thing. But when we move over into autonomous systems, that’s the point. They’re autonomous and some dangers can appear.

 

Sean:                  And also, I don’t think it’s an exaggeration to say that a lot of the very complicated neural networks are black boxes and even the people that design them don’t know exactly what’s going on inside them and exactly what they’ll do in circumstances that they’ve not encountered before.

 

Steve:                Hang about, hang about folks. We’re black boxes. We don’t understand ourselves. You can’t explain yourself to me in terms of a functional model of well this neuron fired and this neuron fired and therefore this data input came in and therefore I did this. You can offer me an account, if I demand one, you know, if I say what the hell did you do that for, you’ll give me an account, in the moment, of your own reasons as to what’s going on, but that’s not the kind of explanation people are asking for. They’re asking for, I want to know why, sort of deterministically, this thing did this thing. That’s not, I don’t think, how intelligence works and I think when we- We can’t even explain ourselves and then we talk about wanting to make things that are intelligent. I know we haven’t discussed this in this conversation but we use that label. Why would we expect them to be deterministically explainable? 

 

Tim:                    I 100% agree with that. We’re not transparent to ourselves, yeah.

 

[01:19:39]

 

Ali:                      These are both really good points about explainability and wanting that from the systems, which we do. Although Tim, I think you answered your own question, but you know, about which- Who the car would run over. But I think Steve’s point is also very good, and what it comes down to is training. And you know, we can’t even train our own children to be ethical a lot of the time. Ask some parents about that. But still we can train the systems and create regimes of training where the outcomes are optimised, so we’re constantly in a state of trying to optimise our systems. If you think about, say, the military, it’s very mechanistic in the way it operates, but the militaries have over the centuries developed laws of engagement and I think we’re going to have to do something similar with the deployment of autonomous systems. And let’s get back to, say, Cat Royale, and the world of art, and Tim and Steve have both been talking about this a lot. I think art is the- It’s not an entirely safe space but it’s the best space where we can test some of this because in a 5G deployment I worked on- I was doing the art and cultural section but we also had the transportation sector and the medical sector and we found that actually, the human sensitivity to art matched that in medicine and transportation, still it’s the safest place to find a glitch, so by deploying the systems in an artistic environment we could find what the tolerances were before deploying them more widely in society. 

 

Sean:                  That’s about all we have time for today. It only remains for me to thank our contributors today. It’s been a real pleasure and I appreciate you sparing your time out of your busy schedules, so thank you very much Tim.

                                                                                                              

Tim:                    Thanks very much. 

 

Sean:                  Thanks Steve.

 

Steve:                It’s been a pleasure. 

 

Sean:                  Thanks Ali.

 

Ali:                      Yeah, it’s been a pleasure and a nice surprise, good to see you again Steve. 

 

Sean:                  If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS Hub website at tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Ltd, our theme music is Weekend in Tatooine by Unicorn Heads, and it was presented by me, Sean Riley. 

 

 

[01:22:12]