Living With AI Podcast: Challenges of Living with Artificial Intelligence

AI Music: Duelling against a Robot Piano!

Sean Riley Season 1 Episode 14

If music be the food of love, AI. We talk to Professor Steve Benford about robots playing music. The panel guests are Stuart Middleton, Alan Chamberlain and Paurav Shukla. This podcast was recorded on 11th March 2021.

00:34 Paurav Shukla
00:41 Stuart Middleton
00:55 Alan Chamberlain
01:05 Alan's music & visuals on YouTube
01:10 Sean Riley

01:35 Living With AI Theme: Weekend in Tattoine by Unicorn Heads (YouTube)

01:55 Facebook Algorithm trained on Instagram photos (BBC)
03:15 Facebook's SEER
03:59 OpenAI’s state-of-the-art machine vision AI is fooled by handwritten notes (The Verge)
07:55 AI conquers challenge of 1980s platform games (Nature) (BBC)

13:00 Steve Benford
13:45 Art Codes
13:50 Blogging Guitar
15:55 ToneDexter (Audio Sprockets)
18:30 Robot Marimba Player Shimon (The Verge)
20:15 Maria Kallionpää (Soundcloud)
20:20 Disklavier Piano (Yamaha)
22:40 "Climb" - Disklavier Piano Project, University of Nottingham
25:40 "Kein Operationsplan reicht mit einiger Sicherheit über das erste Zusammentreffen mit der feindlichen Hauptmacht hinaus" Helmuth von Moltke the Elder - Chief of Staff of the Prussian General Staff from 1857 to 1871 (wikiquote)
Roughly translated: "No plan survives first contact with the enemy"
34:21 Whiplash (Internet Movie Database)
35:00 Nottingham's TAShub Node
41:40 Living With AI - Robots in the Home / Discussing Empathy @ 37min 30secs
46:20 CORRECTION: Michael Jackson said this about "Billie Jean" not "Beat It" (Wikipedia)
01:00:40 The Beatles in India (Wikipedia)
01:00:43 Paul Simon - Graceland album inspired by South African street music (Wikipedia)
01:00:46 Led Zeppelin Kashmir - Moroccan influence

Podcast Host: Sean Riley

The UKRI Trustworthy Autonomous Systems (TAS) Hub Website



Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 


Podcast production by boardie.com

Podcast Host: Sean Riley

Producer: Louise Male

If you want to get in touch with us here at the Living With AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub and the Living With AI podcast.



Episode Transcript:

  

Sean:                  You're tuned to Living With AI, the podcast that talks about artificial intelligence and its effect on life in general. If I sound a bit like I’ve switched to local radio DJ mode it's probably because today's feature topic is AI and music. Shortly you'll hear from Professor Steve Benford, who's one of the world experts in mixed reality at the University of Nottingham, but before that let's meet this week's panel. Chatting all things AI this week are Paurav Shukla, Alan Chamberlain and Stuart Middleton. Paurav is Professor of Marketing at the University of Southampton's Business School. Welcome back Paurav.

 

Paurav:              Hello, hello.

 

Sean:                  First time on the panel is Stuart Middleton. Stuart’s a lecturer in computer science at the University of Southampton. His research focuses on natural language processing and machine learning. Welcome to Living With AI Stuart, we'll try to be gentle.

 

Stuart:               Thanks Sean and good to be here.

 

Sean:                  And Alan is a Senior Research Fellow in the mixed reality lab at the University of Nottingham but perhaps more important for today's episode he's also a musician. You can listen to some of his music via the link I’ll put in the show notes and he's an artist, welcome back Alan. 

 

Alan:                  Hello again. 

 

Sean:                  And I'm Sean Riley and I’ve mentioned I used to be in a band on the last podcast. This feature’s right up my street, I'm currently using an app to try to teach myself piano, having taught myself to roughly play guitar and drums over the years I must say I'm hoping I get better results using technology. Well, we're recording this on the 11th of March 2021 and we are discussing music, so I’ll give a shout out to our theme music which is called Weekend In Tatooine and it's by Unicorn Heads. 

 

Sean:                  Stuart you sent a couple of links through just before, before we started recording, what’s caught your eye?

 

Stuart:               Yeah, there's been a really interesting couple of news stories. The first is from Facebook, they've released a new algorithm and they've trained it on absolutely massive amounts of Instagram photos. So 1 billion photos, all unlabelled, and what they're trying to do is train the AI on data which has no human labels at all, to try and get rid of the human, human bias in the, in the labelling. So quite a neat idea but I, I was thinking, this might well introduce a few problems in itself, because yes, you get rid of the human bias, so you know, people's bias towards maybe their, their own personal prejudices.

 

But what you also do is you, you may be, by getting rid of the human label- the actual dataset was created by people’s Instagram, so actually it's all the images from what's popular. So you're going to get really massive filter bubble effects, popularity bias, cultural bias, and that's just going to get ingrained into those algorithms. So, not quite sure if they, they thought that one through a little bit, and also these algorithms are going to, they're going to be used for things, and if, if they've got no human in the loop, no human labelling, have they really thought about the kind of cultural sensitivities? You know or, or is this algorithm just going to mete out whatever it decides with no sort of thinking about the human sort of consequences of what it's doing?

 

Sean:                  We’re, we’re back to that Jurassic Park thing, just because they could rather than whether they should. Facebook's called that system SEER, I mean that's you know, that's sort of God you know, God complex is it? What, what's the- Paurav what do you think?             

 

Paurav:              I am amazed by it in one way anyway, as a lay person in this area. I feel like, my God how fast the technology is moving. But at the same time I have a question that is emerging, so if this AI sees a cat and then sees my son wearing a cat outfit and looking like a cat, with you know, some sort of moustaches and all those created on his body, will he be identified as a cat too?

 

Sean:                  Well it's almost worse than that, because there was an OpenAI story recently that said that their system that they've recently trained, which is incredibly accurate on most objects, can be fooled by simply putting a Post-It note on the object and writing on that the object is something else. So they had an apple and they sent it through the system and it correctly identified it was a Granny Smith type apple, and then somebody put a Post-It on it saying iPod or iPad and it suddenly came up identifying it as an Apple product rather than an apple. So you know there are, there are still flaws in all these things. Facebook claimed this SEER is 84.5% accurate, I don't think that's that good is it?

 

Alan:                  I, I sorry, I, I think it's a bit, it's a bit strange really isn't it, that it's, how do they assess accuracy if they're not using human terms? I mean it’s, it’s, it's- because language is messy anyway isn't it? So things like apple, cat you know, son, all those kinds of terms that you've just used can have multiple meanings in multiple contexts, so you're constantly kind of correcting and you don't have to be grammatically correct. We understand each other sort of based on, on, on what's happening in the moment. But, but the, the weird thing- I mean perhaps Stuart could fill us in on this- is if you want to get rid of human bias then is, is it, is the code created by nonhumans that have never had any input? Because it, it feels like there's, there's certain imagery, probably certain like, religious imagery and stuff, that if it's classified incorrectly by a system it's going to create huge amounts of outrage.

 

Stuart:               Yeah I, I, I think, I think you're right and the, the images are created by people, they're annotated by people that bias is going to be there, that is very hard to, to actually not learn that. So the key perhaps is to not, not avoid the bias but actually report the bias, find out where the bias is strong. If the results are in that area maybe flag up some warnings that this result might be unreliable.

 

Sean:                  Go ahead Paurav?

 

Paurav:              And if I may add into that, the other thing that comes to my mind is when I'm thinking about Instagram, is that do I ever post a photo of me looking uncool on Instagram? So whatever I felt-

 

Sean:                  Well this is, this is in, the bias built in right? 

 

Paurav:              Isn’t it?

 

Sean:                  The bias is completely built into the photos we select to put up there- 

 

Paurav:              That's it.

 

Sean:                  -as being a representation of ourselves.

 

Paurav:              Yeah.

 

Sean:                  So as, and as Stuart said right at the beginning you know, this is like an idealised kind of view or it's what people hope and maybe the odd fail but those are, those are in there as jokes right? 

 

Alan:                  Yeah I, well I, I thought having looked at Instagram that most people had Gucci sort of clothes and Chanel jewellery and-

 

Sean:                  They're all filmed from slightly higher up to make sure they don't show double chin. Yeah, no absolutely.

 

Alan:                  Yeah exactly yeah, yeah with-

 

Stuart:               Aren't they all old as well, because does anyone young use Instagram anymore?

 

Sean:                  But, but the, the flip side, this might be using people's Teams and webcam data because you know, you must get the most unflattering pictures of people, shot from below in poor lighting with low quality cameras. So maybe they need to augment this with kind of like, some of Microsoft's data.

 

Paurav:              One of the things I realised when Alan said this you know, the industry I research, luxury- this dataset would become a nightmare for them, because most of the photographs will only show luxury goods. That means they are too common right, and luxury by nature is exclusive. But now Primark will become exclusive, H&M will become exclusive and Gucci and Louis Vuitton would become commonplace.

 

Sean:                  Well this is where you need a machine learning algorithm that turns everything on its head the moment it's learned it yeah, absolutely.

 

Alan:                  Or, or a human.

 

Sean:                  Humans yeah, yeah absolutely and there was another story that Stuart sent through about AI playing 1980s platform games. Now that reminds me that I still have a, a date with destiny to one day finish Ghosts and Goblins, I don't think it's ever going to happen but I, I haven't given up hope yet that I’ll get past whatever it is, level four. Well what's this one all about then Stuart?

 

Stuart:               Yeah there's, so this is a team from Uber AI Labs in California. They've developed an algorithm which can learn by itself lots of 1980s games, I presume all the games are broken these days so they're emulated. But yeah, it can literally play itself, so Super Mario and Donkey Kong, all those sort of classics, and what it's using is reinforcement learning. But they, they figured out a nice little twist, called Go-Explore, so a little, different way of training. It's a bit more like humans do, so rather than just sort of combining the exploration in the game and figuring out what works and then slowly piecing it together, they actually allow the algorithm to go back in time. 

 

So let's, let's explore at this stage and let's learn this stage, and they kind of do a bit of both, and I think this, this type of approach is, is a good example of sort of disruptive AI where it can be applied to loads of different sorts of applications. You could use this approach in AI planning for drones and swarms, robotics you know, working out where the robot has got to go in the new home that it hasn't seen before. Classifying images, you could probably do the same thing if you're trying to work out where in an image to look. This one could be one to watch I think.
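The "go back in time" idea Stuart describes can be sketched in a few lines of Python. This is only a toy illustration of the archive-and-return principle behind Go-Explore: keep an archive of states already reached, return deterministically to one, then explore from there. The tiny one-dimensional "game", the cell function and all names here are invented for this sketch; the real system works with emulator save-states and learned policies.

```python
import random

def cell(state):
    # Group similar states into coarse "cells" so the archive stays small.
    return state // 2

def go_explore(steps=2000, goal=20, seed=0):
    rng = random.Random(seed)
    # archive: cell -> (state, trajectory of actions that reaches it)
    archive = {cell(0): (0, [])}
    for _ in range(steps):
        # 1. Select a previously visited cell and return to it ("go").
        state, traj = rng.choice(list(archive.values()))
        # 2. Explore with a few random actions from that state ("explore").
        for _ in range(5):
            action = rng.choice([-1, 1])
            state = max(0, state + action)   # toy 1-D game dynamics
            traj = traj + [action]
            # 3. Archive any new cell, or a shorter route to a known one.
            c = cell(state)
            if c not in archive or len(traj) < len(archive[c][1]):
                archive[c] = (state, traj)
            if state >= goal:
                return traj  # an action sequence that reaches the goal
    return None

path = go_explore()
```

The point of the archive is exactly the "bit of both" Stuart mentions: the algorithm never has to re-discover how to reach a promising state before exploring beyond it.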

 

Sean:                  I think the only thing about that is I know that when we talk about things like driverless cars we kind of, we talk about the necessity to make mistakes, but the problem is trying to make those mistakes in the real world. It's okay if you make those mistakes in a platform game where you fall off the platform and you, you know, I don't know, diddly, diddly down to your doom. You don't want to be trying that out in real life though right?

 

Paurav:              Absolutely, I think this is quite a promising you know, idea in, in some sense. Isn't this also aligned with what AlphaGo and their technology does in terms of you know, brute force learning, and then after that learning now they have some sort of an exploration stage wherein they are exploring different ways of doing things in novel ways and they are learning. A human question to this, which is obviously philosophical here, but you know when you go into Mario and when you go and jump on that little cactus and then that cactus goes down and you actually are able to go down and collect 20 more coins, in that exploration does the AI feel the same joy as what we found, and then the way we shared it with all our friends, “Do you know you can get 20 more coins?” I think that is the most fascinating part of it.

 

[00:10:36]

 

Sean:                  All these AIs are talking to each other aren't they?

 

Alan:                  I completely agree with Paurav, I mean the, the- when I looked at it I thought, wow, well one of the things about AI, particularly emulations as, as this is of, of a sort, is that it can do things endlessly, and then I started thinking to myself, that left me a bit cold somehow. It's a bit like driverless cars, it’s, it’s sort of, I, I think the rest of technology and design is sort of, I don't know, drifting towards kind of more quality of experience and use. You saying things like Ghosts and Goblins and people thinking about Pacman and all those kinds of things, those games used to be in pubs or you'd have your computer and people would gather round and take turns.

 

                            And, and, and with this, although it can do the task and probably do it in you know, sort of a very short amount of time, it’s, it’s kind of like time on task and efficiency and error rate and all those kinds of, you know, time and motion studies. Whereas what you want is a, something that you feel- have some sort of bond with. It doesn't have to be an emotional bond but like a cool little character and you can jump over things and then there's the sound, and that it has an influence on you. You kind of, I, I don't know, how do you two feel about that? This is something about competing as well that this loses.

 

Sean:                  But there was also, there was an element of them being very difficult because of limitations of the time, so they made the, the games quite difficult because they didn't have enough memory to make them very long. So these days a game can be an epic thing that could take you hours to complete, but each stage is incrementally difficult or incrementally larger worlds to discover things and all the rest of it. Whereas back then it was more, let's make this really hard, because if you get past this level it's nearly all over and they're going to think, what have I spent my money on? Stuart you were going to say something?

 

Stuart:               Yeah I, I, I think this is, I think you're right. The games are a social experience and, and part of the joy is, is you're playing with, with other people, and if you've got loads of bots in the game you know, like, like you, you see this one, I think it's called Call of Duty, it's starting to get real problems with people who've got these super bots. You run in, you've got no chance because they've shot you within two seconds and can't even see you. And it’s, it’s stopping the fun of the game where people are kind of learning together, and maybe the best do make it at the end but at least you’ve got a chance.

 

Sean:                  Taking the stage today it's Professor Steve Benford- sorry, getting a bit theatrical but I'm missing live music. Steve is Professor of Computer Science and EPSRC Dream Fellow at the Faculty of Science at the University of Nottingham, welcome to Living With AI Steve.

 

Steve:                Hi, thanks Sean. Yeah great to meet you again and yeah, nice to have an opportunity to talk.

 

Sean:                  Absolutely and then before we go any further I have to address this Dream Fellow, I mean what a compliment.

 

Steve:                Yes actually, I mean just to slightly correct, I was a Dream Fellow, the fellowship ended a few years ago, but yeah, it was a wonderful opportunity with some EPSRC funding to really go off and think about things differently for a couple of years. Yeah, it was, it was pretty transformative actually.

 

Sean:                  Fantastic well full, full disclosure I’ve made a couple of videos with you before haven't I Steve? We did, we did some on a project called Art Codes but more on topic for today I remember talking to you and actually I can see it on your webcam, the Carolan guitar, not strictly AI but it's certainly music meets tech right?

 

Steve:                Yes, yeah that's true. You know, Carolan was a project about digitally augmenting an everyday product to carry its whole kind of history with it and we, we chose a product that was very non digital to make the point and that was an acoustic guitar. A very traditional object, handmade and we attached digital information to it by its decoration, you could scan it augmented reality style and it would pop up a bunch of stuff. 

 

And then we just documented the pants off of it really, everything from how it was made, the woods the, the craftsman through to lots of people who played it. And you know we're now seven years on from when it launched and it's still going. It's a guitar in residence in a folk club, the blog is growing and so this thing by now has a very rich kind of personal identity and, and history.

 

Sean:                  Fantastic. Well, sticking with the music obviously but moving slightly more onto the AI side of things, I was recently digitising some ropey old VHS recordings of a band I was in from over 20 years ago. And one of my immediate thoughts was the possibility of AI cleaning up the pictures, you know, the images, and I'm sure it would be great if it could clean up the sound too. But in sci-fi there's even talk of AI superstars, I think Black Mirror touches on it and there's some mentions in William Gibson's work. I mean what's the current kind of state-of-the-art where AI meets music?

 

Steve:                So it's interesting, I think music is one of the domains where AI is really kind of breaking through quickly and becoming a, a kind of standard tool and you know, I can think of a few examples. So there's, there's AI in the sense of kind of, signal processing that sort of you know weakish version of AI. 

 

But, but that's appearing all over the place. I mean recently as a guitarist I’ve bought a pedal, if I'm allowed to mention brand names, called a ToneDexter, and what this does is essentially you, you plug your guitar into it through a pickup, which normally produces a bit of a dodgy sound. You plug a microphone in at the same time, you play both and you train the system to spot the difference, and then it reintroduces all of the missing mic sound next time you just plug the cable in. So you kind of go through this whole training process with this, this bit of signal processing and, and you know that's, that's really interesting. 
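The training process Steve describes, learning the difference between the pickup and the mic and then re-applying it, can be illustrated with a simple least-squares filter fit. To be clear, this is a guess at the general principle, not ToneDexter's actual algorithm; the demo signals and every function name below are made up for illustration.

```python
import numpy as np

def fit_fir(pickup, mic, taps=32):
    """Fit FIR filter coefficients h so that convolving the pickup
    signal with h approximates the mic signal (least squares)."""
    n = len(pickup)
    # Matrix of lagged copies of the pickup signal (a convolution matrix).
    X = np.column_stack([np.concatenate([np.zeros(k), pickup[:n - k]])
                         for k in range(taps)])
    h, *_ = np.linalg.lstsq(X, mic, rcond=None)
    return h

def apply_fir(signal, h):
    """Run a new pickup signal through the learned correction filter."""
    return np.convolve(signal, h)[:len(signal)]

# Hypothetical demo: pretend the mic is the pickup coloured by a known
# short filter; "training" should then recover that colouration.
rng = np.random.default_rng(0)
pickup = rng.standard_normal(4000)
true_h = np.array([0.5, 0.3, 0.2])              # pretend mic colouration
mic = np.convolve(pickup, true_h)[:len(pickup)]

h = fit_fir(pickup, mic, taps=8)                # the "training" step
restored = apply_fir(pickup, h)                 # mic-like output live
```

The design point mirrors what Steve describes: the expensive comparison of pickup and mic happens once, offline, and afterwards only the cheap learned filter runs on the plugged-in signal.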

 

A second example that, that really struck me was digital audio workstations. So I, I, I use Logic for better or for worse for my production and over the years they've refined their approach to how you deal with drums. You know, for many people who do their own recordings not having access to a live drummer or a room in which you could record drums is a challenge and so of course sequencing and programming drums has always been part of digital audio workstations. 

 

And it's kind of gone from software drum machines through to loading up really accurate samples of real drum kits through to the system kind of helping you compose lines. And most recently they have launched AI drummers, so you now choose a persona of a drummer that you want to work with in Logic and it will play with a feel and a style. So that's a real example of proper AI emulating a human being embedded in a, in a really everyday musical tool that, that you know I'm sure millions of people use.

 

Sean:                  Yeah, this is, this is the worry for- well the worry, the worry for kind of potential session musicians that you know, that, “They're taking our jobs.” But yeah, drum machines have been on the, on the kind of scene for years and years and years, and you used to be able to programme in errors to make them sound more real even you know, 30 years ago. So this is, it's just an evolution of the same thing I, I suppose. It won't take on- we're not going to be seeing robots sitting on stage playing drums.

 

Steve:                I think- so there's been some really sort of beautiful work on robot musicians. I, I kind of quite often go to a, a conference called NIME, New Interfaces for Musical Expression, and there's, there's quite a thread there about physically getting robots to emulate how humans play. So you'll see papers about how a robot arm can move a drumstick in the same way that a human does. And I think one of the nicest examples I’ve seen, a few years old, was a, a robot marimba player called Shimon, I think, I hope I’ve got that right, produced by folks at Georgia Institute of Technology.

 

                             And this would play on stage alongside other jazz musicians and it would try and pick up cues from their playing about its own timing. You could also give it visual cues, you can glance and nod at it, you can give it the timing by nodding your head and it nods back to kind of pick up. It does a pretty impressive job so actually I think there is a, a good chance that yeah, robot musicians will, will become a thing.

 

Sean:                  That's, that's the interesting part isn't it, having a jam with the robot as opposed to getting it programmed to do a certain job, and there's an irony here isn't there, in the fact that perhaps you know, I used to be a drummer- I say used to be, I still like to play but I don't have a kit available because of the problems we just mentioned you know, noise, space etc. But the one thing you're trying to do as a drummer is keep the beat, and yet the mistakes are what make it a human performance. But I think I love this idea of getting in a jamability you know, the ability to kind of, as you say, have that interaction sort of on stage where you nod and you glance, and these things that happen when, when humans are jamming. If that could happen with robots as well that sounds amazing.

 

Steve:                Absolutely, and there's you know, there's a musical question about feel, which takes us back a little bit to the kind of digital audio workstation stuff, but also the, the collaboration. Now we, we actually experimented very directly with these ideas a couple of years ago where we worked with a composer called Maria Kallionpää and we made a very unusual piece of music. It was a, a duet for a pianist and a Disklavier piano, and I don't know if you know Disklavier but it's the, it's the modern version of the pianola, it's a, it's a piano that plays itself. So in some sense although we weren't strictly using AI techniques it, it’s, it’s a, it's an autonomous system, it's a piano that's capable of physically actuating itself. 

 

[00:20:43]

 

Sean:                  Okay.

 

Steve:                And so Maria, she, she made this game where essentially it was a, a battle between a human concert pianist- and we're talking about a highly trained, professional concert pianist you know, at the top of their technique and their, their game- battling against this piano. And at various places in order to advance the score the human has to play a test essentially, it has to play certain chords in the music sufficiently well. And then at other points the piano comes in and the keys move against them as they're playing, and the piano is playing at one level better than they could. It can play bigger spans, it could play all of the notes on the keyboard at once if it wanted, it can play insanely fast for a very long time without getting tired, you know, all of the things robots can do. 

 

And so this was a lovely kind of piece of work and we toured it to, to quite a lot of venues and we talked to the pianists afterwards and what became clear was they, they adopted very different attitudes towards this instrument and this piece of work. So they often started off being quite submissive and thinking that they kind of had to play the piece as written and they ended up having to adapt their style to be like the piano, they ended up having to play less human like. 

 

And, and after a while they sort of rejected this and they realised that the point of the game was essentially to tame and ultimately game the piano. And so they would deliberately cause the piece to fail in interesting ways, jump to places, and they began to mess with it essentially so that they could show the difference between human virtuosity and robot virtuosity. And so this whole piece became an exercise in a sort of back and forth improvisation with both parties kind of pushing each other in different ways.

 

Sean:                  I think, well one thing that's interesting about that is that we've probably all seen footage of these heavy metal guitarists who play extremely fast, extremely technical, finger tapping on the fret board and sometimes what's missing from that can be say you know, for want of a better word, feel. It's technically brilliant and yet maybe it's missing a bit of emotion and that's perhaps, sounds like what's, what was going on there in a way.

 

Steve:                Absolutely. You know a big part of the story is, is feel and emotion, it’s, it’s the variation in timing and all those things. But I think another big part of it is failure, so failure is an important aesthetic quality and it’s failure that demands improvisation and leads you to do new things. And I think in autonomous systems we don't- mainstream thinking doesn't understand that, failure is still seen as something that needs to be designed out, autonomous cars you know, weapon systems, medical systems you know. Your first-

 

Sean:                  Failure is definitely- sounds like a bad thing there. I do remember early on in my learning to play the drums a drummer telling me that if you make a mistake, do it again and everyone will think it's a fill.

 

Steve:                Yeah absolutely and, and riff on it and build on it and go somewhere else, yeah, exactly, and that's what was going on in Climb. So in creative areas you need to build in failure, provoke failure, and that's one of the things that systems can do and as a result you get a much more improvised response. The question is, do you also need it in other walks of life? Can you have medical systems without embracing the positive aspects of improvisation, plan making and failure? Now that's quite a controversial statement, but on the other hand if you don't I'm not sure you have autonomous systems.

 

Sean:                  Yeah, I mean I, at least do that failure in modelling perhaps.

 

Steve:                I, I think in practice. I think you know, improvisation is you know, a key part of, of how we work with the world. So you know I’m terrible, I can’t-

 

Sean:                  I suppose moving out, you're able to move out of tunnel vision that way. If you say, “Right, I have a task to do,” and it requires, for instance, I don't know, certain lengths and certain thicknesses of wood, and yet you've got thinner wood you know, the improvisation is to bolt some of that wood together to make thicker pieces to do the job anyway. Whereas perhaps if you just think, no well I can't do this job, the job doesn't get done.

 

Steve:                Yeah, absolutely I mean you know, I wish I could quote the correct source, I'm sure somebody who is listening will be able to do this, but you know, there is a, a very famous quote that you know, “In battle your, your plan doesn't survive the first skirmish with the enemy.” You know, you are improvising in the theatre of war for example. So you know, and I'm sure that's actually true in pretty much all walks of life where people work around technologies, they actually improvise and collaborate and find solutions.

 

Sean:                  And autonomy yeah, I suppose that's a great thing. If it is going to work on its own, anything that's autonomous has to have the ability to adapt and to change, change the plan. Just sticking with music for a minute, I remember back in the early 2000s, it might have been 2005, a system called Pandora which you put in a song and it came out with all sorts of different songs that sounded like it. And I'm sort of taking a sideways leap here but I'm just thinking in terms of AI, is that stuff what led us now to kind of recommendation systems and things?

 

Steve:                You know, applications such as Pandora and Shazam and various others are really interesting because they try to analyse music and recognise it, often as a basis for telling you what it is or recommending other things. But again, that makes certain useful connections: I can give you more stuff that sounds, in some analysis view of the world, like other stuff.

 

But it misses out on all the discussions we have about music and genre: discussing what a genre is and what's an emerging trend. It's a very human thing, and what makes one piece of music like another is as much about the context and the meaning, the tribalism of genres, as it is about what the sound actually is. So again, if we're not careful, I think we're in danger of just blandifying the way we talk about, think about and learn about music.
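Steve's "stuff that sounds like other stuff" is roughly what content-based recommendation does: represent each track as a vector of audio features and rank other tracks by similarity to a seed track. The sketch below is a toy illustration with invented features and track names, not a description of how Pandora or Shazam actually work:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical hand-labelled features (tempo, distortion, vocal_presence),
# each scaled 0..1 -- a stand-in for real audio analysis.
catalogue = {
    "track_a": (0.9, 0.8, 0.6),
    "track_b": (0.85, 0.75, 0.65),
    "track_c": (0.2, 0.1, 0.9),
}

def recommend(seed, catalogue):
    """Rank every other track by similarity to the seed track."""
    return sorted(
        (t for t in catalogue if t != seed),
        key=lambda t: cosine(catalogue[seed], catalogue[t]),
        reverse=True,
    )

print(recommend("track_a", catalogue))  # track_b comes out most similar
```

This is exactly the limitation Steve points at: the ranking can only see the feature vectors, not the context, meaning or tribalism around the music.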

 

Sean:                  Yeah I, I, I do remember thinking to myself listening to a song which you know, sounded quite nice and not dissimilar to the previous song and, and then I started thinking, but what if this band are a sham? What if they've been made by a producer somewhere? What about the back story? It's a human thing isn't it to think about the bigger picture right?

 

Steve:                Yeah, yeah, absolutely. So again I think it's another of these examples of perhaps it's not failure but it's about serendipity and quirkiness and the other connections that, that happen around things. And you know I think autonomous systems can help us find stuff we know we want, the question is whether they can partner with us in taking us to new places that we didn't know we wanted to go to.

 

Sean:                  Well, that's the thing. Now “everything”, you know, is available on services like Spotify, and it can be hard to find what you're looking for just because of the sheer volume of things, whereas many more happy accidents happened when a friend at school lent you a cassette that they'd recorded off the radio the night before or something. You'd find songs that you liked, and obviously they liked, because otherwise they wouldn't have shared it with you. Are we in danger of losing that, or do you think those sharing options still exist? Is it a bit too easy now?

 

Steve:                Yeah, so there are clearly still lots of options for sharing music and recommending music to friends and other things, so no, I wouldn't say we've lost it. I think my argument is that if you're someone thinking about the role of autonomous systems in all of this, you perhaps need to think a bit more broadly about how that social context works and to understand, you know, what is the role of the system?

 

I had a PhD student a couple of years ago, Chris Ellis, who looked at music recommender systems, and for his PhD he designed an interface where essentially you could sit down at a coffee table with some humans and a number of system personas and talk about music. So there the system is just one voice at the table, and it has an obviously quirky bias. You choose: yeah, I'll have a goth sit down at the table, and a heavy metal fan, and you will recognise the kind of system biases and it will be part of the conversation.

 

Sean:                  That, I think that's really interesting. One, one thing I’ve noticed since the advent of streaming services and YouTube being available to watch you know, pretty much any music video there ever has been is that I will find a new band that I absolutely love and then by next week I’ve completely forgotten about them. Now perhaps this is an age thing, perhaps I'm, I'm just you know, living in the past but for me the physical product was a reminder never mind the fact that I perhaps spent some hard earned cash on an album or whatever. 

 

But then you had that thing in your hand, and it might be sitting in, on the seat in your car or it might be on the shelf in the, the room where you have your stereo and therefore you remembered to, to listen to it. And I think I, personally would like to see some autonomous systems that kind of are a bit better at remembering or you know, recognising what you like and helping you sort of stay in touch in a way.

 

[00:30:47]

 

Steve:                Yeah, and I'm now wondering if part of that is about making a bit more visible the character and role and relationship you have with those systems. Going back to the previous example, there's a lot of talk at the moment that AI and autonomous systems may have biases, and there's a lot of very sensible thinking and hard work about how we eliminate those biases. But another way of dealing with those kinds of problems is to make them visible, to make it very clear that the system has a persona and a history and a set of characteristics so that as a human you can figure out whether that's interesting. When I talk to someone face to face I try to work out what biases they have and who they are. Sometimes that's positive, sometimes it's a problem, but I guess we're kind of missing that.

 

Sean:                  Yeah, it becomes part of the conversation and it actually injects something into it, doesn't it? And I suppose that's the thing: perhaps we need more chosen by us, as you said before. If you are interested in goth or heavy metal music, then you want more goth or heavy metal music. It doesn't mean that you eliminate everything else, it's not a zero-sum game, but if I'm asking a recommendation system I'm probably looking for more things like The Mission or The Sisters of Mercy or whatever goth band you decide to insert in that place there.

 

Steve:                You know, I think one of the really interesting questions is about whether these technologies do replace human musicians, and there's no- well, I say no doubt, little doubt in my mind that introducing kind of AI drummers into digital audio workstations is to some extent doing that. And certainly the Disklavier piano, I think, was designed initially as a kind of recording device for the studio, so you could play out your MIDI file on a real instrument without having to hire in a pianist, and certainly when you see those kinds of things in hotel foyers, they're replacing a human player. So there is a really serious question about whether these technologies will replace humans, or whether we can design them to work with or amplify what humans can do. I think that's a massive challenge.

 

Sean:                  I think- well, I think one thing that's interesting about that, and again we're missing performance at the minute because of the Covid situation, but going back to the point where you have an orchestra- I don't even know how many people are in an orchestra, dozens of people anyway, all of whom have attained an extremely high level. They have their 10,000 or 100,000 hours of practice, whatever it is, have attended a conservatoire of music, and obviously need to be paid. That all costs a lot of money, doesn't it, to go through that process of training a musician, and so it could be argued, from a purely financial and heartless point of view, that buying an expensive robot that can do the job every time without any mistakes maybe is a worthwhile investment. But is it missing something?

 

Steve:                Yeah, and is it doing the job? I mean, it's doing a job, but I'm not sure it's doing the job as we said: playing with feel but, more importantly to me, playing with aesthetic failure. That, I think, is the issue.

 

Sean:                  Yeah and, and you wouldn't have films like Whiplash with robots. I don't know if you've seen Whiplash, if you haven’t if- 

 

Steve:                No, tell me, tell me about Whiplash?

 

Sean:                  Okay, for any listeners out there who are in the slightest bit interested in drumming and don't mind the odd bit of blue language, Whiplash is an incredible film about a young man's journey into playing the drums. Yeah, it's spine-tingling stuff, but it is quite, yeah, quite adult.

 

Steve:                I'll remember it, but it's also absolutely a case study in the challenges of aesthetics, failure, technique, all of those questions. Yeah, you're right, it's a great example. Imagine that was a robot training you.

 

Sean:                  Yeah, “computer says no” brings on a whole new dimension. Steve, one more thing before I let you go: what's happening with your node of the TASHub? What is it that you guys are bringing to the party, as it were?

 

Steve:                Yeah, so for our part of the TASHub, I guess there's a kind of big picture thing. I'm heading up the TASHub's creative programme, and the aim of that is to bring some of this kind of creative and artistic thinking into the hub. We're going to do that in part by engaging artists, folks we don't even know yet; we'll have a sort of residency scheme to bring people into the hub that hopefully will inspire some new production projects. New projects a bit like the piano one I talked about before, but doing something entirely different. And I think that will both mean that we get to apply the TAS technologies to the creative industries, if you like, but at the same time I hope that those folks are going to confront us with some of these questions about, you know, what does it mean to be human and creative, and how can you design technologies to not take that away, or even better, to amplify it?

 

Sean:                  And I think there’s probably a role in technology helping train some of these musicians we, we obviously didn't go into that but I mean that's-

 

Steve:                I think training's really interesting because, you know, we were talking about failure, and that is the thing about learning. Learning anything, but certainly learning to play an instrument, is a journey through layers and levels of failure and progression. Mastering something, but then having to be pushed out of your mastery of the current level if you're going to get to some other level. So all the time, ultimately, you have to fail in order to get somewhere new. So thinking back on the failure thing, maybe part of my role in the creative programme is to bring some failure into the TASHub, I think.

 

Sean:                  Fantastic. On that note, if you'll pardon the pun, I'd like to say thanks for being here on the Living With AI Podcast today, Steve, and performing. Maybe we'll be doing a jam next time, hopefully with a robot as well.

 

Steve:                Thank you and it's been a pleasure.

 

Sean:                  Great stuff to hear from Professor Benford there, and interesting, this kind of gaming or gamifying of robot pianos. What does that mean, Alan? This is right up your street, isn't it? What's your experience of AI and music?

 

Alan:                  Many and varied, like music is. I think one of the problems that we all face as researchers in this area, and the public as well, who are also some sort of researcher when you think about it, is that the domain of music is ginormous. So if you're writing a piece of music and you take those notes and run them through some kind of algorithm and it spits out other notes, that's a different sort of AI to, hold on, let me show my phone the positions of my ears so that the sound is spatialised in a certain way, tailored for me.

 

Which is different from, I want that hiss taking off of that, which is different from, possibly, stuff that Paurav and Stuart have got more experience of, where you put David Bowie on your, I don't know, Spotify or some other thing, YouTube, and the next song that comes up is Iron Maiden and you can't work out why. So I don't know, I'm probably like a sponge for this sort of stuff at the moment, so I'm interested in what other people have got to say on the matter who perhaps aren't directly linked into music in that sort of way.

 

Stuart:               I think it's quite interesting. I was watching that and I was thinking, can AI really generate creative results which you want to hear? I'm sure it can create stuff, but how good is it really? Because isn't it the mistakes that you make as a human, and the adaptations that you make, which give you the character and the tone of the voice, for example? If all the voices were AI-perfect- it's like you get on The X Factor, don't you? You get the AI applied to the voices, so they actually sound a bit ropey on the day, but you come to the live show and they're brilliant because they've gone through the magic mix machine.

 

Sean:                  Absolutely, well, all the voices and the different instruments are made up from all sorts of different harmonics, of course, which gives the character and the sound, and if those all came out perfect it would sound like an oscilloscope or something. One thing that really intrigued me was this idea of a robot marimba player that Steve mentioned, and the idea wasn't necessarily that a robot could play the marimba, it was the idea of being able to engage with it on stage as a fellow performer, almost.

 

Because I've tried to record things at home to a drum machine and it's quite soulless. You set a tempo and you maybe choose a pattern and off it goes, and after a few minutes perhaps you get quite bored of the same thing over and over again, unless you're really spending a lot of time programming. But what about the idea of getting up on a stage and nodding across to a robot that carries on? Paurav, do you think people will pay for that?

 

Paurav:              To be truthful, I have been proven wrong on the creative industries so many times. Imagine this: if somebody had asked you, “Would you spend hours watching other people play games?” we would at that point in time have said, “No.” Look at the esports industry, what has happened, you know? Now we are talking about esports injuries, forget about esports itself; you're talking about esports therapy. Twitch is a billion-dollar platform bought by Amazon, so what you're looking at there is this phenomenon. So I'm ready to be wrong here, but I want to say that, you know, e-music would be extremely successful.

 

[00:41:06]

 

Sean:                  Fantastic, yeah, wasn't there somebody who insured their thumbs or something, one of these players? But yeah, not just esports but let's-play videos on YouTube: a two-hour video of somebody, let's be honest, ceaselessly talking as they play through a computer game. But music has always had a slightly different kind of thing: you can listen, you can watch, you can be part of it. There was a discussion previously about empathy and emotion in AI, and whether it's trustworthy or not for an AI to pretend to have that empathy or that emotion. Do we care that much in music? Do we just want to hear something that sounds good?

 

Alan:                  Well, having worked with AI, or certain sorts of AI, the interesting thing here for me is not the AI, it's the way that we work with intelligent tools. So for example, and I guess this will come back to you, Stuart, I've tried, when I've been writing lyrics for some piece that I'm working on, taking a, I don't know, libretto from an opera and putting it through some AI to see what comes out the other end, and I think it gave me some excerpts from a motorbike manual or something like that. So I looked at it and thought, well, this might be cool, but then I've got to take that stuff and work with those words and orchestrate it, pick the right sounds to go with it, and take the stuff out that I don't want in there or put more stuff in.

 

I like the term AI-inspired creativity, because it's almost like something coming up with a recipe, you getting a quick taste of that and then thinking, ooh, a little bit more salt. That's what I really like: applying that to the creative industries or things like luxury products or, you know, anything that involves language or poetry. I think that's what's changing the world. And I think that's why Steve was saying you're seeing the music industry use these sorts of tools: straight away, they're used to automation, adding effects to music. They're used to setting up stage sets, singing, repetitive stuff, so when they see something that can help them create, they jump on it.

 

Sean:                  Yeah they, they can help them use the formula that's worked before as much as anything. 

 

Alan:                  Yeah. 

 

Sean:                  I suspect, and this perhaps is a question for Stuart, with machine learning you can throw in, you know, gigabytes and gigabytes of data, of different song data, of different audience reaction data, and learn what people like in certain situations, couldn't you?

 

Stuart:               Yeah, definitely, that's the whole field of recommender systems. Think of Amazon: when I log on to Amazon it knows I've bought a few movies of a certain type. I'm not going to reveal what they are, nothing dodgy, sci-fi and stuff, but then you endlessly get recommended the same sort of stuff, and that's of course how you get filter bubbles and issues like that. But I wonder whether, for this type of music thing, the creativity and the room to experiment, having safe spaces to be creative and try things out, is that a good thing for AI? Sort of AI sandpits where people can play around, and then it doesn't matter if it fails? That might be a good analogy for areas where it does matter if you fail, like healthcare or defence: have little practice sessions and then you can take the best learnings and use those for real.
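The "customers who bought this also bought that" behaviour Stuart describes can be sketched as simple co-occurrence counting over purchase baskets. This is a toy illustration with invented item names, not anything Amazon actually runs:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each set is one customer's basket.
baskets = [
    {"alien", "blade_runner", "dune"},
    {"alien", "blade_runner"},
    {"alien", "dune"},
    {"notebook"},
]

# Count how often each ordered pair of items shares a basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def also_bought(item, n=2):
    """Top-n items most often co-purchased with `item`."""
    scored = Counter({b: c for (a, b), c in pair_counts.items() if a == item})
    return [b for b, _ in scored.most_common(n)]

print(also_bought("alien"))  # the two sci-fi titles it always shares baskets with
```

Because the score only ever rewards items that already co-occur with what you have, the same few titles keep coming back: the filter-bubble effect in miniature.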

 

Sean:                  I like that yeah, I like that idea a lot.

 

Paurav:              I seriously like that, Stuart, because in an earlier podcast we were talking about, and this was also discussed in today's interview, the fact that we want AI, or autonomous systems built on AI, to be perfect.

 

Stuart:               Yeah.

 

Paurav:              And here in the creative domain we want it to be imperfect, and those imperfections will bring on improvisations. I think that is such a critical element, because most of the time when we are thinking about autonomous vehicles, or any sort of autonomous system, we are thinking about it being perfect. But perfect does not exist: there is no perfect car yet, there is no perfect driver yet. And in a similar sense I think there is no perfect music yet, and that is what these creative industries can provide to the pure engineering industry, the spaces of the applied sciences: this room for failure could be such an important aspect of learning and continuously improving.

 

Sean:                  Mistakes are a really important one, because in a way the technology is making things easier, so it's almost more difficult to make a mistake, and as a result it's more difficult to have those happy accidents. There are a couple of music tracks of the past I can think of: Michael Jackson's Beat It has such a long intro because Michael Jackson was in the booth ready to start singing, and Quincy Jones said, “You've missed your in,” and Michael Jackson was there going, “I'm just enjoying this so much.” So they decided to leave it an extra few bars. And I think it was an Elton John track where the vocal is really strange, and actually it was some bizarre mistake of routing through the mixing desk. Mistakes like that are going to be more difficult to get, almost. So is it going to stifle creativity by giving us too many options, too much choice, Alan?

 

Alan:                  I think one of the things that I've learned with this kind of thing is that you can write something and feed it through AI and get thousands of outputs. But the other thing you tend to think about a little more is what Paurav was saying: you forget that you're not looking for a perfect piece of something. It's entertaining to have a go at a musical instrument and play it, and other people like it, and you want to create something that you get into and think is interesting or odd or whatever, and then share that.

 

So the imperfections, or the humanity of it, is something that we all kind of get. If you make something that's scary, other people find it scary, and I think that with autonomous systems, at the moment, we don't seem to be designing for experiences and engagement with AI in that way. It's like Stuart and Paurav have alluded to: yeah, smart car, okay, you can design that, you can have an algorithm that says, “Don't run people over.” But that's not the way the world works.

 

Sean:                  Well, it's the same as the rules of the road, isn't it? Everybody finds themselves in contention with them. We've mentioned this before: okay, the car should not drive on the pavement/sidewalk/whatever you call it in your neck of the woods. But then there are certain situations where you need to drive on the pavement. Perhaps there's an oncoming ambulance and you need to get out of its way, and you see that it's clear, so you move onto the pavement. If AI is following these rules so rigidly, then it won't make those kinds of improvisations. And as Steve mentioned quite a bit in our conversation, improvisations are almost the bedrock of the creative side of things for music etc.

 

But as I understand it, lots of AI has the ability to have in it, if you like, a seed of randomness. Now, I'm going to ask Stuart about this because I'm drawing upon third-hand knowledge, but if you don't have that sort of random seed, as I understand it, you can't really iterate. Is that fair, Stuart, or have I got the wrong end of the stick there? That you need something to make things change?

 

Stuart:               Yeah, certainly with machine learning you'll give the examples in batches, and you'll randomly shuffle those batches, and maybe switch some out as you do your training. Otherwise the model will just learn to rigidly fit one pattern all the time. Often you'll actually want to deliberately inject noise into your dataset, because you want it to be able to respond to the real world, and in the real world the data is horrible. It's noisy: if it's images, there are people in the way, it's raining, it's snowing, stuff like that happening. If it's audio, a car goes by just as you're trying to record. You want those algorithms to cope with all of that, and it's quite challenging to create datasets which are representative of all the different types of things that can really happen. That's one of the bigger challenges AI has as it moves forward.
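Stuart's points about shuffling batches and injecting noise can be sketched in a few lines. This is an illustrative training-data pipeline with invented numbers, not any real system's code; the fixed seed here is also the "seed of random" Sean asks about, in the sense that changing it changes the shuffle and the noise:

```python
import random

def noisy_batches(samples, batch_size, noise_sd=0.05, seed=42):
    """Shuffle the dataset and add Gaussian noise to every feature, so a
    model never sees the examples in the same order or with exactly the
    same values twice."""
    rng = random.Random(seed)          # the random seed: change it, change the run
    data = list(samples)
    rng.shuffle(data)                  # random batch order each epoch
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # inject a little noise into each feature value
        yield [[x + rng.gauss(0, noise_sd) for x in row] for row in batch]

clean = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
batches = list(noisy_batches(clean, batch_size=2))
print(len(batches))  # two batches of two slightly perturbed examples
```

In a real pipeline the noise would be matched to the domain (rain on images, traffic on audio), but the principle is the same: perturb the data so the model can't just memorise one rigid pattern.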

 

[00:50:11]

 

Paurav:              When I think about music, you know, I was taught music a long, long time ago and I've completely lost touch with it. When my kids play, I play and strum most of the time and I call it Fantasia: it's my fantastic music, it's my music only, nobody understands it, you know? And that's my engagement with it. But the one thing I was told in Indian classical music is that you keep on practising, you keep on practising, and when you go on the stage you forget it all. You now use that base and then you improvise, that's what gives people-

 

So that is their continuation in terms of the background music and some of the other things. But when you're using your vocals or any other instrument, that little change that you make is what gets people clapping for you. That's what gets people coming to your shows, that's what gets people talking about you: look what you did, something different, and everyone starts talking. And that randomisation, that improvisation, I think, is not just purely random; the artist also knows how to play the audience.

 

Alan:                  Yeah I agree with that yeah.

 

Paurav:              Deeply.

 

Sean:                  And, and it's yeah, it's a, it's often a culmination of lots of different inputs you've had during your life that makes it your style and how you, you get something across.

 

Alan:                  It's interesting, isn't it, because you almost want sort of uber-bias. So it's completely biased by the human, because you know that when you get up there, if people know your style, you're going to play to that a little bit. But also, depending on what genre you're in, which is more of a cultural thing to think about, you can kind of push the boundaries a bit, can't you? You know that if you're very good you can actually get things wrong and pull it back, more in jazz and things, I suppose, where people can just go with that, and it's immediate, isn't it?

 

One thing that I do think is interesting with all this, and I've spoken to Steve before about it, is what opportunities is this going to create? Will it really change music creation, will it help people who want to push the boundaries of improvisation, and also people that have got different levels of ability? If you can't play a standard instrument, are there techniques that you might be able to use that would enable you to become a virtuoso via AI? To play the guitar, or play an instrument that we've not even thought of? And I thought, wow, you get on stage and your instrument is the robot.

 

Sean:                  Yeah, yeah. I mean it happens already, actually, if you think about it: think of turntablists and some of these music producers who get on stage with maybe two or three turntables and a sampler. They are making music from music, and they are using automation in some ways to do that, sequencers, samplers, whatever. So is it just an evolution of that? I remember having a long conversation with a cousin of mine about whether it's just semantics: is a turntable an instrument, or is it something that can be used as an instrument? And yeah, we drank a few beers that night. Anyway-

 

Paurav:              One of the other things that kept coming to me was, you know, each artist has a footprint; what would be AI's footprint? If I'm watching a Michael Jackson video, I'm waiting for the moonwalk to happen. If I'm watching a Chuck Berry video, the thing I'm waiting for is him going down on one knee and going around the stage doing that whole movement. It's not just the music, it's the whole experience of it, and so when we are thinking about these creative industries and these art aspects, it's the totality of the experience, and that is something we need to capture.

 

So while yes, in a way, as Stuart or [s/l Lee 00:54:35] said, if you give the machine enough input and you randomise that input and provide it in different chunks, the machine will create the music. But music is not the only experience. When you go to Glastonbury and you're standing at that stage with all that happening- I still remember Kanye West, when he was doing the Glastonbury show, he was the only person on the stage and the lights were just above him; that was just an electric experience. I don't know if a robot will ever be able to create that, but I have been proven wrong on all those occasions, so I assume I will be proven wrong.

 

Sean:                  I'll wait for the robot marimba player to get that gig; at least it would be potentially Covid-safe. Now, the other thing that often gets thought of when we think of anything creative and AI or machine learning or any of these kinds of systems is: how much is new? How much is created, and how much is just samples of previous things? Now, you can take a contrary view to this and say that everything a human does is samples of, you know, bits they've learned while learning the instrument and things they've listened to. But does it matter? I'm trying to get round to the idea of trustworthiness: does it matter that these are just potentially rehashings, albeit on an infinitesimal scale, of things that have been fed in in the first place? Paurav?

 

Paurav:              I think there are two things coming up here. This is a fascinating question I have been asking myself since listening to that interview, and you're absolutely right, Sean, that in some sense AI is not going to be siloed; we are siloed. So for example, when I'm thinking about music, whatever I have been trained with, I assume that music is superior to other music. I still remember once listening to Pandit Ravi Shankar, who was a sitar maestro, playing with Paco de Lucía, and oh, what an innovation.

 

This was divine music, you know? These two maestros were sitting together, just strumming together, and it felt like, who are these people? You close your eyes and you are lost. But the thing is, Paco was trained in a particular system, Ravi Shankar was trained in a particular system, and that is what makes them maestros in their systems. AI doesn't have any of those biases, so it can pick up Brazilian music and join it with some Turkish music and bring in some African music, and it will create new kinds of music that humankind has never seen, I think, and that would be most amazing because it would go beyond those silos.

 

Sean:                  There might be an issue there, though, with finding an audience, in that you might like the one track, but if that system is always doing this mixing pot of different styles, then there may be other tracks that you think are really not up your- I mean, Steve mentioned the idea of choosing a goth kind of persona, and I think I said The Sisters of Mercy or someone like that, right? We were discussing recommendation systems, but if its creation is happening in that way, presumably we'd just pick our biases, right? The thing is, music is a really, really important subject to me; it's massive to everybody, it forms the bedrock of lots of people's lives, the soundtrack of people's lives. So you kind of don't want to muck with it, right?

 

Alan:                  I think this applies to all sorts of genres and subcultures and that kind of thing. Because if you're into Indian classical music, you know, I don't know a lot about that, but I certainly know that singing is very different from people that play the sitar, very different from some music which might only be used in certain religious contexts, or which might only be used in certain dance contexts. But I also know, if I talked about heavy metal, Iron Maiden is not the same as Slayer, is not the same as- So when an AI gets hold of something and somebody trains it on flamenco music, what comes out is some kind of mixing of styles, but it might not be flamenco music.

 

So it might not relate to any of those subcultures. And you could take anything and slice it like that, couldn't you? Science fiction writing, or it could be luxury goods, you know, this is my brand and I only ever buy that, but then you feed it into a system that learns about all these things and spits something out that doesn't relate to any of those things.

 

Paurav:              That's where I feel, you know Alan, some interesting things are happening, because I'll give you an example of Bollywood, where music is such an important part of the industry itself. What you see is that initially, you know, Bollywood was predominantly driven by Indian classical music. Then came the 80s, when Western pop, particularly the low-end pop, I won't talk about the high-end pop, but low-end pop started getting copied into Bollywood music. Then for a bit of time there was a crisis: oh my God, we are just copying all this Western music, what are we doing? So they went back to a little bit of the classical type of music. But then suddenly fusion emerged, and fusion has brought in, you know, Egyptian and Arabic music into Bollywood, and they have done such wonderful things with it.

 

[01:00:08]

 

I was amazed listening to some of those songs and I thought, I have never heard of these instruments, what is being played here? And then I was like, wow, I like this, and that led me to listening to Arabic music. So I think in some sense, you know, with these kinds of combinations, yes Sean, you're right that many a time it would fail. But that is the safe space of music, you know; most would fail, but one or two of them would succeed, and that would create new types of music, new types of entertainment or-

 

Sean:                  And we've seen it happen in Western music as well, haven't we, with The Beatles even, with Paul Simon and Graceland, Led Zeppelin and Kashmir, if you'll indulge me briefly there.

 

Paurav:              Yes, yes.

 

Sean:                  Alan, you were going to speak, sorry.

 

Alan:                  Oh no, I was going to ask Stuart a question actually, because I think it was interesting when he was saying about, yeah, how do you deal with creating data in the real world? But I just wondered, in terms of things like machine learning and all those kinds of things, how do those systems deal with, or can they ever deal with, humans' fickle nature? You know what I mean? It's like one day I might wake up and I might want Coronation Street on. The next day I'll wake up and I fancy a documentary on something like AI.

 

Stuart:               Yeah, this is a known problem with things like recommender systems, it's called serendipity. Sometimes the algorithms are deliberately designed to inject in that random choice, just because every now and again you don't want the same 10 recommendations, you want something a little bit different. And maybe one of your friends has just looked at something new and that's what you want to know. So that is a thing, and it can be learnt. I think music is quite resilient, and the creative industry is quite resilient, to these new sorts of mash-ups of things.
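The "inject that random choice" idea Stuart mentions can be sketched in a few lines: rank items by predicted score, then deliberately swap a small fraction of the list for picks the ranking would never surface. This is a toy sketch, not any real recommender's API; the names (`recommend`, `serendipity_rate`, and so on) are invented for illustration.

```python
import random

def recommend(user_scores, catalogue, k=10, serendipity_rate=0.2, seed=None):
    """Return top-k items, swapping a fraction for random 'serendipity' picks.

    user_scores: dict mapping item -> predicted preference score.
    catalogue: all available items.
    """
    rng = random.Random(seed)
    ranked = sorted(user_scores, key=user_scores.get, reverse=True)
    n_random = int(k * serendipity_rate)
    picks = ranked[:k - n_random]
    # Inject items the ranking alone would not surface, to vary the list
    unseen = [item for item in catalogue if item not in picks]
    picks += rng.sample(unseen, min(n_random, len(unseen)))
    return picks
```

With a `serendipity_rate` of 0.2 and ten slots, two recommendations per refresh come from outside the predicted top of the list, which is one simple way of avoiding serving the same ten items every time.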

 

If you think of, well, my world is really around natural language processing, so text documents. Automatic text generation is really quite poor when it comes to inventing new text: it will be perfectly grammatical gibberish, it'll be talking rubbish, and that's a big issue. When you have something like text summarisation, where you take in lots of documents and write a summary of a news story or something, if the summary is lots of bits of the news story stuck together it does really well; if that summary is actually a creative abstract of what's going on, it does really badly, and that is a big challenge at the moment, I think.
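The extractive case Stuart describes, doing well by sticking together bits of the source rather than writing a creative abstract, can be illustrated with a naive frequency-based sentence scorer. This is a toy for illustration only, not a production summariser:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Naive extractive summariser: score each sentence by average word
    frequency and keep the top few, in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sent):
        tokens = re.findall(r'\w+', sent.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Reassemble in source order so the output reads naturally
    return ' '.join(s for s in sentences if s in top)
```

Because every sentence in the output is copied verbatim from the input, the summary stays grammatical and faithful; the hard part Stuart points to is the abstractive case, where the system must compose genuinely new sentences.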

 

Alan:                  I like the serendipity.

 

Paurav:              But if you feed that, Stuart, if you feed, you know, lots of, say for example, Agatha Christie novels and Lee Child novels and so on and so forth to that AI, will that AI not learn from those, once the data is big enough, to write some sort of a thriller paragraph at minimum, forget about a novel?

 

Stuart:               Yeah, the current state of the art is around maybe three lines of good quality text, and then it degrades.

 

Paurav:              Oh.

 

Stuart:               So what you get with the best systems is, you give it, you know, hundreds, tens of thousands of books, and it pieces together sensible bits of text from loads of different sources, and it comes out with quite a good idea. But if you're going on too long, it starts to lose the narrative and what you're trying to say. It needs a bit more intelligence; it isn't really learning about what it's actually saying, rather it's regurgitating things which maybe in the training set have been ticked as, "This is sensible stuff."

 

Sean:                  On Computerphile we recently did a video on, is it GPT-3, one of these text generation things, and it made up a story. This is not necessarily about music, but it is certainly about creative writing in some respects. It made up a story about unicorns being discovered in a Chilean or Andean glade, somewhere in the foothills or something like this. And it's really sensible, apart from somewhere in the middle it starts talking about how they were discovered by aliens or something like this. But at first glance, on a cursory glance, it's really interesting, and you could definitely see something like that being the product of a perhaps 1960s-influenced prog band, for instance; that could definitely have been a song by them.

 

Alan:                  Well, I like this whole idea of serendipity because it's kind of the way we all do stuff, isn't it? You accidentally bump into somebody in the street sometimes and start a conversation, and you've got an interest and you share it. But I'd love, if I was writing a research paper, to have a serendipity button; it might lead me to all kinds of new findings, because you need something to spark your, oh I don't know, your creative juices, don't you? Humans thrive on that kind of thing, oddly, don't they?

 

Sean:                  It would be a great way to get past a block, wouldn't it? You just go, "Give me a starter for 10." A bit like, is it Cards Against Humanity, where you pull out a card? Or is it a different thing? I think there are certainly card sets you can get which are designed to try and inspire you when you're feeling like you need inspiration.

 

Alan:                  Well, it's the way that we work, isn't it? We've just done it now for half an hour, sorry audience, where we're sparking each other's interest, because that's the way that conversation works. It's amazingly flexible, and you can make errors and all kinds of stuff, but we all know what the rules are, and I think that's perhaps what Steve's point was with music. There are rules there, and you know what you can kind of break, but you can't really break the rules if you want to create something that people want to listen to.

 

Sean:                  Yes, that's it. Is it going to be received well? And within certain genres there's a certain amount of rule breaking allowed, right?

 

Alan:                  Yeah I'm just making notes about some, some aliens and you know natural language processing and unicorns and-

 

Sean:                  Yes, the link to the alien AI system, no, what was it, the unicorn AI system, will be in the show notes for anyone who's interested in that.

 

Alan:                  Yeah, I think also the interesting thing with all this kind of stuff, I was talking to Derek McAuley this morning, one of the investigators on the TAS project, and we started to discuss explainable AI. And again it got me thinking about, if you're improvising with some AI, how do you know what it's doing? Do you need to know what it's doing if you're good enough? It's in the moment, it's just-

 

Sean:                  We're back to the black box and does it matter right? Does it matter whether it's trustworthy or not?

 

Paurav:              Yeah, that black box remains the holy grail of AI, and our trust towards it, you know, is going to affect every industry. But I think in creative industries less so, because when you ask an artist, "How did you come about this song?" You know, when the Eagles were asked about one of their greatest hits, "How much effort did it take?" they said, "Well, it was about two hours, we were sitting together being bored, and that was it."

 

Sean:                  "We had 30 minutes left in the studio and we thought, what could we do?" And often that's the case, right? Perhaps that's the serendipity coming in there, but-

 

Paurav:              Yeah, yeah.

 

Alan:                  So Stuart, we'll have to get you designing a serendipity machine.

 

Sean:                  You want a serendipity-matic.

 

Stuart:               Certainly, I am working on some machines to bring humans into the loop. So rather than just, you train your black box and there's your magic algorithm, it works, great, you actually put human teams with the AI tools, and you find ways to get the insights of their feedback injected into those hidden layers of the AI itself. And that, I think, is potentially a way to get around that problem of, you know, it's a black box, nobody understands how it's working. And it can really build trust, because the teams at the end of that process kind of know what the AI is doing, because they've run it through many cycles and they get a good feel for it.
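Stuart's human-in-the-loop cycle, where a reviewer's corrections are fed straight back as a training signal, can be sketched with a simple online learner standing in for the "black box". This is an illustrative sketch only; the names and the perceptron-style update are assumptions for the example, not a description of Stuart's actual systems.

```python
def human_in_the_loop(model_weights, stream, get_human_label, lr=0.1):
    """Sketch of a human-in-the-loop cycle: the model predicts, a human
    reviewer accepts or overrides, and the correction becomes a training
    update on the spot."""
    for features in stream:
        # Model's current guess (simple linear threshold)
        score = sum(w * x for w, x in zip(model_weights, features))
        prediction = 1 if score > 0 else 0
        # Human reviews the prediction and supplies the label they want
        label = get_human_label(features, prediction)
        error = label - prediction
        if error:
            # Feed the human's correction back into the weights
            model_weights = [w + lr * error * x
                             for w, x in zip(model_weights, features)]
    return model_weights
```

Each pass couples a human judgement to a weight update, so after enough cycles the team has both a better-behaved model and a feel for how it behaves, which is where the trust Stuart mentions comes from.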

 

Sean:                  I like that idea, that's really nice, we've brought trust back in, which is always good. It just remains for me to say thank you to Stuart for being on Living With AI.

 

Stuart:               Thank you.

 

Sean:                  And thanks Paurav for being on Living With AI.

 

Paurav:              Thank you very much for having me again.

 

Sean:                  And thanks Alan, I won't say Living With AI again.

 

Alan:                  Thank you.

 

Sean:                  If you want to get in touch with us here at the Living With AI Podcast, you can visit the TAS website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI Podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts from, and we hope to see you again soon.

 

[01:09:03]