Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 4, Episode: 4
Regulation and the Creative Industries
With generative AI creation tools for language, images and videos, plus algorithmically curated social media feeds, it can feel as though AI is muscling in on a previously human domain. We discuss some of the digital regulation around AI in the creative industries.
Guest:
Martin Kretschmer, Director, CREATe, Professor of Intellectual Property Law, School of Law | College of Social Sciences | ARC | University of Glasgow
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Stacha Hicks
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub and the Living With AI Podcast.
Episode Transcript:
Sean: Welcome to the Living with AI podcast from the Trustworthy Autonomous Systems Hub. This episode, we’re going to be talking about digital regulation around AI. I’m Sean Riley, and we’re recording this on 23 April 2024, and our guest for today is Martin Kretschmer. Martin, welcome to the podcast. Can you just give us a brief introduction please?
Martin: Yes, yeah, good morning Sean, I’m a professor of intellectual property law at the University of Glasgow but I work in the context of a large research centre which is called CREATe and it looks really into the regulation of the creative economy. So digital regulation in the wider scope, anything from tech reg to competition law to copyright law and cultural dimensions.
Sean: You mentioned wider scope, I mean this is a massive, massive field. There are all sorts of potential things we could talk about. What’s the big picture with regards to AI and regulation?
Martin: Well, the big picture is still one really of moral panic. You know, it starts from safety to explainability to bias to personal data to text and data mining, training data to industry structure and competition law. So essentially, wherever you look, there’s a body of law which may be already there and applicable, but there is also the general feeling that something new is happening and the existing tools may not work, and, I assume, that conversation is globally unsettled. Do we need to regulate the technology itself? Do we look at deployment of applications? How is each jurisdiction going to handle this? And do we need, you know, laws that are specifically designed for this technology, or can we essentially run with the framework of jurisprudence we already have?
Sean: We often see law lagging behind tech, I mean, that’s just across the board. I mean, I think aren’t we still applying some communications laws to things like digital communications which probably don’t apply that well? So we do need to obviously update things and try and make sure that we cover all the bases, but also there are laws that, you know, just general laws which probably work in the place anyway, you know? Laws that kind of cover things like harassment and things like that are already in place aren’t they? It’s just a different means of doing these things isn’t it? I mean, why would we need to update the laws?
Martin: Yeah, so an example which I often use is the first generation of internet laws, you know, which treated online intermediaries not as a publisher. So if you’re a newspaper and you publish a defamatory statement, you know, you are the publisher, you are liable. But if you’re a social media platform, the way it was traditionally handled, you’re a host, and liability essentially may arise once you are notified, but you have no ex ante obligations and that- Really there’s a switch in society now that it’s no longer acceptable. So there is a kind of expectation that in the digital sphere, something needs to happen in advance. So we don’t quite know what it is, but there needs to be what is now often called a risk-based approach. So if you’re a service provider, if you offer a new technology, if you offer a new application, there’s some kind of duty on the deployer, on the technology provider, to assess what they’re doing, ex ante, before something bad happens. That’s why we hear all these words about online harms. That’s why we have these kinds of high-risk and unacceptable-risk classifications in the AI Act in the European Union. That’s also why we have here in the UK the AI Safety Institute. So there’s a new mechanism, but we don’t quite know what that means, you know? What are the standards by which these risk assessments are going to happen? How are they going to be enforced if somebody doesn’t comply? All of this we don’t know.
Sean: Yes, I mean this has come about as a result of recent- My feeling is this is basically down to the algorithmic kind of curation of our news feeds. I mean, if things were just passing through that’s fair enough, but the moment a social media company tries to, let’s say, up engagement by deciding what you see, then they are a publisher, right?
Martin: Well, the way- I think it may feel like this. But technically, they’re still not a publisher. But under different laws, you know, there are now obligations in place, more stringently. So in the European Union, disinformation, particularly in the context of elections, is, I mean, a massive issue. But the means by which you regulate this also have free speech implications. You know? Who is the authority who tells you you can’t say something? So again, here the approach is generally to put some kind of duty on the relevant service to make an assessment and have a process in place. So it’s more a process obligation rather than a non-disinformation obligation. This seems to be the international approach in this area now.
Sean: And there’s clearly no silver bullet to this. We can’t just, you know, flick a switch and everything gets fixed while maintaining our access to these services. But yeah, it’s strange isn’t it? I think the irony is that we may be using AI and technology to start to address some of these problems as well, so how do you- You know, where does that start and end?
Martin: I think that’s entirely right. It’s entirely impossible to police the digital sphere without some automated mechanism. It’s not scalable. So the algorithms play the algorithms, and what’s the role of humans in addressing, you know, if there is a complaint? What’s the mechanism of redress? So again, I mean, I’m basically pointing to open areas where we really don’t know how this is going to work out. So this is why many people think okay, the law is moving fast here, and yes, all this new legislation coming onstream. Much of it depends on how it’s going to be enacted and implemented. So if you want to operate with these kinds of risk-based assessments and then codes of practice, you know, how you should do this, then it really matters how they are, in practice, implemented. So what is TikTok going to do?
Sean: If anything. Yeah, yeah.
Martin: What are they going to do? Okay, so they may change their community guidelines. They may change the algorithm. So if the regulator doesn’t like it, what’s going to happen, you know?
Sean: And one thing I- Thinking a million years ago to when I did media studies, we had this- Talked about the panopticon and who watches the watchers and all of this sort of thing. But when it sits on a global stage it’s that much harder isn’t it?
Martin: Absolutely, absolutely. And the big tech companies, they will not want to have hundreds of different policies. I mean, they’re prepared to offer different terms of service and community guidelines for different parts of the world but they will not offer a different one for Luxembourg and Ireland and the UK, it’s not going to happen. So they will try to find a way to negotiate with various authorities to arrive at one type.
Sean: It sounds to me like more research is required as well as to how to, you know, how to implement some of these changes and how to protect people online without, as you say, infringing on freedom of speech, without it costing an absolute fortune because the tech companies won’t pay it. I mean, is that going to help research?
[00:09:02]
Martin: Absolutely, we need to- Particularly, we need to follow the emerging laws. It’s not sufficient that- Let’s say in the UK there are proposals for a more, sort of, one-to-one type of regulation coming through the digital competition bill. So there, the proposal is that, you know, the digital markets unit, in some ways, will negotiate behavioural obligations for designated companies. So this is obviously going to happen initially in a non-public space, because they have very sensitive information about how these services work. But it cannot be that these processes happen out of sight. And similarly, in the EU, the key legislation, for example, you’ve got the Digital Services Act, where there is access for researchers built in. But these are vetted researchers, and they are then supposed to be able to see, you know, how these moderation mechanisms work and what kind of outcomes they have. But you know, at this moment, we know in principle it’s possible to gain access, but we haven’t really seen any data from it, apart from the kind of transparency reports which the platforms themselves publish. But you need to be able to audit those. It’s not sufficient that they’re published and, you know, that they say what they’re going to do. So you need to verify it. So there’s a huge need for actually finding out how, you know, these moderation processes happen online, and the difficulty is that they’re in some ways tailored to individuals, isn’t it? Because the algorithms feed you something which somebody else doesn’t see, so there’s no longer really what we understood to be a public. So the regulatory approach here is very different from moderating what happens on the highways, although we all know, you know, here we are all out there and we need to avoid accidents. But in the online space, you know, the only way we can see what the platform does is if we collaborate with the platform, so we gain access, and that means that we’re also opening the door to regulatory capture, because there needs to be a very close relationship between the regulating body and the platform, and because of all the commercial sensitivities, you know, not much of it will be seen in public either. So we’re regulating what in effect is a public sphere without it being a public sphere. And that poses a dramatic challenge for us as researchers: how are we going to validate what we do here?
Sean: Yeah, I mean the mind boggles as to how you test that because we feel like we’re in a public space but as you say, it’s tailored to a personal kind of viewpoint, so the algorithms on various platforms are all giving you something that they think that you want based on previous search history, based on cookies on your device, based on the last person who used your computer or somebody looking for something in the same network. But just thinking about- Talking about this, it feels like we’re, you know, we’re talking about all the negative side. Is there anything positive happening at the moment that we can sort of switch to the other side of the seesaw? What’s going well?
Martin: Well, in some sense, it’s time that, you know, proper mature risk and quality management happens in the online space. I think you could almost call it just professionalisation. So I think that is a good sign that we are starting on that route, that there’s an expectation, you know, that the culture changes and, you know, we do proper quality management. That means also, yeah, I know the data I’m using, I’m transparent in documenting this, there’s guidance provided further down the supply chain, and these are all things you would expect in a maturing technology, and I think that’s a very good sign that we’re starting this journey. But, you know, it comes with these very big- They’re still [unclear 00:13:46] put it that way.
Sean: Yeah, understood, understood. We mentioned at the beginning how wide a space and how big a kind of field this all is but just to sort of pick something out of the air about it, I’m thinking of things like generative AI and the way that that’s being used either in image or video creation or in large language models, these sorts of things, are these- You could decide that this is a problem because people are going to lose their jobs as illustrators and all this side of things or you can see that this is a tool which is going to help people do their job easier. How does that sort of sit with the regulative world, regulatory world, rather?
Martin: Yeah, so in the big picture, I think both of what you say is true. I mean, it’s a new technology and some will gain and some will lose. It’s not different from photography and painters, or the gramophone and musicians, you know? We have had computer generated art since the 1960s. So there are opportunities and obviously also challenges, no question about it. And in particular certain, let’s say, graphic design professions will have to change dramatically, to have the human touch which AI can’t do. But I can also see there’s, again, this very deep crisis of legitimacy, that particular artistic communities feel, you know, they are being abused. So that big tech is hoovering up everything that they have ever produced and turning it into something that is valuable for society, and the benefits are seen just there, so there’s firstly, you know, a deep disquiet about that. And that’s one which is not easily just solved with a little bit of money. The debate is quite distorted there. I mean, if you just look, you know- Let’s say generative- Large language models, which basically hoover up the whole internet, any language which was ever written, as they say, in English, so if you trace that back to all the authors which have been hoovered up, you know, if the big AI companies put in a few millions, you know, everybody who’s written something will get 3p. You know? So financially it’s not a defensible proposition. So it’s basically just a, you know, a gesture, a symbolic gesture. That doesn’t mean I’m saying as it stands it’s fine. That is not my position at all. What I’m worried about is that the debate basically doesn’t see the real underlying problem, which is that, you know, if you ask for licensing, which is happening already- So the big tech companies, you know, OpenAI and Google, they already do licensing deals: if you’ve got valuable bodies of quality data, whether you’re a publisher or, you know, a record label or publishing company, you’re doing deals now. So there’s money, large amounts of money, being handed over between the big AI companies and content companies, and that doesn’t really, on one hand, help the small creator at all, because they are dependent on the contracts they have with their traditional intermediaries, whether that’s a newspaper publisher or an image library or a record label. So the individual creators are, you know, governed by the contracts they have made with their intermediary. So you may or you may not see anything from the licensing revenue which comes in the other way, so that doesn’t really help. And at the same time, there are good reasons why one has certain exceptions under copyright law for text and data mining which in particular enable research. So by moving now to this kind of licensing environment, we also endanger the possibility for academic research, because we can no longer get access to data under terms that enable scientific progress here. So again, I’m not offering you a solution. I’m just saying that the trouble for the cultural sector from having their creativity hoovered up is not easily solved by moving to a licensing environment.
Sean: But you kind of also mentioned it there, it’s already been hoovered up. The horse has bolted on this particular- I mean, without saying you need to stop retraining models on things you’ve agreed it’s okay to retrain on, that’s too late isn’t it?
Martin: Yeah, I assume if you- If you lose, you know, the court case, then a lot of money will have to be handed over, but where that money ends up is not clear. And you could obviously shut down, you know- It could be that, like in the Napster scenario 20 years ago, it could be that a company disappears.
Sean: Yeah absolutely, I mean my memory of Napster may be slightly incorrect but Napster was the kind of file sharing network wasn’t it where people would just share music without paying for it? But as I see it, that sort of came back under a corporate guise with the streaming systems but just actually this time endorsed by the people who had the share of the money, you know, the Spotifys, the Apple Music. They are Napster by a different route aren’t they really?
[00:20:04]
Martin: They are, and different owners yeah.
Sean: Things like this generative AI, just to finish on that: what’s the difference there between AI looking at all that stuff and generating something, and an artist looking at a thousand paintings and deciding to paint something similar?
Martin: Yeah, that’s an argument which is often made, and in different jurisdictions it may lead to different results. So in some sense, extracting information is not a copyright-relevant activity. You know, if you copy expression, that’s the core of copyright law. You could argue if you extract information, you know, from a text or from an image it’s not- That should never be a copyright infringement. On the other hand, you can see that particularly- It may be a different analysis for language than it is for art, for example, because in language, you know, you could almost argue that the structure of language, the grammar, is kind of extracted, which is really not a personal expression.
Sean: I suppose you have to cite your sources and if you quote something you make it obvious you’ve quoted it. Maybe there should be a version of that for art as well, I don’t know.
Martin: Possibly, possibly. And on the artist side, if there are similarities, the traditional analysis should apply. It’s very possible that substantial parts are being taken, and then there should be an infringement claim. So at the intermediary level, where you make just an initial copy before processing it, in some ways you could argue it’s similar to internet search, you know, to web search. The whole web needs to be indexed, otherwise we can’t search it, and we agreed globally that we need the temporary copying exception because otherwise, you know, the internet wouldn’t work.
Sean: Yes, yeah, absolutely, yeah, yeah.
Martin: But on the other hand, you know, for most of us it feels different. If you then use that material to generate something new, it does feel different. As I say, my analysis is that we’re moving to a licensing environment here, because the EU has already put in place an opt out, and also there’s a risk that the court cases in the US may not go that way, and in the UK the law is quite tight, in a way; that means in the UK it’s very likely to be an infringement, because the exception there only protects data mining for research. So that means, essentially, we will move to a licensing environment, and I already sketched, you know, whether we think that it’s compatible with underlying fundamental copyright principles and methods, because it looks like we’re already at the next stage.
Sean: It feels, in some art environments, as a sort of creator myself, that some areas of work will have to work a bit more like the Renaissance, where you have to find a wealthy benefactor to try and survive as a struggling artist. But anyway, that’s definitely not under the scope of today’s podcast. One thing that obviously this podcast and the Trustworthy Autonomous Systems Hub is all about is trust and AI. I’m just wondering where that kind of sits with regulation, the idea of how can we, you know- Do we trust the regulation? Or is that we, you know- How do we use regulation to help with the area of trust?
Martin: Yeah, and I think we touched on the aspect that it’s no longer really a public sphere in the sense that we used to think of a public sphere, and that is, I think, where we need to make progress. We need to find ways of re-establishing a proper public space where arguments can be seen and engaged with without immediately having the suspicion that one is being manipulated. And that, you know, requires in some ways, I would say, this more mature quality management which I assume the whole industry needs to move to, and that’s as much a cultural problem as a legal problem. I mean, it could be that a couple of huge infringement cases and huge enforcement cases with mega fines will change things, or it could be there’s a huge liability case on some autonomous vehicles, and it could be that the cultural change needs to come from some draconian legal change. That has happened in the past sometimes. I wouldn’t rule this out. But if I was working in the digital industries- I think professionalisation is happening already, and it should be made transparently visible, so it’s as much in the interests, I think, of the tech companies to change here as it is in governmental action.
Sean: I would normally sort of finish this by saying what do you think the future holds, but the whole thing’s been about what the future holds, really, hasn’t it?
Martin: Yeah, we really don’t know how we come out of it. We’re all experimenting, each society in their own way, the big tech companies in their own way, and let’s be optimistic. Let’s hope that we mature and the public sphere will reappear.
Sean: Martin, thank you so much for joining us on the Living With AI podcast.
Martin: It’s been a pleasure, thank you very much Sean.
Sean: If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS hub website at tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Ltd and it was presented by me, Sean Riley.
[00:26:49]