Living With AI Podcast: Challenges of Living with Artificial Intelligence
Generative AI (Featuring Deep Dhillon of Xyonix)
Generative AI has changed the landscape over the last few months. We talk to AI industry expert Deep Dhillon about what this means and how it's being used.
The panel discussion was recorded on May 22nd 2023
The feature interview was recorded on May 9th 2023
Guest:
Deep Dhillon, Xyonix co-founder, leads Xyonix technology development
Panel Discussion:
- Chris Nielsen, CEO and co-founder of Levatas
- Gisela Reyez Cruz, Research Fellow, University of Nottingham
- Dr Cecily Morrison, MBE - Principal Research Manager - Microsoft Research Cambridge
Podcast Host: Sean Riley
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 3, Episode: 2
Generative AI (Featuring Deep Dhillon of Xyonix)
Generative AI has changed the landscape over the last few months. We talk to AI industry expert Deep Dhillon about what this means and how it's being used.
Guest:
Deep Dhillon, Xyonix co-founder, leads Xyonix technology development
The feature interview was recorded on May 9th 2023
Panel Discussion:
Chris Nielsen, CEO and co-founder of Levatas
Gisela Reyez Cruz, Research Fellow, University of Nottingham
Dr Cecily Morrison, MBE - Principal Research Manager - Microsoft Research Cambridge
The panel discussion was recorded on May 22nd 2023
Podcast production by boardie.com
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Podcast Host: Sean Riley
Producers: Louise Male and Stacha Hicks
Episode Transcript:
Sean: Welcome to Living with AI, a podcast where we get together to look at how AI is changing our lives, altering society, changing freedoms, and the impact it all has on our general wellbeing.
Today we are looking at generative AI. This is season three of the podcast, so there are plenty of back episodes for you to binge on; you can find the links in the show notes, or if you search for TAS Hub you will probably find us, I am sure. If you have not heard of TAS Hub, it’s the Trustworthy Autonomous Systems Hub.
We are recording this on the 22nd of May 2023, and our featured interview was recorded even earlier; we will come to that shortly. But before we hear from Deep Dhillon, the co-founder and data scientist at Xyonix in Seattle, we will introduce the panel who are going to be chatting today.
So joining us today, in fact if I go round the room and just ask you to just give us your name, rank and serial number, hopefully that will be fine. So in no particular order, just because you are on the left of my screen Gisela I am going to come to you first.
Gisela: Sure, thank you, thank you Sean. So my name is Gisela Reyez Cruz and I am a doctor and researcher in human computer interaction at the University of Nottingham, where I am part of the Trustworthy Autonomous Systems Hub. I have been investigating topics of trust, acceptance, use and non-use in relation to various autonomous and robotic systems, ranging from mobile apps that have or may have an autonomous component to actual physical robots.
And I also have an interest and background in accessibility, having investigated technologies used by people with visual impairments for my PhD.
Chris: My name is Chris Nielsen and I am the CEO of Levatas. We are an AI firm that specialises in industrial AI. We help automate inspections using Machine Learning, robots, drones and cameras. So I have more of an industrial viewpoint today, but we are dipping our toes into generative AI so I am excited to be here.
Cecily: Hello, my name is Cecily Morrison and I am a senior principal researcher at Microsoft Research in Cambridge, UK. I have a background in human computer interaction and I lead a team called the Teachable AI Experiences Team. We build experiences that allow people to have agency through teaching Machine Learning systems to address their specific needs.
Sean: I’d like to introduce Deep Dhillon, welcome to Living with AI Deep.
Deep: Thank you so much for having me.
Sean: Fantastic, well it is great that you are able to join us. Just a quick note, particularly in this world of AI: this chat is being recorded on May the 9th 2023. I recently recorded a Computerphile video on Large Language Models which was out of date before I had finished editing it. So I think it is important to know where we are at in terms of time of recording. I ended up captioning that with a paraphrased quote from Ferris Bueller’s Day Off: AI moves pretty quick.
So Deep is a co-founder at Xyonix and leads technology development there. But it’s probably better to hear an overview from you Deep rather than me try and tell you what I have read on the internet.
So how does it all work, what’s your role? What is Xyonix? Can you start with that one?
Deep: Sure, yeah. So at Xyonix we build, in essence, custom AI systems for clients. As for the genesis of Xyonix: I have been pretty active here in Seattle in the US building start-ups and leading them, usually as a CTO or chief scientist or some kind of tech exec, at various stages.
And at my last start-up I basically made two lists, one list was of stuff that I am pretty good at but don’t really like doing and another list was of stuff that I am pretty good at but love doing. The second list it turns out was a lot smaller, which I think might surprise some of the folks straight out of school, but that was the idea behind it. And of course everything in it had to do with Machine Learning or AI because I have kind of been obsessed with that for a few decades now.
And the other thing is that, you know, I really wanted to focus as much effort as I could on a simple criterion: if a five year old girl were to look at the project, would she think the world is better off if we succeed or not? If she says yes then we take the project; if she says I don’t know then maybe we take the project. And if she says no we might still take the project, but maybe we are not as excited about it, because we might have to pay some bills.
But as a result we do a lot of projects in healthcare. We have got a number of smart device companies that we help build things to assist physicians, for example. We spent a good amount of time building in-body surgery imagery analysis, and text analysis of the feedback physicians were giving to other surgeons. That is one example project; we also did a heartbeat anomaly detector for a digital stethoscope company, so we work a lot with start-up companies.
As far as my particular role: we are a small team, we have got about ten folks, so everybody does everything. I still build models. I write a lot of code, but I also try to get new clients and keep clients happy. And a lot of our clients are start-ups, so we do a lot to help them raise money and connect them up with the investment community, because we have been doing this for a while and are pretty well connected there. Really we just help our companies do amazing things.
Sean: Fantastic. Well obviously the theme today is generative AI which I am sure lots of companies are perhaps asking for flavours of. But you would have had to have been living under a rock not to have noticed things like ChatGPT and kind of everybody getting very excited. Mind you then Microsoft’s Bing came along and people started getting a bit more concerned about these things. I mean how do you keep up with this in this kind of work area? How do you just keep up?
Deep: To be blunt, it’s just not easy. You know, I have been doing this for probably three decades now, and there has never been a time when the rate of new announcements on such profound levels has been anywhere near this. I mean, it’s kind of wild.
So one thing that we do is we actually just host a podcast which forces us to really spend time together and dig in deep on a new thing. It seems like, you know, about once a week there is something really significant coming out. Obviously ChatGPT is the elephant in the room for this conversation, it’s been extremely transformative.
And I don’t really have a good answer other than subscribe to a bunch of feeds. And when things rise to the top of my feeds instead of waiting a few weeks to like dig in, I just immediately grab them and find the time and dig in.
We also, all of our principals, you know, just have a practice every morning: we spend an hour reading papers so that we can get a little bit ahead of the curve, and we have been doing that for years. We talk about stuff, but yeah, every once in a while something wild pops. Like I remember when ChatGPT first came out, I think day one it started rising up in my feeds, but I was just like, oh whatever, it’s probably something that I can fit into my prior mental model.
And then by day two I was like, huh, really, really important people are saying this is a big deal, so I started digging around with it. And then, I don’t know, three and a half weeks later I finally did something other than that; I just completely went down the rabbit hole.
So I think it’s important to take that time to grab on to this stuff. Because a lot of these things really change your frame of mind, and you really have to absorb them into your business and products and think about what they mean. And I don’t know that the answers are that easy, to be honest.
Sean: I know that, I have mentioned this before and regular viewers will know, I make these videos for the Computerphile YouTube channel. And we made a video about language models about three or four years ago, 2019 it was, and the contributor I work with was saying, I think these might be a big deal. And a few months passed by and then we did another one on the latest model, and there was something to do with this article that was written about unicorns, and we all had a bit of a snigger but thought, ah, this is impressive. But I don’t think we had any idea how quickly it would turn from writing an article about made up unicorns into something that can basically pass the Turing test.
I mean, the overarching theme of this whole podcast series is trust, right; how do we square trust with something that can convince you of pretty much anything? It’s like an amazing salesperson, isn’t it?
Deep: Wow, I don’t know that. I don’t know that you can, right. Like I don’t think trust is something that you should hand away to the output of these algorithms and models. That said it’s natural to start trusting it when it does so well so much of the time. You know we have got a pack of like human annotators that we hire, you know, really smart kids in good colleges with tough majors to do things for us.
[00:10:08]
And now we don’t even know whether to keep using them. Sometimes we keep an eye on the training data that we generate from ChatGPT, because they are not as good. I mean, that’s not knocking them, because these are really bright, you know, students, but this thing is good. I mean, it’s better at summarising texts than I am. It’s more conversational on a wider range of topics than anyone I have ever met.
But all that said, sometimes, in some contexts, you can’t quite trust it. I will give you an example: one of the companies we work with is a kind of wellness device company. But if you build wellness devices you can be on the line between wellness and medical devices, and you want to stay clearly on the wellness side, otherwise it’s a whole other regulatory environment.
And so out of curiosity we tried to generate an FAQ and had a bunch of stuff generated out of ChatGPT. We started looking at it and realised, these are great answers, but they cross this line, so we can’t really go here. So I think you can’t trust them fully, you know; I think you really have to keep checking.
It’s the same with how I use it for everything from, like, writing proposals, which is something that might have taken me three hours; now I have got my time down to fifteen minutes. But I mean, it would be absolutely insane to just send out a proposal without reading it. It gets me sixty per cent of the way there and it helps address the blank slate problem. I can get some structure out, but I think it is just important for humans to pay attention here.
Sean: It needs checking effectively right?
Deep: Honestly, oftentimes for a lot of use cases it doesn’t, but it still does.
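For illustration, here is a minimal sketch of that draft-then-review workflow, assuming the openai Python package (the pre-1.0 chat API) and an API key in the environment; the model name, prompts and helper function are invented for the example, not Xyonix’s actual setup.

    # Minimal draft-then-review sketch; everything here is illustrative.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def draft_proposal(client_name: str, summary: str) -> str:
        """Produce a first-pass draft; a human must read it before sending."""
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You write concise consulting proposals."},
                {"role": "user",
                 "content": f"Draft a proposal for {client_name}: {summary}"},
            ],
        )
        return response["choices"][0]["message"]["content"]

    draft = draft_proposal("Acme Health", "heartbeat anomaly detection")
    print(draft)  # gets you most of the way there; edit before sending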
Sean: As a bit of a gimmick, on our first season I asked ChatGPT to write some of the intros for some of the guests that we had on. And at the very last second, as we were about to go and record, obviously I had done this ahead of time, one of the contributors had laryngitis, so she said, I am going to try my best but I have got somebody here to take over in the event that I can’t speak. It’s James, and she gave me his surname. But I typed into ChatGPT, unfortunately there is a problem and James may step in, and it just flat out made up his surname in response to the prompt.
So there is this idea, right, I like to equate it to a politician or a salesperson; it feels like it tells you what you want to hear, almost.
Deep: I think that sort of phenomenon was more present with 3.5. With 4.0 it’s pretty rare that it doesn’t say something really, really good. And if there are problems, it’s usually in your prompting, and usually in the style that you are after. So for example I use ChatGPT daily as a Jungian therapist. I got kind of obsessed with dream analysis a while ago, and Jungian therapists in Seattle are like a few hundred bucks an hour, and there are only three of them and they are booked out for like a decade in advance or something.
So it turns out you can prompt it really well and it’s shockingly intriguing and it will analyse the heck out of my dreams. And everything it says almost always, you know within like thirty seconds, a minute conversation, is uncannily interesting. So yeah I think, it did take me a bit to kind of nail the prompts there.
But honestly 3.5 is amazing too. I find 4.0 in some ways worse because it’s a little bit more cautious.
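For what it’s worth, the style steering Deep describes mostly comes down to the prompt; below is a hypothetical example in the shape the chat APIs expect, with the wording invented for illustration rather than taken from his actual prompts.

    # A hypothetical role prompt for dream analysis; wording is invented.
    messages = [
        {"role": "system",
         "content": ("You are a Jungian analyst. Given a dream, identify "
                     "possible archetypes and symbols, relate them to the "
                     "dreamer's life, and ask one clarifying question. "
                     "Avoid generic disclaimers.")},
        {"role": "user",
         "content": "I dreamt I was crossing a frozen river at night..."},
    ]
    # Pass `messages` to any standard chat-completion call.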
Sean: Yeah, there is definitely some, what’s the word, kind of patching going on too. I don’t know, is it to do with avoiding litigation, where you are making sure that it doesn’t give you something that might be problematic? I don’t know.
Deep: Well, there is definitely a very conservative effort at OpenAI to put guard rails on this thing. There is concern, you know: it’s giving out health advice, it’s giving out stock tips. It’s trying not to give stock tips, it’s trying not to give health advice, but it still does. So there is a lot of effort to put up guard rails.
And I don’t think it’s driven only by litigation. There was a great conversation recently between Sam Altman and Lex Fridman, and Altman was talking a lot about this: there is a genuine belief in, and concern for, this thing destroying the human race. And I am trying to wrap my head around what the path is to this apocalyptic vision that people, very bright people, are genuinely concerned about. I think I have kind of got the argument figured out, but I think that’s the genuine concern there; it’s not just about litigation.
I mean, a lot of people see ChatGPT and they just see this thing that you talk to, but that’s not at all what is happening. In the start-up ecosystem, and the whole ecosystem at large, people are taking this stuff and using it as the central brain in autonomous action systems now.
If you look at AutoGPT, for example, this thing is kind of in a box still, but it’s nowhere near the box of just you talk to it and it says something. It has got access to the internet, it can fill out forms, it can, you know. So as soon as you start unleashing that... What people are doing is using the GPT as a mind to orchestrate actions and have agency, and not only plan tasks but perform some of those tasks.
And that’s where it starts to get interesting. Because these systems really are optimising towards a goal and they might have very naïve but efficient ways of doing things that we would never do. Like we would never say it’s okay to suck all of the oxygen out of the atmosphere in order to like you know, whatever, cool the system or something.
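A deliberately simplified sketch of the pattern Deep is describing, an LLM planning and triggering actions in a loop; the stand-in model call, the toy tools and the step cap (a crude guard rail) are all invented for illustration, not any real framework’s API.

    # LLM-as-central-brain loop, heavily simplified and hypothetical.
    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("plug in a chat-model call here")

    TOOLS = {
        "search": lambda q: f"(web results for {q!r})",
        "write_file": lambda text: "(file saved)",
    }

    def run_agent(goal: str, max_steps: int = 5) -> str:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):  # hard step cap keeps it "in a box"
            plan = llm_complete(
                "\n".join(history)
                + "\nReply '<tool>: <input>' or 'DONE: <answer>'."
            )
            if plan.startswith("DONE:"):
                return plan[len("DONE:"):].strip()
            tool, _, arg = plan.partition(":")
            action = TOOLS.get(tool.strip(), lambda a: "(unknown tool)")
            history.append(f"{plan} -> {action(arg.strip())}")
        return "stopped: step limit reached"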
Sean: Yeah, I mean, there is a field of AI safety, isn’t there; I have a colleague who is very concerned about it all, and then some other colleagues who think there is nothing to worry about. But I think it’s like you say: when you start connecting some of these things together, you have to be very careful about which decisions are left without oversight from somebody who can spot that kind of naïvety.
I mean, like you say, suck all the oxygen out of the air. It’s a classic thing where the task at hand might be much improved by doing that, but it’s the side effects that perhaps aren’t going to be considered.
There is also lots of other generative AI happening at the moment, and in the field of art it’s becoming a bit of a problem, isn’t it. Because of the kind of images being generated, there are artists who are concerned about copyright, but equally there are stock companies who sell images and are having problems with what their business model might look like going forwards. It’s a huge field and I only expect us to scratch the surface here, but how do we approach that sort of area?
Deep: You know, I think whenever you have a civilisation-scale disruption, like fire, the wheel, the internet or the printing press, or AI, everything changes, and so everything is impacted and everything is kind of reinterpreted. So when we think about stuff like copyright, for example: the internet itself had a huge disruptive effect on copyright. People maybe don’t recall, but back in the mid-Nineties, with the emergence of Google, there was a lot of concern, rightly so, that there would be some disintermediation of news sites, for example, even though their copyright might technically be respected, in the sense that Google was linking off to the news sites back then.
But in practical terms I think most of us would agree that they have been utterly disintermediated, to the point where they are barely alive. Maybe there are a couple of news organisations left; the ones who are thriving, though, have sort of morphed into entertainment vehicles. And Western democracy is pretty well threatened at this point; it’s not a small thing. I don’t think it should be trivialised, and I think these disruptions are real.
[00:20:14]
And it takes a long time for society to develop immunity, the immune response to deal with these things, right. Like the 2016 election here in the States: there was an awful lot of manipulation of social media to push narratives coming from Russia and other places, and it arguably did have an impact on the election. Whether you would go so far as to say Trump wouldn’t have won without it, maybe that’s hard to prove. But almost any reasonable person would agree that the information landscape has been radically affected by social media.
And I think in the same way we are going to see a lot of good things and a lot of terrible things with AI and our immune system is nowhere near ready to deal with it.
And the one thing that is different is the pace of change, which is on an almost exponential uptick. Even the internet was mind boggling for society in terms of its transformation, and the same with electricity; these were really fast transformations. AI will be faster, and it’s already massively different from three months ago. Like, find a college student that doesn’t start with ChatGPT, you know.
And think about what that means for education and their learning. And think about where the educators are: they haven’t even gotten started, they have no idea what to do.
Sean: Yeah, I have spoken to an educator recently who had caught somebody using ChatGPT in a piece of coursework. I am sure this has happened lots of times, but it was the first time I had spoken to someone who had actual experience of a law student trying to submit a piece of coursework that ChatGPT had written, who was caught because the citations were wrong. It had completely fabricated the citations, which you would hope anyone would have gone and checked before submitting.
Deep: That’s a transient problem, it’s going to be addressed.
Sean: They are going to use it as a starting point even if they don’t submit its output. But are we not at the stage then where this is a bit like when we moved to having pocket calculators, for instance: it’s a tool that you use rather than something that’s cheating; it sort of becomes factored in?
Deep: I think it should be, you know, because if we don’t... I mean, the calculator analogy is hard for me to wrap my head around, because of the difference between a calculator and a mind that’s more powerful in many ways than most people I know, than all the people I know.
At least with respect to its breadth and what it can glean from reading alone. And I think we are not very far from the other modalities coming in, like you mentioned, imagery and video and other ways of observing the universe, and at some point not too far from now most of these models will have a physical understanding too.
And we are already starting to integrate symbolic math understanding.
So I think we have no choice but to embrace this and ride the tiger; I don’t see an alternative. Even if you and I completely agree that we should shut this thing off, and maybe we even get all of the world leaders to agree, it is still not going to happen. There is just no way it’s going to happen. Unless we are willing to shut down all the GPUs and CPUs and the cloud, like all of modern civilisation, it is not going to happen; there is no way to shut it off at this point.
Sean: This is a true Pandora’s Box kind of moment isn’t it really?
Deep: It’s already open. It opened a while ago. So I think that the way to think about it is like what can we do to accelerate the development of society’s immune response? And by society I am mostly interested in the Western democracies. Because I think democracy is already at a massive threat from the rise of social media and the attention economy. And media that can really only make a buck by tapping into reptilian minds and our seeking of sensationalism, we are already having so many problems because of that.
But now we can generate, you know, deepfakes. I think we just had our first officially large public political campaign that was based completely on deepfakes. I think it was last week, maybe the week before; there was a GOP campaign of all this apocalyptic stuff that Biden was going to do, which was deepfake generated. That is not going to end; once one side does it, the other side is going to do it, so it keeps going, and we can fundamentally alter perception.
So we have no choice but to accelerate the education across the populace, so people know that when they see stuff it is probably not real, and if it is outlandish it’s probably not real. It may still be real, but it might not be. And we have to get way better at anticipating the rise of this stuff and trying to get humans to talk about it.
And with respect to legality: the laws, at least in the States, like laws from the early Nineties, are still governing much of the internet. And, I don’t know what the case is in Britain, but our senators are not the brightest bulbs in the room. You have got Mark Zuckerberg standing in front of the Senate, and you have got a major, if not one of the most influential senators in the room saying, well, I don’t understand how you make money, because you don’t charge anyone for this. And he is like, well, Senator, we run ads. And these are the people who are going to decide how to manage this stuff; I don’t see how that is going to work.
Sean: Legislation in any country is traditionally lagging behind technology. But things are moving at such a pace aren’t they that, you know, it is just unfathomable that it will all kind of catch up.
And I think you are absolutely right, we kind of need some education to kind of combat the way people perceive things right. That is probably the thing isn’t it, you know, we need to get in early so people realise what these things are capable of. And as soon as you hopefully know what it is capable of then you might have an understanding that something is not real. There is always going to be people who want to believe things right?
Deep: Yeah. I mean, going back to your question about artists and musicians and copyright: I think copyright is an interesting thread to grab on to and tease out. Part of what is happening right now is what humanity is faced with: in the West we have this idea that we are individuals, that we have genuinely unique thoughts and that our work product is genuinely unique. But I think if you really ask any musician they will say, well, everything is derivative, right. Like they are grabbing riffs. I can show you, I have a home music system back here too, and I am looking at your studio: you have got guitars and drums and stuff everywhere.
But like you know everything is derivative right, derivative on some level in the musical realm and art is not that different. If you have never seen a cubist piece before and then you see one well now it is in your head and so now you start being affected by it.
And I think we want to draw crisp boundaries around this essay that I wrote and say, oh, this is mine. Well, it’s not really yours, because everything that got into your brain wound up, you know, getting assimilated into that essay. The only difference is that your brain is just incapable of reading anywhere near the quantity of what ChatGPT is capable of reading.
And you are nowhere near capable of looking at as much as what Dall-E or Stable Diffusion is looking at. So you just don’t have the creative capacity because you just haven’t seen all those things and been able to assemble it. And so I think we are re-wiring society in a way that will be much more, it will have no choice but to be less like all in on the concept of individualism.
Sean: Yeah I joked with someone even earlier today that perhaps you know, if we ask it to make a song in the style of Oasis and a Beatles track comes out then you know we will know that it is working correctly.
But you are absolutely right, the influence thing, I think it’s perhaps slightly beyond the remit of this trust overarching idea that we are looking at here. But it seems to get into the idea of well there is going to have to be a universal basic income or something because, you know, people can’t make money out of doing some of these things. And it’s easy to kind of equate that to the idea of printing presses being replaced by computers and printing systems but this is on such a massive scale isn’t it?
[00:30:07]
Deep: I don’t know about UBI in particular, right. I think if you had asked me pre-pandemic what I thought about UBI, I might have leaned on the side of, well, humanity will do just fine if you give people money. But in the heart of the pandemic you got all these sixteen to twenty-four year old young men sitting at home with cheques coming in. I would like to think that they were doing something valuable with their time, but it turns out some were just throwing Molotov cocktails at the Federal building in downtown Seattle. That is not a great recipe.
UBI is, maybe that’s part of the solution but it’s utterly insufficient. Like I think what we have to address is people need reasons to live and work is sort of a way to avoid that issue. Like because your reason to live is to pay the mortgage and you know, get your kids through school or just eat or grab a beer at the end of the day like it can be that simple.
And once you take that basic thing away, then it’s not that different from all the trust fund kids driving around in a circle on a tractor with a joint sticking out of their mouth because they have no idea what to do with their lives; that is a thing. And no offence to the trust fund kids, but it’s hard to figure out a purpose in life, especially in an era in society where religion is effectively dead. The things that used to bind us and give us a way to think about how to do things are not there.
So I think, and this is touching on stuff way beyond AI, but I think like this transformation is like bringing it to a head right, because yeah we do have to figure out how to feed people when the work is being done by the bots. But I guess if I put my capitalist hat on for a moment, I am less convinced that we can’t just come up with things for us to do in the guise of work. We can always grow the non-profit sector, there is no shortage of problems the world is facing. And we can always come up with like a bunch of electrons to be pushed around like video game development or whatever.
And our standards always rise. You know, as soon as we invented the camera, everyone was like, well, what’s the point in painting? But here we are a hundred plus years later and there are plenty of painters around; they just redefined the game, and I think we will do that too. So I think it is a bit of both.
Sean: I was getting a bit concerned in the middle there that we were like okay oh no the apocalypse is coming but we are upping our game hopefully.
Deep: I don’t think we have a choice but to up our game. And I don’t have an unambiguously apocalyptic vision here; I think it’s a risk. But here is an example. If you rewind to 1993: I remember I was in grad school, looking over a friend’s shoulder, and he was like, you have got to check this thing out. And it’s an early browser. I am looking at it and I get this vision of the internet; we were watching a video off of CNN, you know. And I played with it for just ten minutes and then I had this flash in my head of the future.
And at that time I was like twenty-one, twenty-two; my vision of the future was sort of de facto positive, in the way that at least prior generations of youth were. But I did at the same time think, wow, this is going to be amazing; think about all of the new content that people can just publish without having to wait for a publisher to agree to it, and all of the music that people can just distribute on their own. I thought of all these things.
And then you rewind today and nowhere in that initial flash did I think like all of the mayhem that has come with the internet. But at the same time there has been a ton of amazing stuff that’s happened. And if I can just sit down now knowing where the world evolved to and go back to ’93, would I turn it off if I could and shut it down and say no we don’t really want this thing? Like there is no way I would do that.
If somebody is sitting in their house with a pacemaker in, the data is being sent straight up to the hospital, and they are immediately going in. There are so many fundamentally high quality of life things that came out of it, and then, yes, there is all this other stuff.
And I think AI is going to be the same way. There will definitely be a ton of weird stuff that we can’t really foresee right now, but there will also be just so much amazing stuff; we will save so many lives. We probably won’t even die until we are well into our hundreds, a hundred and fifty, maybe a hundred and ninety even. I don’t know if that is good or bad; that will have its own set of problems, you know.
Like the world is going to change and we probably won’t be sitting on this planet anymore. Like as soon as we can harness this like much more intellectual capability we will probably be able to have settlements outside of the planet, like it’s going to be a whole new world. And I think in twenty, thirty years, if we have that same choice would we undo the AI? We would say no but we will have a long list of problems.
Sean: It’s really been great to have you on the podcast Deep and thank you so much for sparing your time.
Deep: Yeah thanks for having me that was fun, so anytime.
Sean: Back now to Chris, Gisela and Cecily. Well, where do we start with this one? I mean, the interview ranged across so much ground it’s kind of hard to know where to start.
But one thing that struck me at the very beginning was he talked about in his organisation bringing in, he called them kids but I will call them graduates, graduates from top schools to do annotation and all sorts of things and ChatGPT maybe being better than them. Okay I am going to pose this to Gisela just because you are in an academic situation right now, how does that feel that he is saying that some of the kind of graduates are being outclassed maybe?
Gisela: Do you mean outclassed by ChatGPT?
Sean: Maybe?
Gisela: That’s quite interesting. I appreciated Dhillon saying, we have these people doing this annotation; that was my first reaction when I heard that part of the interview. I might not have a comment directly on how ChatGPT might be outsmarting these people, but it strikes a chord with the connection that there are people in the global south who were annotating, who annotated for ChatGPT.
I don’t know if you are familiar with that, but there was a recent Time magazine journalistic investigation: when ChatGPT came out it was not really known that these annotation practices were being outsourced to people in the global south, in Kenya, being paid two dollars per hour or something like that.
So I think it also connected to other topics that Dhillon talked about in the whole interview. But connecting that to the transparency I guess of this whole process, we don’t know, it’s a huge black box. And I think to me that is not the first thing I thought about when he said that.
Sean: I think that black box is a really key thing and it is a really strong recurring theme in any of these AI related podcasts that we talk about is that quite often with deep learning, with neural networks and all sorts of technologies like this, you can’t fully test what is going to happen.
This is coming towards one of you guys now, but how do we approach that problem? Maybe Chris: how do you approach that problem in an industrial setting, with AI being a black box; how do you know what it is going to do?
Chris: Yeah, that is a great question. I think when it comes to my world, we are speaking from an industrial standpoint. We are working within slightly more of a narrow domain than the world of ChatGPT and these large language models.
So we work primarily within closed loop systems. We are working on smaller datasets, where we are not asking our embodied agents, like cameras, drones and quadrupedal robots, to understand broad contexts. We are really just saying, hey, look at your inspection data, tell us what you are seeing.
As that pertains to trust, we have fewer issues with trust in our closed loop systems, if that makes sense. Tools like ChatGPT really have almost zero factual errors or hallucinations when they are retrieving data, because we are really not asking them to generate anything.
[00:40:04]
What I would say is that this will change over time. So we are not dealing with hallucinations or trust issues now, but as we ask our agents to solve more and more complex problems, and to make connections that perhaps we humans can’t see as clearly or as quickly, we do believe that we will encounter those kinds of trust challenges.
But for now, just to give you a concrete example: we will train a Machine Learning model to understand very basic industrial tasks, such as reading an analogue pressure gauge to tell us if the reading is anomalous, is it dangerous, or looking at thermal equipment for thermal anomalies. It’s more of a binary, is it safe, is it not; we are not asking it to give us an opinion per se. And so we have fewer trust issues, I would say.
And the second part of that answer is that we have built in a human-in-the-loop engine; we talk about our AI tools as human guided intelligence. We are really not at the point yet where we are setting loose a very powerful, very near sentient agent in our customer environments. So we keep a strong guiding hand with our human operators.
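A minimal sketch of the shape of that human-guided check, where a model’s gauge reading is only acted on when it is both confident and in range; the reading function stands in for a trained vision model, and the thresholds are invented for illustration rather than Levatas’s actual system.

    # Human-in-the-loop inspection check, with hypothetical thresholds.
    from dataclasses import dataclass

    @dataclass
    class GaugeReading:
        value_psi: float
        confidence: float  # model's confidence in its own reading, 0..1

    def read_gauge(image_bytes: bytes) -> GaugeReading:
        raise NotImplementedError("stand-in for a trained vision model")

    def assess(r: GaugeReading, low=20.0, high=80.0, min_conf=0.9) -> str:
        if r.confidence < min_conf:
            return "ESCALATE_TO_HUMAN"   # low confidence -> human operator
        if low <= r.value_psi <= high:
            return "SAFE"                # binary: safe / not safe
        return "ALERT_AND_REVIEW"        # anomaly still goes to a human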
Sean: Which is great, because I think in science fiction people imagine things like lots of Boston Dynamics Spots going off on their own, but it is possible to send them out autonomously, isn’t it?
Chris: Yes, and I don’t want to over speak here, but speaking about bots and Boston Dynamics’ Spot, we have quite a bit of experience with that quadrupedal mobile robot; in fact that’s a very large part of what we deliver to our customers. And I can say first-hand, first off, it is an amazing marvel of technology, it’s very capable, and it does have autonomous features, but these are pre-programmed routes. These are inspection missions that are all set up by a human; it does not have the agency to go off path, if you will.
So with our Machine Learning inspection models it can decide on a kind of dynamic tree of actions, but these are pre-programmed actions; it’s not making a cognitive decision to do something new or to create a new pathway, if you will. It is still very much a closed-loop, guided system. We know that won’t always be the case, but that is the state of the technology right now.
Sean: Cecily do you have much experience of this side of the kind of the AI usage?
Cecily: Sure. I think the first thing that came to my mind when he said, oh well, this is better than our fantastic graduates, was: better in what context? Because I think a lot of these measurements, and a lot of the things that are mentioned as quote, unquote, better, are ones where there is a concrete notion of right and wrong.
But most of what we do in humanity is make judgments about what is appropriate to the situation. What’s appropriate to what I need to do now? What is the communication I want to have? And even in writing an essay there are certain ideas that I want to communicate, or there is a context in which I am choosing to communicate.
So even if it produces you a lovely summary of some lovely words, that’s very different than making a choice about the communication that I want to have. And this for me is the difference between what is human and what are the things that enable humans to increase their capabilities.
Plus, I like to turn it the other way. I know there is this view of, oh, these things are great, or bad, or I don’t know what, and they will take over the world. Or we can think about how we are going to use these things to enable us to be the people that we want to be. As we often say in our team, these technologies are not meant to be superhuman, they are meant to make us be super at being human. And how are we setting that up so that can happen?
And I think there is a lot within the technical space that can enable us to do that. So allowing people to show the system what they want the system to do is a good direction. To say, well, yes, maybe it is very good at its language, but what I need is this, this and this, because this is the context in which I am producing something.
Sean: There seems to be a new skill that is required which is getting the prompting right isn’t it?
Cecily: Well, again, I don’t want to over speak either, but I feel like this is a bit lacking in the human computer interaction space, because people have prompts available, so that is what they have been pushing on. But actually I think in a year’s time we are going to see much more complex interface designs and architectures, where they have, sometimes we refer to it as an ensemble of models, and those things are going to get us to the outcome that we want.
I think ChatGPT is a demonstration of a capability; it is by no means something that we are actually going to use as-is in the systems that we use in our lives.
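A hypothetical sketch of what such an ensemble might look like behind a single interface; the three stub functions stand in for separate task-specific models and are invented purely for illustration.

    # Ensemble-of-models sketch: small task-specific models composed
    # behind one interface, instead of a single general chat model.
    def classify_intent(text: str) -> str:
        return "summary" if "summarise" in text.lower() else "other"  # stub

    def summarise(text: str) -> str:
        return text[:120] + "..."  # stub for a dedicated summariser

    def passes_safety_check(text: str) -> bool:
        return "medical advice" not in text.lower()  # stub safety model

    def handle_request(text: str) -> str:
        if classify_intent(text) != "summary":
            return "Unsupported request."
        draft = summarise(text)
        return draft if passes_safety_check(draft) else "Blocked by safety check."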
Sean: Unless you are maybe doing dream analysis. I joke, I joke of course.
But even ChatGPT right now, and Deep mentioned this and I can totally understand, it is incredibly useful for what he called the blank slate problem. This idea you know of sitting down at your desk to do a task and thinking right where do I start? To just be able to throw in some ideas and for that to give you something that you can work on and you can manipulate. What do you think about that Gisela? Is that something that you would find useful? Have you used it yourself in that respect or?
Gisela: I haven’t in that respect, no. I just recently started playing around with it. But I think playing around with it is an important part of understanding the capabilities and limitations of using it. And I guess it also depends on your career, your job, what you do.
So for instance for me and other people in the academic environment what we do, the bread and butter of our line of work, is writing research papers right. So I think there is definitely these conversations and concerns maybe about if I let it write it, if I just give the prompts and write it, is it still me? Am I still engaging with the ideas? Am I still learning?
So I think ahead of us there are still so many of these conversations. But it’s interesting and I think for me I think we need to start engaging and playing around with it just to understand what it can produce.
Sean: And I think it’s interesting, because I made the very simplistic analogy of a calculator. When the pocket calculator was introduced, at first, certainly in the early days of school, you were kind of thought to be cheating when you used a pocket calculator, because you were by-passing what you were supposed to be learning. But then as time goes on and you get more advanced, you are using it as a tool, right; nobody expects you to be able to do crazy amounts of square roots in your head. I don’t know if people even do square roots anymore; it’s been a while since I did maths at school.
But Chris is this going to be a tool that we are going to use, like this AI? Like Cecily said perhaps we think of ChatGPT as a demonstrator but, you know?
Chris: Yes I would say that moment is already here using it as a power tool if you will. We employ a lot of data scientists, Machine Learning engineers. I would say these tools, not necessarily just ChatGPT, but those like it are already integral tools in our daily work flows.
If you think about it, the LLMs are getting a lot of press for their understanding of natural language, English and all these other languages, but they were phenomenal at code first. I mean, machine language, code, is that much more straightforward, simpler and more structured, so it’s phenomenal at coding. And so as a support to the industry of programming and development, to data scientists, yeah, we are already seeing it have a huge impact in our day to day. When the tool goes down, I know about it, how about that.
Sean: That’s really key, isn’t it? I know for myself, I don’t code, but I have coded something with ChatGPT. I didn’t know where to start; maybe I have tried Hello World in the past, but I am not a coder. And then I thought, I can ask this system to do this for me, and I got it to write a programme to do speech to text, and it was incredible. It took a bit of massaging, but wow. It allows somebody who sort of understands what code should be capable of, but doesn’t know how to actually do it.
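A small sketch of the sort of script Sean describes asking ChatGPT for, using the SpeechRecognition package as one possible choice among many; the file name is illustrative.

    # Basic speech-to-text script; assumes `pip install SpeechRecognition`.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("interview.wav") as source:  # illustrative file name
        audio = recognizer.record(source)

    try:
        print(recognizer.recognize_google(audio))  # free web API; needs internet
    except sr.UnknownValueError:
        print("Could not understand the audio")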
I mean there is always a question about whether jobs are going to suffer from this and I am not totally of the belief that they will. Again I think it’s going to be used as a tool to do the grunt work but surely that means you need fewer people to do that grunt work right? I don’t know.
Cecily: I might offer a positive spin on that. One of the things that I think is going to be revolutionised by these large foundation models, language, multimodal or otherwise, is accessibility. We know that, depending on your figures and where you are in the world, somewhere between sixty and seventy-five per cent of people with disabilities don’t work; this is a huge, huge number.
[00:50:00]
And I think a lot of these tools will enable work to become a much more practical experience. For someone who is blind, just being able to add in headers and re-head tables, something that we just take for granted; they should be able to take that for granted too, and these tools are going to make that happen.
My son in particular uses text to speech. He has trained his own voice: he is a ten year old boy, and he doesn’t sound like a robot, he sounds like a ten year old boy like him. So a lot of these things are going to enable people to participate in society; the people who haven’t been participating are going to come into society.
And will there be other people who find that they need to reskill or change their jobs? Quite possibly. But again, I think it really depends not on the fact that these tools exist but on how we as a society take on these tools. These tools are there to help us fill the gaps. And if we set people up to be able to use these to fill their gaps, to do the things that they couldn’t do before, that is going to take us in a very different direction than if we set this up as, these are going to replace people, we can just get rid of them, it’s all more efficient.
I think we need to be more confident as a society in creating that transition, rather than just saying it’s the end of the world, oh my goodness, things are going to change.
Sean: It’s interesting that you say that, in an upcoming podcast that we have got coming in a couple of weeks’ time we talked to some artists. And they were saying you know, people had this worry back in the 1990s when Photoshop came out and everybody was saying oh there will be no more photographers. But of course the main use of that tool is by photographers to enhance and alter and fix things that maybe went wrong and all the rest of it, so these things are tools.
And then you also mentioned you know, we keep talking about ChatGPT but there are lots of generative AI tools out there, Dall-E for instance for artwork and Stable Diffusion, there are lots of these things happening in different areas and they are all going to combine at some point.
Chris: Yeah, we see, I think Cecily mentioned it earlier, the ensemble effect: basically, more capabilities together will create a more valuable output. I believe one of the upcoming releases of the ChatGPT tool, GPT-4, will include vision capabilities, the subset of AI we understand as computer vision. These tools haven’t been released yet, but that is an example of the kind of multimodal models that are coming, and it will be really valuable, I think, across a spectrum of capabilities.
And Sean, I will go back to one thing just to support what Cecily said, and also to tie back to what Deep had said in the interview with regard to how our standards societally will always continue to rise; I think those were the words he used. And I think we will always find new employment, more valuable ways to interact as humans: creative, empathetic, strategic roles that can’t be replaced at this point in history by machines or models. And we will be able to reskill and upskill a significant portion of our workforce.
The reason that triggered a thought with me is I deal in robots and AI, that’s our day to day. Trust me half of my job easily is trying to dispel the myth that we are coming with an army of robots to eliminate a human workforce, it is the opposite.
And there is a global, irreversible worker shortage in the industrial sector. I will probably mess up the statistics, but it’s something like: in the next ten years there will be several million jobs unfilled in just the manufacturing sector in the United States of America alone.
And so we are addressing a labour shortage by providing kind of end to end AI and mobile robotic solutions to solve a major problem not to eliminate any jobs. The whole point for us, and in fact our ethos and mission at Levatas, is to unlock human potential. So we really view ourselves as doing that and I love the themes that Cecily you were mentioning about accessibility because that is truly aligned with how we think as well.
Sean: Absolutely. And there was always this background idea of, well, the robots will do the drudgery, the tedious jobs. There is a flipside to that, which is that these robots are tens of thousands of dollars, hundreds of thousands of dollars in some cases, and sometimes it’s maybe more economical to use humans to do these jobs than to get a very, very expensive bit of kit to do it.
I was a bit concerned, sorry this is slightly off topic, but we were going down the kind of post-apocalyptic rabbit hole at one point in this conversation with Deep but I feel like we turned it around by the end of it. And we were talking about how this AI stuff is just going to be amazing and save a lot of lives.
I wonder if I get each of your thoughts on that idea of it being you know, kind of promoting saving lives as much as anything. I will start with you Gisela, what were your feelings when Deep was talking like that?
Gisela: Yeah I think at the end he was also talking about like we need to accelerate this education we as a society, learning to engage with these systems rather than saying okay let’s stop them. Let’s put a whole stop to it.
So I think, yeah, in relation to what we were saying before: I was listening to and watching some of the hearings at the US Congress, where some of them are trying to, you know, get to an agreement on what is really going on in terms of regulation. So I think for us it is about how we engage with AI and these systems in terms of regulation and education, and if we approach it with care I think it could do really good things for us as a society.
Sean: And Cecily what were your thoughts when Deep was talking about the amazing stuff and the ability to save lives? And I think he was saying we are going to be living till a hundred and fifty or something which was slightly terrifying to me but maybe I will change my mind in fifty years.
Cecily: I have to admit that I appreciate a little bit more the mundaneness of life and when I think about what could I do if I had more time. I think it would be I would care more for people. I would care more for my parents and not outsource it to someone else. I would care more for my kids. I would care more for my community. I care a lot but obviously I have a lot of demands job wise.
If we could go down to the four day week which I think is showing up quite well in the UK as being very productive, just think about the time we would have for other people. And that to me is what we do.
He did have a moment where he was like well what are we going to do with our time? And it’s like well at least for me being a carer for other people can take a lot of time. I mean this is what women used to do full time before they started working and wouldn’t it be nice if we had more time to balance the ability to participate in society and the ability to care?
So I think there are amazing things that we could do if we had more time, and I am hoping these technologies bring us more time rather than a faster rush to do more things. I am a bit worried that if all these people are going to write things and summaries, and they are going to do it faster, then me as a manager, I am going to be like, oh no, more things I have to read. But I hope that we find this balance where it brings us more humanity.
That said I don’t think that, and I do think that people from all levels need to engage with these things and be part of that conversation. And I hope things like ChatGPT actually enables people to start thinking about being part of that conversation. Because usually that conversation is just between engineering teams and maybe some visionaries and it’s not a whole discussion, so I am glad that that’s moving in that direction.
But I think regulation and responsibility factors are absolutely critical, because not everyone is going to be able to see all angles. Being able to make sure that we have really consistent processes for understanding where the boundaries of what’s allowed are, and where our tools are possibly falling in places that we didn’t expect them and didn’t want them to be, is going to be critical.
Sean: That’s interesting both of you mentioned regulation and I have said this a few times on the podcast so apologies for repeating myself. But it seems to be, it’s a struggle for regulation and anything like that to keep up with the speed of change of AI.
Is that something you find Chris in the industrial sector as well, that the regulations are written for a piece of equipment to automate something back in the 1970s and you are trying to apply it to something that is cutting edge?
Chris: Yes. Again, with our more narrow domain inside of the commercial world, and specifically in the industrial sector, we are not dealing with the same amount of regulation as applies to society at large at the federal or state or municipal level. We do of course adhere to all safety compliance, cyber security, anything that involves worker safety. So on regulation I probably don’t have as much valuable input as these ladies.
However, our goal really is to provide solutions that take over, like you said, the drudgery: the dirty, the dull, the dangerous jobs. They will ultimately save lives; they will prevent injury. So we do believe that our mission, even though it is more focused on commercial efficiency and safety and security, is really about unlocking human potential.
[01:00:10]
I think just to tag on to another of Cecily’s points is just, we are concerned, me as a person, not just as kind of an industry figure, but I am concerned about the global equity situation. Making sure that these tools are available kind of as broadly as possible and not just reserved for the parts of the globe that are so developed and have access and have the tools. I think it is a really important part of not just regulation but also just a broad distribution of these tools for the promotion of global equity.
Sean: I think that’s a really important point. And particularly in the light of Deep calling this AI revolution a bit like the invention of the wheel or, what did he say? The discovery of fire, invention of the wheel or the invention of the printing press which I don’t know, is that overblown or are we in the right ball park with that do you think?
Chris: Not that far off.
Sean: Yeah it’s interesting isn’t it because I don’t know if there has been anything this large in my lifetime. I am not making my lifetime out to be anything special but it just seems so huge that in such a short period of time we have gone from predictive text effectively to this level of kind of what appears to be a thinking machine. Whether it is or not, whether it is just using clever techniques, I will leave that up to the experts.
Gisela what do you think about that, comparing this to the discovery of fire?
Gisela: I mean, I think I sort of understand, I guess, because it’s really big right now. But at the same time, if we think about twenty years ago, maybe we didn’t have a mobile phone in our pocket doing all the things we have now; so many things we take for granted. I mean, I was born in ’91, so the internet would be something where you got to see how it progressed and everything.
So I think the difference with this is just how fast it's happening. In a matter of a year, or less than a year, things are speeding up that much. And that's why I guess it is something different that we haven't seen in a very long time, that's how I perceive it.
Sean: I forgot, actually, the internet was on his list, and that was during my lifetime, so I shall eat my words on that one.
One thing that Deep mentioned as well was this idea of, obviously there is a huge scene of start-ups in the tech industry, these small businesses with ideas, getting venture capital funding and all this sort of stuff. And one of the things that Deep mentioned was the idea that lots of them are just kind of bolting in something like ChatGPT as the brain for another system.
And that struck me as quite a dangerous approach, without guard rails anyway. And there was some discussion of guard rails, and OpenAI putting guard rails on things. I wonder, is that a viable way to do things, to take technology like that? Is that the way things have always worked? You grab somebody else's technology, and you grab this, and you pull them together and you smush it and you hope to have a new product. Is that just start-up life, am I missing it?
Chris: I would say yes it has always been that way. I think Deep mentioned in the interview something that rings true for me that everything is derivative. And I think that’s not just true from a large language model standpoint but also from a business start-up, you know, getting traction standpoint.
I think building it as a wrapper on top of a tool like ChatGPT is a very viable way to start a business, but it introduces a lot of what we would call platform risk. If you build your entire business on top of something else, and that something else decides to make a change or to incorporate some of your core features, which happens all the time, it presents an undue risk to your business.
So when I talk to other start-up founders and entrepreneurs, you know, a lot of our questioning is, what is your differentiator, and what happens when or if something changes with the underlying model? And there are various strategies for that. Again, it really does depend whether you are going direct to consumers or working business to business at the enterprise level. But certainly it is a risk, building solely on top of these models.
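[To make the "wrapper" idea concrete, here is a minimal sketch of what bolting a product onto a hosted model can look like. It assumes the v1-style OpenAI Python SDK; the blocked-topics guard rail and all names are illustrative assumptions, not anything described in the episode. Note how the entire "product" reduces to a single call into someone else's model, which is exactly the platform risk Chris describes.]

# Hypothetical illustration: a thin product "wrapper" around a hosted LLM,
# with a simple guard rail in front of it. Assumes the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical policy: topics this product refuses to answer.
BLOCKED_TOPICS = ("medical diagnosis", "legal advice")

def answer(question: str) -> str:
    # Guard rail: refuse before ever calling the underlying model.
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    # The core of the "product" is one call to someone else's model: if the
    # provider changes pricing, behaviour or features, the business built
    # on top inherits that change. That is the platform risk.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("Can you summarise platform risk in one sentence?"))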
Cecily: I might jump in there. I actually happen to be in Brussels today because I am attending the Data Science Forum tomorrow where there is a lot of discussion around the EU AI Act and this issue you have raised around, it’s really hard to regulate this stuff but we need to do it.
And I think maybe one thing, and it is a slightly different take on your question than perhaps expected, but one real difference for me between the wheel and AI is that the wheel is one thing. It is a thing, you can have it in different forms, but it is one thing. AI is not one thing. And I think we have sort of bundled them all together like it is kind of the same thing, you know, you put data in, you have got an algorithm, you get some predictions out. But they are really, really different things.
And the complexities of how these might be built into things. Because, as you say, there are very few Machine Learning systems which are just the Machine Learning; most of them are embedded at least in an interface, and some of them are embedded in social systems. Some of them might have three or four or five, you know, ML models, Machine Learning models, underlying them. So there is a whole set of different kinds of systems, and they are used for different purposes, and it depends on the risk of the thing.
So I think this is a really hard space. It is much harder to regulate than the wheel, and it's much harder to get consistency and get right than the wheel. But that doesn't mean that society can't come together and start to say, what are the processes for thinking about whether things are right? What level of transparency do you need to have with your customers, with the public, about what level of testing you have done?
And how do you know it's going to work, what tools are you using? What tools are you required to provide to show how deeply you have looked into how well your system is working? Because they work, but what about the education?
One of the things that struck me is people say, well, ChatGPT will do it. I think it is revolutionary, and it's moving very fast and there is a lot that has been done, but as is often the case, the last twenty per cent takes, you know, eighty per cent of the effort, right?
So I think we have kind of got to the point where we have a technology that is kind of working, and it's pretty cool, but there is actually quite a long way to go for that technology to seamlessly embed in systems across our society. And I think that process is going to go pretty quickly; as we saw with mobile phones, things went pretty quickly.
But because these things can do things that are unexpected, I think there will be a slowing down to try to make sure that we don't impact people negatively in that exploration process, as we come to understand, and come to find solutions for, how we address, you know, these non-deterministic systems.
Sean: And it's partly a kind of inherent problem with having marketing buzz phrases like AI just being applied blanket across everything from support vector machines through to deep learning, isn't it, you know, all these different possible technologies. I mean, I think I joked a few years ago that you could be buying a box of eggs and it would say "with AI", just, if it sells then use the term.
Cecily: There is one thing that I would like to raise, and I am not an expert in this space, but I feel like it should be raised every time we talk about AI, which is the impact on the environment. I am definitely not an expert on the facts and the details here, but these models, these big models, take months to train on very large numbers of GPUs, and there is an environmental cost to that. Producing output when they run also takes time, and also has compute needs.
We already have a sustainability challenge on our planet in a really big way, so we need to be thinking about whether there is as much effort going into the sustainability impact of these. It doesn't mean we shouldn't have them, but how are we solving the problem of sustainable power consumption, for example, to get these to work? And is that front and centre of everyone's mind alongside the excitement of what's possible?
Chris: Hopefully this is an encouragement: when we are in discussions about the use of AI with our customers, they definitely do have the environmental impact at least in the conversation, at the table. It may not be exactly the level of discourse that we would like, or the level of requirement that we'd like, but things like energy consumption, as you just mentioned, and waste. You know, these GPUs, we have to create this hardware, and what does that mean?
Data centre infrastructure, you know, just these huge facilities, so there are a lot of considerations. And again, I agree, I am also not an expert on the environmental impact of AI. However, what we are finding is that every AI business strategy should have a matching environmental impact understanding, and that will mean different things, at least on the commercial and industrial side of things. I know that is also happening, and Cecily, it sounds like that's one of the reasons you are where you are today, at the kind of governmental level.
[01:10:01]
But industry is at least aware of the challenge and, I think, is discussing it, so.
Gisela: Yeah, actually, if I put on the TAS Hub hat a little bit and talk from this responsible research and innovation lens, that is what we are trying to implement in all projects: like you are saying, trying to get people into these discussions. The people who are going to be most affected by whatever technology is going to be deployed, so whether that's, you know, the creatives out there, or the people whose jobs are going to be affected, but also the people in indigenous communities or the global south who will experience these environmental effects in a major way.
So I think it's nice that we bring this topic up, because we need to look at different perspectives coming together and see what the real impact of these technologies is. Knowing it is out there and it is going to help us, but anticipating the threats and the effects it will have.
Sean: It’s going to help us but at what cost. I would like to thank all three of you for joining us today on the podcast. It has been really great to have your time and I appreciate you sparing it. I understand time is precious.
So thanks very much to Cecily.
Cecily: Thanks very much Sean. I was really glad that we had lots of themes of equity inclusion and the environment alongside our themes of the excitement of what this technology can bring.
Sean: Thank you to Chris.
Chris: Sean thanks so much for having me. I really appreciate the opportunity to talk about all the exciting things happening in the field of AI.
Sean: And thank you Gisela.
Gisela: Thank you too and thanks to the others, it was a very interesting conversation.
Sean: If you want to get in touch with us here at the Living with AI podcast you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited. Our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.
[01:12:20]