Living With AI Podcast: Challenges of Living with Artificial Intelligence
AI’s impact on the future of Supply Chains
This special episode features Dr Yossi Sheffi, the Elisha Gray II professor of Engineering Systems at the Massachusetts Institute of Technology.
Dr Sheffi is an expert in supply chains, having written several books on the topic.
Podcast production by boardie.com
Producers: Louise Male and Stacha Hicks
Podcast Host: Sean Riley
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 3, Episode: 5
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: Welcome to Living with AI the podcast from the TAS Hub. The TAS Hub is the Trustworthy Autonomous Systems Hub.
It is the 23rd of May and in this episode we are going to do something a little bit different. It is a special episode featuring Dr Yossi Sheffi, who is an award-winning global supply chain expert, and he happens to be director of the MIT Centre for Transportation and Logistics.
Well Dr Sheffi AI is being used as a marketing buzz word everywhere right, it seems to be applied to anything and everything just to try to sell it. What is the deal with supply chain AI? Are we talking about smart scheduling or controlling the transportation or storage or smart kind of like predictions of sort of supply and demand? Or is it all of the above?
Dr Sheffi: All of the above, all of the above and a lot more. And of course, of course there is a lot of hype, so if you want to start a company that does good old-fashioned, let's say, schedule optimisation, you just say I am doing it with AI, even though you are using operations research or other techniques that are not AI, which is fine.
Because the term Artificial Intelligence is not exactly defined I mean if we say you know, is my calculator artificial intelligence? It certainly can multiply numbers much faster than I can multiply in my head so is it artificial intelligence? That’s not what we mean.
We mean something that is closer to being able to reason like a human: to be able to do things like read text, absorb what is in a picture, generate pictures and videos, and look beyond numbers. We tend to think of the old computer technology as just dealing with numbers. Here we are dealing with text, with pictures, with video, with other things, so that is more the current definition of AI.
Sean: And then when you apply that to supply chain, logistics and things like this are there things that we need to worry about from a trust perspective? Because of course yes we might be using, I don’t know, a spreadsheet like you say and the kind of calculator example, okay I use a spreadsheet. It helps me work out my numbers very quickly and I can decide, I don’t know, how many container ships I need to get my goods from one place to another no problem.
But when I sort of pass that in very high level terms over to an AI system, how can I trust that is going to happen or can I?
Dr Sheffi: That's really interesting because we are now in the early days of what is called generative AI. So if you deal with ChatGPT, or the likes of ChatGPT, many times they are not only wrong but the programme is what we call hallucinating, it makes up stuff that doesn't exist. You ask it to write an article about supply chain and it will come up with references and journals and authors, and you have never read this paper, this paper doesn't exist, this journal doesn't exist. It looks for words and the probability of the next word, so it can come up with stupid stuff.
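The next-word mechanism Dr Sheffi describes can be sketched with a toy bigram model. This is a deliberately minimal illustration (the tiny corpus and function names are made up for the example, not any real model): it always emits the statistically likeliest continuation, and nothing in it checks whether the result is true.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each following word appears after it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Return the highest-probability next word; there is no notion of truth here."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the paper cites the journal the paper cites the author"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # "paper", purely because it is most frequent
```

A real large language model is vastly more sophisticated, but the failure mode is the same in kind: a fluent, probable-sounding continuation can name a paper or journal that was never written.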
So first of all, how can we protect ourselves? We have to keep the knowledge. We cannot let AI have the knowledge. So it is a great problem for hiring, which we will talk about later, because we need people with five to ten years of experience. We need people who know the underlying process, so they know when a prediction is stupid, or they know when the schedule goes from Boston to New York via San Francisco. I mean, there is just no context. They have to understand context, and machines by and large don't understand context.
So we need to have people who understand the underlying process and the context, and who are able to judge, to monitor continuously. And by the way, that is one of the important jobs of the next few years: there will be a lot of monitoring of machines. Being able to intervene and say something is wrong, pull the plug, change the programme, whatever, or do it by hand, the old-fashioned way; you need to know what to do.
Quick example: we rely more and more on machines, and in 2017 the Russians launched a cyber-attack on Ukraine that affected all of Europe, including Maersk. Suddenly all the computers went down at one of the largest container shipping companies in the world.
But the good news is they had enough people who still understood how to do it by hand, and they could write manifests by hand and fax them; they knew where to fax them. Well, we have to make sure that ten years from now we still have these people, because otherwise that is the danger. The key is to keep expertise even if it is not used day to day, just keep it. Think about safety stock in supply chain terms: it's a safety stock of knowledge.
Sean: It sort of ties into the thing about using these tools as tools rather than putting everything in their hands.
Dr Sheffi: Absolutely. Look, in universities right now there is a big debate and some of my colleagues don't allow people to use ChatGPT. I and other colleagues look at it very differently. I look at it, like you say, like spreadsheets. Before spreadsheets, if you wanted to write some financial model or scheduling model or something, you had to go and programme it in COBOL or God knows what, and go back and forth with the programmer, and then it never came out the way you wanted. Now you download some data and do it in your own spreadsheet; it's a tool.
So now with ChatGPT or generative AI, what we should teach students is a) how to use the tool well, and b) how to ensure that the result makes sense, so have the context, have the ability to judge the result. And in the end you are responsible: if you are submitting something that ChatGPT wrote and it's wrong, ChatGPT doesn't get an F, you get an F. It's your responsibility and you have to make sure.
The most successful use of this is for first drafts. People sometimes say, you know, how do I even start writing? It produces a first draft, and then they keep working at it. They keep saying okay, let's make it look better, let's add an argument there, take something out, work with it; beyond editing, really re-work it.
Sean: It's that blank page, and filling it with something just to get you started, it's absolutely fantastic for that. I have had a conversation during the podcast with somebody based in Seattle who was saying that a lot of start-ups are taking ChatGPT and plugging it into something as the brain, which I see as being slightly problematic, particularly when you mention that cyber-attack on Ukraine. And recently we had British Airways' entire fleet grounded because of a computer network problem; I don't know if it turned out to be an attack or not, I don't know if anything was admitted.
But I am just thinking if you then, yeah if you have something like ChatGPT sitting above that it’s going to be hard to work out where the problem lies if nothing else isn’t it?
Dr Sheffi: Well what you need to make sure is that you have an off switch on whatever system there is. When you see it starting to act crazy and giving you wrong results whether it is coming from the ChatGPT or the app itself has something wrong with it, fine just stop using it and investigate.
But by the way, the idea of embedding complex programmes in a seemingly simple app was around well before generative AI. Look, when you go on Google Maps and try to go from A to B, there is a very sophisticated algorithm under this that converts the road network to a network of nodes and links, runs the algorithm, shows you the result and keeps it updated. It's actually what we call a digital twin; you know, people talk about digital twins.
Usually when people talk about a digital twin they talk about a truck or an asset or something; you have a digital replica of it and you keep sending data back and forth. Let's say GE is using it for aircraft engines, so for each engine separately they may have a digital twin. And every time the aircraft goes, you know, up and down and through clouds of smoke and ash, it sends data to the digital twin. The digital twin keeps analysing and says okay, you may need to replace blade number seventy-two.
But we all use a digital twin all the time when we use, you know, Google Maps. Google Maps is a digital twin, think about it: there is the infrastructure and there is the digital representation. And the infrastructure keeps sending data about congestion, about what's going on on the road, about closures, where the police car is, where the cameras are if you are in England, so you have the digital twin in your car. We don't think about it that way, but this is exactly the digital twin that everybody is talking about. So there is a lot of this embedded already.
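The engine example can be reduced to a small sketch: a twin object that ingests telemetry after each flight and flags a component once accumulated wear crosses a threshold. The class, field names, and threshold are all illustrative assumptions, not GE's actual system.

```python
class EngineTwin:
    """Toy digital twin: mirrors one physical engine from streamed telemetry."""
    WEAR_LIMIT = 100.0  # illustrative maintenance threshold

    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.blade_wear = {}  # blade number -> accumulated wear score

    def ingest(self, blade, wear_increment):
        """Each flight report updates the twin's state for one blade."""
        self.blade_wear[blade] = self.blade_wear.get(blade, 0.0) + wear_increment

    def parts_to_replace(self):
        """Analysis step: which blades have exceeded the wear limit?"""
        return sorted(b for b, w in self.blade_wear.items() if w >= self.WEAR_LIMIT)

twin = EngineTwin("engine-7")
for flight_wear in (40.0, 35.0, 30.0):  # three flights' telemetry for blade 72
    twin.ingest(72, flight_wear)
twin.ingest(13, 20.0)                    # blade 13 is still fine
print(twin.parts_to_replace())           # [72]
```

The Google Maps case is the same pattern with the roles swapped: the road network is the physical asset, traffic reports are the telemetry, and the route suggestion is the twin's analysis.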
But okay, with a map the worst that can happen is we go the wrong way, right? It's not a life and death situation. If you have, you know, robotic surgery and something goes wrong, the result can be unfortunate.
[00:10:00]
And interestingly also, let me just say, all governments around the world are talking about regulating AI. In fact the industry is already regulating itself, which is by the way a very good sign, because in the early days of the internet we all got very excited that everybody was going to talk to everybody, everybody was going to love their neighbours. And then we didn't think about identity theft and terrorists talking to each other and, you know, everybody stealing our data. We didn't think about it.
Now with generative AI, the people who develop it are very aware of the dangers. Even when ChatGPT first came out, if you tried to say give me the recipe for a Molotov cocktail, it would not give you that. It doesn't give you that. So there are already guard rails that developers are putting in, and governments are looking at how this can be controlled.
So right at the beginning there is a lot of worry about the downside of it. And you can see the downside, the deepfakes and the rest. But people are already worried about it right from the beginning, which is good.
Sean: There is a whole field, isn't there, of AI safety, and there are groups of people who are definitely concerned about these things and worry that it is kind of a Jurassic Park situation. I have mentioned the Jurassic Park quote before, you know, just because they could doesn't mean they should, etcetera, etcetera.
Just going back briefly to what you were saying about keeping people with experience in the loop and on the staff, as it were. I am going to be slightly cynical here, but having worked at the BBC, where year on year the staff numbers were pared down until, you know, you are kind of running it with one man and his dog in a small cupboard somewhere rather than a team of twenty people, how do we keep an eye on that?
I mean is that something that we are just going to have to learn by experience do you think?
Dr Sheffi: Okay, first of all, the reduction in this industry is of course not only the BBC; newspapers in the United States are going out of business. Obviously papers like the New York Times are still keeping staff, but small-town papers are closing and many, many places are without newspapers at all.
Sean: They can get the AI to write it right?
Dr Sheffi: Well that’s the problem because AI can be biased and who is watching it? AI does not have context so it can be biased, it can be racist, it can be God knows what, so we are trying.
But you are talking about a profession where, honestly, my worry about your profession is a lot bigger than that, because these are the guard rails of democracy. And they are hard pressed, and now even harder when some managers or owners of media outlets may think okay, we have got ChatGPT to do it, we will get, you know, some image to be the on-screen talent, we don't need the person. That's, I don't have an answer for it.
It is worrisome because it is a trend well before ChatGPT, to me as I say it is a threat to democracy, which is already taking place. It is already taking place. Part of it is this, part of it is chasing the ratings.
Sean: Politics to one side, there are serious problems with that, but back again to something you have already said: with the surge in using robotics there are serious consequences. With logistics there are serious consequences if something goes wrong and a huge container ship or an aircraft is relying on some of this technology and perhaps we don't have enough people. What do we do about that?
Dr Sheffi: I honestly think that the worry about losing jobs is exaggerated. Look, nobody is using warehouse robotics more than Amazon. I mean, they are very, very good, sophisticated robots. But since they really started using widespread robotics in 2017, they have hired one point two million people. And the thing that works so well is the combination, when the people and the robots work together.
And that to me is the long term. By the way, let me just mention, we can come back to the jobs, because my whole book is about the jobs and the fact that they are not going to disappear tomorrow.
Sean: I have done some work in agri-robotics. I am a videographer by trade and I have done some work with an agri-robotics group in Lincoln. And various people there are saying there won't stop being farmers, but more farm hands might become robotics technicians as opposed to doing the manual labour and things like this. So it is about moving those people around, I understand that.
Dr Sheffi: This is absolutely the case.
Sean: But my concern is that when we were talking about having the experience to fill out the manifest etcetera, that’s the issue isn’t it?
Dr Sheffi: And the issue, I talk about it in my book, is that the main problem is the starting jobs. Because what the company needs is people with five to ten years of experience who know how to do the process and know how to intervene. How do you get people with five to ten years of experience? If you cannot hire them when they finish school, how are you going to build this pipeline?
Sean: This is the Silicon Valley joke though, isn't it? We want twenty-two year olds with thirty years of experience.
Dr Sheffi: Yeah exactly, exactly. So the question is how, and in my book I talk about some things. Think about a system like the German dual education system, where you go to a university and you work at a company during the same four years. Half the time you work on a job and half the time you actually learn the theory, and that is a system that gets people with some experience right into the workforce.
By the way, fifty-one or fifty-two per cent of German high school students go through this system, and seventy per cent of them get hired by the company they do their internship with, so it works.
So yes first of all do a lot more joint internship and stuff like this. Second revamp online education because people have to be able to educate themselves to upskill, to reskill.
And another use of AI is augmented reality. You have to run your warehouse and you don't know where stuff is, because it's your second day and you have just learnt how not to step on other people or how to avoid the robots. Now you have got this tool and it tells you: pick up this package, this is what is inside this package, this is how it has to be handled, and it goes to spot number three, you know. So you have immediate instructions that are fuelled by AI. And by the way, when you do it you actually gain experience of how this works.
But I also put in the book that governments need to realise there is a period where people need help. And it's funny, because I am as capitalistic as you can imagine, I have five companies, but this is absolutely a proper role of government. Because we will have people who are not in big companies. Big companies are investing, you know, UPS, Amazon, and they are actually upskilling their people.
But there are so many gig workers, independent workers; they have to realise that they are responsible for themselves. And even people who are journalists, I cannot tell you how many journalists, once they stopped taping at the end of the interview, said okay, so what do I do now? How do I prepare myself?
And by the way, my answer is always the same. You are in the media, you understand the media, you have been in the media for years: broaden your horizons. You do on-screen work; learn how to do editing, learn how to move to adjacent fields, and by the way, learn how to do it with AI. A lot of it is just breadth of skill, because we don't know exactly what will happen in the future, but you are creating safety stock, so to speak. You are creating other areas where you can contribute and be valued.
Sean: Certainly my industry is definitely an area where people already have to be able to put different hats on for different occasions. So I think that's definitely an area where people can take on different ideas, different roles, and use those tools as well.
In your book I know you talk about six areas where humans surpass computers. With supply chains, can you give us a couple of ideas of what those might be?
Dr Sheffi: The most important one is context, having the context. So we now have the AI telling us that this is the best investment decision, but are we going into recession? What's going on? What are other companies doing? What's the context that helps me make a decision?
The AI may run the numbers, absolutely, and it may do the cost benefit analysis better than I can; it may even take some things into account. But what's going to happen with Taiwan? Is this really something that I should worry about? Who is going to win in, you know, Ukraine? There are lots of issues that give you context.
Then there are issues of bias; it can be biased. There is no reason for it not to be biased, because it is trained on data gathered from a lot of places. So whether it's, you know, anti-women or anti-LGBTQ, it gives you stuff that is offensive. And sometimes, by the way, it is just stupid, so I will give you a quick example.
[00:20:09]
So, Microsoft has their own AI that checks for biases, and I said okay, I will check my book for biases. I had a section of the book that talks about the Luddites, you know, England two hundred years ago or so, and I am talking about the master weavers. The master weavers were losing their jobs. The word master was flagged every time, but the tool was not understanding the context. I didn't say master and slave; I just used the word master. But they were called the master weavers, it is what they were, so again, not understanding context.
But then of course there is the issue of a moral code, and empathy. In some sense, as people upskill on the technology, many of the soft skills will become more and more in demand. And it's not only, you know, a nurse in the hospital or a greeter in the supermarket.
Nobody that I talk to believes that you can strike a supply deal with a supplier in China using anything but flying there, negotiating hard for two days and then for two days having dinner, talking about your kids, becoming friends and then, you know, meeting the wives or the husbands. That is how business is done. Supply chains are social networks, and it's hard to imagine that this will be totally automated in some sense. And we are so far from it, also in terms of the technology, because it has to be foolproof.
Another example of why it is not going to happen right away: there is an issue of acceptance. Think today about the Boeing 777 or 787, or the Airbus A350; they can go by themselves, with nobody involved, from gate to gate. But would you board an aircraft, just a metal tube, with nobody in the front? Not too many people would do it; it is a matter of acceptance. So would you accept big autonomous trucks running on the road at a hundred kilometres an hour right behind you when there is no driver there? It will take a while.
Sean: It will take a while, yeah, to build that trust. But the joke of course with aircraft is that they have been taking off and landing and all sorts of things by themselves for years, haven't they?
Dr Sheffi: Of course, of course, of course but people don’t realise it.
Sean: And you want to know that there is someone there just in case right?
Dr Sheffi: And by the way, interestingly, one of the toughest future jobs is monitoring automatic systems. Because you sit there and you have got nothing to do, and you can easily lose attention. In aircraft, what they did is make the pilots do most of the communications, so at least they have to do something. Because the planes already fly by themselves, the pilots don't have to do anything; they put in the coordinates and it just goes.
Sean: Yes.
Dr Sheffi: In fact the coordinates-
Sean: And you have got to check the transponders, there are all sorts of things going on, aren't there? I wonder if in the future things will be gamified slightly, just to keep people's interest? You know, they have this gamification even on satnav systems; on Waze you can get points for reporting problems on the road and things. So gamifying may be a way of keeping people's interest on the job when the job isn't being done by them, I don't know.
Dr Sheffi: And systems, there are systems, I don't know if you ever drove a Cadillac CT6?
Sean: We don’t have many of those in the UK sadly.
Dr Sheffi: Okay. The CT6 is a nice Cadillac, and you can put it in automatic mode, so it will drive like a Tesla, it will drive on its own. But what it does is the following, and I have driven it many times: it has a camera that looks into your eyes, and the minute it sees that you take your eyes off the road, about two seconds later the steering will start shaking, shaking, shaking. If you don't take hold of it, the car goes to the side of the road and stops.
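The escalation Dr Sheffi describes, roughly two seconds of eyes off the road triggers a warning, and a continued lack of response ends in a pull-over, can be sketched as a small piece of monitoring logic. The timings, sample rate, and action names here are illustrative assumptions, not the manufacturer's actual algorithm.

```python
def monitor_driver(samples, sample_period=0.5, warn_after=2.0, pullover_after=4.0):
    """Return the most severe action reached over a sequence of gaze samples.

    samples: booleans, one per sample_period seconds; True means eyes on road.
    """
    eyes_off, action = 0.0, "drive_normally"
    for on_road in samples:
        # Looking back at the road resets the timer; looking away accumulates it.
        eyes_off = 0.0 if on_road else eyes_off + sample_period
        if eyes_off >= pullover_after:
            return "pull_over_and_stop"      # driver never took the wheel back
        if eyes_off >= warn_after:
            action = "shake_steering_wheel"  # warning stage, keep monitoring
    return action

# Two seconds of inattention (four half-second samples) reaches the warning stage:
print(monitor_driver([True, True, False, False, False, False]))
```

The interesting design point is that the system treats the human as a component to be monitored, exactly the inversion of the usual "human monitors the machine" arrangement discussed above.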
So again, here is an example where you look at automation but you say, I want people to monitor it all the time, I don't want people to fall asleep. So people are building systems, they are thinking about this tough job of just monitoring. Maybe gamification is part of it, maybe another type of stimulus is part of it.
Sean: It's interesting you mention the shuddering steering wheel, because often it's the changeover between machine and human that's the issue, isn't it? That transition can be the problem sometimes, can't it?
Dr Sheffi: Yes, and there is a lot of work being done in this area about how to alert, how to notify. Is it enough to send a text to your phone? Well, not while you are driving, but some other system, one that monitors your flat, your apartment, will send you an alert if something goes wrong; it exists in the US.
If you have a Lexus, and I know we are looking at a lot of cars, you have an app and the app tells you everything about the car. You know, the back left door is open, the window is open, the car is standing and not moving, of course the location, everything about the car, the engine. It gives you all the information about the car.
You can of course start it remotely, you can lock it remotely, you can put the windows up and down remotely, you can control the car remotely. So that is only a small step away from driving it remotely, right?
Sean: It's not far off, is it. I have had a couple of Volkswagens with various levels of, let's say, driver assistance, not self-driving, from adaptive cruise control and lane assist through to my current car. I have now got an electric car, and I am sure it is technically capable of driving itself, but I would worry if it did.
Because the assist systems sometimes get the speed limit wrong. You might be driving along at sixty or seventy miles an hour on a UK dual carriageway or motorway, and then you pass under a flyover, a bridge over the road, that has a different speed limit on it. And the car sometimes, not always, will detect that speed limit and begin to slow down very quickly, because it suddenly thinks it has missed something, and that is a concern.
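One common mitigation for the flyover problem Sean describes is to require a detected limit to persist across several consecutive readings before the car acts on it. This debounce sketch is hypothetical (the thresholds and function name are my own, not any manufacturer's system), but it shows why a one-frame glimpse of a bridge's sign would be ignored.

```python
def debounced_limit(detections, current_limit, min_consecutive=3):
    """Only adopt a new speed limit after it is seen several readings in a row.

    detections: per-frame limit readings in mph, or None when no sign is visible.
    """
    candidate, streak = None, 0
    limit = current_limit
    for reading in detections:
        if reading is not None and reading == candidate:
            streak += 1               # same sign seen again, extend the streak
        else:
            candidate, streak = reading, 1  # new (or no) sign, restart the streak
        if candidate is not None and streak >= min_consecutive and candidate != limit:
            limit = candidate         # the sign persisted, accept the new limit
    return limit

# A bridge's 40 mph sign flashes past for a single frame on a 70 mph road:
print(debounced_limit([70, 70, 40, 70, 70, 70], current_limit=70))  # stays 70
```

The trade-off is latency: a genuine limit change is also adopted a few frames late, which is usually acceptable compared with braking hard for a sign on a different road.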
Dr Sheffi: Yes I know. We have a car and we have disconnected all this stuff.
Sean: Well exactly this is the thing, the first time I used it for a little while. And I suppose this is kind of the problem isn’t it, you don’t know when those things may or may not happen. And if you, obviously the overarching idea of this podcast is trust, you begin to trust a system and then it does something like that and you go no I don’t trust that anymore.
Dr Sheffi: No, I just didn't like the feeling, because the car that I have, it is a Lexus, the minute that you go a little bit over a white line in the road it pulls you back, and it's kind of like, really? I will do fine.
Sean: I sometimes get a message up saying drive in the centre of the lane and I am thinking no there is a reason I have done this, I can see a parked car I am moving to the side. No drive in the centre of the lane. No, context you are absolutely right.
Dr Sheffi: Context exactly.
Sean: I am really enjoying this conversation, but just to briefly return to the supply chains idea again. I know that one thing you have talked about in the book, as I understand it, is how during the pandemic supply chains actually worked quite well.
I was just thinking, what sort of examples are there? Because I know there are sometimes problems with, say, just-in-time supply chains when people for instance buy a lot of toilet paper, like has happened here in the UK, and suddenly the supermarkets have a problem with that.
Dr Sheffi: Okay, so first of all, by and large there were supply problems but not supply chain problems; they were pandemic problems, that was the issue. There was a pandemic, people were afraid, and sometimes the fear was actually generated by the media, the media was hyping up a lot of stuff. I can give you a lot of examples here, because I have talked to journalists and given them a piece of my mind after some stuff.
But think about it: in the US, in the middle of March 2020 I think, from one day to the next all restaurants were closed. All universities were closed. All industrial parks were closed. That's half the food; half the food goes into these places. And by the way, it doesn't go in small consumer packages, it goes in fifty kilo sacks on pallets, and there is no machinery to repackage it. Yet nobody went hungry.
I mean, stuff happens sometimes. I should tell you one story: there was an article about how we are running out of meat, you know, a meat scare. Well, for a week some factory was closed; there was no meat shortage, but the cut of meat that you wanted, for a week or two they didn't have it. But I called them and said, do you even realise that the United States is the fourth largest exporter of meat in the world? I mean, we export more meat than we know what to do with.
[00:30:16]
And what are you talking about, a shortage of meat? Exactly, no context. Even the New York Times, before the pandemic, did not have a logistics beat; nobody covered logistics, nobody covered freight transportation or supply chain. Only during the pandemic did they get somebody to start doing this. On the good side, there was one journal that actually had very good coverage, with about six journalists covering just supply chain.
Sean: Yeah, I think the media did overblow all that. I just think it is worth thinking about how these sudden changes can affect long-established systems, I suppose.
Dr Sheffi: And we see some changes, especially talk, but some movement already, of trying to regionalise supply chains to have less dependence, especially for critical parts like medical and national defence parts. To try to make sure, for example, that we don't depend on China or Russia.
And a lot of companies are moving, not a lot, but many companies are moving stuff out of China, but for the most part they are moving it to Vietnam and other parts of South East Asia. Some are moving to Mexico; for the United States, Mexico is next door. Some are trying to move into the United States, but there is actually a shortage of workers and a shortage of engineers. With all the talk about AI taking jobs, the United States is at, is it, three point four per cent unemployment? The highest number of employed people ever. So we try to square these things; maybe it's all temporary.
The most important statistic is not unemployment, because unemployment can be biased; it is the percentage of the working-age population that is employed. And this is where we are behind: during the pandemic this is the one that went quite low, and that was worrisome.
But look, in the United States right now the statistics show about two point five job openings for every applicant. It is booming. Of course there is inflationary pressure and, my God, Trump is coming back, so there are a lot of issues to worry about.
Sean: I will smartly sidestep the politics. Just on what you are saying about changing those kinds of supply chains: there is an environmental and sustainability bonus to bringing the source closer to the destination, isn't there?
Dr Sheffi: Well, one actually has to look at it very carefully, because I did several studies. I wrote a book, Balancing Green, about supply chains and sustainability. And people are talking about, oh, buy only locally grown flowers. Well, to grow flowers in Boston over the winter you need greenhouses, and you need to heat them for months, as opposed to growing them in Costa Rica and then flying them in. On the whole it is not even close: growing in Costa Rica is so much more sustainable, because they grow in the sun.
Sean: This is a classic example of the devil being in the detail isn’t it?
Dr Sheffi: Exactly, the devil being in the detail. Don't go by the slogans and quotes; you have to look at the data, at what is actually involved.
Sean: Yeah, and things are being greenwashed as well, aren't they, you know, with data being used incorrectly.
I did have an idea a few years ago to start a YouTube channel called What's the Difference, where you take two products and you work out what the difference is environmentally. I will use beer as an example because it's my favourite. If I buy a tin of beer or a bottle of beer, from a recycling point of view, from a food miles point of view, what is the difference? Which one of these is better? I mean, one may taste better, don't get me wrong.
Dr Sheffi: And at the end of the day you would say A is better for the environment, but B tastes better, so hey.
For my book I did several experiments. In one of them we looked at supermarkets, talking about sustainability. Several supermarkets in the Boston area have green offerings with sustainable products, like paper products made from recycled paper, and laundry detergent that is not harmful to the environment, and stuff like that. They cost five per cent more, a little bit more.
So first of all we did a survey, and seventy to eighty per cent of the people said they would gladly pay five per cent more for a sustainable product. In reality, the people who actually buy it are about eight or nine per cent. So as I say, when everything is said and done, a lot more is said than done.
Clearly, people who are more highly educated and higher income tend to buy more sustainable products, but eight per cent? This is in one of the most progressive states in the United States; it's not Texas, where it would probably be zero, or, I don't know, Arizona or somewhere. This is Massachusetts, and still, there is a lot more said than done.
Sean: I like that phrase. Just going back to the idea of trust in supply chains?
Dr Sheffi: The question is, you are talking about trust in several dimensions. We are talking about trust between people, and my definition is the willingness to accept vulnerability when you have a positive expectation that the other side will behave the way you expect them to behave. That is my definition of trust, and it applies to anything. If you use a piece of software, you hope that it will do what it is supposed to do and get you the right result.
But at the end of the day, most of the issue of trust in supply chains is still between people, you know. Supply chains are still social networks; it is between people, and people have to trust each other.
And in that sense I think that the main focus between people is not whether they use AI or not; I can use whatever I want. The question is, did I do whatever I promised I would do? Did I send you the stuff? Did you order two goats in the container and I sent you one dead chicken and one sick cow? That's a problem with trust, and blockchain is not going to help you.
Sean: But you would like to think so when it's people. And it's funny you mention blockchain, because people have mentioned blockchain and supply chains on the podcast before. Oh, you can follow a link on the side of the tin and it shows you where the meat came from and so on, but it's just a database, right? But anyway, that's a different conversation.
But I think usually when you build trust between two humans you feel like you get a sense of somebody, even over email or text or whatever. Whereas when it's a machine, you don't know what's going to happen the next time you use it. Or am I being too cynical there?
Dr Sheffi: I think, let's say you use it hundreds of times and it gives you the right result. Look, you are trusting a spreadsheet. You are trusting a spreadsheet because you have used it so many times, even though, by the way, if you look under the covers at the calculation, it's not exactly accurate either, because it cannot represent the numbers perfectly.
Sean: It’s an approximation isn’t it, yeah?
Dr Sheffi: But it is good enough.
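[Editor's note: Dr Sheffi's point that a spreadsheet "cannot represent the numbers perfectly" refers to binary floating-point arithmetic (IEEE 754), which spreadsheets and most software use. A minimal Python sketch illustrates both the imprecision and why it is, as he says, good enough:]

```python
# Most decimal fractions have no exact binary floating-point
# representation, so small errors creep into simple arithmetic.
total = 0.1 + 0.2
print(total)          # 0.30000000000000004, not exactly 0.3
print(total == 0.3)   # False

# But the error is tiny; for everyday use the approximation
# is "good enough" when compared within a small tolerance.
print(abs(total - 0.3) < 1e-9)  # True
```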
Sean: I suspect the issue here, going back to the buzz phrase of AI, is that sometimes with neural networks, deep learning, all these different new technologies, often even the people who have trained them, particularly with things like generative AI, don't know exactly what is going on under the hood. They don't know why.
So in the spreadsheet example, okay, I am not going to open up the arithmetic processing unit in my CPU and work out exactly what it does, but you know that somewhere, somebody has worked out an algorithm that can do it as closely as possible.
Dr Sheffi: Yes.
Sean: And that's the problem, isn't it? They are different.
Dr Sheffi: The fear, the Jurassic Park fear, is what are called emergent properties: when the software has properties that nobody forecasted, nobody expected, nobody knows, that the software itself created. And it exhibits this already. For example, ChatGPT can translate between many languages by itself; it can start reading other languages and doing translations, and suddenly ChatGPT speaks, I don't know what, Mongolian.
So yes, it has an element of emergent properties, things that it can do by itself, for good or bad. But the problem is, to do something really bad you need to have intent. I mean, it can do something bad by mistake, clearly, but at least to date, that's it. Although, by the way, there are deepfakes that people are already starting to make, and those are intentional. But it is not the software itself; it is people using it with bad intentions.
Okay, this can happen with any technology. Take the internet. I mean, the internet is the greatest thing since sliced bread, and yet it is misused; there are so many problems with the internet, so it's hard to say.
Sean: Yossi it has been absolutely fantastic talking to you today and I appreciate you sparing us the time to come on to Living with AI.
If you want to get in touch with us here at the Living with AI podcast, you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living with AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited. Our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.