AI: the consultant’s friend or foe?
A panel debate
First broadcast on Apr 29, 2024
Opinion on the seemingly unstoppable rise of AI is varied in every sector. But how worried should consultants be about AI, and can it help? Joining Phil for this panel debate are:
Dan Sodergren
Dan is a keynote speaker, expert media guest and author on the future of work, technology and AI.
Liz Timoney
Liz has trained over a thousand corporate leaders; she’s also a coach, speaker and the author of Unstuck: Change Your Life Story.
Dr Simon Moore
CEO of IB, Simon is a doctor of Psychophysiological Psychology and a Chartered Psychologist with the BPS.
Transcript
[00:00:00] Phil Lewis: Welcome to The Consultancy Business Podcast with me, Phil Lewis. We are here to help independent consultants build the consultancies nobody else can, and champion ethics and excellence in independent consultancy. This month, I decided to convene a panel. Like you, I’ve been reading a lot about AI in the press, but I’ve also been thinking a bit about AI in the context of consultancy. How worried should we actually be about AI nibbling away at our market? So I decided to convene a few smart people to have that conversation. Joining us this month, then, are: Dan Sodergren, a keynote speaker, expert media guest and author on the future of work, technology, and AI; Liz Timoney, trainer of over a thousand corporate leaders, coach, speaker, and author of Unstuck: Change Your Life Story; and the previous guest returning again to the podcast, Dr. Simon Moore, Chartered Psychologist and CEO of the brilliant behavioural strategy team, IB. Well look, let’s get straight into it. I started by asking Dan a question, which was, in essence, how worried should we be?
[00:01:14] Dan Sodergren: Well, I think worried is most probably the wrong word. How excited you could be, perhaps, or how intrigued you could be, by the effectiveness of it all. You'd have to be the same as… maybe people were worried when they first saw a tractor or a Spinning Jenny or something. Of course there was a whole movement of people that were scared by technology and rallied against it. I think as consultants, we… and I am one myself, so I'm certainly not biting the hand that feeds me… I don't think that we need to be concerned. I think artificial intelligence is going to be enhancing us rather than anything else. It's certainly not going to be replacing us anytime soon. I talk about this in my book, The Fifth Industrial Revolution, and this is where we're at, which is why I always use the kind of analogy of tractors and other things, you know. Lots of us used to work in the fields, and now we don't, right? Consultants' jobs will simply move somewhere else. I don't think we're going to be replaced by artificial intelligence. Not yet. Certainly not in the next couple of years, anyway.
[00:02:05] Phil Lewis: That sounds sensible to me. But then if I think about a tractor, it seems to me that there's a difference between a tractor and AI, in the sense that AI does a much better job of impersonating certain human capabilities than the tractor would. And it seems like there are questions that come out of that about the relationship between what AI can bring by way of replacing human capabilities and how we actually all work together. Liz, what are your thoughts on AI and the relationship between team-building, team performance, and the dynamics of trust and confidence that underpin them?
[00:02:44] Liz Timoney: Yeah, I think that there's a bit of a problem coming with the trust side of things. As AI becomes more emotionally intelligent, and more difficult for us to discern, are we talking to AI or are we talking to a member of the team? Will we get to the point where some of the members of the team are AI bots, for example? I think we'll see a whole world of work where the lower-paid work gets given to the bots and the elite will always pay for humans. I agree with Dan, there'll always be room for humans. The problem is, if the elite are paying for the humans, what happens to everybody else with that skill set? So I'd say if you're an elite consultant, maybe you haven't got too much to worry about; but if you do very run-of-the-mill things, then I think you've got a lot more to worry about. Potentially, if you have these multi-discipline teams of AI combined with human consultants, how does that work? Because we trust each other based on character and competence, broadly speaking. The AI bots will be very competent. You know, I don't mind having an AI doctor; I think it will probably know more than the average GP. I'm just going to annoy all the GPs there! But it won't have the human character qualities and the deep experience of just knowing things, like the consultant has. We go in and we just pick up on stuff because of our experience. The AI consultants won't be able to do that. So my worry is more what happens to the dynamics in a team where you can't trust the whole team. You know, can we take the AI bots down the pub and have a drink with them afterwards? I think not.
[00:04:15] Phil Lewis: Simon, do you have anything to say to that from a psychological point of view? I mean, it feels to me like those questions of trust, and the levels of trust that we place in machines relative to humans; they feel like the kind of things that psychologists have looked at over years and years. I don’t know if AI changes the calculus on this.
[00:04:33] Dr Simon Moore – IB: Yeah, I'm going to take the conversation down a slightly different route here. So I'm going to start mentioning films. If you think about some of the most popular films of the last 50, 60 years, The Terminator, The Matrix, 2001: A Space Odyssey — all have a common theme where the AI has gone rogue and the humans are trying to bring it back into some form of control. So, from a human point of view, following on from Liz's issues with trust, there is an inherent lack of trust, I think, for a lot of people, probably more so for people who are less educated in that area, which we have to overcome. And something else that Liz said… I don't know if Dan's seen this, but there's a new study out from the Department of Computing in Copenhagen, who have literally just concluded, after a very lengthy study with AI, that it's not very stable. The algorithms behind all AI systems are slightly flawed, in the sense that in real life, when we make decisions and think about things, there are lots of extraneous variables that we, as humans, cut out. We're actually very good at not getting sidetracked by other things that are going on. What they found is that AI includes everything it encounters in that situation, and is not as good as a human at switching off some of that extraneous noise. And so it comes to slightly different conclusions, and the conclusions wobble around a lot. Whereas a human, when you give them a set problem with a defined outcome, is better at being consistent. I thought that was quite interesting.
[00:06:18] Dan Sodergren: I think it's absolutely fascinating. There are a couple of things worth unpacking there, aren't there, when you talk about teams and AI. Number one, people say the word AI, and I completely respect the fact they say that; however, saying AI is like saying people: 'all people do this'. No, they don't. Certain AIs are different to others. In fact, there are at least five different LLMs that are completely different personality-wise and ability-wise. And LLMs, large language models, and AI systems, even if you just take the major ones, get better and better. Just like a human being learns more and more, so these things get better and better. Simon's absolutely right, though. There is an underpinning thing to all this stuff, which is around data, which we're stumbling upon now: because it is such a mass of stuff, it can come up with different things. So it's inconsistent. But that inconsistency can actually be weighed against how good you are as a prompt engineer. There are loads of other things you can do. It's a bit like saying, you know, my interns are all the same, they all do this. Well, no, certain interns are better at doing certain things, and how well have you managed that intern, and have you been clear with your instructions? These are all things that are massively important to artificial intelligence. I know that Simon most probably knows that already, but this is the reality: we're in a world of different strengths and abilities. You can't just say AI. We're beyond that world now, and it's how well you've trained it and how well you've done with it, especially with your own data. The other point that I'd like to make, as an aside, and not in a bad way, is that we are very West-centric whenever we talk about this stuff, right? People use films, so AI means Terminator, right? Go to Japan. Go to China. They don't have that at all. In fact, Japan had this stuff way before the term existed, like about 20 years before. That's how much of a geek I am, by the way. You should never have mentioned film texts, Simon. Sorry, my fault. But if you Google some of the cultural reference points that have been around for 40 years in Japan, it's the exact opposite. AI is a helpful thing that makes the world a better place and time travels and does all sorts of cool stuff. They don't create robots that come back to kill John Connor. They actually create little cat creatures that come back and help a seven-year-old boy. That's literally the difference. So I think we can sometimes look at AI and say, yeah, because we come up with James Cameron and laser beams and death, therefore so will AI. But I think that's a little bit tenuous, isn't it? I mean, we've got to be a bit more global in what we look at. I think we sometimes create our own futures by being fearful, but that's just… I'm not a psychologist.
[00:08:41] Liz Timoney: And I think that's an interesting point about the culture that we're in. Because the West is very individualised in terms of what we appreciate and how we reward and how we appraise; you know, that's what we're most scared of: somebody coming and stealing our individuality and, you know, hurting us individually. Whereas in the East, it's much more team-based, much more community-based, we're building on each other, and so the threats would be against the whole community. And equally, that's where the control appears to be, to me, as well. So in the West, you know, we're very driven by social proof and this kind of competitive comparison that is toxic to everybody. Whereas in the East, it's much more at that kind of community, family level. But then you can't get outside of that, and that's where it becomes constraining. I don't know of any perfect culture. I wish there were one.
[00:09:34] Dr Simon Moore – IB: Following on from that, Liz, and from Dan's points: if you think about other criticisms in terms of why we might not trust this kind of area… a lot of the time we say, oh, well, you know, these systems are going to have lots of cognitive bias in them because they are programmed by humans. So obviously, naturally, the cognitive bias that the human programmer puts into the algorithms is inherent in the systems. The counter to that, and to follow on Dan's point, is that humans are full of cognitive bias, and we trust them quite well, so why would we not trust a system that's equally inherent with bias? It doesn't make much sense in that sense. And Liz, you made a great point there, and following on from Dan's, it's that difference, isn't it, between East and West, and the concept of that ability to own our own sense of self. I think these systems do provide a unique challenge to Western societies: the sense that, oh, suddenly we're not the orchestrator, or the conductor of the orchestra; we're now part of a section. That's not necessarily true. And I think, in any area where we are worried about something, we kind of latch onto facts and figures that might support and prop up our concerns.
[00:10:57] Phil Lewis: I think what this conversation starts to go to, though, is: societally and culturally, what do we value? Again, talking back to industrial revolutions, there is a truth of all labour-saving devices in history, which is that ultimately, in the round, they don't tend to save a great amount of labour, right? And my sense is that part of the worries that exist around AI is actually not really to do with AI; it's to do with where the control of our technology actually sits, and who has the control. Because it seems to me, anyway, that if I was to pick the systems of governance that controlled AI, I wouldn't necessarily go, I think I would like a whole load of Californian hypercapitalists to have a huge amount of control over our destiny, in the way they have over the last 20 years. Now, that I think goes also to the theme of this podcast in a way, because as consultants, we also have choices about what future we actually support. Now there are many things I think consultants can do with AI. For example, we can help clients make sense of it and what it might mean for their businesses. We can help them build it, actually, in some cases; build those capabilities. We can help them understand how to manage it in terms of their systems and processes. And Liz, to your point earlier on, we can also help teams of people think through how they want to work with it. And I'm sure there are loads of other examples besides those. But it seems to me that if we're going to be doing that, the question really is: in service of which values are we going to do it? Because if it is through the lens of what you were talking about, Dan, which is that this could be a really exciting future because it frees us up to do lots more creative and imaginative work and have much better balance in our lives, that's great. But I personally see absolutely no evidence of that whatsoever if the systems of governance and control around this are actually ultimately in service of the existing capitalist paradigm. Because in the end, what will actually happen is that the people who run those businesses, and the consultants who serve those businesses, will ultimately use it to strip out labour, save costs, return more money to shareholders, and therefore you'll see rising inequality and all the other stuff which continues to dog our society. So I think from a consulting point of view, the question that all this gives rise to is: what kind of future do we want to support?
[00:13:30] Dan Sodergren: I'm going to come back to the old tractor metaphor. And apologies, I know it's a bit punky, but it works in some respects. We also know the failings of the tractor. Obviously in real life, there wasn't just a tractor that appeared, right? We know this, and we know as well that somebody owned the tractor, and that someone owned the land. The landed gentry tended to own the land, and the tractor was most probably paid for by industrialists who were quite rich. They were the 1% of the day, the Silicon Valley of the day, or whatever you'd call them, right? These people were entrepreneurial and they created tractors, et cetera. I'm simplifying, but I think you see where I'm going with this, which is that normally the power rests with those who own the machinery, which is what your fear is around the governance of it, right? But the exciting thing about AI right now, especially in the world of consultancy, is that not only can you go in and teach companies how to use this better and become more productive, which is what I also do as a job, training people how to use AI. More excitingly, the individual, me and you, the average individual who could not afford a tractor, can afford to use AI. We're talking about $20 a month. This is the same as you not having a sandwich, or making your own sandwiches and not going to a sandwich shop every day. Yeah, it's less than that. It's a tiny amount of money. It used to be that you couldn't afford a tractor. But now everybody can afford the means of production. I come at this from a very, very non-capitalistic point of view, because actually the exciting thing is the productivity: up to 40% more productive on average, and I think something like 14 to 400% more productive as an individual. Now it depends where you are, of course. We all talk about, you know, knowledge sectors, and most of us listening to this podcast, perhaps, work in the knowledge sectors, right? So we're in a very lucky position that we can utilise a piece of technology that now exists, which is pennies in the pound, which gives us an ability to do something that we could never do before. And that's what I get excited about. It's the democratisation of opportunity; that's why I get so excited about AI. If you've got a laptop and the internet and £20 a month, you can change your life using a large language model. Yes, controlled by a large American company. Maybe it doesn't have to be. You could download your own LLM and put it inside your machine, and you can do that now, and there are open-source variations. [A minimal sketch of what that can look like follows at the end of this answer.] There's more democracy and there's more opportunity in this revolution than in any revolution that's ever happened before. And that's why it's so exciting. Not just in the realms of consultancy, but actually in the realms of pretty much anything. And coming back to Liz's point about doctors and the emotional intelligence around that as well: the facts are now out that human beings actually tend to prefer the AI versions of doctors to doctors themselves, because doctors themselves are too stressed and don't have a long time. Whereas an AI model has, you know, four or five hours to talk to you, because it doesn't get paid per hour. So we're even now into this world where the bit that we thought we could safeguard, the emotional intelligence side, AI is going into it, which doesn't surprise us.
Because remember, five years ago we were all saying that AI couldn't do art. And now, you know, more images have been created by AI than photos were taken in the last 150 years; you know, it's like 2.4 billion. So, you know, each year, or each month now, AI does something new. And that's why I think it's really exciting for consultants: because that emotional intelligence part, that's just a matter of time. It might be scary because it's just a matter of time, but it's just a matter of time.
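For anyone who wants to see what Dan's point about downloading and running your own LLM looks like in practice, here is a minimal sketch. It assumes a model is already running locally behind Ollama's HTTP API on its default port; the model name llama3 is just an illustrative choice. The point is simply that the prompt and the response never leave your own machine.

```python
import json
import urllib.request

# Minimal sketch: query a locally hosted LLM through Ollama's HTTP API.
# Assumes `ollama serve` is running on its default port (11434) and that
# a model (here "llama3", an illustrative choice) has been pulled locally.
# Nothing in the prompt or the response leaves your own machine.
def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object, not a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("In two sentences, why might a consultancy keep "
                        "client data on a self-hosted model?"))
```

The same pattern works with any of the open-source variations Dan mentions; what changes is the model you pull and the machine it runs on, not who holds the data.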
[00:16:41] Phil Lewis: Dan, I just want to come back to you on that. Listening to you reminds me of hearing people talking about social media 20 years ago. It's going to democratise media, it's going to make the world more connected; it's this kind of Panglossian future that the likes of, you know, Facebook promised back in the day, or probably before that, even Friends Reunited or whatever. And here we are now, 20 years later, going: oh dear, we seem to have unleashed a horrendous tornado of hate that's actually causing all sorts of issues in society and actually managing to support the ripping apart of our political system.
[00:17:28] Dan Sodergren: But that isn't to do with social media. That is to do with the companies that owned it and the inability of legal departments and of governments to regulate those industries better. That's the only reason why; it's not the technology. It's like saying, well, electricity, you know, it can kill people. Yes, it does. But the benefits it brings are much greater than the negatives, right? Now this is the same for anything. Same for the bicycle. Same for social media. Yeah, I was saying the same about computers years ago. Absolutely with you, except now with AI we must be a lot more careful, because we should have learned from the mistakes of social media. We now need to have governments regulating these companies more. So I'm not talking about laissez-faire AI shenanigans forever; far from it, I think that's very dangerous. What I'm actually saying is the fifth industrial revolution potentially can be for us all. I know we could all own a radio and I know we all don't; we could all have YouTube channels, and I know we all don't. But the ability to become more productive at work is not the same as owning a Twitter profile or a YouTube channel; it's not the same thing. This is a more powerful computer you can use for work. Oh, and by the way, this is really important: I'm not here to morally convince anybody that I'm right. I'm just saying there's an opportunity for us all to work less and to make more money. I'm not saying it's morally…
[00:18:39] Phil Lewis: I would totally listen to a podcast called Dan’s Laissez Faire AI Shenanigans, by the way.
[00:18:47] Liz Timoney: I think, you know, as you were saying, you could have heard the same things said about social media. Look back to when the printing presses were introduced and what they said about the written word: how people were going to become lazy and useless because they would just sit around reading rubbish all day and filling their heads with stuff, rather than doing the important work that needed to be done. And there's an argument that says, actually, that's just what's happening, and it's just getting faster and faster and faster, and people need to be able to adapt. And I think that's the challenge we've got here. So I sound like I'm anti-AI and like I think it will cause all this distrust, which I do. However, I think it's a good thing. But individuals need to be able to adapt and expand, and change their concept of who they are, because many people are identified with their job, with the thing they do, or worse, with people's approval of the thing they do. And that's just not tenable, and it will become increasingly untenable as we go down this sort of multiple-AI-personality universe, where people are not only competing with peers, which isn't wise, but are now starting to compete with their AI peers. And if COVID has taught us anything, it's that we don't cope well when we feel under threat. People's behaviours are… not generally very smart.
[00:20:05] Phil Lewis: Well, let’s talk about that. Simon, do you have a thought as a psychologist around people feeling threatened by AI, the potential psychological impacts of that on workplaces? I mean, again, it seems to me that there are any number of examples around digital transformation more generally, which have caused all sorts of ructions in workplaces through a sort of collective lack of ability to handle that transformation process very, very well. And I wonder the extent to which AI might represent a loss of control for people, or it might present other kinds of survival threats in organisations or not, depending on how it’s positioned.
[00:20:43] Dr Simon Moore – IB: Funnily enough, I've been working in fairly big groups the last few weeks; there are a lot of people in the office. And some of the conversations I've heard recently are really quite interesting. I've heard three times in the last week alone: 'Well, of course we got to that point. It wasn't us, was it? It was the system we used.' And you could almost see shoulders sag. And they say: 'We're not as intelligent as we would like to have been there. It wasn't us. It was the system.' So I agree, I think there are a lot of opportunities here, but I think some of the downsides we need to think about are: what is the impact on our sense of value, and on our own sense of self-worth and esteem, when we start having access to these systems that can do all sorts of, you know, really innovative, crazy, creative things that cut down time, but then we can't own those things? Because, you know, the perception is… it might not be the right perception, but this is the psychology of it. The perception is: yes, but you didn't actually do that, did you? It wasn't off your own bat that you did that. You had a system that supported you to do it. So I was at a conference last week. This is interesting, and I didn't quite understand this. I was at an insurance conference last week, and they said, oh, the beauty of these systems is that we can get rid of all the menial, mundane work and we can cut out all the juniors. To which I put my hand up tentatively and said: okay, so if these systems are going to get rid of all your juniors, how are you ever going to get seniors?
[00:22:23] Liz Timoney: And therefore consultants will always be needed.
[00:22:26] Phil Lewis: Well, you say that. I come back to my earlier point about the values that inform consulting, because the ways in which we as consultants advise our clients, I think, have to go towards the future that we all want to see, right? And if the future that we all want to see is one in which we are freed up to do more creative and more interesting and more engaging work, then we have to be encouraging our clients to think in ways that actually bring that future about. My worry is that a combination of things will conspire to create a situation where what management consultants in particular mostly spend their time advising clients on is exactly what you've just talked about, Simon, which is: how do we just cut a load of junior cost out of our business and, you know, replace it? There's AI being in the middle of a kind of hype-cycle curve at the moment, and a resulting gold rush around AI, which I do think is happening, and happening in consulting as much as anywhere else, because consultants love to smell what's cooking in the kitchen and try to get a piece of it for themselves, right? There are the companies that are behind all of the major developments in AI. And frankly, there's a whole load of stuff to do with incentives in our business landscape, whether that's share-price incentives or whatever else. And you see it happening already, because you see it happening with chatbots, right? So I guess there are two interesting questions that flow from that. One: do you agree or disagree with what I've just put down? And secondly, whichever way you land, I'd be interested in thoughts around how consultants might be able to push against that flow.
[00:24:19] Dan Sodergren: A couple of things. I agree with what you're saying, a bit. There is always an interest in capitalism in cutting costs and saving money and all those things; I think that's what you were saying. I wrote a book called The Fifth Industrial Revolution, and in it I talk about how we need four different types of new intelligence for this new world. Artificial intelligence is just one of them. Emotional intelligence is another. Then organisational intelligence, which is the one that I talk about too much, and then independent intelligence. And I think it's that independent intelligence that we need to start looking at more. We're going to be entering this world, whether we like it or not, where companies are not likely to be larger; they're likely to be smaller. You know, people are not going to go out and work for a company; they're more likely to work for themselves. You're going to have teams and individuals that need to start thinking independently, and let's not kid ourselves: not everyone believes that they're going to stay in that company forever. When my dad was born, it was something like one job for life. I think it's now something like seven or eight. And I think in the next few years that's going to increase almost exponentially, and people are going to be jumping around a lot more in what they do as a job. On Liz's point: I'm totally with you. I really wish that people wouldn't identify with their job as their main thing. I think it's actually quite sad that people do, but people do, right? We might not have to do that anymore. We might not say, I work for so-and-so. It might be, I am a [ ] rather than I work for so-and-so, if that makes sense. I'm hoping we're going to see a lot more of that, because I love helping people start their own businesses. I think if we do that, we can self-actualise and we can control our own destinies a lot more. I know I'm not preaching to convert you by talking about that, but AI gives us the chance to actually control our own destinies a bit more. And I think you're right about large companies, especially management consultancies; if you've seen the cuts in their own businesses, by the way, they're cutting numbers in their own businesses. They're not just inviting people to do it; they're doing it to themselves! In this next world, over the next five years, we're going to see a huge number of people who aren't out of work, but who aren't in the same company anymore. The idea that we're going to get rid of jobs and get rid of work; I don't think that's going to happen. But we're not going to see massive companies get bigger. Massive companies are going to get smaller. Just the same as if you were running a factory that needed a thousand people and then your factory needed 10: you're just not going to keep people on for the sake of it. Now, don't get me wrong. I think we need a whole new way of looking at society because of this. I think it's a much deeper problem. I think it's a governmental, social problem, and we have to start looking at, as I talked about five or six years ago on the BBC, things like UBI and all sorts of stuff that underpins a whole mass of people no longer having a job. That doesn't mean it's not going to happen.
In the next few years, people, unless they are very independent and very clever about the job market and about using AI, are going to be replaced by people who use AI a lot and are very clever. Because that's unfortunately the nature of progress in that respect. I know it's not very fashionable. I know, by the way, from a humanitarian point of view, I think it's appalling. But the reality is that we can't stop that. What we can do as human beings is hopefully vote for the right kind of people, who will actually understand that, you know, not having a job is going to be part and parcel of life for quite a few people quite soon.
[00:27:30] Phil Lewis: Well, Dan, we can stop it. We can stop homelessness in 24 hours, as we proved at the beginning of COVID. We just chose not to, you know, and I think that's the interesting conversation for anybody, any consultant, who is listening to this particular podcast. And particularly building on the last episode with Carmel McConnell, I would be really interested in talking to consultants about what we are voting for in our industry, and how we are, you know, supporting clients in making good choices around some of this stuff. You mentioned elite consultants before, and I just wanted to probe that a little bit, Liz, because it seems to me that any good consultant, forget the distinction of elite for a second, is fundamentally offering two things to clients. The first thing is creativity, and the second thing is care. Creativity for me is about, yes, looking at patterns and assessing patterns and then diagnosing issues, but then helping clients to think very, very laterally about the solutions to those issues, usually because by the time the consultants come in, all the obvious routes to the solution have been exhausted anyway. And to be effective, in my experience, that has to be wrapped in a pretty humanitarian way, because often organisations, when they are calling in consultants, are in some form of distress, and the people within them are in some form of distress. Now, from where I'm sitting, I genuinely think that any consultant worth his or her salt is probably bringing some level of creativity and some level of care into a client relationship. If you take that and then you go back to the world of AI, and you allow that, certainly at this point in time, Dan, to your earlier point, AI is an unhelpful shorthand for a whole load of machine-based capabilities, it does seem to me that certainly in the short to medium term, there is not going to be a form of AI that can be a substitute for either human creativity or human care, not at any sort of reasonable standard that anybody, say, running a business or working at a medium level of the business would actually need. And if that's true, then I become less concerned about the longer-term health and viability of our industry beyond the quote-unquote elite level.
[00:30:03] Dan Sodergren: Just a general thing on the AI point. In the consultancy world, you're right about the human quality, which is why emotional intelligence is so important. By the way, you need emotional intelligence to get the most out of artificial intelligence. This is the tremendously important point that most people forget or don't know. It's not the same as a tractor. It's not the same as a computer. You do actually have to manage it; it's much more like a human being than people would like to think. Now, that doesn't mean it's sentient, but the way that you get the best out of it is actually to do with human language and a load of other stuff, which I could bore you with and have written whole courses on; it's bizarrely more to do with the management of machines than you would think. Anyway, from a consultancy point of view, if you want to become a bit more productive, the average now is about 40% more productive as a consultant. And the question that I'm sure you'd ask, Phil, because you're very wise, is: for what reason do you want to become 40% more productive? For what reason does anyone or any business want to become 40% more productive? Is it to cut down your time, so you have more time to do other stuff? Or is it to make more money? Or is it to cut out people? On your thought about how close we are to that emotionally intelligent machine, so to speak: we're there now. So the idea that consultants need care and creativity is true, but actually the AI machines now are so advanced, and I'm only talking about two or three of them, one being called Hume, that they do actually have that ability at this moment in time. And I know you most probably dislike this intensely, but I truly believe that with lots of businesses, it's simply the data that sets them free. And I know there's a lot of argument about, you know, what data points you care about, and what metrics you have for success, and whether you should use things like AI in HR. I think you were right five years ago. But we're now getting into the nuance of data manipulation so well with artificial intelligence that it will surpass humans' abilities in this very, very soon. And it's just going to be about the amount of data that you put into it. Asking a machine to care… caring is just a series of biological and chemical prompts. You know, I believe that's what emotions sometimes are, and you can program something to be emotional. It doesn't have emotion, but you can program it to care, because the outcomes of caring are pretty well documented.
[00:32:10] Dr Simon Moore – IB: This is just interesting from a human point of view. Reality and perception are two different things, and from a psychology point of view, usually the most important thing is the perception. So it may well be that these systems can care, in the sense that they're programmed to do it. The issue isn't whether they can do that; it's whether the person interacting with them perceives it as real. In other words, it's the perception that this machine, this system, hasn't got any human experience. They don't really understand human stress because they've never really experienced it. They don't understand human loss because they've never experienced it. So it's the perception that's the problem.
[00:32:54] Liz Timoney: I think, to your point, Simon, there's an amazing opportunity here where we could get machines to be helpful gatekeepers around values and culture and intent. Whether that's saying, you know, you sound like you've got a bit of anger in your voice, or something like that, so you can start to unpick what's going on for people. Or, you know, it sounds like we're heading down a road where we're valuing the profits of the company over the safety of the manufacturing team. Because, back to Phil's point, part of what makes an elite consultant for me is courage. They're so grounded, so stable and confident in who they are and what they know, that they're able to say to customers: you say that, but is that what you really think? Or: you're saying that, but actually, from your actions, it seems that this is what's going on. Is that what you want? I think that boldness is really valuable, and that insight… can AI generate that? If it can generate that, then…
[00:33:48] Dan Sodergren: No, it can't, but it would help the consultant with the amount of nuance in the data source, because otherwise it's just an opinion. This is why I get very passionate about it: often people have opinions, and they could be really intellectually correct, and they could be really, really wise, and they could put it in beautiful ways. Simon's words are always very, very compelling, and Phil's a genius at creating and putting things together as words. It doesn't mean that they're both right; it just means they're really good at speaking. They could say stuff which is just not data-driven at all, because they don't have the data. Whereas an AI doesn't do that. You can't win it over based on how well you speak to it, how nice your personality is, and all the other human frailties. The great thing about being a great consultant, by the way, is most probably also being quite good at sales, which I know I'm not, because I interrupt people all the time. And I know my emotional intelligence is very low too. I'm not a very good consultant, but I'm great at training people; I'm good at showing people the opportunity. What I'm trying to get to, and I suppose I might be failing to do it, is that the data-driven approach actually gives us a much bigger opportunity for success, not just around capitalism or making more money, because I'm not actually that fussed by that, but around human development and self-development and things that are important. However, coming back to Simon's great point, and Liz's as well: we need to teach people at a very young age what these AI machines shouldn't be used for. And at the moment, they certainly shouldn't be used as psychologists, or free help, or whatever people are using them for. You'd be surprised how many young kids at the moment are using Snapchat's AI, because it's in Snapchat, and they're using it as a friend and a psychologist. And it's not trained on any of those things. And that's terrifying, because they think it is.
[00:35:23] Phil Lewis: I wanted just to go back to a point you were talking about earlier on, Dan, and that Simon and Liz came in on as well, which is this point about AI as a kind of emotionally intelligent entity. Because where that conversation went is into data: what data has it got, how do you crunch data, data can help you understand what's going on emotionally, and everything else. And I feel inclined to challenge that, because in my own work, I sit with clients, physically, in rooms, and I know that there is a huge amount of informational exchange going on between me and those clients that has nothing to do with anything we would call data, in the sense of something that's programmable into a machine, right? I can intuit where clients are at, sometimes energetically. I'm also picking up on signals from body language, and all that kind of stuff. And you sometimes just get a sense; it sounds very vague and perhaps a bit woo-woo, but the kind of vibes that are going on in the space as well. I had it very, very strongly in a meeting only about two or three weeks ago. I was sat in this meeting and the clients all walked in, and I just immediately intuited that they'd had a row before they walked in. And it took about half an hour where I was focused on the creation of a kind of supportive space for us to be able to move into a conversational mode that was more productive. Now, I don't think anybody's arguing AI can do that, but what does get concerning for me is when we start to see the discourse moving into AI being a substitute for caring, because there is so much about caring, and so much about human interaction, that I'm not convinced it's going to be possible to program into a machine. And even if it were possible, I'm not convinced it would be desirable, right? So, Simon, I don't know if you've got anything to say to that, but I think so much of how we communicate as human beings, what we say, what we hear, what we receive and how we process information, is nothing to do with data in the way that anybody in computer science would articulate it.
[00:37:46] Dr Simon Moore – IB: The way that the brain develops is to interact with other brains, okay? So, other physical, biological entities. That's just the way we are. And if you look at how we interact with people, as opposed to interacting with a system, what we know, and this has been evidenced for quite a number of years now, is that we have different hormones; some are good for us, some are bad for us. And what we know is that when we actually interact with other people, and I'll put a caveat in here: as long as they don't have a chainsaw or some other physical weapon that they might come to do us harm with, we generally get a kind of rush of anti-stress hormones. Just being in the presence of another physical person is reassuring and anti-stressing. We don't have to do anything or say anything. And we don't get that from interaction with AI systems. But I do agree with what Dan's saying, in the fact that we can develop the systems and get to a point somewhere in between. So, for example, think about, I'm going to use the word generic, AI systems that are voice-activated, so you can interact with them and speak with them. There are nuances in what that voice represents to us: the gender of the voice, the age of the voice. For example, and I found this really fascinating, there's research out there on trying to get people to be more environmentally friendly and sustainability-active. If you use an older voice, which you might perceive as more authoritative and slightly more experienced, you get less actual behavioural response than if you use a young AI voice, because the perception is that the young voice has got more, I suppose, skin in the game: they're as young as me, they're going to live as long as me, and they're going to experience all these problems. The old person's going to shuffle off soon, so why should we be listening to them? And it's just that perception; exactly the same things were said by the AI system. All they did was change the perceived age of that voice message, and it had a really different effect on the behavioural outcome. So I think it's quite muddled at the moment. In support of Dan, there are things we can do that will get better, but we're way off it at the moment. And then, something Liz said earlier: even if we think about semantics, even the word 'like' will mean four different things to each of us. Though it's a word we all recognise, it doesn't mean the same thing to us. And so we have to navigate through that field as well. What do we mean by 'like'? For some of us, 'like' will mean it was okay. For some of us, 'like' means oh, it was really good; I got excited, but I'm going to be understated and use the word 'like'. So we have all that kind of nuance problem as well in terms of interpreting, to your point, Phil, what these nonverbal signals are, what they mean, and, you know, for each individual as well.
[00:40:47] Liz Timoney: And I think, Simon, that gives us yet another problem with this, which was Phil's point earlier: which direction do we want AI to go in? What values are we imprinting? Where are we as consultants steering this ship? But we don't necessarily agree, do we? You were saying, Simon, that most people don't like talking to machines and feel relaxed in a group. But then you've got the introverts, who are the opposite of that, who are overstimulated in the brain and so feel anxious when they come into a room that's very stimulating, full of extra information. And they like interacting, you know, with something online that feels less overwhelming, because it's much more about data than it is about the social demands on them, as I'm sure, Simon, you're well aware. So I think this is a really difficult situation, isn't it? We have got this great ability to control things, but we can't agree on the direction we want to steer the ship in. And if we could, then, to Phil's point, I think, you know, we have a great opportunity.
[00:41:46] Dan Sodergren: I'm not one of these people who thinks that everything should be done by machines and we should all sit there in little boxes and never speak to each other. It's the opposite. I think we should be speaking to each other and sitting around campfires and having drinks together a hundred times more than we do. Because the machines could do 80% of our work, so we can have more of the human stuff. You know, if we allow them to do so, and this is why it's different to a computer, AI systems allow us to spend more time together and get to know each other more and do more of the human stuff than ever before. And still make the same amount of money. I'm also a huge believer in, dare I say it, a three-day work week, and seeing my daughter, and seeing how she progresses as a human being, and being excited by the environment. These things I can do more of if I don't work six days a week, and the machine allows me to potentially work three or four days a week. I can become a better human being because the machine does more stuff for me. I think that's much more exciting.
[00:42:41] Phil Lewis: Where I go with all of this, and what I've been driving at in a lot of this conversation, is that I think it's really easy to 'other' the problem of AI. In other words, we say: yes, there's tremendous potential, and also there's tremendous harm that can be done, but ultimately it is the responsibility of regulators to sort all that out. Whereas I think as consultants, as public speakers, as people who write books, as whatever else, we are also in positions of influence. Maybe not to the same extent as a Rishi Sunak — if he is still indeed the PM by the time this episode goes out — but we are in some position of influence, and I think we can influence businesses. I think we can perhaps influence policy. And as we were talking about in the last episode with Carmel that I mentioned earlier on, we can take action in line with our values, and action in line with the future that we want to create. And so I'd like to round out by asking each of you a question, which is this. For any consultant who is interested in AI, whether that's AI in their own practice or how AI might integrate with the organisations they're seeking to support: what's one thing that they should pay attention to?
[00:44:15] Liz Timoney: It's what they should pay attention to when they're deciding anything, which is the 'why'. Why are you doing it? Which direction are you trying to take this in? If you don't know why you want AI, or what you hope to achieve with it, or what the future looks like with AI in your business, then I would pause until you do.
[00:44:32] Dan Sodergren: I think you should actually not start with your 'why'. You should start with your 'who'. So who is going to be utilising this? Who is going to change because of it? And often leaders need to change too, especially when it comes to this kind of technology. And then the last 'who', and this kind of freaks people out sometimes, is that the 'who' is also the AI. Because if you're using different types of AI, they are all like different types of people, with different abilities. And if you think you can just use, you know, one type of AI in your business to make decisions, you've made a dreadful mistake. You should also be thinking about who owns this stuff, who owns your data. So think about open-source AI and owning your own large language models underneath all this, and not giving the power away; and this is no disrespect to Microsoft and Google, but don't give away your data and your power to other people who at any point could turn it off. If you are really serious about AI, you should also be thinking about the 'who' in terms of who is going to make this happen, just like we did with CTOs and servers years ago. You know, not everything had to go to the cloud, did it? You could have had a server in your office. That would have been more secure. It'll be exactly the same with an AI system too.
[00:45:40] Dr Simon Moore – IB: I'm just going to pick up on something that Dan said there: the 'who'. I think a lot of people like to demonise the whole area without really thinking about the fact that we've got a part to play in this. So we need education to teach us about the appropriate ways we might want to use it. I use this example, and I might be showing my age here: when you were at school, remember the old wooden rulers or the plastic rulers you used to have? 99% of the class would use one to draw a line or measure, but you'd always get one idiot, wouldn't you, who'd use it to smack you around the head or the back. So any tool is kind of rubbish in the wrong hands. Following on from Dan's point, I think we need a little bit of education about the user and the role they play in the whole interaction here. But if we're going to finish off on the theme of questions, then I would go with a 'when'. When do we use this? So that kind of rounds it off quite nicely, huh? Who, why and when. When should we be using it? Should we be using it just ad hoc? There are probably times when it's more useful than others, times when you might not want to use it, and times when you should use it all the time.
[00:46:53] Phil Lewis: So what’s the one thing consultants should pay attention to? Why, who and when. Answered truly like a brilliant team of consultants there. Thank you very much.
[00:47:03] Liz Timoney, Dan Sodergren, Simon Moore: Thank you.
[00:47:09] Phil Lewis: So that was the panel conversation, and I hope you found it interesting and worthwhile. I think it's probably going to be the first of many that we have about AI over the coming months and years. This doesn't feel like a topic that's going to go away anytime soon. And some of the stuff that we were starting to scratch away at there, not only what AI means for consultants, but what our responsibility as consultants is when we're advising clients, felt to me, anyway, like pretty important themes that we will need to return to in the coming months and years. Listening back to that episode, I was reminded of the episode that went out last month with Carmel McConnell, where we started to talk about taking action in line with your values. And for me, anyway, one of the things the whole debate around AI challenges is my values as an independent consultant. I'm interested to think, through the lens of the world I want to see, and through the lens of the opportunity that AI brings to organisations, and actually to our own industry as well: what's the kind of input and advice that we should be giving to clients? How should we be trying to exert influence and direct the discussions and debates that are going to be inevitable over the course of the next few years? So I hope there's some interesting stuff in there for you to think about too. As always with the podcast, we're very grateful for any support you can show us. If you found value in this episode and you could find a way to rate and review us on your favourite podcast platform, it really does help us out. And spread the word as well. We're hoping to help independent consultants, so the more of them we can reach, and not just consultants, but as many people who interact with consultants as well, the better. This podcast is a real labour of love from the team that makes it, so anything you can do to support us, we appreciate more than you know. In the meantime, thanks as ever for listening. I'm looking forward to catching up with you in a future episode of The Consultancy Business podcast very soon. Bye for now.