Technology & AI

You’re in Charge: How to Use AI as a Powerful Decision-Making Tool

If we focus on the jobs rather than the emotions, then AI can be a powerful decision-making tool.

February 07, 2024


Artificial intelligence’s surge in power and accessibility has inspired polarized reactions. Some people are flocking to the technology with feverish excitement. Others can’t stay far enough away. Yet according to Kuang Xu, both of these responses might be the wrong ones.

“When people hear ‘AI,’ their brain kind of shuts down,” says Kuang, an associate professor of operations, information, and technology at Stanford Graduate School of Business. Whether someone feels exhilarated by the possibilities of AI or terrified by its uncertain impact, Kuang says these emotionally charged reactions are like “a fight or flight response,” inhibiting our ability to make good decisions.

Yet when implemented in strategic ways, AI can enable leaders to make decisions that are driven by data. With just a few simple lines of code, data becomes a powerful tool for businesses to leverage. “What decision could you change if you had the information?” Kuang asks. “Remember, at the moment, AI or data science is all about information. At the end of the day, even in the best case, you have to take that information and do something about it.”

It’s clear that artificial intelligence will be integrated into every industry. Yet to harness its power, leaders need to make an emotional shift. They must, as this episode of If/Then: Business, Leadership, Society explores, move past their fear of the change AI will bring and instead see AI for the job it can do: provide data so leaders can make more informed decisions.

If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. Each episode features an interview with a Stanford GSB faculty member.

Full Transcript

Kevin Cool: If we focus on the jobs rather than the emotions, then AI can be a powerful decision-making tool.

Marianne Shine: I’m Marianne Shine, and I am a marriage and family therapist. And I specialize in Hakomi, which is a somatic, body-based therapy.

Kevin Cool: Marianne is based in the Bay Area, where many of her clients work in tech.

Marianne Shine: The tech workers that I’ve had come in here are usually sent by someone else, either a spouse or a partner that says, you need to go to therapy. And they’re like, no, I can figure this out. I can grok it. And one of the first things I notice when they come in is that they’re mostly living in their head. And when I finally tune them in to what’s below the neck and how they feel, their different emotions, it opens all kinds of doors.

Kevin Cool: One thing people in many fields have strong emotions around right now is artificial intelligence. It might be fear of what we don’t understand, anxiety about falling behind competitors, or excitement about what AI might do for us.

Marianne Shine: When we think about AI, there can be a sense that we’re being invaded. It seems like a threatening force. But at the same time, maybe there’s some real good that can come out of it.

Kevin Cool: The impact of those emotions might not be much different from the more usual issues that come up in therapy.

Marianne Shine: The most typical things that I see these days are anxiety and panic. And some of the fears are real ones, like exams coming up, or a driving test, or approaching death. You know, sometimes people don’t even understand why they wake up in the middle of the night and they’re panicky. Or they have anxious feelings in their whole body and they can’t put their finger on what it is.

Kevin Cool: Whatever it is we are reacting to, Marianne says our brains can send us into a survival mode where it can be hard to make decisions.

Marianne Shine: A lot of people have used this diagram of the hand where you create a fist, and the front part of the knuckles in the fist represents the prefrontal cortex. This is where we do all our quick reasoning. And it’s the first thing to go offline when we’re scared or frightened or threatened or anxious. And then we sort of move back into the more primitive parts of the brain, like the amygdala and hippocampus, which are literally all about the fight, flight, or freeze response. We’re offline. We can’t even think clearly, because once we feel threatened, our ability to think and rationalize is extremely diminished.

Kevin Cool: Before we make any decisions about AI in our organizations we need to get out of that survival state.

Marianne Shine: You could probably work better with AI if you can work better with your own emotions. And the way to tune into that is to see what’s happening in your body physically. So think of a hose: when you turn on the water, it comes flowing out. But if there’s a kink in it, if there’s stress and resistance and contraction, the water is not going to flow as easily. So that’s a clue that emotions might be getting in the way. Like working really, really late and being tense, and then coming home and being angry at everybody who loves you. It’s like, mmm, something is not working there, so maybe I need to get more in tune with my emotions.

You could just substitute AI for that and go: if I’m that tense around AI, or that worried or that fearful, I need to find out what’s going on inside my body — am I feeling personally threatened, am I feeling scared — and start exploring that, so that you can have a better, clearer, more flowing state and can approach AI with all of yourself rather than just a fearful, reactive part.

Kevin Cool: Organizations can use AI to power all kinds of decisions. But fear and anxiety about knowing where to start can get in the way. On the other hand too much exuberance could also derail us. How can we work through the charged conversations about AI to do what’s right for our specific organization? How do we navigate our human emotions to make the most of artificial intelligence? This is “If/Then,” a podcast from Stanford Graduate School of Business where we examine research findings that can help us navigate the complex issues facing us in business, leadership, and society.

I’m Kevin Cool, Senior Editor at the GSB. Today we speak with Kuang Xu, Associate Professor of Operations, Information, and Technology. Our focus this episode: If we focus on the jobs rather than the emotions, then AI can be a powerful decision-making tool.

Kuang Xu: I think the biggest thing I teach a lot of executives here at Stanford, and sometimes the industry clients I engage with, is that when people hear AI, their brain kind of shuts down a little bit, right? It’s kind of like a fight or flight response.

Kevin Cool: It’s a fear response or something?

Kuang Xu: It’s a fear response, I think so. And we’re all human, we have that: we hear something we’re not familiar with, and the automatic response is to just assume that everything we know is already out of date. It’s like, oh, my God, I don’t know what this AI thing is. And the thing I want to drive home is that for businesses like that, a huge chunk of knowledge is not new. Knowing where to insert the new thing is a beautiful art, and that drives a lot of efficiency and value.

Kevin Cool: How relevant do you think AI is for everyday people today, right now?

Kuang Xu: I would say one is probably the obvious one, which is that people now interact with AI every day even though they might not be aware of it, right? Every time you talk to Siri or Alexa, whatever, that’s AI. Every time you call an Uber, you use an app that matches you with a certain driver, a certain shop, and that is also AI, or at least parts of it.

But the second thing I want to say, for everyday people, if there’s such a definition: to me the more interesting question is how AI inspires people. The reason people care about AI is not the same as the reason people care about cement, and no offense to anybody who makes cement. But it’s an older technology, right? When you say cement, people are like, oh, yeah, I know, there’s Home Depot. But it doesn’t inspire people the same way that AI or space inspires people. So I think in that sense I love that aspect of AI.

And it just makes people think about their life, the meaning of life, where they are in the universe, and so on. It really provokes people. And I think that touches everyday people too.

Kevin Cool: You point out that AI isn’t just about how to create a unicorn startup. It’s also something an accountant or a grocery store owner, for example, could benefit from. So let’s take the accountant. How would he or she integrate AI, and how much of that is about rethinking tasks that are currently being done by humans?

Kuang Xu: Let’s say you have ChatGPT or Grok or Bard summarize text. This particular task is a huge achievement for AI, right? We have all seen how it works. However, if you ask a person to do it, can they do it? Absolutely. It takes a little longer. It doesn’t really require a rocket scientist to summarize meeting notes most of the time. So right now, as we understand it, AI is not quite competing with a human on the most delicate way of summarizing a meeting note. It’s really about being able to do that automatically on thousands and thousands of documents.

So I would say there are two things. One is that a lot of services will come online, if not already online, that help completely automate away some bookkeeping type of activity. I say bookkeeping because, for now, AI hasn’t quite penetrated the physical world just yet. There are a few examples where it has, like Uber rides: cars actually show up at your door because something is going on behind the scenes.

But most of the time, within the household or within the business, the robots are not quite moving around yet, right? It’s still processing digital information in an automatic way. So the first thing is automation, right?
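To make that point concrete, here is a minimal sketch, not code from the episode, of what “automatically on thousands of documents” means: the same cheap summarization step applied over a whole folder of files. The summarize function and the meeting_notes folder are hypothetical placeholders.

```python
from pathlib import Path

def summarize(text: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM you use."""
    raise NotImplementedError("Wire this to your model of choice.")

# The value is less in any one delicate summary than in running the same
# step automatically over thousands of documents, with no human in the loop.
for doc in Path("meeting_notes").glob("*.txt"):
    summary = summarize(doc.read_text())
    doc.with_suffix(".summary.txt").write_text(summary)
```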

The second thing I want to bring up is more nuanced, which is that business intelligence, data-driven decision making, bringing it back a little bit, has been around for a long time. The theory we’re using for understanding optimization and pricing, those concepts were there maybe in the 1950s, if not earlier. Why hasn’t it taken off? Meaning, if you go to a noodle shop, or you go to an accountant’s office, do you see them using advanced optimization and statistical tools? Not quite.

Now you could say, well, it’s because people don’t understand how to use it. I don’t think that’s the answer, because it’s quite easy now to train people, to offer the product in a way they can actually understand. The big barrier is organizing data. In fact, it’s easy to sell a small business some kind of intelligence for analyzing customer churn, whether a customer will return or not. But to get that information into the computer, for a startup or a company to crunch that data, is extremely taxing, because you would require a coffee shop owner, after they close shop, to somehow manually type in stuff in the right place. That’s where I think AI is going to have a huge short-term impact. Suddenly there’s a flood of structured data that until now hasn’t been available.
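Kuang’s structured-data point can be sketched in a few lines of Python. This is our illustration, not anything from the episode: a hypothetical call_llm helper, standing in for any model API, turns free-text receipts, the kind a coffee shop owner would otherwise retype by hand, into rows that ordinary analytics can consume.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM API call that returns text."""
    raise NotImplementedError("Wire this to your model provider of choice.")

def structure_receipt(raw_text: str) -> dict:
    """Ask the model to extract structured fields from a free-text record."""
    prompt = (
        "Extract a JSON object with keys date, item, quantity, unit_price "
        "from this receipt:\n\n" + raw_text
    )
    return json.loads(call_llm(prompt))

# Messy notes that previously had to be typed into the right place by hand.
receipts = [
    "3/4 sold 12 lattes @ 4.50, oat milk surcharge waived",
    "March 5: 8 cold brews at $5 each, one refunded",
]
rows = [structure_receipt(r) for r in receipts]
# `rows` is now tabular data a churn analysis or a spreadsheet can consume.
```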

Kevin Cool: Right. So we have an example of where AI has been transformative but includes people in the analysis, and that’s radiology. So just talk for a minute about how that came to be, what the results have been, and what we might learn from it.

Kuang Xu: So that’s an area that we actually discuss quite a lot in our class. And the reason is that this was sort of the hot AI of 2016. So imagine you’re an engineer or an entrepreneurial student at that time. You look at this technology and think, what would I do with it? Well, because it recognized cats and dogs so accurately, the first impression was, whoa, let me apply it to a business where the whole thing is about recognizing images. Radiology, right there, right?

So fast forward a little bit. Companies were founded on this idea that we’re going to recognize, you know, pictures, tumors, and this can be for pathology, it can also be for other kinds of diagnostics, and we’ll replace humans. We’ll replace radiologists, and, you know, it’ll be great, right? We save money on radiologists. So long story short, if you look at the health care system today, you don’t find yourself having an appointment with a robot. And it has already been seven years, so what happened, right?

So the interesting thing is, fast forward eight, nine years, a lot of the companies that raised a huge amount of money to really have the best AI have not done very well. Whereas some of the leaders in this field now are exactly the companies that spent a huge amount of resources on product development and software engineering, and less so on AI. It’s counterintuitive, because you would have thought the leader in AI would have won, right? So that’s one big lesson from there.

For example, in radiology, what’s interesting is that it’s very hard to completely replace radiologists. But it turns out there’s a lower-level type of work: people who are not in an extremely acute situation, who are not dying, get put into a queue. And out of the people in that queue, some people’s conditions are very serious, but without a radiologist looking at the scans, it’s not very easy to tell who is in such a dire condition.

It’s kind of like a radiology intern that scales to millions of patients. They can all still be looked at by a radiologist, but now you have a screening procedure that picks out the most at-risk patients, and they get treated. Patients recover faster, and the hospital makes more money. So they have succeeded. It’s much less about the machine learning that’s involved; it’s really about knowing how to insert AI into the right process.
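A minimal sketch of that screening idea, ours rather than any company’s actual system: a model scores each waiting scan, and the queue is simply reordered so radiologists read the likeliest-urgent cases first. The risk_score function is a hypothetical stand-in for a trained image model.

```python
from dataclasses import dataclass

@dataclass
class Scan:
    patient_id: str
    image_path: str

def risk_score(scan: Scan) -> float:
    """Hypothetical stand-in for a model that returns a 0-to-1 risk estimate."""
    raise NotImplementedError("Plug in a real image model here.")

def triage(queue: list[Scan]) -> list[Scan]:
    """Reorder the waiting list so the highest-risk scans are read first.

    Every scan is still reviewed by a radiologist; the model only changes
    the order. That is the 'radiology intern that scales' idea.
    """
    return sorted(queue, key=risk_score, reverse=True)
```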

Kevin Cool: You’re listening to “If/Then,” a podcast from Stanford Graduate School of Business. We’ll continue our conversation after the break.

So you described AI as an intern. If we think about that analogy and how that might be helpful for people to get their heads around AI, could that maybe make them less afraid of it?

Kuang Xu: Yeah, so that’s a great analogy. If I think a little deeper about what an intern really is: it’s someone who is obviously less experienced, but who is able to do the vast majority of the work — not vast, but let’s say 80 percent of the work a very experienced person would do. However, there is just that 20 percent they can’t do, and the lack of that 20 percent kind of kills the independence of the person. So you probably wouldn’t put an intern on a huge holistic project, even though that person can do even 95 percent of the stuff, because that 5 percent might kill it.

I think there’s a perfect analogy in radiology. Even though a machine learning algorithm can be perfect at identifying the shape of a tumor in a picture, it doesn’t have enough contextual information about the particular patient: maybe ethnicity, maybe history, maybe the radiologist talked to the person. That’s just not encoded. So it’s a little unfair to the AI, but, hey, you just don’t know. And therefore it really gets that 80 percent done but leaves that 20 percent for a human to process. Now, however, it’s just one-fifth the workload for the human. You went from 100 percent to 20 percent, right?

And I think that’s a pattern we have been seeing a lot of, even before the AI boom, meaning over the last five years: a lot of startups and big companies are using AI and machine learning in this 80/20 mode, okay? What I mean by that is they often start out trying to automate things 100 percent. What they end up doing is automating things 80 percent. And then they come up with an architecture where the vast majority of the work, the brunt of the work, is borne by AI, machine learning, or optimization models, but they have a fairly clever way of interweaving humans and machines so that the last, difficult 20 percent can be processed by human beings, right?

So radiology is a great example. And I believe a lot of content moderation at social media platforms works the same way. They try to protect people from having to see too much violence. But sometimes a case is very unclear, and it falls to the humans.
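Here is a minimal sketch of that 80/20 architecture, our illustration rather than any platform’s actual pipeline: the model auto-handles items it is confident about and escalates the uncertain remainder to a human review queue. The model_confidence function is a hypothetical placeholder.

```python
def model_confidence(item: str) -> float:
    """Hypothetical placeholder for a classifier's confidence in its own label."""
    raise NotImplementedError("Plug in a real model here.")

def route(items: list[str], threshold: float = 0.9) -> tuple[list[str], list[str]]:
    """Split work between machine and human in the 80/20 pattern.

    Confident cases are handled automatically; everything else goes to a
    human review queue, so the hard 20 percent still gets human judgment.
    """
    automated, human_queue = [], []
    for item in items:
        if model_confidence(item) >= threshold:
            automated.append(item)
        else:
            human_queue.append(item)
    return automated, human_queue
```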

Kevin Cool: So if I’m a business leader, there’s a rush now to adopt AI, and there’s probably pressure from shareholders to get involved in this, to improve our business, to be transformative in some way. But if I’m in the nascent stage of this, what questions should I be asking before I go headlong into some kind of AI project?

Kuang Xu: Let’s say it’s a business that hasn’t had much quantitative capability in-house. And let’s say I don’t even have a team that does data science at the moment. For a business like that, my first suggestion would be, ironically: forget that AI exists, for now. The best discipline is to imagine it’s like the magic lamp, right? You ask a question and it will answer you no matter what. So don’t worry about what kind of questions it can answer; just assume it can answer all questions, for now, right?

Imagine there is such an oracle out there, and you can commit a couple hundred thousand or a couple million dollars to get that oracle for a few years. For now, don’t worry about the oracle; worry about yourself. Worry about this: if you do get to ask three questions, what would you really want to ask? That’s the first one. The more concrete version of that recommendation is: what decision could you change if you had the information? Because remember, at the moment, AI or data science is all about information. It doesn’t ship you a truck and start running stuff for you. Therefore, at the end of the day, even in the best case, you have to take that information and do something about it.

So I often encounter a situation where a company comes to me and they say, we have a lot of data. I’m anxious because I know my competitor is probably doing something with thousands of patients and customers and all that. I’m sure we can do — there must be gold here. I say, yeah, maybe there is gold, but could you just map out all the things in your company right now that you think you can change? Can you change procurement? Oh, it’s kind of hard. We negotiated contracts a couple of years ago; I can’t change that. What about pricing? Well, there’s regulation on that. Okay, that’s fair. What about advertising? Oh, okay, that’s something maybe we can do. Okay, that’s good.

So you go through that exercise and quickly realize that 80 percent of the things you thought would be really cool you actually cannot change, right? And of the 20 percent you can change, 10 percent don’t matter however you change them. And then finally, what you really want to narrow in on is something I can change, and it should really matter, because I know from the past: I really messed up one day and did very well another day.

Okay, so something I can change, it matters, and I have no idea how to change it? Great, then let’s talk about AI, machine learning, optimization, and modeling. That’s a great point to start. Because you’ve actually already done the hardest part: identifying that intersection, the decision you can change, that matters, and that you don’t know how to make. The rest is up to the oracle, and we’ll figure something out.

Kevin Cool: What excites you about AI?

Kuang Xu: I think the part that excites me about AI — it’s a tough topic to talk about; AI is just so overbearing, right? The word almost doesn’t mean anything anymore. But if I were to say what excites me professionally about data science and machine learning and data-driven decision making, then there’s a lot of stuff that I’ve always been excited about, that I think I will continue to be excited about, only made more so by the excitement around this general topic of AI.

So in that sense I’m excited. I’ve always been excited about making decisions dynamically with data, and AI kind of turbocharges that route, right? So it’s not quite that AI itself was the source of such agitation.

Now there is one way in which AI is the source of such agitation. I think it’s the emotion that AI evokes in people, and that is amazing. That is so fun.

Kevin Cool: And what are some of those emotions?

Kuang Xu: The first emotion is inspiration, right? People start to dream about things. And then second comes fear, which we talked about. And then the third one is very interesting. When you’re excited and you’re fearful, the next thing that comes is not an emotion per se, it’s a reaction. It’s confusion. Meaning you actually lose the ability to think like you did before, which is very counterintuitive, right?

Kevin Cool: What emotions come up with students when you’re teaching about AI?

Kuang Xu: I feel excitement from the students, obviously. But between excitement and maybe some sense of stress, I definitely sense the stress. I don’t know why, but maybe it’s because there’s so much uncertainty. Maybe we haven’t quite dealt with this type of uncertainty up until now. And why that is, I guess, goes back to the fact that the product is insight, the product is information. And the information can change in such subtle, uncertain ways that you cannot even quantify it.

For example, if the technology that’s trending is battery technology, well, you can ask how much charge it holds, and you will feel pretty comfortable that you know the capability of this thing, you know how to build companies, you know how to orient your career. How do you orient a career around a chatbot, a knowledge source? What does that even mean? Do I have a job in the future, do I not? Do I use this thing, do I not? It’s completely boundaryless in some ways, right?

And I try to set some boundaries for our students. Again, I’m not saying these boundaries are fundamental; maybe tomorrow they will be broken. But you put boundaries on it. So, for example, in class we just did a case. I showed them how to use some of the open-source LLMs, large language models, to build a recommendation engine for movies. Basically, it allows you to compare texts and say, hey, this passage is similar to that passage, right? It’s in fact a foundational layer for most of the foundation models: they process text in such a way that higher levels can build on it. So this is the kind of stuff.
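Kuang’s actual case code isn’t published with the episode, but a minimal sketch of the idea he describes might look like this: an open-source embedding model (here the sentence-transformers library, our choice of tool, with a toy three-movie catalog) turns each movie description into a vector, and recommendations come from cosine similarity between those vectors.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# A small open-source embedding model; any comparable model would do.
model = SentenceTransformer("all-MiniLM-L6-v2")

movies = {
    "Alien": "A spaceship crew is stalked by a deadly creature in deep space.",
    "Gravity": "An astronaut fights to survive after debris destroys her shuttle.",
    "Notting Hill": "A London bookseller falls in love with a famous American actress.",
}

titles = list(movies)
# Normalized embeddings make the dot product equal to cosine similarity.
vectors = model.encode(list(movies.values()), normalize_embeddings=True)

def recommend(liked_title: str, k: int = 2) -> list[str]:
    """Rank the other movies by similarity of their description embeddings."""
    query = vectors[titles.index(liked_title)]
    scores = vectors @ query
    ranked = sorted(zip(scores, titles), reverse=True)
    return [title for _, title in ranked if title != liked_title][:k]

print(recommend("Alien"))  # likely puts "Gravity" ahead of "Notting Hill"
```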

I actually wrote the whole case in Python, and it took me maybe a couple of hours during the day to put together. I ran it in our class, and people were amazed and excited. Why? Because I think the emotion behind that was — by the way, this is not even a class about AI. It’s the core MBA business and analytics class I teach here at GSB. And people were just on fire. They were like, wow, this thing is great.

And I think the reason they felt so on fire is not just, oh, it’s a movie list. They’ve done that, right? They’ve been to Netflix. They’ve seen the capability. It’s how simple it is. It’s that they now understood the backbone of this movie recommendation engine. And I think that matters because people have been hearing these keywords, recommendation, LLM, large language models, everywhere. The amount of time people spend hearing about it is overwhelming.

But then you look back, and nobody ever bothered showing you just the few lines of code in a way that lets you understand, oh, this is what it’s really doing. So that drops this vague anxiety down to something concrete. Essentially, what I did was put the LLM into a box. Of course an LLM can do a lot of things, but, hey, here, you’re just doing this one thing, and this is how you’re doing it. And suddenly it becomes —

Kevin Cool: Friendlier

Kuang Xu: — friendly. Yeah, it’s a defined module, in some ways more controllable, right? And I think that’s a big piece of that emotional puzzle, which is that boundaryless objects are scary. And what do you do with that? Well, you put some boundaries on it, right? And then you can be flexible with the boundary. But at all times you need some boundary to engage with the object. And I think for AI this wasn’t clear to people. And that’s something I’m hoping to do. I sometimes joke that teaching AI is kind of like being a therapist.

Kevin Cool: Well, I’ve never heard that associated with AI before, so you’ve given us some new things to think about, Kuang. Thank you for being here.

If/Then is produced by Jesse Baker and Eric Nuzum of Magnificent Noise for Stanford Graduate School of Business. Our show is produced by Jim Colgan and Julia Natt. Mixing and sound design by Kristin Mueller. From Stanford GSB, Jenny Luna, Sorel Husbands Denholtz, and Elizabeth Wyleczuk-Stern.

If you enjoyed this conversation, we’d appreciate you sharing this with others who might be interested and hope you’ll try some of the other episodes in this series. For more on our professors and their research, or to discover more podcasts coming out of Stanford GSB, visit our website at gsb.stanford.edu. Find more on our YouTube channel. You can follow us on social media at StanfordGSB. I’m Kevin Cool.
