Technology & AI

Is Your Business Ready to Jump Into A.I.? Read This First.

Expert advice on how to move fast — without breaking stuff.

October 25, 2023

by Dave Gilson

No one wants to miss out on the opportunities AI offers. But where to begin? | Jin Xia

This spring, just as ChatGPT was surpassing 100 million users, Kuang Xu was teaching a new class that dug into one of the most pressing questions of this transformative moment: Now that we seem to be on the cusp of the artificial intelligence era, how do you best put these technologies to use?

Or, more precisely, as Kuang sums up the questions at the core of his course, AI and Data Science: Strategy, Management, and Entrepreneurship: “How do you build products that are deeply integrated with or powered by AI and data science? And on the flip side, how do you manage teams who are doing AI and data science?”

Kuang, a professor of operations, information, and technology (OIT) at Stanford GSB whose research focuses on using AI and data-driven decision-making to power businesses and policy, has been grappling with these topics for a while. “I actually started thinking about this class way before ChatGPT really hit,” he explains. The curriculum was informed by his research and consulting work, where he’d seen what happens when companies hastily build out AI or data science teams in response to pressure from their boards. “Five years later, everybody leaves or gets fired. Why? Because they never integrated. It’s not a trivial thing to integrate these new technologies.”

No one wants to miss out on the opportunities AI offers. But where to begin? AI has become a shorthand for everything from generative tools like ChatGPT and machine learning to computer vision and robotic process automation. How can business leaders figure out which AI tools they need and then move fast without breaking stuff — or the people they work with? Here, Kuang and other GSB professors offer their perspectives on the steps you can take to deploy AI nimbly, strategically, and responsibly.

Get Ready to Jump

“Until recently, companies from outside the tech sector would come into the Stanford GSB Exec Ed program and worry that they were behind on AI,” says Susan Athey, PhD ’95, a professor of economics at the GSB and an expert on machine learning as well as AI governance and ethics. “I often would reassure them that more complex forms of AI didn’t bring enough value to justify the investment.”

Yet that’s changed in the last year or two: The quality of AI has exploded just as more companies are getting serious about collecting and analyzing data. “That combination,” Athey says, “puts us in a position where things will move very fast.”

The mass adoption of ChatGPT is a concrete signal that the AI rush has arrived, says Gabriel Weintraub, a professor of OIT working in data science. He recently presented a framework for evaluating AI investment opportunities at a workshop on using AI for business transformation that he organized with his colleagues Mohsen Bayati and Stefanos Zenios. “We do believe this is the real deal,” Weintraub says. To the AI-curious business leader, he advises both swiftness and caution. “I think you need to jump in — but in a thoughtful way. When there’s all this hype, it’s very easy to forget the basics. You’re still creating value by addressing a customer pain point, and basically, what AI is doing is giving you new and potentially transformative ways of creating value.”

While the pressure to jump is intense, AI may not be essential for all businesses at this moment. “I absolutely think it depends on the business,” Kuang says. He suggests a thought experiment for anyone who’s unsure: Imagine an oracle that can answer any question about your business’s future. If it could predict demand for your product, would that change your pricing? “When you start asking those questions, it gets more concrete, and people realize, ‘Oh, I probably cannot change my price because of regulatory guardrails or reputational concerns.’”

Now use the same process to evaluate a potential AI solution: If it works as planned, what would you do differently? If the answer is “not much,” then maybe you should wait before springing for something like an AI-powered analytics app. “The bottom line,” Kuang says, “is that AI and data science are supposed to help you drive better decisions. Always. If there’s no way for you to change your operation or decision-making, then having more information doesn’t help you.”

Amir Goldberg, an associate professor of organizational behavior whose research incorporates data science and organization studies, emphasizes the unknowns and complexities that still surround AI. “For certain things where the optimization problem is well defined, like simple aspects of supply chain management, adopting AI is a no-brainer because it’s proven and we know how to use it. But on other things, like managing relationships with your employees, the opportunities and the risks both appear colossal.” Overall, he says, “It’s not a binary decision: Do I do AI or do I not do AI? It’s: How do I integrate AI into my operations?”

Find the Right Tools

To integrate AI, you need a strategy. “If you don’t even have a framework in place to deploy an AI solution, it’s a lot more effort,” says Mohsen Bayati, a professor of OIT who studies the mathematical and algorithmic foundations of AI in data-driven decision-making. You’re not alone if you don’t have an AI strategy yet. A little more than 50% of “AI high performers” report that they have a clearly defined AI strategy or vision, according to McKinsey. And nearly 80% of all other companies have yet to develop one.

The next step is finding the right tools, which requires balancing curiosity with caution. More AI solutions are now readily available, “but that hasn’t taken away the challenge of matching the right solution to the right question,” says Jann Spiess, an assistant professor of OIT who studies data-driven decision-making and AI-human interaction. Whether you buy or build your AI tools, it’s essential to make sure “they actually do something and don’t just blindly solve some technical problem that may not be the right one to make progress.”

Athey notes that the field has opened up for firms with less technical firepower — but plug-and-play applications have yet to be perfected for many customer-facing or mission-critical applications. “It’s not a bad idea for firms to try to engage with the new tools, because the barriers to adoption are lower,” she says. “But I still think that there’s a big gap between something that kind of works and something that really works and is safe. We lack off-the-shelf tools that help businesses evaluate performance and manage risk. There are so many dimensions to consider and not enough established approaches to fixing problems once they are identified.”

Counterintuitively, the sudden explosion of AI’s capabilities can make it harder to find the right tools. Bayati calls this the “alignment gap.” Usually, when people raise concerns about AI’s alignment, they’re thinking about the existential risks posed by superintelligent AI running amok. Bayati is referring to more immediate, practical questions: How do you know an AI tool can really do the tasks you give it? Moreover, what strategies can you use to adapt AI when it falls short of your expectations?

This was less of an issue with older AI models that were trained on a narrow set of problems and data. New tools like large language models are going far beyond their training to do things their developers never anticipated. ChatGPT was initially designed to predict the next word in a sentence — who knew it would be able to pass MBA exams, debug code, or ace tests of cognitive development? “That’s where the alignment gap is — the differences between the training and the task,” Bayati says.

Like all things AI-related, the technological cusp we’re on is both tantalizing and a bit terrifying. | Jin Xia

Take AI for a Test Drive

“Assessing the alignment gap is not necessarily easy, but you can try to check it with a small experiment,” Bayati continues. He and Weintraub suggest testing AI tools on “low-hanging fruit” like streamlining processes and workflows before committing to full-scale deployment. “Find a small problem or a small angle to introduce this tool and then find ways to test it,” Bayati says. “Iterating quickly is key to answering ‘Is our approach of using AI to solve this problem important? Is this going to move the needle or not?’”
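To make this concrete, here is a minimal sketch, in Python, of the kind of small experiment Bayati describes: route half of a routine workload through the AI tool, keep the other half on the existing process, and compare a metric that matters. The task (support-ticket handling time), the numbers, and the function names are all illustrative assumptions, not details from the article; the simulated timings stand in for real measurements.

```python
import random
from statistics import mean

def minutes_baseline(ticket: str) -> float:
    """Placeholder for the current process: minutes to handle one ticket."""
    return random.gauss(12.0, 3.0)

def minutes_with_ai(ticket: str) -> float:
    """Placeholder for the piloted AI-assisted process."""
    return random.gauss(9.0, 4.0)

# Split a small, low-stakes workload into a control group and a pilot group.
tickets = [f"ticket-{i}" for i in range(200)]
random.shuffle(tickets)
control, pilot = tickets[:100], tickets[100:]

baseline_times = [minutes_baseline(t) for t in control]
pilot_times = [minutes_with_ai(t) for t in pilot]

print(f"baseline mean: {mean(baseline_times):.1f} min")
print(f"with AI mean:  {mean(pilot_times):.1f} min")
# Decision rule: expand the pilot only if the improvement is large and
# consistent enough to actually change how the team works.
```

The point of the sketch is the structure, not the numbers: a contained trial with a clear comparison answers Bayati’s question of whether the tool moves the needle before anything mission-critical depends on it.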

Kuang also recommends prototyping and exploration, starting with applications where tweaks can generate insights without serious disruption or harm. “The applications where you have seen experimentation being successfully deployed are often where they are not critical high-profile decisions,” he says. The recommendation engines used by streaming movie and music sites are a good example. “If I recommend a song that’s not your genre, nobody gets hurt.”

Yet risks may quickly pop up when recalibrating more sensitive functions, like pricing. “Even companies that are very data- and machine-learning-driven are very conservative about using AI to drive pricing experimentation because of the huge liability and reputational risks,” Kuang says.

Similarly, Goldberg notes that it’s one thing to run experiments on supply chains or inventory. It’s another to run experiments on the people you manage. Imagine a large company trying out an analytics tool that leads to some employees being fired. Even if it affects only a fraction of the workforce, he says, “the implications on those people’s lives would be immense.”

Goldberg acknowledges there’s an uncomfortable paradox here. Limited, low-stakes experimentation probably won’t produce big breakthroughs — and the risk-taking that could lead to those breakthroughs will bring down some companies. “The big winners of the AI era, if we can learn from the past, are going to be the ones who revolutionize their processes,” he says. “The problem is, you can’t revolutionize by careful A/B experimentation.”

Human Decision-Makers Still Matter

AI-driven data analysis can be a powerful tool for finding correlations and making predictions that inform your decisions. “If you can construe problems correctly as prediction problems, then you can identify the right subtasks that you can use the machine to help you make decisions with,” Goldberg says. “One big challenge with the adoption of AI is to think in an abstract way about what types of problems are AI-learnable and can then be outsourced to the machine.”

AI can also be deployed to free up human brainpower for more ambitious tasks. “We see over and over that algorithms hold great promise at improving decisions,” Spiess says. “I see AI as a tool to augment human decision-making by allowing us to scale our expertise so we can focus on the harder cases because automated systems can take care of the cases for which the answer’s pretty clear.”

However, outsourcing problem solving to AI doesn’t mean the technology has all the answers. Goldberg says that AI isn’t a substitute for essential leadership qualities. “AI is not going to replace people’s strategic thinking. It’s not going to replace their creativity,” he says. “It’s not going to replace judgment, which is basically how to translate a prediction into a decision.”

Kuang agrees that business leaders should not let their expertise and judgment take a backseat to AI or cede decision-making authority to the data. “You are still the decision-maker. You cannot outsource that,” he says. “Once you outsource it, it could become a free-for-all cage fight among different teams with diverging priorities and incentives all arguing that they’re ‘using data.’”

Keeping people in the loop still requires an awareness of the limitations of human judgment — even if decision-makers are consulting algorithms designed to minimize bias or unfairness. Spiess recommends a more holistic view that considers the distinct yet complementary abilities and flaws humans and machines bring to decision-making. “We shouldn’t forget that when the algorithm enters, we should continue to audit the final decisions and not just focus on the algorithm in isolation,” he says. “It’s easier to open up the algorithm than it is to open up the human brain.”

Ensuring that decisions informed by AI are fair and transparent requires firms to recognize biases and edge cases as well as the importance of ethical guardrails, Weintraub says. “In data science teams, the rule of thumb used to be that 80% of the effort is the data engineering, getting the input data clean. Now there’s going to be way more effort on the output side — inspecting, testing the models, and monitoring the results,” he says. This will be critical to reducing the alignment gap. He refers to a concept shared by his OIT colleague Stefanos Zenios: “You need to go from a minimum viable product to a ‘minimum viable responsible product,’ which I think is a good way of summarizing it.”

A Tool, Not a Lord

Like all things AI-related, the technological cusp we’re on is both tantalizing and a bit terrifying. “It’s super exciting, and it’s still not so easy,” Athey says. “Companies are going to be facing hard choices.”

Just as no one in 1980 could predict how personal computers would revolutionize business or the economy, no one can say exactly how AI will transform organizations in the decades to come. All we know is that it will — and those changes will be profound, Goldberg says. “These algorithms are going to change the ways by which we do things. It’s not that they’re going to substitute already existing mechanisms or some of our tasks. They’re going to redefine how we do the work.”

How we do the work will depend on our understanding of AI’s role. Weintraub compares the technology to a hammer: A lot of people are swinging it wildly at every nail they see, hoping it will connect. He suggests another approach: “Fall in love with the problem and not the tech. You’re still solving a problem for a user. Figure out what the important nails in your business are, whether and how the AI hammer is helpful on them, and embrace these opportunities.”

Kuang sums up the message that he and his Stanford GSB colleagues have been sharing with students and business leaders: “Don’t abandon the old-school principles of being a good manager. Make sure you understand AI deeply enough. Once you can break down the costs and benefits of the entire system into easily understandable modules, then it really turns AI into a tool. But if it remains opaque, it becomes like a lord. You want a tool, not a lord.”

Susan Athey was interviewed by Julia M. Klein.
