Researchers discussing AI and democracy included, from left, Andrew Hall, Stanford GSB; Ethan Bueno de Mesquita, University of Chicago Harris School of Public Policy; Emilee Chapman, Stanford University; Greg Martin, Stanford GSB; and Kristian Lum, University of Chicago. | Julia Yu
In an age when trust in the origin and veracity of information is already low, the emergence of artificial intelligence as a powerful new tool to develop and distribute content poses a serious threat to democracy, according to a panel of experts who met recently at Stanford Graduate School of Business.
Stanford Business, Government & Society Initiative
The Business, Government & Society Initiative focuses on transformative issues facing the world, including technology, free markets, and sustainability.
Leveraging the school’s leadership expertise, the initiative is dedicated to supporting new models of cutting-edge research, bringing changemakers together, and enhancing the curriculum to prepare responsible leaders for a dynamic world.
Cosponsored by the Stanford GSB Business, Government, and Society Initiative and the Harris School of Public Policy at the University of Chicago, the seminar is the first in what the organizers hope will be an ongoing series exploring the growing sophistication and potential misuse of AI. It follows a daylong meeting in August that brought together academics, civic leaders, and representatives from leading tech companies to discuss the issue.
The goal of the AI and democracy seminar, according to Ethan Bueno de Mesquita, a professor and interim dean at the Harris School, “is not to go straight to policy recommendations, but to provide analysis and clarity.”
Whether and how AI is regulated “has real consequences for society,” Bueno de Mesquita argues, and the answers “are not obvious.”
AI tools are being deployed in several insidious ways, Bueno de Mesquita says, and with a presidential election looming, the threat to democracy is urgent. “If the conversation about policy, about civil society, about self-regulation by industry is going to get us somewhere, it needs to get us somewhere pretty soon if we are going to have a free and fair election.”
The most critical risk, Bueno de Mesquita says, is “degrading the information environment” through deepfakes that could alter the outcome of an election, or what he called “an October surprise.” AI-enabled content developed by a campaign or political party to influence voters by smearing an opponent can be “super convincing” and difficult to counter once it has been absorbed, especially close to voting day, he notes.
Part of the problem, Bueno de Mesquita says, is that fake content is so difficult for a typical person to detect that “it breaks down trust that we can believe in what we’re seeing,” casting doubt on real information as well.
Andrew Hall, a professor of political economy at Stanford GSB and a co-organizer of the seminar, noted that while the threats posed by AI are serious, the technology also gives rise to opportunities. Chatbots can synthesize large amounts of complex information in engaging ways that could help voters, Hall says. One potential use is a bot that converses with prospective voters who want to learn more about a party’s policy platform. AI could also enable under-resourced campaigns to reach voters effectively and economically.
A longer-term risk, according to Hall, is that generative AI could accelerate the consolidation of information onto a small group of online platforms. “Because so much of our social, economic, political, and cultural lives take place on these platforms, the decisions that a social media company or a search company makes about who is allowed to say what, what information is true or false – they have pretty big influences on society,” he explains. “Who are the right actors to make those decisions? This potentially could result in a very worrying dystopian future in which what ideas are valued or expressed in society come under the influence of a set of rules decided by a small set of companies.”
Similarly, Gregory Martin, a professor of political economy at Stanford GSB, fears that as AI continues to improve, it could supplant human reporters, leaving fewer news providers to determine what people see and hear. As newspapers cut reporting staff and adopt the technology to write articles, the quality and accountability of the information may be in jeopardy.
Kristian Lum, a research professor at the University of Chicago, examines algorithmic bias and trust. She says large language models are subject to the biases of programmers and could reinforce stereotypes. “It’s tricky to assess these things. The subtle things that can creep in and play a role will take us a while to sort out.”
Emilee Chapman, an assistant professor of political science at Stanford, warns that if these issues are not resolved, voters could become alienated from the entire election process, further weakening trust in institutions. Generative AI creates “this other layer of technology and complexity that makes politics opaque. These are supposed to be occasions when people feel uniquely empowered and have a sense of ownership over democracy, and I think we should be especially concerned about the erosion of that feeling.”