As more AI-generated content seeps into the information ecosystem, Professor Andrew B. Hall fears it could contaminate our political discourse and democratic processes.

Hall, a professor of political economy, has spent much of his career researching democratic systems and political polarization within them. “I don’t think we know how [AI is] affecting polarization yet,” he says in this bonus episode of If/Then: Business, Leadership, Society. What is clear is that AI “could be fairly disruptive to the workings of our electoral system in the pretty near future.”

With a presidential election fast approaching, Hall sees several ways that AI could muddy the political waters. As misleading or fake content is generated and distributed at scale, “people could be more misinformed and make decisions they wouldn’t otherwise about who to vote for,” he says. Even if that misinformation is never created, the mere belief that it is out there and could be swaying elections is damaging in its own right. “That itself [is] a risk to the system,” Hall says. “The more people don’t believe that the whole process around our democracy is fair or has integrity, the less likely they are to accept outcomes or to buy into the society that they’re part of.”

However, Hall also sees ways that AI could provide solutions to some of the problems that beset the political system. As this episode of If/Then explores, if we want to distinguish fact from fiction and maintain trust in our democracy, then we must understand AI’s impact on our political landscape, in the 2024 election and beyond.

If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. Each episode features an interview with a Stanford GSB faculty member.

Full Transcript

Note: Transcripts are generated by machine and lightly edited by humans. They may contain errors.

Kevin Cool: In our first season of If/Then we talked to political economy professor Andrew Hall about his research on how AI will impact democratic systems in the United States. Today we want to bring you some bonus content from Professor Hall on his thoughts about how AI will affect political polarization, what we should be wary of, and how AI can be a useful tool in disseminating information to voters.

Andrew Hall: I’m Andy Hall. I’m a professor in the political economy group in the Stanford Business School.

Kevin Cool: So Andy, you study polarization and the effects of technology on the outcome of elections. How are technologies like AI affecting polarization right now?

Andrew Hall: I don’t think we know how they’re affecting polarization yet. I think we have a sense that they could be fairly disruptive to the workings of our electoral system in the pretty near future. Even that’s not for sure, but I think we have a sense that if it makes it easier for people to make misleading or fake content and to distribute that content at scale, then we could be worried that it’s going to further erode the information environment around our elections. And I think there are two layers to that. The first is that people could be more misinformed: they could make decisions they wouldn’t otherwise about who to vote for, for example, and they could become more polarized as part of that process, though we really don’t know that that’s the case. The second layer is that, whether or not that misinforming actually occurs, the very belief that it could be occurring, and that this new technology we’re unfamiliar with could be changing the outcomes of elections, is itself a risk to the system. Because the more people who don’t believe that the whole process around our democracy is fair or has integrity, the less likely they are to accept outcomes or to buy into the society that they’re part of. So I think it’s early days to make any strong claims about what its impact is going to be on polarization. But I think we have a sense that it’s potentially disruptive to the information environment in ways we should pay attention to.

Kevin Cool: And it seems like this technology is arriving at a time when there’s already a loss of trust in institutions and maybe the election system itself. So in the short term anyway, what are some ways we can govern this or put up guardrails of some kind to avoid the worst outcomes that this might produce?

Andrew Hall: I think in the very short run, between now and November, there are probably two particular things we should do that we’re already exploring. One is to really focus on the distribution of AI-generated content. I think there’s been a somewhat mistaken focus by some people on the generation of the content itself: that it’s problematic that someone in their basement is creating text or images that could mislead people or upset them and so forth. There are of course problems there that we should explore, but that in and of itself has no impact on elections. The impacts, if they exist, are going to occur when that content that fools someone, or makes them feel differently than they would otherwise, gets distributed at scale and also isn’t counteracted by other content that those people see. And so we should be really focused on that problem of distribution. And if we want to zoom in even more, I think we should be particularly focused on what I would call the October surprise scenario.

The October surprise scenario, which seems very likely to occur, is this: late in October, in the lead-up to the election, there’s the sudden release of some extremely damaging-looking content from a citizen journalist or another source that we’re not necessarily sure is checking the content carefully, content that makes it appear as though one of the candidates has committed a very serious crime or admitted something shocking. And it’ll be at that moment that we’ll have this very difficult decision as a society: do we think this is real content? And that could go one of two ways. It could be fake, and some people could believe it’s real, and it could change their decision before we have time to change their minds. Or it could go the other way. It could be real, and one of the candidates who’s featured in it could insist that it’s fake.

We’ve already seen both of these things occur in elections in other countries, in Turkey and in Argentina and other places. So we need to be mindful of those dual possibilities, and we should spend a lot of time monitoring the distribution of content, putting a lot of priority on, and trust in, content that has been vetted by major organizations of varying ideologies. And we should be particularly skeptical this cycle of video or photos coming from non-major-outlet sources late, close to election time. The second thing I want to mention is this sort of belief, or even panic, around what AI could do. I think a huge mistake we could all commit together would be to panic unduly over the effects of these things. This problem of distributing content that’s misleading or polarizing at scale, we know that problem. It’s existed for a while. We certainly haven’t solved it, but it’s not new.

And I don’t think AI fundamentally changes that dynamic. It’s already a problem at scale. We need to focus on effective content moderation that’s good for the information environment without being unduly chilling on free speech. And that problem is still the same this cycle as every other cycle. We should be very wary of claims that there’s something really special and new here that’s going to require total panic over the outcome of the election. And in particular, I want to highlight something I think is almost guaranteed to happen, which is that some company, probably some startup, is going to tell the New York Times or some other major outlet: we created this special chatbot, and it has so much new AI in it that no matter what you think, you talk to it and suddenly you completely change your beliefs and you change your vote.

They’re going to have a really strong economic interest to make that claim. It makes them money; they sell some kind of product. We know from a huge amount of research on political behavior that changing people’s minds that way is incredibly difficult. And everything we know about generative AI so far suggests the scope for that kind of attitude change is very limited. That doesn’t mean it can’t happen or that we shouldn’t keep an eye on it. I think it would be a huge mistake, either right in the buildup to the election or especially just after the election, for the losing side to start to say, oh, this company that built a chatbot swung the election. There are almost guaranteed to be startups that make that claim, and we should be super skeptical of those claims.

Kevin Cool: So you’ve hinted at this, but the average person cannot validate the veracity of a particular claim, something that’s AI-generated and fake. How are they going to know? So the first question is, who does that? Who’s responsible for that? And then secondly, is AI potentially part of the solution? Could we build AI products that would detect AI fakes, for example? I know you’re not a technologist, but…

Andrew Hall: Yeah, it’s an interesting question. The short answer is I don’t know, and no one knows. I’ll respond in a couple different ways. First, there is ongoing work to do what’s called watermarking AI-generated content. The idea there is a very logical and very tempting one: it’s going to be really hard to figure out whether a piece of content is true or not. We’ve been trying to do that for a while, there are a lot of problems with it, and we probably shouldn’t do too much of it. What we can do instead is be transparent about the provenance of each piece of content. So we’re not going to tell you if it’s true or false, but we’ll tell you whether it was AI-generated or not. And that’s something tech companies and civil society have been partnering on for a while now and trying to work through.

It turns out, for a couple reasons, that while I think it’s a logical and admirable thing to do, it’s just not going to be the panacea we would like it to be. And there are really two reasons for that. The first is that it doesn’t seem possible to make watermarking stick, in the sense that if you’re an adversarial actor, you can almost always, if not always, take something that’s been watermarked and figure out a way to remove the watermark. So an example of that would be: I use DALL-E, let’s say, to generate an AI-generated image, and I post that to Facebook. And what is DALL-E, sorry? DALL-E is a generative AI tool made by OpenAI. You put in a text prompt and it generates an image based on your prompt. You can do a lot of things to edit the image and so forth.

You could definitely use it, or any of the many competing tools made by other tech companies or startups, to produce political misinformation in image form. Now, different AI companies try to stop you from doing that in various ways, but one way or another, people can get around it. Once you post something from DALL-E or another service, it could be watermarked, so it’ll be labeled as AI-generated, and the major social media platforms have all said they’re going to do that. So you’ll see it online and you’ll see that it’s tagged as AI-generated. But if someone wants to be an adversarial actor, it’s relatively straightforward to do things like take a screenshot of the image on your phone, which removes the watermark, and post that. And we just really haven’t figured out a way to deal with that. The second problem is even deeper: even if we could figure out a way to do that, it doesn’t really get at the root problem, which is about the information environment. The root problem is not whether a piece of content is AI-generated or not. It’s whether it was intended to fool you into thinking something that’s not correct. And that’s a content moderation problem, not a watermarking problem.
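To make the screenshot point concrete: in its simplest form, a provenance label is just metadata attached to the image file rather than a signal baked into the pixels, so anything that re-captures only the pixels silently discards it. The following is a minimal, hypothetical sketch in Python using the Pillow library; the tag name, values, and filenames are invented for illustration and are not any platform’s actual labeling scheme.

```python
from PIL import Image, PngImagePlugin

# Attach a provenance tag as PNG text metadata (a stand-in for a real
# content-credentials label; the tag name and value are hypothetical).
meta = PngImagePlugin.PngInfo()
meta.add_text("provenance", "ai-generated: example-image-model")

img = Image.new("RGB", (64, 64), "gray")   # stand-in for a generated image
img.save("labeled.png", pnginfo=meta)

print(Image.open("labeled.png").text)      # {'provenance': 'ai-generated: ...'}

# An adversary re-captures only the pixels, much as a screenshot would.
src = Image.open("labeled.png")
pixels_only = Image.new(src.mode, src.size)
pixels_only.putdata(list(src.getdata()))
pixels_only.save("screenshot.png")

print(Image.open("screenshot.png").text)   # {} -- the provenance label is gone
```

Production schemes such as signed content credentials or watermarks embedded in the pixels themselves are more robust than plain file metadata, but Hall’s point above is that determined actors have generally found ways to defeat those too, and even a surviving label says nothing about whether the content is misleading.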

And watermarking can actually mislead you. For example, tons of people use AI on Instagram or TikTok to adjust the red-eye in their photos. Those are all going to get tagged as AI-generated images, but they’re obviously not really misleading in any particular sense. On the other hand, tons of misleading or problematic content gets produced that isn’t AI-generated, just old-school, and that won’t be tagged. And so it’s just not really going to solve the problem. That’s watermarking. There are a lot of other ways to answer your question. One thing I’ll just note, I guess two things actually. One is we can use the same generative AI methods to do content moderation, and everyone is exploring that, and it seems likely that will help us scale content moderation. Again, though, it’s clearly not going to be a silver bullet for any of these problems, but probably a helpful piece of the toolkit.

The last thing I want to mention is that we’ve already seen the steady growth of what I would call crowdsourced efforts to flag content. And that, I think, is a really interesting area that’s likely to continue to grow. And the reason I think that is that we’ve now seen it proliferate across platforms that take very different approaches to these problems. So one of the things I think is most interesting is that Twitter, or X as it’s now called, is known as the “we don’t take very much content down” platform, and yet it too has embraced these methods for crowdsourcing the flagging of misinformation through what they call Community Notes. And so I think a likely long-term place we’re heading, in terms of our toolkit for how we are stewards of the online information environment, is more crowdsourcing and more other kinds of democratic tools to allow users to self-govern.

Kevin Cool: In November 2023, Professor Hall participated in a panel about AI and democracy hosted by the GSB and the University of Chicago Harris School of Public Policy. He spoke about what benefits AI could bring to our political system, including how it could help local candidates reach voters.

Andrew Hall: I think it’s very understandable to be focused on the threats of a new technology, and people are right to be thinking about that now, before the election. But there are also really interesting opportunities for this new technology in the political space, so we wanted to spend some time talking about that as well. We know that one of the big challenges in democracies is making sure that people have access to useful information about candidates, about politicians, about what’s going on in the world. And we know that’s challenging for a number of reasons, most of which have to do with people’s level of interest or level of engagement with the information that they’re provided with today. And we think there’s pretty interesting evidence that these new AI chatbots are actually a potentially very engaging way for people to access syntheses of large amounts of complex information, which could be useful for voters in pretty cool ways.

One of the co-authors on the white paper, Yamil Velez, actually has some ongoing research on this, where he takes a bunch of party platforms and information about parties’ positions and puts it into a special customized chatbot that you can then talk to. You can ask it questions like: I care about the following issue; can you tell me the positions of the parties or candidates on that issue in my state or my locality? How should I think about that? You can engage the bot in a much deeper conversation, and I think that could turn out to be a really interesting and valuable way for people to learn more about politics in a more accessible, more user-friendly, less intimidating way. So we’re very interested to follow Yamil’s research on that, as well as other people’s, and we encourage others to think about studying the area.
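As a rough illustration of the kind of tool described here (not Velez’s actual system), the sketch below puts a couple of made-up platform excerpts into a prompt and asks a general-purpose model to answer only from them, using the OpenAI Python client; the party names, excerpts, question, and model choice are all placeholders.

```python
# Minimal sketch of a "platform chatbot": ground the model's answer in
# supplied platform text. Excerpts, party names, and model are placeholders.
from openai import OpenAI

PLATFORMS = {
    "Party A": "Supports expanding state funding for public transit...",
    "Party B": "Prioritizes reducing local property taxes...",
}

def ask_about_platforms(question: str) -> str:
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    context = "\n\n".join(f"{party}: {text}" for party, text in PLATFORMS.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the platform excerpts provided. "
                        "If the excerpts don't cover the question, say so."},
            {"role": "user",
             "content": f"Platform excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_about_platforms("Where do the parties stand on public transit?"))
```

A real deployment would retrieve relevant passages from full platform documents rather than pasting everything into one prompt, but the grounding idea is the same.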

We think it’s pretty important. We know that down-ballot campaigns and candidates have a lot fewer resources, and it’s harder for them to reach voters. It’s harder for them to package up information for voters. And again, these tools may allow those down-ballot candidates to actually provide more information in a more accessible way to voters. And we’ve seen this with previous technologies. It could be, and we’re not saying we’re sure about this, it’s just a possibility, that at the top of the ticket these technologies are quite disruptive in ways we worry about, whereas down the ticket, where information is already very low, they could actually be beneficial to candidates. And so we’re interested to see, let’s say in US state and local races, whether we’re going to find positive ways that otherwise under-resourced campaigns are able to use these tools.

So we really want to expand on that as well. We think it’s an important area. Finally, I’m just going to talk about one bigger-picture concern that came up, one that’s probably longer and slower moving than just the 2024 election, but which we think is also important for people to be thinking about now, and this is the risk of what we call information centralization. One of the things we’ve seen over the last 10 years is increasing concern that, because so much of our social, economic, political, and cultural lives now takes place on these very large online platforms, the decisions that a social media company or a search company makes about who’s allowed to say what, about what information is true or false, and so forth have pretty big influences on society. And there’s been a big debate in the social media sphere, and I’ve been involved in this work, about who the right actors are to make those decisions that really affect our collective ability to communicate with one another about politics, for example.

It’s possible, we don’t know what’ll happen, but it’s possible that generative AI is going to accelerate that challenge. Imagine a world in the future where a lot of us are generating a lot of our materials, whether it’s news articles or political essays or speeches or posts or whatever, through these generative AI tools. If those tools are concentrated in a small number of large platforms, there’s a potentially very worrying, dystopian future where what ideas or values get expressed in society comes under the influence of a set of rules decided by a small set of companies. And that could really accelerate the challenges we’ve already seen in social media. We want to highlight that because we think it’s an important area for people to start thinking about now, while the actual use of these tools is still relatively limited in society, and we might want to think about ways to govern these generative AI platforms of the future so that those rules and those guardrails are set by processes that people think are fair and legitimate.

Kevin Cool: If/Then is produced by Stanford Graduate School of Business. For more on our professors and their research, or to discover more podcasts coming out of Stanford GSB, visit our website at gsb.stanford.edu. Find more on our YouTube channel. You can follow us on social media @stanfordgsb. I’m Kevin Cool.

