How Is AI Impacting Democracy and Voting?


This article was not written by an artificial intelligence. It was drafted by me, Joe Vukov, a living, breathing human being. I started writing it en route to an academic workshop in San Diego. I then edited it in my dining room in Chicagoland, where I live with my wife and three kids, sipping a cup of coffee and worrying a bit about a lesson I have to teach tomorrow.

Trust me? I hope so. But in an era of artificial intelligence (AI)—particularly large language models (LLMs) such as the now-infamous ChatGPT—trust can be difficult to come by. ChatGPT, after all, can draft entire essays (much like this one) at the click of a button. The essays it produces are “originals,” at least insofar as the combination of words has never been produced before. And they are impossible to “watermark”—there is no tried-and-true way of telling a hand-typed essay from an AI-generated knockoff. LLMs, moreover, are just one form of AI: AI can now create lifelike video, sketch images, compose music, crunch data, and produce audio files. Still trust that I wrote this essay on my own? Again, I hope so. But as I said: trust is becoming increasingly difficult to come by.

This should leave us worried. Trust, after all, acts as a foundation for human relationships and institutions. Artificial intelligences—with their capacity to replicate human modes of creativity and interaction—can undermine this foundational trust. And once the foundation crumbles, the edifice topples with it. 

Nowhere should this concern us more than in our political lives. Democracy depends upon our trust: in our politicians, political processes, election results, and, most importantly, in each other. Of course, trust in each of these areas has already been eroded. Yet the rise of AI raises new, crucial challenges. In this upcoming election cycle, we must be aware of them. And prepared to meet them. In what follows, I consider major challenges that AI poses to our political life, and strategies for responding to these challenges. 


Misinformation and Disinformation

Misinformation and disinformation are nothing new. Yet AI raises these challenges in new ways, and to an unprecedented degree. To understand how, we must understand what an AI is, and how one is built.

The first step to constructing an AI is feeding it a massive amount of data from which it can “learn.” This is called its “training data set.” What the training data set is composed of depends on the kind of AI you are building. For an image-generating AI, the training data set may consist of a repository of millions of photos. For an LLM, vast quantities of written text. And so on. You get the picture.

The problem is that, if an AI is to function properly, its training data set must be enormous. And a data set large enough to build an AI is, by the same token, too large to fact-check. For example: ChatGPT used all of Wikipedia as part of its training data set. Try having your summer intern fact-check all of Wikipedia. Not going to happen.
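To get a sense of that scale, here is a rough back-of-envelope calculation. The numbers are my own illustrative assumptions, not figures from this essay: roughly 6.7 million English Wikipedia articles, ten minutes of review per article, and a 2,000-hour working year.

```python
# Rough, illustrative estimate of what "fact-checking all of Wikipedia" would take.
# All numbers below are assumptions for the sake of the sketch.
articles = 6_700_000          # approximate count of English Wikipedia articles
minutes_per_article = 10      # assumed review time per article
hours_per_work_year = 2_000   # assumed full-time working hours in a year

total_hours = articles * minutes_per_article / 60
person_years = total_hours / hours_per_work_year
print(f"{person_years:.0f} person-years of full-time fact-checking")
```

Even under these charitable assumptions, the job runs to hundreds of person-years. No intern, and no team of interns, is fact-checking a training data set of that size.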


The result? Even an AI built with the best of intentions inevitably contains misinformation in its training data set. And an AI built using misinformation produces misinformation. Garbage in; garbage out. The products of any AI, then, inevitably contain misinformation, half-truths, alternative facts, and outright lies. 

And that’s when AIs are built and used with the best of intentions. Which, of course, they aren’t. And won’t be. Indeed, AI, when used with the intention to deceive and mislead, can produce disinformation at an alarming rate. Previous election cycles have been dominated by worries about “fake news.” Those worries were well-founded. And they have only grown since the advent of AI. Just a few years ago, the production of fake news was limited by the typing speed of bad-faith actors. No more. Those same bad-faith actors can now churn out gigabytes of disinformation in the time it once took to pen a single article.


Inception While Awake

In the movie Inception, Leonardo DiCaprio and his team enter dreams to implant ideas into unwilling targets. The movie is (thankfully) science fiction. You needn’t worry about Leo haunting your dreams any time soon. Yet a different kind of inception—one powered by AI—is not only possible. We’ve been living with it for years.

Here’s what I’m getting at: AI has already proven itself to be capable of nudging our opinions this way and that. Never mind the newer, flashier forms of AI. Think here simply about the algorithms that govern your YouTube and Amazon recommendations, or your social media newsfeeds. YouTube and Amazon use my habits to predict what I’ll like next. But, of course, in doing so, they also shape my preferences. A recommendation might pop up that I would have never picked myself, but soon, after watching a few videos or purchasing a few books, I find myself going along with the recommendations. 

In previous election cycles, we’ve seen how these kinds of nudges and recommendations have led to polarization, echo chambers, and the siloing of ideas. I hate to break it to you, but AI hasn’t become less powerful since these became problems. Hollywood got this one wrong: we needn’t guard ourselves from inception while sleeping. It is while we’re awake and scrolling that we are most vulnerable to it. 
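The nudging dynamic described above can be sketched as a toy feedback loop. This is my own illustration, not any real platform’s algorithm: a recommender keeps suggesting whatever a user has clicked most, and the user usually goes along with the suggestion.

```python
import random

# Toy sketch of a recommendation feedback loop (an illustration, not any
# real platform's system). A tiny initial lean toward one topic gets
# amplified into a strong habit.
random.seed(0)
counts = {"cooking": 1, "politics": 2}  # slight initial lean toward politics

for _ in range(200):
    # The recommender suggests the user's currently most-clicked topic...
    recommended = max(counts, key=counts.get)
    # ...and the user accepts the recommendation 90% of the time.
    clicked = recommended if random.random() < 0.9 else random.choice(list(counts))
    counts[clicked] += 1

print(counts)  # the initial lean snowballs: "politics" dominates the history
```

Run it and a barely perceptible starting preference ends up dominating the click history almost entirely: the echo chamber in miniature.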


Deepfakes

AIs can be used to produce strikingly real images, sound clips, and videos. Early in 2023, for example, AI-generated images circulated of Pope Francis wearing Balenciaga. Today, AI-generated YouTube videos knock off famous aesthetics, voices, and visual styles. You can watch a trailer for Lord of the Rings as directed by Wes Anderson, listen to David Attenborough describe the dining practices of patrons of a Taco Bell (in Attenborough’s parlance: “tack-o bell”), or see what the Harry Potter characters might have looked like had they hailed from Berlin (or the local body building circuit). The images, voices, and scripts? Entirely AI-generated. Also (at least for me): thoroughly entertaining.

But after a minute’s reflection, also disturbing. AI, after all, can be used not only to generate Hermione and Ron styled as Berlin partygoers. It can also be used to create a fake stump speech by your local congressional candidate. Or a fake film reel of a senator, supposedly leaked by an attention-seeking aide on the Hill. Or Watergate-style recordings captured of a rival party’s meeting.

AI-generated deepfakes have great comedic potential, yes. But also great potential to confuse the public. And in doing so, undermine democratic processes. 


We face the first presidential election cycle in the era of AI. That should scare you. At least a little. It scares me. But we needn’t be despondent. There are strategies we can adopt that will help us navigate our political lives in an era of AI. Above, I shared challenges AI poses to our political life. So, to maintain balance, I close with some strategies for responding to those challenges. 

Educate Yourself

The first step to addressing the challenges raised by AI is to educate yourself about those challenges. If you have made it this far in the article, you have already gone some way toward doing that. You know that images, voices, and texts can be generated by AI, and that it is nearly impossible to sort these from the genuine versions. You know that AI has the capacity to nudge your beliefs and opinions this way and that. You know that misinformation has quickly become simple to produce, and difficult to trace.

That might seem depressing. But the knowledge itself is empowering. Once we understand the challenges AI poses, we can adjust our habits of media consumption accordingly. By way of analogy, consider your relationship with the internet. No one in the twenty-first century assumes everything on the internet is true. We know it contains broken links, out-of-date websites, and plain old disinformation. And yet: nearly all of us still use the internet. Knowing about its limitations allows us to use it in ways that (mostly) keep us from being deceived.

Likewise, we’ll need to take our knowledge of AI as a prompt to relearn and revise our habits. Just as we currently wonder whether a photo of a college roommate has been touched up or photoshopped, and just as we view the internet through a filter of suspicion, we’ll need to question whether a speech given by a rival party’s candidate is a deepfake or the real deal. As we do this, however, we’ll need to guard ourselves against always assuming the worst of others, and against developing a deep and cynical skepticism.

Channel Your Inner Luddite…and Bartleby

The Luddites of industrial age England smashed the technology they saw as taking away their livelihood. Modern Luddites don’t smash tech but do eschew it. 

I’m no Luddite. I own a computer. And a smartphone. And a whole case of clickers and adaptors that I carry in my backpack so I can hook up to projectors in the classrooms where I teach. Like I said, I’m no Luddite.

Yet we can learn from the Luddites in navigating political life in an era of AI. Sometimes, the best way to resist the problems raised by new technologies is simply refusing to adopt them. I can resist Amazon’s algorithms by shopping at a local bookstore. I can bypass social media echo chambers by subscribing to and reading from a wide swath of media outlets. 

Likewise, I can avoid the challenges raised by AI—the nudges, the misinformation, the deepfakes—by actively choosing to take a step back in technological time. We can refuse the offerings of AI by talking politics with neighbors rather than AI-powered bots on the internet. By listening to stump speeches IRL rather than scrolling through online videos. By building communities at the local political level and leaning on them for our political formation and information.


Melville’s Bartleby said it best: “I would prefer not to.” The rise of AI threatens to undermine many crucial institutions, including increasingly fragile political institutions. AI promises to feed us the deepfakes, misinformation, and nudges that, to some degree, we crave. We want to see our political rivals fail. We want our own biases confirmed. We want to rebrand deepfakes as hidden truths, so long as the slant lines up with our own. AI can deliver on all those desires. Before accepting the offer, however, we should look to Bartleby. His simple act of refusal proved revolutionary. If enough of us follow his example in the face of AI, ours could be as well.

Join the conversation. Send your thoughts to the editor Jon Sweeney.

Joseph Vukov is an Associate Professor of Philosophy at Loyola University Chicago. He is the author of Navigating Faith and Science (2022) and The Perils of Perfection: On the Limits and Possibilities of Human Enhancement (2023).