Forum: Is AI good for democracy?


Panelists

No, if it supercharges the anti-democratic trends caused by social media.

Andrew Yang

Noble Mobile

AI could be good for democracy if it is used to distill complicated information and convey policy trade-offs to help voters make informed decisions. It could be bad for democracy if it is used to replace workers, foment mistrust, profit off of user data and increase the risks of identity theft — that is, if it supercharges some of the trends that emerged with social media, which, contrary to earlier hopes, is now associated with the erosion of democracy. Which of those two worlds do we live in? I think you know.

Andrew Yang is CEO of Noble Mobile.

Yes, because AI can help citizens interpret laws and regulations.

Patrick Utz

Abstract

Since our country’s founding, statutes, amendments, regulations and case law have grown exponentially — outpacing citizens’ ability to track and interpret them. Fortunately, AI is making all of this information more digestible: Large language models can process massive volumes of text and synthesize it, which helps make government more accessible.

This year, we’ve witnessed an unprecedented amount of deregulation that has restructured health care, international relations, energy, manufacturing, finance and more. AI can measure the human impact of these government actions. It provides a critical translation layer — cutting through the noise and esoteric jargon to provide us with clear, easy-to-understand synopses, at a fraction of the cost and time it would have taken before.

While we are still far from where we need to be, I’m confident that AI is bringing us a whole lot closer to a more informed public and more accountable government.

Patrick Utz is CEO and co-founder of Abstract, a platform that uses AI to synthesize legislation and regulation.

No, because AI centralizes power, isolates citizens and can be too easily manipulated.

Michael Kleinman

Future of Life Institute

AI poses significant threats to democratic governance. First, it centralizes information flow in unprecedented ways. When people increasingly rely on AI sources for news and information, a handful of companies effectively become gatekeepers of how we understand the world around us — including and especially politics. We’ve already seen this danger with Grok’s tendency to align with Elon Musk’s worldview, demonstrating how easily these systems can be manipulated to serve individual or corporate interests rather than the public.

Second, if these same companies succeed in building artificial general intelligence and deploying AI agents that displace human workers at scale, we’d face catastrophic power concentration. A few tech giants would control not just information, but the entire economy. This would be a level of centralized power incompatible with democracy.

Finally, chatbots represent social media’s engagement-driven model on steroids. The goal of AI companies seems clear: citizens spending many hours daily interacting with AI rather than with each other. When human connection is mediated entirely through profit-seeking algorithms designed to maximize engagement, we lose the common ground necessary for collective self-government. Democracy depends on citizens deliberating together; AI risks leaving us isolated with our personalized algorithms instead.

Michael Kleinman is head of U.S. policy at the Future of Life Institute.

Only if experts and vulnerable populations have more input.

Nicol Turner Lee

Brookings

AI will only be good for democracy if the public interest is a critical factor in its design and deployment — and on this front, there are multiple areas for improvement. AI-enabled technologies are often launched without expert input and feedback from leaders in fields like health care, education, housing and finance, undermining both trust in these tools and their effectiveness. In addition, the lack of representation of more vulnerable populations among the developers creating the models prevents AI from being optimized for all users and, in some instances, sustains long-standing systemic biases that foreclose equal opportunity.

AI will be one of the most transformative technologies of our time, but the benefits will be muted unless developers balance innovation with efforts to incorporate a wider variety of human experiences at all phases of implementation.

Nicol Turner Lee is a senior fellow in governance studies and the director of the Center for Technology Innovation at The Brookings Institution.

No, because AI threatens privacy.

Clara Langevin

Federation of American Scientists

If democracy is a political system of individual rights, we’re already at a pivotal point. That’s because AI technology is challenging existing data protection frameworks — and as a result, encroaching on individual rights, especially with regard to privacy.

How? Well, some AI systems can allow the intentional or unintentional reidentification of supposedly anonymized private data. These capabilities raise questions about privacy and consent, and usher in a new era of personal data being stitched together, analyzed and systematically exploited.

A worst-case scenario might mean targeting individuals at scale by, for example, intentionally making voting access more difficult for certain groups. Or by eliminating the privacy of an individual’s vote. Or something else we can’t anticipate that drives our society further from democracy’s ideals.

So yes, frontier AI will pose novel risks. That said, democracy can also be undermined without sci-fi scenarios, by simply automating existing unfairness. We need to advance clear, enforceable protections against discriminatory AI systems. If we don’t, we risk the further erosion of trust in our daily systems. Nothing could be more damaging than an erosion of trust in government and, by extension, democracy.

Clara Langevin is an AI policy specialist at the Federation of American Scientists.

No, because AI will take jobs, which will weaken democracy.

Adam Conner

Center for American Progress

For many experts, it’s difficult to envision how AI could be beneficial to democracy — and much easier to imagine how it could be harmful. Advanced AI can easily be misused to spread disinformation through deepfake videos or increase the sophistication of illegal influence campaigns in a way that could undermine confidence in elections and increase polarization. We also know that democracies become unstable and even fall when economies are weak. If AI takes as many jobs as some experts predict — without any sort of government response — then AI could lead to the end of democracy as we know it.

By the way, the inverse question is important too: Is democracy good for AI? If AI creates massive job displacement, there may be a huge backlash. Voters may reward politicians who limit or punish AI for taking jobs. That’s why those building AI would be wise to think about its impact on the lives of average citizens.

Unfortunately, as always, we will likely take the most American approach, which is to let these new ideas rip and pick up the pieces later.

Adam Conner is vice president for technology policy at the Center for American Progress.

No, unless the civic environment can counter AI’s darkest tendencies.

Farah Pandith

Council on Foreign Relations

Our current global landscape reveals an essential truth about how humans and technology have performed over the last quarter of a century and what it could mean for the next 25 years: If left alone without careful, nuanced oversight and guidance, algorithms (designed for profit and engagement) determine societal health and well-being. To apply this lesson urgently as we blast into the artificial intelligence galaxy is an essential task. With just 45% of the world’s population living in a democracy and 71% using the internet, humans must be deliberate and focused about what we value and why.

Though the vast majority of Americans say they believe in democracy, 70% of registered voters surveyed by the U.S. Chamber of Commerce Foundation two years ago failed a basic civic literacy quiz; just 50% could name which branch of government makes laws. Democracy requires human emotion: A person must feel and want the experience of living with freedom, civil liberties, rule of law, free and fair elections and so on. Democracy also demands human participation and tending.

When President Ronald Reagan established a democracy infrastructure in the form of the National Endowment for Democracy, a civic identity was reinforced. Today, profound responsibility lies on AI founders who must collectively decide to advance and protect those values. It may be too big a leap to hope for.

Witness just a few examples of the consequences of no social platform guardrails: extreme polarization, an increase in distrust, and weakness in human perception about what is real and what is not. Democracy can’t thrive in these circumstances. To strengthen democracy at home and abroad, human innovation is required to remake the civic environment so that AI’s darkest tendencies are diminished. AI is not a beacon just yet. Caveat emptor.

Farah Pandith is a senior fellow at the Council on Foreign Relations and the Muhammad Ali Global Peace Laureate.

AI is neutral. It’s really a question of how we use it.

Kay Firth-Butterfield

Good Tech Advisory

Despite its apparent superhuman abilities, we would be wise to remind ourselves that AI is neither superhuman nor a magic wand. It cannot automatically fix the problems we have created for ourselves.

Because AI is simply a statistical prediction machine, it is a neutral tool in the conversation about democracy. It employs the thoughts of humans as its foundation, but how it is used is the choice of the user: It can create massive amounts of misinformation or disinformation, which can undermine political systems of all stripes, or it can be used in legitimate political ads. We can also turn AI into a tool against ourselves: Humans who lean heavily on AI appear to be weakening their ability to think and learn, which in turn undermines their ability to make good political choices.

It is deeply unfortunate for democracy that the use of AI by bad actors cannot be ruled out. As a result, humans are distrustful of AI and the effect it will have on society in general and politics in particular. Most would like strong regulations for this powerful tool.

So, it’s not correct to ask whether AI is good for democracy. The question should be: How do we prevent humans from using AI to undermine our social systems?

Kay Firth-Butterfield is the CEO of Good Tech Advisory and a recipient of a 2025 Time100 Impact Award. Her latest book, “Coexisting With AI: Work, Love, and Play in a Changing World,” will be published Jan. 13.