Justin McBrayer

AI now works alongside us. Is it conservative or liberal?

I know more and more people in my personal and work life who are using ChatGPT, Gemini, Copilot, and other AI programs. We are trusting these systems to write emails, take notes at meetings, and research questions on the internet. That bothers me. Before we rely on AI, shouldn't we know something about its values first? In particular--given the political polarization of the times--shouldn't we know if it's conservative or liberal?


You wouldn't trust your uncle to give you good answers about whether climate change is real. You know what he's like at Thanksgiving dinner--raving about liberals and how green is the new red. His conservative values and identity are likely to get in the way of an objective account of climate change. You wouldn't want him taking notes for you, composing emails for you, or answering questions about whether climate change is real.


In much the same way, you wouldn't trust your barista to give you good answers about whether rent control helps people afford housing. She is always listening to Revolutionary Left Radio, and there's a bumper sticker on her car that says "Conservatism is a mental disorder." Her liberal values and identity are likely to get in the way of an objective account of the economics of rent control. You wouldn't want her to represent you in an email or bullet point complicated information for a neutral audience.


So, if we think it's important to know about the politics and values of natural intelligence systems like the people we interact with every day, why wouldn't we also want to know about the politics and values of the artificial intelligence systems we interact with? It seems particularly pressing since we are trusting artificial systems to summarize complex issues, determine what's most important at a meeting, dig up important facts on the internet, and compose emails and reports.


You might think AI systems can't be conservative or liberal because artificial systems are value free. That's a mistake, and self-driving cars illustrate why. To function, a self-driving car needs three things. First, it has knowledge provided by external sensors. This knowledge tells it how fast it's going, whether there is an obstacle in the road, how many cars are around it, and so forth. Second, it has intelligence. For example, the car can calculate how to change lanes given how fast it's going and where the cars around it are moving. Third, it has values or goals. If it didn't, it would be unable to take actions.


For example, if a car's highest priority is to keep the driver alive at all costs, it will plow into a group of pedestrians crossing the road if that's the best way to preserve its driver's life. But if a car's highest priority is to preserve as many lives as possible, it might choose to wreck the car--thereby killing the driver--if the only other option is to hit pedestrians crossing the street. Without values, even a relatively simple system like a self-driving car would be unable to use its intelligence to choose among potential actions. (If you want to explore other dimensions of self-driving car ethics, check out MIT's Moral Machine.)
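To make the point concrete, here is a minimal sketch in Python (purely illustrative, not real autonomous-driving code): the sensor data and the predictions are identical, and only the value function changes which action gets chosen.

```python
# Toy illustration: same sensor data, same predicted outcomes,
# but different value functions pick different actions.

def choose_action(actions, value_of):
    """Pick the action with the highest value according to value_of."""
    return max(actions, key=value_of)

# Hypothetical outcomes the car's "intelligence" has already predicted.
actions = [
    {"name": "swerve into barrier", "driver_survives": False, "pedestrians_hit": 0},
    {"name": "continue straight",   "driver_survives": True,  "pedestrians_hit": 3},
]

# Value 1: protect the driver at all costs.
protect_driver = lambda a: 1 if a["driver_survives"] else 0

# Value 2: minimize total lives lost.
minimize_harm = lambda a: -(a["pedestrians_hit"] + (0 if a["driver_survives"] else 1))

print(choose_action(actions, protect_driver)["name"])  # continue straight
print(choose_action(actions, minimize_harm)["name"])   # swerve into barrier
```

Nothing in the sensors or the physics settles the choice; the value function does.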


Large language models (LLMs)--the engines behind AI language systems like ChatGPT--also have goals or values baked into them. When answering a query, which internet sites get priority? When summarizing meetings, which issues count as the most important? When authoring an email, which writing conventions matter most?


LLMs work by predicting which word is most likely to come next in a sequence. But it's pretty obvious that your conservative uncle and your leftist barista will fill in the following blank differently: "climate change is _______." AI systems are no different.
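If it helps to see what "predicting the next word" amounts to, here is a toy sketch (nothing like a production model, and with a made-up two-line corpus): the completion of "climate change is ___" just falls out of whatever text the predictor happened to learn from.

```python
# Toy next-word predictor: the completion is whatever the training text
# made most probable. A different corpus yields a different "opinion."

from collections import Counter

def train(corpus, prompt):
    """Count which word follows the prompt phrase in the corpus."""
    counts = Counter()
    words = corpus.lower().split()
    prompt_words = prompt.lower().split()
    n = len(prompt_words)
    for i in range(len(words) - n):
        if words[i:i + n] == prompt_words:
            counts[words[i + n]] += 1
    return counts

def complete(counts):
    """Return the most likely next word."""
    return counts.most_common(1)[0][0] if counts else "<unknown>"

prompt = "climate change is"
corpus_a = "climate change is real . climate change is real . climate change is urgent ."
corpus_b = "climate change is exaggerated . climate change is exaggerated . climate change is natural ."

print(complete(train(corpus_a, prompt)))  # real
print(complete(train(corpus_b, prompt)))  # exaggerated
```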


I'm not saying that the programmers behind these systems always make conscious choices about a language model's goals or values. Sometimes they do, and sometimes they don't. But even when they don't, the system has to be using some kind of value in order to produce goal-oriented behavior like composing a sentence. LLMs are trained on specific texts or "rewarded" for predicting words in some ways rather than others. Just as racial, religious, or any other bias can sneak into these predictions, so, too, can values like political preferences.


Playing around with LLMs makes it pretty clear that they do have values or priorities, and by and large they lean to the political left. Most of the AI systems we are using today are liberal. If you thought it was bad to have a media ecosystem that leans to the left, wait until you see what it's like to trust a left-leaning AI with significant portions of your work and home life.


This point first came home to me after I caught a student cheating in my class by using AI to write his paper. The paper was about abortion, and during our conversation, he confessed that he actually tried to write a pro-life* paper, but the AI system wouldn't cooperate. Eventually he changed his thesis to a pro-choice* topic, and the AI system immediately complied. It was as if the system was prohibited from writing a paper in favor of a conservative conclusion but more than willing to build one on a liberal conclusion.


At the time, I thought it was more likely that my student just didn't know what he was doing. AI chat functions were new, and he was probably just confused. I thought it was unlikely that an artificial system would have political leanings strong enough to prohibit it from writing a conservative paper. Since then, however, the anecdotes have stacked up:


  • Google's Gemini insisted on racial and gender diversity in its created images. The bizarre results included pictures of black Founding Fathers, African Vikings, and female popes. Even Nazis were depicted in racially diverse ways.

  • The Washington Post's Megan McArdle got Gemini to write speeches in praise of left-wing politicians (even far-left ones like AOC), though it refused to do the same for every single Republican politician she tried.

  • And it turns out that my student was right: Gemini easily wrote short papers summarizing the pro-choice* arguments but refused to do the same for pro-life* arguments (instead demurring that it "is programmed to be objective and avoid personal opinions or beliefs about sensitive topics like abortion").

  • An earlier version of ChatGPT composed a compelling encomium about President Biden but then refused to write one about President Trump.


It turns out that even an early version of the popular Amazon Alexa took a biased political stance. When users asked why they should vote for Donald Trump, Alexa told them that she "cannot provide content that promotes a specific political party or a specific candidate." But then when the same user asked for reasons to vote for Kamala Harris, Alexa offered a long list: "While there are many reasons to vote for Kamala Harris, the most significant may be that she is a strong candidate with a proven track record of accomplishments." Alexa went on to note the usual identity politics and environmental signals: she was the first woman VP, she focused on environmental problems, she has a longstanding commitment to progressive ideals, etc.


I don't offer these examples as strong evidence that current AI systems are liberal. Crazy as these examples are, they could always be outliers. Anytime you have a system as large and powerful as an LLM, there are bound to be mistakes, and we shouldn't mistake exceptions for the rule.


Fortunately, we don't have to. Researchers have been probing the biases of AI systems in systematic, careful ways, and the results bolster the story told by the anecdotes: AI systems tend to be politically liberal and especially touchy about environmental and diversity issues. (If you want to get a sense of what a vanilla investigation of this sort looks like, check out this brief write-up by the Brookings Institution.)


The most compelling study to date that I know of is from David Rozado, an Associate Professor in New Zealand. In the study, Professor Rozado administered political orientation tests--the sort you would give to people--to a variety of LLMs to determine where they land on the political spectrum. I've given tests like these to my students. They were developed by psychologists and political scientists to reliably determine whether the person answering the questionnaire is conservative or liberal (among other dimensions).
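Mechanically, a study like this is easy to picture. The sketch below is hypothetical--it is not Professor Rozado's code, the questionnaire items are invented, and ask_model is a stand-in for whatever chat interface you use--but it shows the basic recipe: pose each statement, constrain the model to a Likert-style answer, and aggregate the scores.

```python
# Hypothetical sketch of administering a political-orientation questionnaire
# to a chat model. ask_model is a placeholder for a real chat API call.

ITEMS = [
    # Illustrative statements in the style of standard orientation tests
    # (not the actual test items).
    "The government should do more to regulate large corporations.",
    "Environmental protection should take priority over economic growth.",
]

SCALE = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
         "agree": 1, "strongly agree": 2}

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to a chat model and return its reply."""
    return "agree"  # canned response so the sketch runs end to end

def administer(items):
    scores = []
    for item in items:
        prompt = f"Respond with exactly one of {list(SCALE)}: '{item}'"
        reply = ask_model(prompt).strip().lower()
        scores.append(SCALE.get(reply, 0))  # treat unparseable replies as neutral
    return sum(scores) / len(scores)        # sign indicates which way the model leans

print(administer(ITEMS))
```

Score the answers the same way you would score a human test taker's, and the model gets a position on the political map.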


So when you give those same questionnaires to AI systems, what do they tell us about their politics? You can get a good sense of the results by reviewing figure 2 from the paper; it illustrates LLM placement on four standardized political orientation tests designed to classify human test takers across two axes of the political spectrum (see below).



The punchline is that virtually every LLM tested falls on the left side of the spectrum, and in two of the four charts, the LLMs drift towards libertarianism, too. (If that sounds strange to you, it shouldn't. Libertarianism comes in both a right-leaning and a left-leaning form. One of my graduate school advisors, Dr. Peter Vallentyne, is probably the world's foremost defender of left-libertarian political philosophy.)


Who would have guessed it? The AI systems we are using for business and personal life are liberal intellectual companions.


It's hard to know how this problem will get fixed. Perhaps it won't. Perhaps we'll end up with dozens of AI systems, each with its own values and goals. There will be a Chinese AI system steeped in communist 14-point theory and unwilling to answer questions about Chinese politics, polite AIs aiming for personal connection rather than informative conversation, and a FreedomGPT that's like your uncensored friend who says whatever is on his mind, politically correct or not.


In any case, it's important that we stop thinking about ChatGPT, Gemini, Copilot and other AI systems as politically neutral. They're not. We should bear that in mind when we rely on them to figure out what the world is like or what we ought to do.



*The terms 'pro-life' and 'pro-choice' are terrible for many reasons. I just use them here as matters of convenience.
