Why Do Polls Differ So Much on Trump Job Approval?

By Jonathan Draeger
Last updated June 19, 2025, 6:36 PM EDT

In the latest poll from InsiderAdvantage, conducted June 15-16 with 1,000 likely voters, President Trump enjoyed a plus-10 net job approval rating, with 54% approving and 44% disapproving. In contrast, a Quinnipiac poll taken June 5-9 with 1,265 registered voters found the president seriously underwater – with only 38% approving and 54% disapproving.

If both polls had been conducted using the exact same methodology, their stated margins of error suggest there would be roughly a 1 in 1.75 million chance of producing results this far apart. While some variation is expected due to random sampling, such a wide discrepancy is almost certainly driven by differences in polling methods, such as weighting, survey mode, and question design.
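How odds like that are computed is worth seeing. A standard back-of-the-envelope approach is a two-sample z-test on the approval shares; the sketch below (Python, standard library only) assumes simple random sampling and compares the two approval toplines. The exact odds depend on the assumptions made – approval share versus net approval, and how stated margins of error translate into standard errors – but any reasonable variant puts a gap this size far outside ordinary sampling noise.

```python
import math

# Approval toplines from the two polls cited above
p1, n1 = 0.54, 1000   # InsiderAdvantage: 54% approve, 1,000 likely voters
p2, n2 = 0.38, 1265   # Quinnipiac: 38% approve, 1,265 registered voters

# Standard error of each approval share, then of the gap between them
se1 = math.sqrt(p1 * (1 - p1) / n1)
se2 = math.sqrt(p2 * (1 - p2) / n2)
se_gap = math.hypot(se1, se2)

# Size of the gap in standard errors, and the two-sided probability of
# seeing a gap at least this large from sampling error alone
z = (p1 - p2) / se_gap
p_two_sided = math.erfc(z / math.sqrt(2))
print(f"gap = {p1 - p2:.0%}, z = {z:.1f}, p ~ {p_two_sided:.0e}")
```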

Poll Weighting

One key factor is which Americans are being polled. The InsiderAdvantage poll surveyed “likely voters,” a narrower group than the general U.S. adult population. Likely voter polls become more common as elections approach, because samples of registered voters (or all adults) are less representative of the people who will actually cast ballots. In the final RCP Average for the 2024 presidential election, 14 of 17 polls were of likely voters.

The Quinnipiac poll, by contrast, sampled registered voters, the same population surveyed by the other three polls in the final 2024 RCP Average.

However, a registered-voter sample doesn’t mean Quinnipiac simply asked 1,265 registered voters whether they approved of Trump’s job performance and reported the raw tallies of 38% approving and 54% disapproving. That result is shaped by weighting. After fielding the poll, pollsters adjust the results based on the demographics of those who responded. For example, if 75% of respondents were women, the pollster would assign greater weight to the 25% who were men to better reflect the actual makeup of the population.
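To make that concrete, here is a minimal sketch of the reweighting step, using made-up numbers for the group-level approval rates:

```python
# Hypothetical raw sample: 75% women, 25% men (approval rates are made up)
sample_share = {"women": 0.75, "men": 0.25}
approve      = {"women": 0.40, "men": 0.52}
population   = {"women": 0.51, "men": 0.49}  # target demographic mix

# Each respondent's weight is their group's population share divided by
# its sample share: underrepresented men get weight 0.49 / 0.25 = 1.96
weight = {g: population[g] / sample_share[g] for g in sample_share}

raw      = sum(sample_share[g] * approve[g] for g in approve)
weighted = sum(sample_share[g] * weight[g] * approve[g] for g in approve)
print(f"unweighted approval: {raw:.1%}, weighted: {weighted:.1%}")
```

In this toy example, weighting moves the topline from 43.0% to 45.9% without a single new interview.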

Weighting choices are one major source of differences between polls, and in election years they become even more consequential, because the “likely voter” universe doesn’t match national population statistics. If one pollster believes far more women will vote than another pollster does, the two will weight the same kind of raw data differently, producing an even wider range of results.
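The sketch below extends the toy example: the same hypothetical raw group results run through two different turnout models, producing two different toplines.

```python
# Same hypothetical group-level approval rates as above
approve = {"women": 0.40, "men": 0.52}

# Two pollsters, two beliefs about the electorate's composition
turnout_models = {
    "Pollster A": {"women": 0.58, "men": 0.42},
    "Pollster B": {"women": 0.48, "men": 0.52},
}

# Identical raw data, different turnout assumptions, different results
for pollster, mix in turnout_models.items():
    topline = sum(mix[g] * approve[g] for g in approve)
    print(f"{pollster}: {topline:.1%} approval")
```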

Method of Polling

Survey method can also contribute to differing results. Some pollsters, such as Quinnipiac, still use live interviewers asking questions over the phone. Others use different procedures, ranging from texting to web-based “panels” of previously selected respondents.

Polling results can also be influenced by seemingly subtle variables. For instance, even if equal numbers of men and women answer a phone poll, the sample may still skew toward people with regular work schedules, who are easier to reach at predictable hours. Longer polls with many questions, meanwhile, skew toward people with more free time. Pollsters such as Trafalgar Group have said they moved to a mixed method of polling because, on controversial questions or around controversial politicians, some respondents hesitate to voice an opinion to a live interviewer but will state it in a web- or text-based poll.

One concern for pollsters using web-based platforms is that the results could be biased in favor of younger people who are more comfortable with new technologies.

Question Design

Social scientists have long known that how pollsters design and word questions also affects polling results. For common questions like “Who are you going to vote for?” and “Do you approve of Trump’s job performance?” this is less of a problem, because the wording is largely the same from pollster to pollster. However, questions that delve into complex public policy issues can vary greatly – deliberately or inadvertently – depending on the precise language employed.

For example, when unrest broke out in Los Angeles in early June, YouGov asked: “Do you approve or disapprove of recent protests in Los Angeles against U.S. Immigration and Customs Enforcement (ICE) actions?” Among respondents, 36% approved and 45% disapproved. Had YouGov instead asked about recent “riots” in Los Angeles, the results likely would have been significantly different.

All of these factors can significantly change results, which explains how two polls can vary so widely. To reduce the influence of any single polling methodology, RealClearPolitics averages the polls in order to get a better sense of where public opinion actually stands.
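As a final illustration, averaging the two net approval figures from the top of this article shows the idea in miniature; the real RCP Average includes many more polls and its own inclusion rules.

```python
# Net approval (approve minus disapprove) from the two polls cited above
nets = {"InsiderAdvantage": 54 - 44, "Quinnipiac": 38 - 54}

# A simple average damps the effect of any one pollster's methodology
average_net = sum(nets.values()) / len(nets)
print(f"average net approval: {average_net:+.1f}")  # (+10 + (-16)) / 2 = -3.0
```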
