Although polling data is omnipresent in our society, used not only by political campaigns but also by marketers, businesses, nonprofits, and governments, it takes a front seat in election season, when the candidates and the media are constantly jumping on polling numbers to create the next day’s headline.

This election year is no different. When the polls failed to correctly predict the results of Michigan’s Democratic primary, where Bernie Sanders edged out the favorite, Hillary Clinton, Americans started to worry about the accuracy of polling. But the experts — including many of those in central New Jersey, where modern polling was born — have been unperturbed. They know that polling rests on sampling assumptions that just might be wrong, and that every poll carries the possibility of statistical error.

The fundamentally unpredictable nature of polls may be a constant, but the changing nature of the polling industry itself has figured in the news more than it usually does in an election year. Poll results were critical in the early months of the crowded Republican presidential primary campaign, when the data was used to determine which candidates appeared in the debates televised in prime viewing time and which ones were relegated to the “kids’ table” debates that aired in earlier, less desirable time slots.

The industry itself has also undergone a reexamination of its techniques because of the upheaval in the way Americans communicate, as they have largely abandoned landlines for mobile phones. That upheaval led the granddaddy of American polling organizations, the Princeton-based Gallup Poll, to withdraw from its daily tracking of how candidates are faring with the voters and concentrate instead on analyzing the issues (see sidebar below).

And the current election season has seen the emergence of several pollsters into the national spotlight, especially the Monmouth University Polling Institute based in West Long Branch. Though just a little over 10 years old, the Monmouth organization has broadened its reach to the national arena, and played a part in determining which Republicans ended up in which debates last fall. The founding director of the Monmouth Institute, Patrick Murray, will elaborate on polling issues on Thursday, April 7, when he speaks at the monthly luncheon of the Princeton Chamber of Commerce.

Of the Michigan polling errors, Murray says simply: “Eighteen other states were right; Michigan was a one-off.” He attributed the miscalculation to a “combination of the cell phone sample in Michigan that was not as robust and a higher than expected turnout among young people.” He added that there was “a lot of movement in Michigan at the last minute.” He does not pin the blown Michigan call on any one flawed sampling method because, he says, “every poll was off to some degree.”

Poll aggregator Sam Wang, professor of molecular biology and neuroscience at Princeton University, has organized the Princeton Election Consortium — election.princeton.edu — an initiative that seems out of line with his academic credentials. But the same probability and statistics Wang uses to analyze complex experimental data also let him provide informed analysis of national elections — largely by analyzing polling.

Wang attributes the mistaken predictions in Michigan to the fact that pollsters generally use past elections to decide how much to weight each voter. “In 2008, Michigan had half of its delegates stripped because of a rules violation, and Barack Obama was not even on the primary ballot,” Wang writes in an E-mail. “Consequently pollsters had an inaccurate model of who would vote this year. That’s why their polling error was restricted to Michigan.”

Wang adds an example of another, subtler pollster error. “Pollsters seem to underestimate Ted Cruz voters’ likelihood of voting. So the Trump-versus-Cruz race is somewhat closer than analysts realize. Even so, Trump is still nearly certain to get a majority of delegates, and therefore the nomination.”

Frank Newport, editor-in-chief of the Gallup Poll, notes that Michigan has an open primary, which made it harder to isolate who was going to vote on the Democratic side. Because younger voters tended to favor Sanders and black voters Clinton, “a modest differential in turnouts for those groups has a big effect,” he says. This is particularly true because the number of Democrats voting in Michigan was relatively small.

“Primaries are much more challenging for pollsters than the general election,” Newport says. By election time, a significant group of the electorate is locked in, and pollsters are trying to estimate turnout, as well as any small group that may vacillate. “In a primary it is easier for lots of people to change their minds at the last minute.”

To understand where the industry is today, a sense of history is important. Early Gallup polls were done in person through a sampling methodology that identified certain neighborhoods, Murray explains. Information for the General Social Survey, which has monitored trends and constants in social characteristics and attitudes in U.S. society since 1972, is still gathered in person.

Telephone interviewing did not become the standard until the 1960s, when more than 90 percent of households had phones. In fact, Murray says, “as recently as 10 years ago, almost all surveys were by landline telephone, except for specific population surveys.”

Then came cell phones, which created a quandary for pollsters. “If we continued to use only landlines, we would be missing a skewed segment that is younger and more urban,” Murray says. Furthermore, lists of existing cell phone numbers were not initially available, although today they are pretty robust.

The biggest challenge to using cell phones is a federal law created to protect cell phone users from marketers — back when people paid by the minute for incoming phone calls. This law makes it illegal to load cell phone numbers into a computer to be selected randomly and pre-dialed, as is done with landlines. Because of this law, which also applies to legitimate surveys, people must be hired to individually dial cell phone numbers, which raises costs for pollsters.

Survey response rates have also dropped drastically since Murray’s college days in the 1980s, when a university poll would get response rates of 80 percent or higher.

The absence of caller ID in the 1980s helped bolster response rates. “It was in the days before market research, and people would pick up the phone because they didn’t know who was on the other end,” Murray says.

Telephone response rates started to decline in the 1990s, Murray says, and today hover around 10 percent. People tend not to pick up the phone if they don’t know who is calling, but once they answer, there are “not more people who refuse to be interviewed.”

A final challenge involves creating a sample that combines landlines, which represent households, with mobile phones, which represent individuals.

At the Monmouth University Polling Institute, where Patrick Murray came in as founding director after a national search in 2005, the initial mission, he says, was “a fairly blank slate.” It started as a statewide poll, in partnership with the Asbury Park Press and other Gannett newspapers in New Jersey, with a full-time staff of three as well as student workers.

In 2010 the institute started dipping its toes into other states, looking at different Congressional races, and also did regional polling on coastal management issues. That year PolitickerNJ.com listed Murray on its “power list” of the 100 most politically influential people in the state.

In 2012 Monmouth began national polling around the presidential election. The next year Monmouth was rated the most accurate in predicting Chris Christie’s 2013 landslide re-election and the U.S. Senate election that took place three weeks earlier. Monmouth earned an A- rating from FiveThirtyEight.com, the prediction blog run by Nate Silver.

In 2014 it added some state races around the country. When it sees a potential for student learning, the Monmouth Institute also does polling for government and nonprofits.

Two years ago Monmouth’s new president, Paul R. Brown, came into office, Murray says, “with the idea that Monmouth has a lot of resources that are national, and we should make a commitment to having a national audience and making sure that national audience is aware of the resources that Monmouth has.” Two resources were mentioned explicitly: the polling institute and the basketball team.

The basketball team this year enjoyed a 28-8 record, defeating UCLA, Notre Dame, USC, Georgetown, and Rutgers, won its regular season conference championship, and competed in the National Invitation Tournament, losing in the second round.

The Monmouth University Polling Institute has also earned a place in the national spotlight. The Monmouth poll has played a role in the network news shows’ decisions on the debate line-ups. And Monmouth director Murray has even become a part of the partisan political debate. After a Monmouth poll last summer that showed Gov. Chris Christie losing support in New Hampshire, the candidate took a shot at Murray, who uses Twitter under the handle @PollsterPatrick: “Just look at Patrick Murray and his tweets. There couldn’t be a less objective pollster about Chris Christie in America,” Christie said.

In his work Murray finds that most journalists are not very data savvy. “They look for the most fantastic polling numbers,” he says. “You can do interesting stuff about what issues are driving the electorate and what is concerning them back home, but the headline will always be who’s ahead in the horse race.”

Media sources vary in how particular they are about polling methodology. “There are some media that will ask very specific questions about your methodology before they report you,” Murray says. Most media ask about the margin of error and how people are interviewed. ABC News also asks Monmouth where it gets its voter lists, what its response rates are, and what its percentage of bad telephone numbers is.
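The margin of error that journalists ask about is driven mostly by sample size. Here is a minimal sketch of the standard worst-case calculation for a simple random sample — an illustration of the textbook formula, not any one pollster’s in-house method:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion p estimated from a
    simple random sample of n respondents. p = 0.5 is the worst case,
    which is the figure polls conventionally report."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll: about +/- 3.1 percentage points.
print(round(100 * margin_of_error(1000), 1))
```

In practice pollsters also adjust for weighting and survey design, so published margins can run somewhat higher than this simple formula suggests.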

Yet, in Murray’s experience, very few look at how questions are asked. “Print media is a medium of language and use of words,” he says. When journalists ask him why one poll says one thing about the Supreme Court nominee and another may claim something that seems contradictory, Murray asks, “Did you read the question they asked and how it was phrased?”

Murray developed his sense of how to write questions through two decades of experience. While in Washington for a semester, he landed a part-time job interviewing for a pollster. “I got firsthand understanding of how that population unraveled,” he says. “When you understand the interaction between the interviewer and respondent, it makes you better at writing questions.”

To understand how people actually talk, he goes out into the world to listen to the language people use to talk about issues. “Diners are the best place to eavesdrop — you hear the exact words and phrases they are using,” he says. In fact this year he went to Iowa and New Hampshire to learn exactly what people were thinking and how they were talking about it.

Citing the polling after Merrick Garland’s nomination to the Supreme Court was announced, Murray says that most polls ask respondents whether or not they think he is qualified to be a Supreme Court justice, and people give an answer without knowing anything about him, based instead on their opinions of President Obama and their party affiliation.

In its own poll on the question, Monmouth first asked interviewees: Do you think he is qualified, or haven’t you heard enough about him to say? To the half who did not know enough, it then asked: If you had to make a guess, what would it be?

“We did not push people into making a choice that they were completely uninformed about making,” Murray says. “We felt if we did, it would be more about Obama who appointed him rather than about the Supreme Court nominee himself.”

Moving to how surveys are used, Murray says, “Survey research writ large is an extremely important measurement tool.”

The Monmouth University Polling Institute just completed a poll for the Division of Vocational Rehabilitation Services, to better understand levels of satisfaction among its clients. Polls are also used to measure an organization’s image and awareness, to do needs assessments, or to gauge perceptions among a particular group.

The NCAA recently sponsored a study at Monmouth titled “Consent Communication: What Does it Mean for Student-Athletes?” looking at awareness of sexual consent among college students. It found some differences between athletes and non-athletes and suggested some interventions that can help improve awareness that “yes means yes” and that lack of a “yes” means “no.”

In 2011 an Institute poll on civic engagement in New Jersey had many on-the-ground consequences in the state. “It led to some questions being raised about how people obtain information on what is going on at that local level that matters most to their pocketbooks,” Murray says. Since most New Jerseyans seek local information online at the 500-plus municipal websites, he and Kathryn Kloby, assistant professor in the department of political science and sociology, studied how easy it was to obtain information from each website.

Working with 20 students to analyze the sites, they assigned one set of teams to look for 100-plus pieces of information on each site and another to time how quickly they could find key information. The report ranked the websites, and the New Jersey League of Municipalities sent it to all Jersey municipalities to use as a benchmark for improvement. Murray says the report has led to actual improvements in different towns.

Murray also shared what he saw as a unique polling effort by the Institute. “After [Hurricane] Sandy hit the state, we recruited a panel of the hardest-hit victims. We started with about 1,500 and have been tracking their progress every year.”

The Gallup Poll: Another major polling powerhouse with Princeton roots is, of course, the Gallup Poll. Whereas the Monmouth Institute has increased its focus on national primary polling, Gallup is devoting fewer resources to tracking the traditional horse race — that is, if the election were held today, who would you vote for? Instead, Newport says, Gallup has decided to help “move the democracy forward” and “put our primary focus on the public’s perception of the issues involved in the election and evaluating what the candidates are proposing and how the public is reacting to candidates and why.”

In particular, Gallup has looked at Americans’ responses to various candidate proposals.

At the recent conference of AIPAC, the American Israel Public Affairs Committee, Senator Ted Cruz proposed that the United States recognize Jerusalem as Israel’s capital and move its embassy there from Tel Aviv. “What we found in our research is that although that issue has immense importance in the Middle East, for the average U.S. citizen it draws a blank,” Newport says. Gallup found that 24 percent agreed on moving the embassy to Jerusalem, 20 percent disagreed, and 56 percent did not know enough to respond. It also found that Republicans tilt toward favoring the proposal and sympathize more with Israel, whereas Democrats lean toward opposing it and sympathize more with the Palestinians.

When Gallup looked at the American public’s reactions to Cruz’s proposal of a 10 percent flat tax, Gallup found “a tepid response” and that “it is not a strong priority for Americans.”

As to Cruz’s anti-government proposals to shut down numerous departments, agencies, and other programs, Newport says, “It draws a total negative. Americans’ relationship to their government is complex. Their ire is mainly focused on Congress, and the public has quite a bit of recognition that the federal government has a role to play in our lives.”

Terrorism, Newport continues, is not a top concern of Americans, although it does rise after terrible bombings. Immigration, however, is an issue where Americans, even though many trace their heritage back to refugees, have almost uniformly rejected accepting refugees; the exception came in 1999, when 66 percent favored taking refugees from Kosovo. Americans rejected taking children from Germany in 1939, anyone from Europe in 1946 and 1947, and refugees from Hungary in 1958, from Vietnam in 1979, and from Syria in 2015.

Gallup’s primary focus is not public opinion polling but involves interviewing and consulting with business, industry, and government, for example, on how to have more engaged employees or customers.

ORC International (formerly Opinion Research Corporation and based at 902 Carnegie Center) has for the past eight years worked with CNN to conduct national, state-level, and post-debate, post-speech polls, as well as research to support CNN documentaries. Dave Murray, senior vice president of U.S. marketing, says that the CNN ORC International Poll tries “to highlight insights from Americans’ perceptions and attitudes about candidates, topics, and issues.”

The CNN Poll is typically conducted at least monthly from ORC International’s telephone facility in the United States, using a dual frame random sample of landline and mobile phone numbers. To assign percentages to landline versus mobile numbers, ORC uses data from the National Health Interview Survey. CNN’s national polls usually include questions regarding perceptions of the president, Congress, and others in the political world or of particular issues. Not too long ago, for example, they included several questions about the Confederate flag and what it symbolized for people.
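The mechanics behind that dual-frame adjustment amount to post-stratification: respondents are weighted so that the sample’s mix of phone statuses matches population benchmarks such as the NHIS estimates. A minimal sketch of the general technique, with entirely hypothetical numbers rather than ORC’s actual figures or code, might look like this:

```python
# Population shares by phone status (hypothetical NHIS-style benchmarks)
# versus the raw counts that happened to land in the sample.
population_share = {"cell_only": 0.48, "dual": 0.46, "landline_only": 0.06}
sample_counts    = {"cell_only": 300,  "dual": 600,  "landline_only": 100}

n = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / n)  # benchmark / sample share
    for group, count in sample_counts.items()
}

# Each respondent in a group gets that group's weight when tabulating
# answers: cell-only respondents are weighted up, landline-only down.
print(weights)
```

Tabulating answers with those weights keeps cell-only adults, who are harder and costlier to reach, from being underrepresented in the published numbers.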

Around Martin Luther King Day, the CNN polls always ask about discrimination, formulating the questions as previous CNN pollsters did in order to track changing perceptions over time, particularly among African-Americans, younger adults, and millennials.

Beyond political polling, ORC International’s primary polling effort is the CARAVAN Omnibus, a shared-cost survey. Multiple clients include their own questions, often about perceptions of their company and industry or how behaviors and attitudes have changed in a given category, but all share in the costs of common filtering questions like gender, age, and region.

ORC International also provides its customers business intelligence insights, which might include surveys of a company’s customers or employees, changes in the marketplace, new product opportunities, or insights into how to improve products.

The Princeton Election Consortium, organized by Sam Wang, the molecular biology professor, has a very different approach to polling. Recognizing that each polling method has its flaws, he combines them in hopes that their errors cancel one another out. He merges data from many existing polls to do a meta-analysis of the numbers. He explains his approach and responds to questions in an E-mail.

In making single-state predictions, Wang says, pollsters vary in how they identify likely voters, both in how they contact voters and in how they weight the data. “As a group they do a great job in presidential years,” Wang writes. “I view my main task as coming up with a way to collect their data in an unbiased manner.” He places a big priority on simplicity and transparency, and his approach involves “taking the median — the middle value when you arrange all the numbers in order.”

This approach, Wang writes, “does a great job, and doesn’t give too much weight to crazy outlier results. Using the median, fancy corrections of individual polls are unnecessary.”
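To see why the median resists outliers, consider a minimal sketch with made-up numbers rather than any real poll series:

```python
from statistics import mean, median

# Hypothetical margins (candidate A minus candidate B, in points)
# from six polls of the same race; 15.0 is a crazy outlier.
polls = [4.0, 2.5, 3.0, 3.5, -1.0, 15.0]

print(mean(polls))    # 4.5  -- dragged upward by the outlier
print(median(polls))  # 3.25 -- indifferent to how extreme the outlier is
```

Because the median depends only on the middle of the sorted list, a single wild poll can be arbitrarily far off without moving the aggregate, which is why no house-effect corrections to individual polls are needed.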

He also takes a probabilistic approach that combines states with each other to consider trillions of outcomes all at once. “Taking all scenarios into account helps reduce uncertainty, and is a big part of why I came within one electoral vote of the actual outcome in 2008, and missed only one state in 2012,” Wang writes.
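The compounding step can be illustrated with a small dynamic-programming sketch — an illustration of the general technique under assumed per-state win probabilities, not Wang’s actual code:

```python
def ev_distribution(states):
    """states: list of (p_win, electoral_votes) pairs, one per state.
    Returns dist where dist[k] is the probability of winning exactly
    k electoral votes, treating states as independent."""
    total_ev = sum(ev for _, ev in states)
    dist = [0.0] * (total_ev + 1)
    dist[0] = 1.0  # before any state: zero electoral votes with certainty
    for p_win, ev in states:
        new = [0.0] * (total_ev + 1)
        for k, prob in enumerate(dist):
            if prob:
                new[k] += prob * (1.0 - p_win)  # candidate loses this state
                new[k + ev] += prob * p_win     # candidate wins this state
        dist = new
    return dist

# Hypothetical three-state example; a real map has 51 entries, and the
# same loop then implicitly covers all 2**51 state-by-state outcomes.
dist = ev_distribution([(0.9, 29), (0.5, 18), (0.3, 6)])
print(sum(dist[27:]))  # probability of a majority (27 of 53 votes here)
```

Each pass through the loop folds one more state into the running distribution, so the full scenario space never has to be enumerated explicitly.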

“In Presidential election years, my approach does as well as, or better than, the competition,” Wang writes, adding that “the main reason this works is that pollsters as a community do a great job.” Wang believes that not being a political scientist has been a big advantage to him this year because it helps him focus purely on voter opinion data. “Many political scientists were convinced that Donald Trump had little chance at the nomination this year,” Wang writes. “Careful examination of polling data, and Republican Party rules, led me to see back in January how strong his position was.”

Regarding the approach of Wang and others who aggregate polling, Newport observes that this is “statistically not a bad idea in theory, if you aggregate scientific measures, but there are a lot of caveats, like the quality of polls you put in.”

Patrick Murray grew up in Camden County in South Jersey, where his mom stayed at home with the children and then worked in a department store. His father was in the pressroom of the Philadelphia Bulletin.

Murray wrote a column about politics while he was at Lafayette College, but his entry into polling was his part-time job in Washington at Peter Hart Research. “I found it fascinating, calling people all over the country and asking who they were going to vote for and why,” he says.

He graduated from Lafayette in 1987 and several years later decided to go to graduate school in political science at Rutgers, earning his master’s in 1993. “I thought I was going to be a political science professor,” he says. “Then I realized that everything I was interested in had some survey research component to it — I was more interested in practical politics than theoretical political science.”

While a graduate student, Murray started working as an assistant at the Eagleton Poll at Rutgers University, and left 10 years later as assistant director of the Star Ledger/Eagleton Rutgers Poll. He also did client research, needs assessment, and program evaluation for nonprofits and government agencies.

During 2004 Murray worked for the Rutgers Bloustein School of Public Policy as it was starting its survey research center. In 2005 he moved to the Monmouth University Polling Institute.

The election season brings a media jamboree of polling results, but whether these polls affect voting remains largely a matter of opinion.

Murray suggests that polling may have some small effect on strategic voting. In Iowa he went back and re-interviewed voters he had spoken to the week before the caucus and found that some people who were initially backing candidates registering in the low single digits switched because they wanted to back someone who had a better chance of getting the nomination. Mike Huckabee, he says, may have lost a point in Iowa because of people who decided to back Marco Rubio, but it didn’t seem to make a big difference in the outcome.

Dave Murray of ORC suspects that polling does have some impact on voting. “If your candidate is way ahead, you may choose not to vote because it’s not convenient. There is no evidence, but it wouldn’t surprise me,” he says.

Newport notes that both effects highlighted by George Gallup — a “bandwagon effect” among people who want to vote for the winner and an “anti-bandwagon effect” — are hard to document. “I think voters take into account a wide variety of types of information in deciding who to vote for,” Newport says, from scurrilous ads and robocalls to the opinions of neighbors. “Philosophically it doesn’t bother me if it [polling] does have an impact.”
