Newspapers were regularly churning out opinion polls this election - but they do have an agenda

Election aftermath: The pollsters

In the fourth part of a special week-long series of features, politics.co.uk looks at what the election tells us about opinion polls.

Come back tomorrow for our final piece on the political parties.

By Ian Dunt

Polling has a dodgy history. The dreaded subject of the 1992 election still sends shivers down pollsters’ spines. The final polls were out by eight per cent that year, underestimating Tory support by four per cent, overestimating Labour support by a similar margin, and thereby failing to predict a Conservative victory.

Even the 1997 Labour landslide reflects less well on the pollsters than many believe. They did not perform any better that year, generally overestimating the Labour lead by somewhere between six and eight per cent. But because they picked the right winner, there was no post-election soul-searching.

Polling is still big business, and the reliance on polls this time round was more substantial than in previous elections. The Sun ran a daily tracker poll by YouGov and several other papers were churning out opinion polls several times a week. The polls became the central aspect of media coverage of the election, taking up far more column inches than campaign events.

But the polls didn’t just reflect public opinion; they also helped to cement it. Rather ironically, it was a YouGov poll that revealed the extent to which that process was taking place. One survey found that if people thought the Liberal Democrats had a significant chance of winning the election, they would win 49 per cent of the vote, with the Tories on 25 per cent and Labour on 19 per cent – more than enough for a clear majority. When ‘Cleggmania’ hit, that kind of finding became politically important, because it suggested that good poll ratings for the Lib Dems would themselves help increase the party’s support.

But many of YouGov’s findings for the Sun went unreported, leading to a severe spat between the Independent and the tabloid. After the Independent promoted its redesign with an advertising campaign attacking Rupert Murdoch’s influence, James Murdoch, the media magnate’s son, and Rebekah Wade, chief executive of News International, were widely reported to have stormed into the office of the Independent’s editor, Simon Kelner. Days later the Independent published the unreported YouGov findings, as the two papers veered closer to an outright media war.

Peter Kellner, president of YouGov, is proud of the company’s record at the election. “The attacks were just silly,” he told politics.co.uk. “In the end our record proved we got it right. We were attacked at one point by the Telegraph for being pro-Labour because I’m known as a Labour supporter – and at other points we were attacked for being right wing because we were polling for News International. I think there was one point where we were being accused of being anti-Lib Dem because of a particular questionnaire.”

The implied accusations of political bias in newspaper editors’ use of surveys show the problems pollsters can find themselves landed with when they team up with newspapers. This is always the case, of course, but it becomes more noticeable as election time approaches and emotions rise.

‘Cleggmania’ was not down to political allegiance or individual polling results. It was recorded by all pollsters for a sustained period. The first indication of how wrong they all were came as polls closed on May 6th and the exit poll was released. For hours political pundits laughed at it, before we gradually realised the exit poll was right and everyone else was wrong. In the end, the Lib Dems actually lost seats.

“We’re all scratching our heads on this, none of us are quite sure,” Kellner says. “At YouGov we’re doing a lot of post-election work to explore this. I don’t know for certain what went wrong. One thing we don’t think is people lied. The exit poll was spot on. If it was people lying you’d expect the exit poll to have been overstated, just like the exit poll in ’92 understated the Tory victory.

“My guess is some young people wanted to vote Lib Dem in protest and got to the polling station and thought: ‘I’m choosing a government’.”

The particular characteristics of those saying they would support the Lib Dems are widely acknowledged to have played a role in the drastic overestimation of the party’s support.

Andrew Cooper from Populus also cites the quality of Lib Dem support. “Even at its peak some of the evidence from other poll data suggested their rating was quite frothy, in that it was disproportionately made up of younger voters, who are less likely to vote, and disproportionately made up of people who hadn’t voted in 2005,” he told politics.co.uk. “Throughout the campaign people who said they were going to vote Lib Dem were more likely to say they would change their mind. It was softer support. The way the polls worked, they ought to have captured that but they didn’t.”

It appears the weakness of Lib Dem support was evident in the final polls, which were already showing a swing away from the party. The final polls for Populus, for instance, took place over two days. The first day put the Tories on 37 per cent, Labour on 28 per cent and the Lib Dems on 27 per cent. The next day the Tories were on 38 per cent, Labour on 30 per cent and the Lib Dems on 24 per cent.

“The main evidence that we ended up overstating Lib Dems was because of this very late swing,” Cooper says. “It’s probable there were points in the campaign when the media frenzy about Nick Clegg created a climate where someone who hadn’t made their mind up got polled and just said that, without having to particularly mean it. It was just a response to a moment. Many who didn’t end up voting Lib Dem may have voted for other small parties, in an effort to vote against the mainstream.

“One of the challenges pollsters face is that most people are very poor predictors of their likelihood to vote. The Lib Dems have a proliferation of under-24s who insist they’re definitely, definitely, definitely going to vote and then actually don’t.”

But problems weren’t restricted to the Lib Dems. The Tories were consistently overrated and Labour was consistently underrated. “Traditionally it’s the opposite,” Ben Page, chief executive of Ipsos Mori, points out. “All of this stuff is slightly spurious because all the surveys have a margin of error of three to four per cent, and that’s assuming we’re creating perfect random samples. We have to do it in a few hours. A proper statistician would say you can’t calculate the margin of error, because they are not perfect random samples.”
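
For a rough sense of where that three-to-four per cent figure comes from: under the textbook assumption of a perfect random sample, a poll of 1,000 people carries a 95 per cent margin of error of roughly plus or minus three points. A minimal sketch of that calculation follows; the function name, sample size and vote share are illustrative, not figures from any poll cited here.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a proportion p from a simple
    random sample of n people. Real polls use quota samples and
    weighting, so, as Page notes, the true error can't be calculated
    this simply."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: a party polling 30% in a sample of 1,000 people.
print(f"+/- {margin_of_error(0.30, 1000) * 100:.1f} points")  # ~2.8 points
```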

The problem of imperfect random samples is just one of many. Analysts are also concerned about the process of extrapolation, where the data from a poll of, say, 1,000 people is used to calculate the national picture. Not only that, but the polls measure popular support, not seats.

“It’s one of the curiosities during elections, which is that we’re trying to measure something which is ultimately irrelevant – the national share of the vote, not seats,” Page says. “The relationship between the two is difficult to model. The crude way it’s often done is the uniform national swing. One thing we know is those models are always bound to be shaky, but this time we saw that to the Nth degree, because there was no national swing. We saw some very volatile local seats.”
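
To make the “crude” model Page describes concrete: uniform national swing takes the change in each party’s national vote share since the last election and applies it unchanged to every constituency’s previous result. A toy sketch, with all national and constituency figures invented for illustration:

```python
# Uniform national swing (UNS): apply the national change in each
# party's vote share identically to every seat. All figures invented.
national_last = {"CON": 33.0, "LAB": 36.0, "LD": 23.0}
national_poll = {"CON": 37.0, "LAB": 28.0, "LD": 27.0}
swing = {p: national_poll[p] - national_last[p] for p in national_last}

seat_last = {"CON": 38.0, "LAB": 42.0, "LD": 15.0}  # one hypothetical seat
projected = {p: seat_last[p] + swing[p] for p in seat_last}
print(projected, "->", max(projected, key=projected.get))
```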

The 2010 election threw this problem into sharp relief, because the results refused to present a pattern. Putney, for instance, saw a ten per cent swing to the Tories, but neighbouring Tooting, which only needed a six per cent swing, failed to overturn its Labour MP. It was as volatile an election as anyone could remember.

“I think this is a very interesting phenomenon,” Kellner suggests. “Here’s a guess: I think there was a lot more anti-political feeling because of expenses and other things. As a consequence, more people in many seats formed a specific local judgement about the party or the candidate. Before, being a good MP in an area with an active party was worth 500 or 1,000 votes. At this election it was worth between 2,000 and 4,000. If you look at some of the better Labour results – Gisela Stuart or Margaret Hodge, for example – it was because they had worked really hard as local MPs and got the local party working really hard. Local voters managed to offset the general disenchantment.”

Any inaccuracies were forgiven, however, thanks to the exit poll Ipsos Mori produced for the BBC, ITV and Sky, which was right on the money. Fellow pollsters gave Ipsos Mori full credit. “The exit poll was a piece of genius, and they were also spot on five years ago,” Cooper says. “They go to the same polling station each time and in effect simply measure the swing in that polling station. It produces a number of seats, not a share of the vote. One of its merits is that it doesn’t have to model the extent to which people are really likely to vote.”
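
A simplified, hypothetical illustration of the principle Cooper describes – not Ipsos Mori’s actual model: revisit the same polling stations as last time, measure the change in each party’s share at each station, and call individual seats from that measured change rather than from a national vote share. All station and seat figures below are invented.

```python
# Hypothetical sketch of the exit-poll idea: measure change at
# revisited polling stations, then project seats from that change.
stations = [  # (station, Con share last time, Con share this time)
    ("A", 40.0, 44.5),
    ("B", 31.0, 36.0),
    ("C", 28.0, 31.5),
]
# Average change in Conservative share across the revisited stations.
avg_change = sum(now - then for _, then, now in stations) / len(stations)

# Con margin over the incumbent last time (negative = Con behind).
seats = {"Seat1": -3.0, "Seat2": -6.5}
for seat, margin in seats.items():
    print(seat, "CON gain" if margin + avg_change > 0 else "no change")
```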

The accuracy of the exit poll is in many ways rather academic, because its methods can’t be replicated during a campaign, only on polling day. But it does reaffirm this type of poll as the most accurate indicator available before the actual results come in.

Even without the exit poll, pollsters can be relieved that despite allegations of political favouritism, a volatile electorate and wildly inaccurate projections of Lib Dem support, they actually had a pretty good election. After all, poll after poll strongly suggested a hung parliament, and that was indeed what happened. The fact that the Lib Dems and the Tories underperformed and Labour over-performed did not change the fact that pollsters got the general outline right.

“I think in defence of the polling community we said at the end that there would be a hung parliament with the Tories as the largest party,” Kellner says. “So we got the big story right and the secondary story wrong.”