I’ve been catching up on sleep after the election, but this is just to add a brief, post-election round-up of how the polls performed. In 2015 and 2017 the equivalent posts were all about how the polls had got it wrong, and what might have caused it (even in 2010, when the polls got the gap between Labour and the Conservatives pretty much spot on, there were questions about the overstatement of the Liberal Democrats). It’s therefore rather a relief to be able to write up an election when the polls were pretty much correct.
The majority of the final polls had all the main parties within two points, with Ipsos MORI and Opinium almost spot on – well done both of them. The only companies that really missed the mark were ICM and ComRes, who understated the Tories and overstated Labour, meaning they had Conservative leads of only 6 and 5 points in their final polls.
My perception during the campaign was that much of the difference between polling companies showing small Conservative leads and those showing bigger leads was down to how, and whether, they accounted for false recall when weighting by past vote – I suspect this may well explain the spread in the final polls. Those companies that came closest were those who either do not weight by past vote (MORI & NCPolitics), adjusted for it (Kantar), or used data collected in 2017 (Opinium & YouGov). ComRes and ICM were, as far as I know, both just weighting recalled 2017 past vote to actual 2017 vote shares, something that would risk overstating Labour support if people disproportionately failed to recall voting Labour in 2017.
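To see why weighting recalled vote to actual shares can go wrong, here is a toy sketch of the mechanism. All the numbers below are made up for illustration (the loyalty rates and the 15% misrecall figure are assumptions, not anything from the pollsters’ tables): if the 2017 Labour voters who fail to recall their vote are also the least loyal ones, then upweighting the (disproportionately loyal) correct recallers back to the true 2017 share inflates current Labour support.

```python
# Hypothetical numbers, purely illustrative of the false-recall mechanism.
# True 2017 vote shares and current Labour support within each group:
pop = {
    "Con":   {"share_2017": 0.435, "lab_now": 0.03},
    "Lab":   {"share_2017": 0.410, "lab_now": 0.70},
    "Other": {"share_2017": 0.155, "lab_now": 0.15},
}
true_lab = sum(g["share_2017"] * g["lab_now"] for g in pop.values())

# Assume 15% of 2017 Labour voters misrecall their vote and land in the
# "Other" recall group - and those misrecallers are the least loyal
# (30% Labour now, vs 77% among correct recallers; the two average ~70%).
misrecall = 0.15 * pop["Lab"]["share_2017"]
recalled = {
    "Con": pop["Con"]["share_2017"],
    "Lab": pop["Lab"]["share_2017"] - misrecall,
    "Other": pop["Other"]["share_2017"] + misrecall,
}
lab_now = {
    "Con": 0.03,
    "Lab": 0.77,  # only the loyal correct recallers remain in this group
    "Other": (pop["Other"]["share_2017"] * 0.15 + misrecall * 0.30)
             / recalled["Other"],
}

# Weight each recall group back up/down to the actual 2017 result:
weights = {k: pop[k]["share_2017"] / recalled[k] for k in pop}
weighted_lab = sum(recalled[k] * weights[k] * lab_now[k] for k in pop)

print(f"true Labour support:    {true_lab:.1%}")      # 32.3%
print(f"weighted poll estimate: {weighted_lab:.1%}")  # 35.9% - overstated
```

The weighting scheme upweights the recalled-Labour group (weight ≈ 1.18 here), but that group now contains only the voters loyal enough to remember, so the estimate comes out around three and a half points too high on these assumed numbers.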
The YouGov MRP performed less well than in 2017. The final vote shares it produced were all within 2 points of the actual shares, but the seat predictions showed a smaller Tory majority than happened in reality. Ben Lauderdale who designed the model has already posted his thoughts on what happened here. Part of it is simply a function of vote share (a small difference in vote share makes a big difference to seat numbers), part of it was an overstatement of Brexit party support in the key Conservative target seats. Whether that was having too many Brexit supporters in the sample, or Brexit party supporters swinging back to the Tories in the last 48 hours will be clearer once we’ve got some recontact data.
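The point about vote share leverage can be shown with a toy uniform-swing calculation (the majorities below are invented, not real seats): every seat whose majority is less than twice the swing changes hands, so an error of a point or two in vote share moves a whole tranche of marginals at once.

```python
# Toy illustration (made-up majorities) of why a small vote-share error
# makes a big difference to seat numbers under uniform swing.
majorities = [0.5, 1.2, 1.8, 2.5, 3.1, 4.0, 6.5, 9.0]  # % majorities

def seats_flipped(swing_pts):
    # A uniform swing of s points closes any majority below 2*s points.
    return sum(1 for m in majorities if m < 2 * swing_pts)

print(seats_flipped(1.0))  # 3 seats flip on a 1-point swing
print(seats_flipped(2.0))  # 5 seats flip on a 2-point swing
```

On these hypothetical marginals, a single extra point of swing nearly doubles the number of seats changing hands, which is why the MRP’s seat total was more sensitive than its vote shares.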
Finally, the 2019 election saw a resurgence of individual constituency polling, primarily from Survation and Deltapoll. Constituency polling is difficult (and I understand has become even more so since the advent of GDPR, as it has reduced the availability of purchasable databases of mobile phone numbers for specific areas), and with small sample sizes of 400 or 500 it will inevitably be imprecise. Overall, though, it performed well this time – particularly given that many of the constituency polls were conducted in seats you would expect to be hard to poll: unusual seats, or places with independents or high-profile defectors standing. David Gauke’s support was understated, for example, and in Putney the constituency polling overstated Lib Dem support at the expense of Labour. But in many places it performed well, particularly the Chelsea & Fulham, Wimbledon, Finchley and Esher & Walton polls.
And with that, I’m off for a nice Christmas break. Have a good Christmas and happy new year.