There has been much commentary since Saturday night about how wrong the polls were and whether polling needs to change.
One of the main goals of the Voter Choice Project was to figure out ways to poll better, because polling hasn’t been great for some time. From theories of ‘shy’ voters, to sampling error, to random commentators and political scientists who don’t study elections just making up nonsense about what happened, there’s plenty around for you to choose your own preferred explanation.
The Voter Choice Project is a panel study. It repeatedly surveyed the same group of voters throughout the election, including how they finally voted, to try to understand vote decisions; specifically how, why and when people change their vote. There was a late shift – around 35% of our respondents shifted their first or lower preference intention in the last week. But it went in both directions, which makes it hard to understand and analyse. We’re going to take the time required to understand what voters are saying and what happened.
The regional differences made this election very difficult to poll. We knew that Queensland was going to the LNP with a thump, that NSW was holding with likely no net change, and that Victoria was trending toward the ALP. This was particularly frustrating for people like myself who kept saying Labor would not win a majority, hammering that there was no national trend, and pleading with people to ignore the 2PP and stop using calculators. We do not have a national electorate. We have 151 House electorates, and you need to win a majority of seats, not a majority of the vote.
In truth, there wasn’t even a consistent trend within states. When I was asked in the final days before the election if Labor would pick up an additional 5 seats in WA as the bookies predicted, my answer was a flat no. I’d give them Swan only. (And I’m still a little surprised Hannah Beazley didn’t get there given the very strong cross-party support.) Canning was clearly going to the Liberals, and Hasluck and Pearce were holding for their incumbent Liberal members, while there was a big swing against the Liberals in Curtin, as you’d expect with the loss of Julie Bishop.
Were our numbers ‘wrong’? Yes. I’m adult enough to own it, analyse it, and learn from it. I had a consistent false signal of absurdly high support for the Australian Conservatives that never materialised, and independent support that crashed harder than a demolition derby right on the eve of the election.
We were also trying something different to generate a two party preferred number: actually asking people how they intended to preference a list of six hypothetical candidates, and in the last two surveys, their actual candidates for the House of Representatives. So our 2PPH6 number always varied from the 2PP number of the established polls, which used calculations based on the 2016 preference flows.
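The difference between the two approaches can be sketched in a few lines. This is illustrative only: the party labels, flow percentages, and respondent rankings below are invented, not Voter Choice Project data. Method A mimics the established polls, applying an assumed 2016-style preference flow to minor-party primaries; method B mimics the panel approach, following each respondent's own stated ranking until it reaches a major party.

```python
# Assumed flows of minor-party preferences to the ALP (illustrative numbers,
# standing in for the 2016-election flows the established polls used).
FLOW_TO_ALP = {"GRN": 0.82, "ONP": 0.35, "OTH": 0.50}

def tpp_from_flows(primaries):
    """Method A: primaries is a dict of party -> primary vote share.
    ALP 2PP = ALP primary + assumed share of each minor party's primary."""
    alp = primaries["ALP"]
    for party, share in primaries.items():
        if party in FLOW_TO_ALP:
            alp += share * FLOW_TO_ALP[party]
    return alp

def tpp_from_rankings(respondents):
    """Method B: respondents is a list of stated preference orderings
    (highest first). Walk each ranking until it hits a major party."""
    alp = 0
    for ranking in respondents:
        for party in ranking:
            if party in ("ALP", "LNP"):
                if party == "ALP":
                    alp += 1
                break
    return alp / len(respondents)

primaries = {"ALP": 0.34, "LNP": 0.41, "GRN": 0.10, "ONP": 0.05, "OTH": 0.10}
print(tpp_from_flows(primaries))

sample = [["GRN", "ALP", "LNP"], ["LNP"], ["ONP", "LNP", "ALP"], ["ALP"]]
print(tpp_from_rankings(sample))
```

The two methods only agree if the assumed flows match how respondents actually rank the candidates, which is precisely the assumption that breaks down when preference behaviour shifts between elections.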
Here’s what I (and everyone) did wrong: translating those preferences into a binary ALP v Coalition figure. That’s not reality. That sees nearly all votes and preferences in ALP v Greens seats like Melbourne and Grayndler go to the ALP, and most preferences from non-incumbents in Coalition v Independent seats like Warringah or New England go to the ALP, when the ALP never had a chance. In all, there were 15 seats that were not ALP v Coalition battles.
Recoding the last survey before the election for both Incumbent v Challenger and ALP v Other, without changing any of the other weightings, this is what happened: a reversal of the likely outcome. The Incumbent v Challenger number indicates a change of Government is unlikely (and is extraordinarily close to the actual 2PP vote count); the ALP v Other figure shows the true level of ALP support.
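A minimal sketch of that recode, with entirely invented respondent data. Each record holds the candidate a respondent's preferences finally flowed to in their seat, plus flags for whether that candidate is the sitting member and whether they are ALP; the same records then yield an Incumbent v Challenger figure and an ALP v Other figure instead of a forced ALP v Coalition binary.

```python
# Hypothetical respondent records: (final_choice_party, is_incumbent, is_alp).
respondents = [
    ("ALP", False, True),   # ALP challenger in a classic seat
    ("LIB", True, False),   # Liberal incumbent
    ("GRN", False, False),  # Greens challenger (non-classic seat)
    ("IND", False, False),  # independent challenger (non-classic seat)
    ("ALP", True, True),    # ALP incumbent
]

def incumbent_share(rs):
    """Share of respondents whose final choice was the sitting member."""
    return sum(is_inc for _, is_inc, _ in rs) / len(rs)

def alp_share(rs):
    """Share of respondents whose final choice was the ALP candidate,
    against everyone else -- Coalition, Greens, and independents alike."""
    return sum(is_alp for _, _, is_alp in rs) / len(rs)

print(incumbent_share(respondents), alp_share(respondents))
```

The point of the recode is visible even in this toy data: the Greens and independent respondents no longer get silently folded into an ALP column they never belonged in.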
These alternative concepts do not upend what is known about Australian elections; the history of psephology in Australia has always been framed as ALP v other. However, either concept requires acknowledging that this is not really a two party system anymore, and correct analysis requires properly accounting for those seats that are not contests between the two major parties. This simple adjustment to current practices is not that simple, and raises some curly questions… how do we predict which seats will be non-major-party contests (beyond the six electorates where the incumbent is not a major party candidate), and what do we do with the genuine three cornered contest?
While the Incumbent v Challenger number is closer to the actual outcome, I believe the ALP v Other number is more accurate, as there was a late shift in sentiment. Polls are a snapshot of the point in time they were taken – if you want a prediction, then you need to get into some predictive mathematics, but I think I’d lose you if I started talking about modelling or Bayesian inference. However, allowing for the late shift is something that should be factored into any reporting of polls – and that requires accepting that the Australian electorate is not, as has consistently been believed since Don Aitkin’s seminal work in the 1960s, stable.
So what did happen in the final days to shift the sentiment? Four things have been identified from the Voter Choice Project research so far.
- Hawke’s death which stirred voters up;
- Shorten and Bowen telling people not to worry about franking credits, which of course made more people look at the policy that had already caused that enormous slump in the primary vote;
- the odd resurfacing of rape allegations against Bill Shorten due to an attempt to get the case re-opened on the Wednesday before the election – not widely reported, but spread through right wing online spaces (especially Facebook) like wildfire and shifted lower preferences; and,
- the dominant narrative of an expected landslide for Labor (with little attempt to dampen expectations) causing many voters to vote for the Coalition to stop the landslide.
Which brings us back to the 2PP, given it was the incorrect interpretation of the 2PP numbers that created the ‘landslide’ narrative. When Malcolm Mackerras did his work on 2PP and the pendulum back in the 70s, it was sharply criticised at the time for concealing the true rate of change in the electorate. His PhD thesis was rejected. Twice. But the damage was done: the media had latched onto the 2PP and the pendulum as an easy tool to explain and predict elections.
And if you have a consistent national trend, and a stable electorate, you can usually get away with it. Not this time.
Most of the polling for this election (especially Ipsos) wasn’t that far from what was going on. Every poll except Newspoll had Labor’s primary vote at 33-34% two weeks out, but there was a lot of bounciness in the numbers. And, as I commented before the election, the lack of narrative for the election was causing a chaotic range of phenomena from huge pre-poll, to soft voting intentions, to a big late shift.
Late shifts are becoming increasingly common. Social media has changed the landscape: people decide later due to the abundance of information at their fingertips, are more likely to conceal or even deny their actual voting intentions if their party is not popular (conditioned by the abuse they cop online), and the capacity to influence voters right up until the last minute is greater. How do we adapt as pollsters?
The fundamental science of polling is still solid. Verifying respondents as being on the electoral roll does cull a lot of the noise. More frequent or rolling polls will capture the late shifts, particularly if we do not try to correct them. However, the public will not deal well with more frequent polling numbers, so methods of smoothing and less frequent releases will still need to be deployed to ensure the numbers do not create instability or unwarranted momentum.
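One simple smoothing option is a trailing moving average over the rolling series, released less often than the underlying poll is run. The weekly numbers below are invented for illustration; this is one possible smoother, not a claim about what any pollster actually uses.

```python
def moving_average(series, window=3):
    """Trailing moving average: each point is the mean of the last
    `window` observations (fewer at the start of the series)."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical weekly primary-vote readings from a rolling poll.
weekly = [34.0, 33.0, 35.5, 33.5, 36.0]
print(moving_average(weekly))
```

The trade-off is the one named above: the smoother damps the bounce that would feed an unwarranted-momentum narrative, but it also lags exactly the kind of genuine late shift this election produced.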
Do we need bigger samples? Yes, is the short answer. A national poll sample needs to account for regional differences, which requires more people. The seats with between 30 and 40 participants performed exceptionally well in the Voter Choice Project; fewer and the sample wasn’t reflective of the electorate, more and it became biased towards the ‘noisiest’ candidates, whose motivated supporter base was more eager to be polled. 30 respondents in 151 electorates is a sample of 4530.
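The arithmetic behind that sample size, using the standard worst-case margin-of-error approximation z·√(p(1−p)/n) with p = 0.5 at 95% confidence (the specific margins are my illustration, not figures from the project):

```python
import math

SEATS, PER_SEAT = 151, 30
n_national = SEATS * PER_SEAT

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(n_national)                        # 4530
print(margin_of_error(n_national))       # national-level margin
print(margin_of_error(PER_SEAT))         # single-seat margin
```

The contrast is the point: 4530 respondents give a tight national read (under ±1.5 points), but any one seat's 30 respondents carry a margin of roughly ±18 points, so seat-level numbers from such a design are indicative at best.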
Most importantly, we need to ditch the myth of the 2PP’s predictive power in the commentary. The ideal would be to poll in all 151 seats and derive a seat count for election predictions. That’s expensive and time-consuming, so unlikely to happen.