2017 is a Super Election Year in Germany. After Saarland, elections are held in Schleswig-Holstein and North Rhine-Westphalia, followed by federal elections in the fall. However, the recent election already showed that the polls and the outcome diverged once again.
Why election forecasts often miss the mark.
After Brexit and Trump's triumph, we shouldn't be surprised anymore that in neck-and-neck races, the famous "swing voters" can often tip the balance. Even in the case of Saarland, statisticians warn not to dramatize the deviation between predictions and reality. After all, the latest poll was within the usual margin of error of +/- 3 percentage points for all parties except the CDU.
Every forecast is subject to uncertainty. Even the best forecasts can deviate significantly from reality; large deviations are not particularly likely, but they are not impossible either. After all, every forecast is based on a sample, which can differ from the entire population of voters for various reasons. The biggest source of error is that election forecasts rest on surveys of eligible voters, who may not be representative of those who ultimately cast a ballot.
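The sampling uncertainty described above, and the margin of error of roughly +/- 3 percentage points, can be made concrete. A minimal sketch, assuming a simple random sample (which real polls only approximate) and a typical, assumed sample size of 1,000 respondents:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error (in percentage points) for a party polling
    at share p in a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A party at 35% support in a poll of 1,000 eligible voters:
print(round(margin_of_error(0.35, 1000), 1))  # about 3.0 points
```

This is why a gap of a few points between two parties can lie entirely within the noise of a single poll.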
To reduce these uncertainties, research institutes use special weighting procedures. In particular, they weight the group of undecided voters: those who indicated in the survey that they do not yet know whether they will vote, or for whom. It is therefore unclear whether the undecided will ultimately show up in the statistics as voters or non-voters. The forecast draws on various weighting factors, above all historical patterns such as voting behavior in the last election, but other conceivable effects on the gap between stated voting intentions and actual voting behavior are considered as well.
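The institutes' actual procedures are proprietary, so the following is only a hypothetical illustration of one such step: allocating undecided respondents in proportion to each party's result at the previous election. The poll shares and the 20% undecided rate are invented; the 2012 shares echo the Saarland result discussed below.

```python
# Hypothetical weighting step: undecided respondents are distributed
# across parties in proportion to the last election's result.
# Real institutes' procedures are proprietary and far more elaborate.

def allocate_undecided(decided: dict, undecided_share: float,
                       last_result: dict) -> dict:
    """decided: party -> share among decided respondents (sums to 1).
    undecided_share: fraction of respondents who are undecided.
    last_result: party -> vote share at the previous election."""
    total_last = sum(last_result.values())
    return {
        party: (1 - undecided_share) * decided.get(party, 0.0)
               + undecided_share * last_result.get(party, 0.0) / total_last
        for party in set(decided) | set(last_result)
    }

poll = {"CDU": 0.36, "SPD": 0.33, "Others": 0.31}     # decided voters only
last = {"CDU": 0.352, "SPD": 0.306, "Others": 0.342}  # 2012-style result
print(allocate_undecided(poll, 0.20, last))
```

The point of the sketch is the dependency it exposes: the more weight the last election carries, the worse the forecast fares when voting behavior shifts.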
In other words: not only is the sample itself just a more or less accurate representation of the eligible voters; the method of converting eligible voters into likely actual voters also rests on estimates. This is easily overlooked as long as the projection works well, because such weighting procedures are the "trade secret" of the research institutes. If voting behavior changes substantially, however, the sources of error can compound, and the forecast may miss the mark. In a small federal state in particular, such deviations can have significant effects.
Already in the state elections in Saarland in 2012, a neck-and-neck race between the CDU and SPD was predicted. In the election results, the CDU (35.2%) won with a significant lead over the SPD (30.6%), with a voter turnout of 61.6%. The subsequent voter migration analysis by Infratest dimap showed that the CDU lost around 12,000 voters to the group of non-voters, and the SPD lost another 7,000. The non-voters were thus the real winners of the election.
In 2017, a completely different picture emerged. This time, the CDU mobilized 28,000 former non-voters, and the SPD still managed 13,000. A further 8,000 SPD voters switched to the CDU, presumably those who wanted to avert the danger of a Red-Red coalition. Thanks to these former non-voters, Saarland recorded a voter turnout of 69.7%, the highest value in more than 20 years.
From this, one can conclude that historical voting behavior no longer provides a good basis for the weighting factors. The party landscape has changed considerably in recent years; indeed, it would be surprising if voters did not react to phenomena such as the AfD or the weakness and resurgence of the FDP at the federal level.
While declining voter turnout was the standard assumption for many years, recent elections have shown a rising trend. If the polling institutes weighted such current trends more strongly and historical behavior less, they could probably improve their predictive accuracy. Perhaps the institutes should simply ask respondents more often how likely they are to vote, and not just whom they would vote for if they did.
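The suggestion above, weighting each respondent's stated preference by a self-reported turnout probability rather than by historical turnout patterns, could look roughly like this. The respondents and probabilities are invented for illustration:

```python
# Sketch of a likely-voter estimate: each respondent's party preference
# counts with the weight of their self-reported probability of voting.

def likely_voter_estimate(respondents):
    """respondents: list of (party, turnout_probability) tuples."""
    weights = {}
    for party, p_vote in respondents:
        weights[party] = weights.get(party, 0.0) + p_vote
    total = sum(weights.values())
    return {party: w / total for party, w in weights.items()}

sample = [("CDU", 0.9), ("SPD", 0.5), ("CDU", 0.8), ("SPD", 0.9)]
print(likely_voter_estimate(sample))
```

Note how a party whose supporters are less certain to vote loses weight relative to a raw head count, which is exactly the mobilization effect the Saarland results illustrate.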
Furthermore, more frequent surveys at shorter intervals would help, because in the last two weeks before the election a trend towards the actual result was already emerging. Just under two weeks before the election, the CDU and SPD were only one percentage point apart; three days before the election, the gap had already widened to five points. Simply continuing this trend (which is statistically risky and not recommended) would have brought the forecast much closer to the actual result.
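The trend continuation described above, risky as it is, amounts to a simple linear extrapolation of the gap between the two parties. A minimal sketch using only the two data points from the text (one point at roughly 13 days out, five points at 3 days out; the exact day counts are assumptions):

```python
# Linear least-squares extrapolation of the CDU-SPD gap to election day.
# With only two data points this is just the line through them; the text
# itself warns that continuing such a trend is statistically risky.

def extrapolate_gap(days_out, gaps, target_day=0):
    """Fit gap = a * day + b and evaluate at target_day (0 = election day)."""
    n = len(days_out)
    mean_x = sum(days_out) / n
    mean_y = sum(gaps) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(days_out, gaps)) \
        / sum((x - mean_x) ** 2 for x in days_out)
    b = mean_y - a * mean_x
    return a * target_day + b

print(round(extrapolate_gap([13, 3], [1, 5]), 1))  # 6.2 points on election day
```

Even this naive line lands well beyond the final five-point poll gap, i.e. closer to the direction of the actual result.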
Or one could abandon the rather static surveys altogether and work more with election markets, where participants trade virtual party shares. Because prices there update every few minutes, trends and mood swings can be read more clearly. Election markets have weaknesses of their own, but combined with traditional surveys they could yield a more comprehensive and perhaps more precise picture.
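One simple way to combine the two sources is a weighted average of the poll estimate and the market price. The blending weight and the input figures below are pure assumptions; in practice the weight would be calibrated on past elections:

```python
# Hypothetical blend of a survey share and an election-market share.
# lam is an assumed weight, not a value from any real forecasting model.

def combine(poll_share: float, market_share: float, lam: float = 0.5) -> float:
    """Weighted average of a poll estimate and a market estimate."""
    return lam * poll_share + (1 - lam) * market_share

print(round(combine(0.33, 0.37, lam=0.5), 2))  # 0.35
```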