Elections ballots (Flash 90)

(This is the second part of a two-part series. For part one, click here.)

With less than a month until the elections to the 21st Knesset, Israel’s media outlets are awash with numbers.

Every week, nine or ten election polls, conducted by more than a dozen different polling agencies, are published, each becoming the top news item and the center of a disproportionate amount of attention.

These surveys, some argue, shape electoral realities as much as they reflect them. And to some extent, this is true.

Recognizing the powerful influence polls have in a country where half a dozen parties currently in parliament are in danger of failing to clear the minimum threshold, Israel’s central election committee recently expanded its regulations regarding the publication of polls.

With punishments including not only fines but also jail time, the election committee now requires greater transparency from firms publishing scientific polls, including information on the methodology, who ordered the poll, and details regarding the results for smaller parties which fail to cross the threshold.

But just how accurate are these polls? Should voters really be putting much stock in them? How are these polls even conducted?

Demystifying the polls

In the early years of the state, political polls were something of a rarity. Expensive and difficult to conduct in an era when telephone connections were still the exception rather than the rule, election surveys were conducted face-to-face, typically in the form of exit polls.

As telephone lines proliferated, however, pollsters were able to expand the use of call-based interviews to conduct more extensive surveys. With some populations – particularly Arabs, haredim, and residents of the periphery – less likely to answer pollsters or to have phone lines, polls still included face-to-face interviews for at least part of the surveys in order to capture a more representative sample of the population.

By the 1980s, however, randomized, telephone-only surveys were introduced, becoming the mainstay of Israeli polling for the next two decades.

Such polls, even today, typically survey some 500 to 1,000 respondents. Pollsters generally draw two separate samples – one of Israeli Jews, typically making up 80% or so of the survey, and the other of Israeli Arabs, making up the remaining 20% of the survey sample.

Some polling agencies will use various methods to attempt to screen for likely voters, including asking respondents whether they voted in previous elections (if they were eligible).

In a typical poll, say of 800 respondents, only four-fifths or so (particularly early in a campaign) might provide an answer when questioned as to who they would support if elections were held today. That would leave about 640 valid ‘votes’ in the survey, including around 130 Arabs and 510 Jews (and other non-Arabs, who are typically included in the Jewish sample).

Each respondent who ‘votes’, therefore, makes up 0.156% of the total survey response, representing some 6,790 projected votes out of a total electorate of some 4.35 million likely voters.

Rounding up from 20.8 (3.25% of 640), a party which receives 21 ‘votes’ in our hypothetical survey is said to clear the electoral threshold, as it has received more than 3.25% of the vote.
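The arithmetic above can be sketched in a few lines of code. This is purely illustrative, using the article’s hypothetical figures (800 respondents, roughly four-fifths responding, a 4.35 million-voter electorate, the 3.25% threshold):

```python
import math

# Illustrative only: the hypothetical 800-respondent survey described above.
respondents = 800
response_rate = 0.8                            # roughly four-fifths name a party
valid_votes = int(respondents * response_rate) # 640 valid 'votes'

electorate = 4_350_000                         # approximate likely voters
weight = 1 / valid_votes                       # each respondent's share of the sample
projected = electorate * weight                # projected real-world votes per survey 'vote'
print(f"each 'vote' = {weight:.3%} of the sample (~{projected:,.0f} projected votes)")

threshold = 0.0325
votes_needed = math.ceil(valid_votes * threshold)  # 20.8 rounds up to 21
print(f"survey 'votes' needed to clear the 3.25% threshold: {votes_needed}")
```

Note that the projected-votes figure here (~6,797) matches the article’s rounded “some 6,790” only approximately; the point is the mechanics, not the exact total.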

In most polls, parties which fail to clear the threshold are not assigned seats, though some agencies allocate the seats to any faction which receives at least one seat-worth of votes.

Since 2005, some pollsters have conducted internet polls, signing up respondents – usually in exchange for a small payment for completing polls – to a database, which includes the participants’ demographic information.

These internet polls allow pollsters to conduct surveys with significantly larger samples (a recent Miskar poll, for instance, surveyed 4,606 people) at a relatively low cost. Internet polls also allow polling agencies to create more detailed samples, choosing samples representative of the general population in terms of gender, age, ideology, religious identification, etc.

As with the shift to telephone polling, however, some pollsters integrate internet polling with some telephone interviews, given that some demographic groups are less likely to participate in internet polls, and the people who sign up for such surveys are not necessarily representative of the demographic boxes they check off.

Less than a crystal ball, but more than a guess

Methodology aside, what is the actual track record of Israeli election polling? Is it really worth anything?

Polling in Israel presents certain challenges not faced by pollsters in the US.

Unlike America’s first-past-the-post electoral system, which tends to encourage two major parties, Israel has dozens of parties participating in this election, with a dozen or so having a reasonably realistic chance of making it into the Knesset.

That exacerbates other, more universal problems with polling.

For starters, polls everywhere are a snapshot in time. Respondents are asked how they would vote if the election was held today – but in reality, a lot can happen between the survey and election day. Even if a poll were 100% accurate in measuring the electorate on any given day, the electorate can and will shift to some extent as time passes and new developments occur.

Also, a large percentage of voters remain undecided until just before the election – well after public polling is prohibited. Those voters may have given the pollster an answer, but without firmly backing that party, and are liable to change their minds when it comes time to actually vote. And remember, many voters who give no response to the pollster at all do finally make up their minds and vote on election day.

Because of these undecideds, Israeli elections often see a last-minute surge for one or sometimes two parties – a surge too late for public polling to catch.

Polling by the numbers: 2015

Keeping these factors in mind, Israeli polls – at least in the aggregate – have a decent record.

In 2015, an average of 22 polls conducted during the final nine days of polling correctly predicted the results for seven of the eleven parties which had a realistic chance of entering the Knesset. For two more parties – Yesh Atid and Kulanu – the results were only a seat off for each. Only two parties – the Likud and Jewish Home – had results significantly different from what the average of polls suggested they would receive. The Likud was projected to win 22 seats, while in reality it won 30. The Jewish Home, on the other hand, won 8 seats, but was projected to win 12.

That means that the average of polls correctly placed 88% of Knesset seats, or 106 out of 120, placing nine of the eleven parties within one seat of what they actually received.

When looking at the distribution of seats between the two blocs – the right-religious bloc and the left-Arab bloc – the success of the polls in predicting the results becomes even clearer.

The average of polls gave the right-religious bloc 66 seats and the left-Arab bloc 54. The real results were just a seat off for each: 67 for the right-religious bloc and 53 for the left-Arab bloc.

Despite the polls’ success in projecting the size of the two Knesset blocs, the failure of most polls to predict the Likud’s 30-seat win in 2015 sticks out like a sore thumb.

The polls, on average, missed the true results by eight seats, with some polls off by as much as 10. The closest poll was conducted by Geocartography, which still only projected 26 for the Likud, far off from the final result.

This discrepancy between the polls’ accuracy in projecting the blocs and their failure to project the Likud’s win can be chalked up largely to two patterns in voting: late-breakers, and voters’ willingness to shift to other parties within the same bloc.

There is typically one party which receives a disproportionate share of undecided voters, making it the election ‘surprise’. Gil, the pensioners’ party, was teetering on the threshold in 2006, polling between 0 and 2 seats, but surged to 7 on election day. Kadima benefited from a similar surge in 2009, going from an average of 23 seats in the last week of polling to 28 on election day. In 2013, Yesh Atid outperformed the average of polls by 8 seats.

In 2015, the Likud clearly won the bulk of late-breaking voters.

The Likud also benefited from the other pattern – voters migrating from one faction to another within the same bloc. In fact, all eight of the missing mandates which the polls failed to project for the Likud can be attributed to the Jewish Home – which won four seats fewer than polls projected – and to the right-wing Yahad faction, which narrowly failed to pass the threshold.

Unlike in the two-party system in the US, voters in Israel often have several parties they may potentially vote for, and vote strategically based on a number of factors, including the likelihood of their favored party clearing the threshold, the desire to help one of the two or three largest factions secure the premiership, and concerns that the voter’s particular demographic sector won’t be sufficiently represented in the next Knesset. Thus, most discrepancies between polls and actual election outcomes can reasonably be attributed to shifts within blocs, with voters migrating from one party to another within the same bloc.

Concerned by the possibility that the left-wing Zionist Union could defeat Netanyahu and usher in a leftist government, right-wing voters abandoned the smaller Jewish Home and Yahad parties in droves, bolstering the Likud. These were voters who likely had told pollsters they would vote for the Jewish Home and Yahad, and only changed their minds after public polling was prohibited four days before the election.

Polling by the numbers: 2013

If the polling in 2015 was largely accurate once the Likud’s late-breaking voters are taken into account, 2013 was a somewhat less successful year for election polling.

Unlike in 2015, when the average of polls accurately predicted the relative strength of the right and left-wing blocs to within a single mandate, in 2013 the polls were off by five to six seats on average, underestimating the left-wing bloc and overestimating the right-wing bloc.

The average of the last 26 polls, conducted during the final nine days when publishing polls was permitted in 2013, showed the Arab parties winning 11 seats (which they indeed won), Meretz getting 5 (accurate), Labor 17 (it won 15), Kadima 2 (accurate), Hatnuah 7 to 8 (it won 6), Yesh Atid 11 (it won 19), and Am Shalem and Otzma Yehudit failing to cross the threshold (both indeed failed). Shas was projected to win 11 (accurate), United Torah Judaism 6 (it won 7), Likud-Beytenu 33 (the union won 31), and the Jewish Home 14 (it won 12).

Similar to 2015, there was an election surprise, with voters breaking at the last minute for a single party. This time it was Yesh Atid, which outperformed the average of polls by eight seats, winning 19 seats compared to the 11 it had been projected to win.

Some of this can be attributed to the phenomenon of voters shifting from one party to another within the same bloc. While Yesh Atid outperformed expectations by eight seats, Labor underperformed by two, and the left-wing Hatnuah underperformed the polls by between one and two seats.

The apparent shift of voters from Labor and Hatnuah to Yesh Atid likely accounts for three to four seats of ‘missing mandates’ not projected in the polls.

But the remaining four to five seats also seem to reflect a different pattern in Israeli elections – the floating voters.

Having some overlap with late deciders, floating voters have been a well-documented phenomenon in Israeli elections since the 1990s.

Floating voters tend to be middle- or upper-middle-class, largely secular Israelis from the center of the country who lack strong allegiances to any one party. They usually (though not always, as in the case of Gil in 2006) favor larger parties (Likud, Labor, Kadima, Kulanu, Yesh Atid, Blue and White) and eschew sectorial and overtly ideological candidates perceived to be on the right or left fringe.

These voters can play a decisive role as spoilers in altering the right-left balance, as they may shift from right-leaning parties like Kulanu and the Likud to left-leaning ones like Blue and White, Labor, or as occurred in 2006 and 2009, to Kadima. They are the voters most likely to be influenced by developments during the campaign, such as the attorney general's backing for indictments against Binyamin Netanyahu.

Floating voters who initially backed the Likud likely account for three of the seats projected for the Likud which ultimately went to Yesh Atid.

Going back even further, to 2009, the smaller pool of polls conducted got the blocs spot on, with the average of final polls showing 55 seats for the left-Arab bloc and 65 for the right-religious bloc, accurately reflecting the actual results.

This despite middling results in predicting individual parties. Kadima, which was polling at around 23 to 25 seats, ended up with 28, while Meretz, which had been polling at about 6 seats on average, won only 3, barely clearing the threshold.

Like in 2015, much of the polls’ failure to correctly project mandates for individual parties can be attributed to shifts within the blocs, with Labor and Meretz losing seats to Kadima, Meretz to Hadash, and Yisrael Beytenu to the Likud.

So what can the polls actually tell us (if anything)?

The big picture is that Knesset polling, even in the aggregate, has limited utility.

That isn’t to say that the polls are of no use, or that they don’t tell us anything of value. Averages of polls, which smooth out the outlying numbers found in individual surveys, have been fairly accurate for most parties, getting to within a seat or two of the actual results for 9 of the 11 major parties in 2015, 10 of 12 of the major parties in 2013, and 12 of 14 major contenders in 2009.
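The smoothing effect of averaging can be shown with a toy example. The seat numbers here are invented for illustration, not real poll results:

```python
# Hypothetical seat projections for one party across six polls.
# A single outlier (14) barely moves the average -- which is why an
# average of polls is steadier than any individual survey.
polls = [11, 12, 10, 14, 11, 12]
average = sum(polls) / len(polls)
print(f"individual polls range {min(polls)}-{max(polls)} seats; average {average:.1f}")
```

Individual surveys in this sketch disagree by up to four seats, while the average sits within a seat of most of them.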

But given the large pool of ‘floating voters’, the ban on publication of polls during the final few days of the campaign – preventing public surveys from revealing last-minute changes as undecideds make up their minds – and the fact that every election in the past two decades has had some kind of surprise, usually affecting one of the two largest parties, polling data must be taken with a very large grain of salt.

Looking back at the past three elections, however, some clear patterns emerge.

First, polling for sectorial parties – like the two haredi factions and the Arab parties – tends to be more accurate. With a well-defined voter base, there is less margin for error for these factions.

On the other hand, polls tend to be less able to accurately project the results for right-wing parties, like the Likud and Jewish Home, where voters seem to be more willing to cross over from one faction to another on election day, after having told pollsters they planned to back the first faction.

Both in 2013 and 2015, voters who had supported the Jewish Home ‘migrated’ to the Likud on election day, likely in the hopes of ensuring Netanyahu retained the premiership.

Large parties of the center-left and center-right competing for floating centrist voters also tend to be more likely to surprise pollsters with election-day surges – though in the past two decades, they haven’t suffered comparable election-day plummets, winning far fewer seats than the average of polls projected.

The worst underperformance by a major party in that time frame was the Likud-Beytenu list in 2013, which undershot expectations by three seats – or just 8.8% of what the average of polls had projected for it (the list won 31 seats, three below the 34 polls had shown before election day).