2014 political polling was biased: big data feedback may be at work

In an earlier post, I described the problem of dynamic feedback when the population becomes aware of the big data that others are using to try to predict that same population’s behavior.

The earlier successes of big data shared a common trait: the population was unaware of the big data projects. People did not realize their activities were being recorded, the extent to which their data was being collected and combined with other data, or the goals of the analytics performed on it.

The early successes of big data benefited from something like a double-blind trial: the subjects were not aware of the data collection, and the data collectors were not aware of the purposes of the analytics that requested the data. This parallels controlled psychological or medical studies that use double-blinding to eliminate the possibility of bias in the experiment.

I argued in that earlier post that the early successes of big data may be temporary because they worked best when people were unaware of the project. In recent years, big data has become a popular topic of general discussion, driven by news events ranging from revelations about the extent of data collection by governments and businesses to stories about how those projects can affect individuals' lives. Many of these stories have been controversial enough to make the extent of data collection and analytics widely recognized. The subjects are no longer blind.

Today, people are aware that they are leaving data trails that multiple entities collect for a wide variety of purposes, arguably few of them in the person's best interest. Even targeted marketing that has good reason to assume I may want a product may not serve my interest if it arrives at a time when I had no intention of buying anything like that product. Other big data projects have less beneficial objectives still.

People are also aware of the scale and precision of the data being collected. They understand that they are being sampled by demographic and geographic category so that their category can be compared with all of the others.

With this awareness comes the opportunity for feedback loops. I suggested that people may begin to game the system, manipulating their behaviors to influence the big data analytics, perhaps to their own benefit.

Initially, this may occur within competing big data projects. Different companies competing for the same customers will collect and analyze data from the same population and use it to devise optimal strategies. But each company also knows its competitors are doing the same thing, and knows the tools and techniques those competitors may be using. One team may attempt to predict what its competitor is observing in its analytics. For example, it may predict that the competitor's machine learning has identified a specific category of the population with a certain characteristic. The first team may then run a marketing campaign to influence that population so that its behavior changes enough to no longer fit the predicted category. The competitor's algorithm will fail to find the population it is seeking, giving the first team more exclusive access to it.
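To make this concrete, here is a minimal sketch in Python of the category-breaking move described above. Everything in it is invented for illustration: the competitor is assumed to segment customers with a simple purchase-count cutoff, and the first team nudges observable behavior just under that cutoff so the segment disappears.

```python
# Hypothetical illustration: a rival is assumed to flag "heavy buyers" whose
# weekly purchase count exceeds a cutoff; the first team's promotion shifts
# its customers' visible counts just under that presumed cutoff.
import random

random.seed(42)

COMPETITOR_CUTOFF = 5  # assumed threshold in the rival's segmentation rule

# Simulated population: weekly purchase counts for 1,000 customers.
population = [random.randint(0, 10) for _ in range(1000)]

def competitor_segment(counts):
    """Rival's (assumed) rule: flag customers above the cutoff."""
    return [i for i, c in enumerate(counts) if c > COMPETITOR_CUTOFF]

before = competitor_segment(population)

# Intervention: cap each customer's visible count at the presumed cutoff.
nudged = [min(c, COMPETITOR_CUTOFF) for c in population]

after = competitor_segment(nudged)
print(f"Segment size before intervention: {len(before)}")
print(f"Segment size after intervention:  {len(after)}")  # 0: category vanishes
```

A real segmentation model would be far harder to reverse-engineer, but the principle is the same: if you can predict the category boundary, you can move the population across it.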

I realize that discussion is a little dense. In short, competing big data projects become game players, like participants in an old-style board game. The players understand the basic rules and can observe each other's positions while trying to outsmart one another. Big data projects will analyze the data both for their private benefit and for what their competitors may be observing, which creates the possibility of manipulating the data to confuse a competitor's analytics.

Influencing the population using big data analytics is a form of feedback loop. The feedback changes the behavior of the population. If some of these feedback mechanisms go unrecognized, the behavior of the population will become less predictable and less stable over time. The population will begin to behave in new ways, ways that contradict our notions of how people should behave, and the predictive power of the analytics will decline.

Eventually the feedback actors (the game players) will include the targeted population itself, which can access and analyze the same data available to the various groups trying to take advantage of it. This can result in a multitude of feedback loops and a system that is impossible to predict. There may be no way to know which feedback loops are in place or what purposes they are attempting to achieve.

The success stories of big data analytics exploiting cleanly separable, homogeneous categories will become a thing of the past. Such successes will become less frequent because the data will be disturbed by all of these feedback loops.

A cooking analogy is skimming floating fat off the top of a broth. The historic success of big data is analogous to the broth being still so that the fat settles on top; skimming it is then simple. The effect of feedback is like putting the broth and fat in a blender so that the fat is distributed throughout. Separating the fat from that emulsion is much more difficult.

Just this week, we in the USA held a national election for federal offices. The results surprised many by differing so much from what the latest polls predicted. In the past, polling has been far more accurate at predicting actual election results. One explanation is that this was a unique situation described as a wave election: recent news events motivated a different set of people to vote than usually does. This is a reasonable explanation.

I suspect, though, that more may be at play. Every election is influenced to some degree by the most recent news events, even when the latest news has no relevance to the campaigns or the candidates' records up to that point. Polling has done well in the past because it includes mechanisms to account for these factors along with longer-term ones. In 2014, there were multiple concerns about foreign relations and an epidemic, but those stories were already old and widely recognized. I expect the polls included these considerations in their computations, yet the polls were still unusually inaccurate this year.

In the two years since the last national election, there has been a lot of sensationalized news about big data and how various groups are using it in ways that appear manipulative or invasive of privacy. The public is now aware of the big data projects. People may not yet have access to the data, but they are no longer blind subjects in an experiment. Their fuller awareness of the scope and purposes of the data may be biasing their answers to polling questions.

Obviously, polls are not a blind experiment. When people answer a poll, they know the pollster is collecting data on their personal opinions. What is different today is that people are more aware of how their responses will be used in analysis. Pollsters combine responses across wide areas and attempt to infer attitudes from how certain questions are answered. For example, a poll may explicitly ask for party affiliation but also ask opinion questions whose answers correlate with party affiliation. If a person claims to be independent but answers opinion questions in a way that strongly corresponds to one party, the pollster may infer that this independent voter is likely to vote for that party rather than for an independent candidate.
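A toy sketch of that inference, with invented question keys and party answer profiles, might look like the following. It simply counts how many of a respondent's opinion answers match each party's typical pattern:

```python
# Hypothetical inference of a party lean from correlated opinion questions.
# The parties, questions, and answer profiles are all invented for illustration.
PARTY_PROFILES = {
    "Party A": {"taxes": "lower", "regulation": "less", "spending": "cut"},
    "Party B": {"taxes": "higher", "regulation": "more", "spending": "expand"},
}

def inferred_lean(responses):
    """Score each party by how many opinion answers match its typical profile."""
    scores = {party: sum(responses.get(q) == a for q, a in profile.items())
              for party, profile in PARTY_PROFILES.items()}
    return max(scores, key=scores.get), scores

# A respondent who claims independence but answers like Party A.
respondent = {"affiliation": "independent",
              "taxes": "lower", "regulation": "less", "spending": "cut"}

lean, scores = inferred_lean(respondent)
print(f"Stated affiliation: {respondent['affiliation']}")
print(f"Inferred lean: {lean} (match scores: {scores})")
```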

Polling has a long history of being accurate, and over that history, improvements in technique have steadily improved that accuracy. And yet now it seems to have failed. It may have failed because of some historic wave of last-minute opinion changes. Alternatively, it may have failed because the population has fundamentally changed in a way that defeats the established methods of interpreting polls. The population today is more educated about polling and about the cross-tabulations used for weighting results. People may be more cautious, or more devious, in their answers to those cross-tabulation questions.
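For readers unfamiliar with that weighting, here is a toy post-stratification sketch with invented sample and population shares. It shows how responses from under-sampled cells get amplified, which is exactly the mechanism a wary or devious respondent could game:

```python
# Toy post-stratification: reweight demographic cells so the sample matches
# assumed population shares. All numbers are invented for illustration.
sample_share = {"urban": 0.60, "rural": 0.40}      # shares in the raw sample
population_share = {"urban": 0.45, "rural": 0.55}  # assumed true shares

# Each cell's weight is (true share) / (sample share); rural answers count more.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in sample_share}

# Candidate support within each cell (invented numbers).
support = {"urban": 0.52, "rural": 0.44}

raw_estimate = sum(sample_share[c] * support[c] for c in support)
weighted_estimate = sum(sample_share[c] * weights[c] * support[c] for c in support)

print(f"Weights: {weights}")
print(f"Raw estimate:      {raw_estimate:.3f}")   # 0.488
print(f"Weighted estimate: {weighted_estimate:.3f}")  # 0.476
```

A respondent who misreports the cross-tabulation answer that assigns them to a cell moves their response into a different weight bucket, quietly distorting the corrected estimate.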

The polls worked earlier because they could isolate large groups with similar behaviors; the early successes relied on broad geographic or ethnic identifications having predictive value. Feedback can cause these broad groups to fragment and disperse. New polling will have to identify ever larger numbers of ever smaller groups to achieve similar predictive power, which will increase the cost of polling by requiring many more samples. Eventually, the polls (and the data) will have to be micro-targeted to characterize the views and motivations of each individual.
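A back-of-the-envelope calculation shows why fragmentation is expensive. The sketch below uses the standard sample-size formula for estimating a proportion, assuming a ±3% margin of error at 95% confidence and equally sized, independently polled groups:

```python
# Each subgroup needs its own full-size sample: margin of error for a
# proportion shrinks roughly as 1/sqrt(n), so k groups cost ~k times as much.
import math

def sample_size_for_moe(moe, p=0.5, z=1.96):
    """Respondents needed for a given margin of error at 95% confidence."""
    return math.ceil((z ** 2) * p * (1 - p) / moe ** 2)

per_group = sample_size_for_moe(0.03)  # ~1,068 respondents for +/-3%
for groups in (1, 10, 100):
    print(f"{groups:>3} equal groups -> ~{groups * per_group:,} total respondents")
```

Ten independent groups cost roughly ten times the sample; polling each individual is the limiting, and most expensive, case.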

This micro-targeting of polling may be necessary soon because campaign feedback is already becoming similarly micro-targeted. Campaigns are using big data to deliver individualized appeals instead of the group appeals of the past. One recent mailing even explicitly warned individuals, "Who you vote for is private but whether or not you vote is public record." People are aware they can no longer be anonymous members of a larger group or community; they know they are being targeted individually. I suspect this individual targeting will make people's behavior more individualistic and more distant from their previously associated groups. Individualized targeted campaigning is fragmenting the broad groups that historic polling analysis relied on.

It makes sense to me that political polling should operate at the same granularity as campaign targeting. For example, the 2012 campaigns targeted single women as a specific group; to be relevant, the polls had to isolate the respondents who were single women. When campaign targeting becomes more individualized, the polling must become equally individualized.

As polling becomes more individualized, individuals will become even more aware of their contributions to the data. They will also have an incentive to observe the data, or at least to observe what others report about the groups they belong to. I imagine more people will begin to adjust their behaviors to distinguish themselves from a broader group they may not agree with. They may give different answers, not necessarily wrong ones, but answers that better reflect how they want to be categorized. A simple example: reporting a higher interest in voting in order to receive fewer get-out-the-vote calls. Broader examples may come from organized campaigns suggesting that acting in a particular way will associate a person with a group and thus strengthen the influence of its positions.

I suspect some element of behavioral modification from awareness of data-analytic polling may already have occurred in the 2014 election cycle. I offer no evidence that it did, but I note the coincidence of the polls' poor performance with two years of news popularizing big data analytics and how intrusive they can be. That coincidence suggests some people may be adapting to the new reality of government by data.

The profession of political data analysis (including polling) has itself become very competitive. The various polling groups compete with each other to produce the best estimates using the most data and the best algorithms.

On election night, the data project becomes calling races as early as possible. The winning analyst is the one who calls a race earliest with good data justification and turns out to be right.

Calling elections early is only valuable on election night, for the news cycle. The actual vote counts that matter are officially reported, usually within a day or so; by then all the ballots have been counted and there is no need for analysis. There is just a short window in which analysts have a marketable product: calling an election based on a fraction of the ballots counted.

Of particular interest to me was the US Senate race in Virginia, ultimately won by the Democrat whom pollsters had expected to win by a wide margin. The surprise of the evening was that the race was much closer than expected. Despite the very close final result, one analyst group called the election 30 minutes after the polls closed, and they turned out to be correct.

The link above points to their explanation of the confidence behind the early call. They based it on historical analysis that identified a bellwether county whose results match those of the entire state. In particular, they noted that a Republican cannot win the state unless he also wins at least 55% of the vote in this county. The county reported earlier than the others, and it was clear the Republican was not going to come close to that threshold. As a result, they were confident in calling the election based primarily on this bellwether county.

I applaud the data work required to identify this bellwether among all the other possibilities and to measure its strength in predicting the outcome. I mention this example because they worked with data available to everyone else. All of the competing analysts had access to the same data but either did not identify this county as a bellwether or did not recognize its predictive strength.

I recall that during that night, the other analyst groups did not call this election until much later, probably with their own justifications. They missed the bragging rights of calling it first.

I think there could be a fair bit of luck involved in this call. The logic of the call can be summarized as follows (sketched in code below the list):

  1. Republicans who win a statewide election always get at least 55% of the vote in this one county.
  2. The Republican in this race did not meet this threshold.
  3. Therefore, the Republican will not win the state.

In other words, the results from this early-reporting county predict the winner for the entire state.
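Encoded as a decision rule, the logic is almost trivially simple, which is part of what makes it fragile. In the sketch below, the 55% threshold comes from the account above, while the county vote share passed in is an invented example:

```python
# The bellwether rule as a decision procedure. The threshold is a necessary
# (not sufficient) condition for a Republican statewide win.
BELLWETHER_THRESHOLD = 0.55  # GOP share of the county needed to win statewide

def call_race(bellwether_gop_share):
    """Apply the rule once the bellwether county reports its results."""
    if bellwether_gop_share >= BELLWETHER_THRESHOLD:
        return "too close to call"   # condition met, but outcome still open
    return "call for the Democrat"   # Republican path to victory ruled out

print(call_race(0.51))  # GOP under 55% in the county -> early call
```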

While that rule held in the past, it did not have to hold in the present. Most of the later-reported votes that ultimately decided the election came from counties in a different metropolitan area, where differences in local concerns and voting motivations could have mattered. Given that the final result was so close, some previously unknown issue could easily have negated the rule. But there was no such surprise; the prediction was correct.

I offer one argument for having less confidence in this prediction: everyone else had access to the same data. In particular, the competing campaigns could have discovered the same bellwether county. Since the logic above almost suggests a causal relationship, the Republican candidate could have invested more time in this county to win it by a wider margin while still not campaigning in the regions holding more Democratic votes. In that case, the campaign's effort could have moved the data enough to at least delay the call for the state, or even to mislead analysts into calling it for the Republican before the bigger totals arrived from the more populous regions.

I use this example to illustrate how multiple groups with access to the same data may recognize how their competitors are using it and then adopt a strategy to produce numbers that trick a competitor into making a bad decision. In this particular case, there would be no incentive to do so: the only vote count that mattered to the campaigns was the official count that would come later, and because vote counts were announced only after all the polls had closed, an early call could not influence the voting. The only thing at stake was a few hours of newsworthiness for a projected call before the official numbers arrived.

I present the example only as an illustration of the vulnerability that arises when multiple groups have access to the same data and the same tools but use the results for conflicting purposes. One of those purposes may be to deliberately mislead competing users of the same data.

As we get deeper into this new age of big data, I anticipate that even more people will have access to the data and the means to influence it to suit their purposes. In particular, I expect more individuals to get involved in gaming the data through quick organization via social-media campaigns. A social-media flash-mob event can fragment preconceived groups to an extent that invalidates the historical data used for prediction.

With the 2014 election surprising the polls, I wonder if this may already be occurring.  The population may already be fragmented into smaller categories than the pollsters expected.
