The Real Winner of the 2012 Election: Analytics

First, let me start by saying this isn’t a Left versus Right post. It’s not pro-Obama or pro-Romney. This isn’t about who’s right or wrong. This isn’t about policy or political philosophy.

This post is about the clear winner of the 2012 Election, and that winner is Analytics (by a landslide). Let me be clear—the winner is Analytics, not data, because there is a huge difference.

Analytics at its simplest is “the science of analysis” (Wikipedia). Merriam-Webster defines it as “the method of logical analysis.” Both campaigns had data. All of the pollsters had data.

However, what carried the day was Analytics—not just Analytics, but Good Analytics. My definition of Good Analytics is analyzing data to draw meaningful and actionable insights. This is where the 2012 Election was won and lost.

Let’s take a look at Obama’s Analytics and the campaign’s subsequent use of the resulting insights. According to an article posted by Slate, Obama’s Analytics department gave the campaign a distinct advantage in voter turnout:

Over a two-week stretch starting at the end of July, the Obama campaign’s analytics department contacted 54,739 voters from paid call centers and asked them how they planned to vote. Obama’s databases already knew a lot about the approximately 180 million registered voters in the United States (and even a bit about those who weren’t registered, in a way that could help guide the campaign’s efforts to enroll them). The goal was to collect intelligence about potential voters’ 2012 intentions and distill that down to a series of individual-level predictions. The most important of these scores, on a range from 0 to 100, assessed an individual’s likelihood of supporting Barack Obama and of casting a ballot altogether.

Furthermore, they didn’t just look at data. They turned it into actionable insights, as the Slate article goes on to illustrate:

Obama’s analysts built statistical models to pull out other factors that distinguished voters from nonvoters. Socioeconomic factors like income and housing type played a role; those who lived in multi-tenant dwellings, for instance, were less likely to vote. But within those households Obama’s analysts found a twist. A voter living with other people who had a demonstrated history of voting was predicted as more likely to turn out herself.

The Obama campaign’s algorithms ran the numbers and predicted the likelihood that every voter in the country would cast a ballot, assigning each a turnout score. Obama’s analysts knew how good their support score was because they polled a new group of voters to validate it: 87 percent of the time it would accurately predict an individual’s preference. But it would be impossible to confirm their algorithm’s turnout predictions until after the election. But they did their best to assess its accuracy by calling voters and asking them how likely they are to vote. Analysts know that people are poor predictors of their future behavior, but they got answers that confirmed that their rankings were at least sensible. Among the 10 percent of voters seen as most likely to vote, 95 percent said when contacted that they definitely would. Based on 2008 figures, Obama’s analysts assumed that 40 percent of those who told callers that they would “definitely not” vote ultimately would. “Possibly,” the analytics department advised in a September memo, this was “due in part to our GOTV efforts among these voters.”

As you can see from the Slate snippets, this is arguably the most sophisticated and well-implemented Analytics campaign in election history.
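
To make those support and turnout scores a bit more concrete, here is a minimal sketch of the general technique: fit a model on voters whose answers you already have, score everyone else from 0 to 100, and check the score against a fresh group you actually called. To be clear, this is not the campaign’s code or data; the library, the feature names and the numbers below are assumptions made purely for illustration.

```python
# Illustrative toy "support score" model, in the spirit of the Slate
# description. The real campaign's features, data and methods are not
# public; everything here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 50_000  # stand-in for voters reached by the call centers

# Hypothetical voter-file features: age, income bracket, past turnout,
# multi-tenant dwelling flag, and whether the household contains a
# habitual voter (the "twist" the Slate piece mentions).
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(1, 6, n),     # income bracket (1-5)
    rng.integers(0, 5, n),     # elections voted in over the last four cycles
    rng.integers(0, 2, n),     # lives in a multi-tenant dwelling
    rng.integers(0, 2, n),     # shares a household with a habitual voter
])

# Synthetic stated preferences standing in for the call-center answers.
logit = (0.02 * (X[:, 0] - 50) + 0.3 * (X[:, 2] - 2)
         - 0.4 * X[:, 3] + 0.6 * X[:, 4])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit on one wave of calls, validate on a held-out wave -- the same idea
# as checking the support score against a fresh group of polled voters.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Turn predicted probabilities into individual 0-100 scores.
support_scores = (model.predict_proba(X_test)[:, 1] * 100).round().astype(int)

print("Holdout accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
print("Sample scores:", support_scores[:10].tolist())
```

The particular model matters far less than the workflow: individual-level scores, validated against people you actually asked, are what turn a pile of voter-file data into something a field office can act on.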

Obama's Victory: The Silver Method

Moreover, Obama’s pollsters not only had the right approach to analyzing the polling numbers, their projections were also in line with Nate Silver’s, even though Silver was roundly criticized prior to the actual election, most notably by Joe Scarborough (story here).

Silver’s Analytics approach allowed for more complex algorithms that weighted variables based on the historical accuracy of different data sources. In 2008, he predicted the popular vote to within one percentage point and accurately predicted the outcome in every state with the exception of Indiana.

While this is stunning for a sharply divisive Presidential election, even more striking was that Silver’s method was robust enough to call Senate races as well. In 2012, his state-by-state presidential predictions were 100 percent accurate, forecasting the outcome correctly in 50 out of 50 states.

This wasn’t an easy feat given that voter turnout changes, with minority, youth and women voters all comprising an increasing percentage of total votes. Moreover, models have to be adjusted to account for voters reachable only by mobile phone, who are harder to poll. Systematic data, including economic, polling and historical sources, can be recombined to create opportunities to woo the right voters at the right time.
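
As a rough illustration of that weighting idea, the sketch below averages a handful of made-up polls, counting fresher surveys and historically more accurate pollsters more heavily. It is only a toy: Silver’s actual model also corrects for pollster house effects, folds in economic fundamentals and simulates correlated outcomes across states, and none of the pollster names or numbers here are real.

```python
# Toy poll-weighting sketch. Not Nate Silver's model; the pollsters,
# errors and shares below are all made up for illustration.
from dataclasses import dataclass

@dataclass
class Poll:
    pollster: str
    obama_share: float      # two-party share for Obama in this poll, percent
    days_old: int           # age of the poll in days
    past_avg_error: float   # pollster's historical average error, in points

polls = [
    Poll("Pollster A", 51.5, days_old=2, past_avg_error=2.0),
    Poll("Pollster B", 49.0, days_old=6, past_avg_error=4.5),
    Poll("Pollster C", 52.0, days_old=1, past_avg_error=3.0),
]

def weight(p: Poll) -> float:
    # Fresher polls and historically more accurate pollsters count for more.
    recency = 1.0 / (1.0 + p.days_old)
    reliability = 1.0 / p.past_avg_error
    return recency * reliability

total_weight = sum(weight(p) for p in polls)
weighted_avg = sum(weight(p) * p.obama_share for p in polls) / total_weight
print(f"Weighted average Obama share: {weighted_avg:.1f}%")
```

Even this crude version captures the core point: which numbers you trust, and how much, is itself an analytical decision.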

Romney’s Plight: How It All Can Go Wrong

Romney’s team didn’t lack data. Frankly, they had more data than they could ever want. However, they failed on multiple fronts. First, they had bad data mixed in with good, which is problematic enough. But even worse, they drew the wrong conclusions from the good data they did have.

Analytics is more than the production of numbers. It’s looking at the right numbers at the right time and drawing the right conclusions. Romney’s team looked at the wrong numbers at the wrong time and drew the wrong conclusions. The GOP polls that Romney relied on provided overly optimistic numbers that hurt far more than they helped.

Making things worse, the polling methodologies they subscribed to were off. Not just off, but biased toward their own belief system. It was hope mixed in with data—not a good combo.

Romney lost battleground states without even putting up a fight because GOP polling screened out voters that ultimately ended up voting for Obama. “There were just too many damn Democrats,” one pollster said.

Lastly, they relied on an untested system to turn these bad data and faulty insights into actions to drive votes. Romney’s poll monitoring system, ORCA, used smartphone technology to receive data in real time in an effort to better allocate resources. Technical issues and confusion prevented the app from being used efficiently, and it ultimately failed to provide significant metrics.

However, ORCA was not just a failure of Election Day monitoring. It was illustrative of how misguided Romney’s efforts to use analytics were. Waiting until Election Day to test the system wouldn’t pass muster in any organization, much less in something with stakes as high as a presidential election. Instead, insights should be used as they were for Obama: earlier in the campaign, to determine where indecision exists and then to eliminate it.

The wrong data meant Romney and his team got caught with their pants down. Even up to the day of the election they were confident Romney would win “decisively.”

The After "Math"

So how did it all play out? In the end, the pollsters with the best approach to analyzing the data were right. They had the right forecasts and drew the right conclusions. The others, well, let’s just say inferior analysis left them without credibility and looking for someone to blame, as in Karl Rove’s now-famous refusal to believe that Ohio, and consequently the election, had been decided. This is just one example of a numbers guy being failed by his numbers.

As for the campaigns, Romney, according to sources, was “shell-shocked” when he found out late Tuesday that he had lost. He believed he was going to win. He didn’t just believe it; he was so certain that he didn’t write a concession speech beforehand and ended up riding home in the backseat of his son’s car after the Secret Service dropped him once the results were in. He may have lost for many reasons; however, part of the reason was that his Analytics team failed him.

In contrast, Obama reportedly remarked early on the morning of Election Day, “we’ve got this,” not just because he thought he was the best candidate but because his team knew the numbers. He won the 2008 primary against Hillary Clinton largely because Analytics helped him rack up delegates before Clinton could respond. Analytics is a core competency of his campaign operation: it was well resourced, well tested and, more importantly, built on the right approach.

As I mentioned at the beginning of this post, data by itself doesn’t win anything. The true winner in 2012 was Analytics. And as an analytics guy, that makes me smile.

Written by Brian Easter on November 13, 2012

Comments

Ashoor says:

Amazing post, thank you! Nice work explaining the difference between ‘data’ and ‘analytics’.

The GOP better pay attention to this in 2016.

It is very similar to sports: in basketball and baseball, for example, you often have two types of coaches, those who are all about Xs and Os (numbers and stats) and those who focus on the ‘fundamentals of the game’. Unfortunately, in politics it has come down to analytics and not ‘who will be the best president’.

Written by Brian Easter, Co-Founder