Media Coverage

December 3rd, 2013

5 Reasons Why Big Data Will Crush Big Research

Author

Peter Daboll, Ace Metrix via Forbes CMO Network

We are all hearing a lot these days about “big data.” But there is much confusion about what “it” is and what it means for marketers. Last year, Gartner defined big data as “high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.”

That’s a pretty workable definition, and it excludes, almost by definition, much of what “big research” is today.

Technological innovation and processing speed are fundamental to systems that leverage big data to produce insights. Big-data sources are often equated with social media data alone, but they can also include RFID data, logistics, production, and retailer scanner data, even weather or traffic patterns. Big data is about integrating data and analyzing patterns. Big data is not concerned with collecting the data, because in today’s internet of things data is abundant.

Traditional marketing research, or “big research,” focuses disproportionately on data collection. This mentality is a holdover from the industry’s early post-WWII boom, when data was legitimately scarce. But times have changed dramatically since Sputnik went into orbit and the Ford Fairlane was the No. 1-selling car in America.

Here is why big data is going to win.

Reason 1: Big research is just too small.
There is really nothing “big” about the research that “big research” does. Not only are the sample sizes small, but often so are the issues being measured. Big-research projects either measure narrowly defined issues while ignoring the larger environment, or tackle very large issues with samples too small to detect meaningful change.

Focus groups are a perfect example. Is a group of 12 people really representative of anything other than the opinions of those 12 people? As another example, a brand-tracking study we did showed that, because the sample sizes are small, only two of the 25 month-over-month changes reported were actually statistically significant. The differences in the other 23 months are just sample noise. Yet these are sold to clients as material changes, and clients hyperventilate over the minor swings as if they reflected actual behavior. This “over-interpretation” problem is typical for brand tracking and other big-research projects. If the data can’t support a conclusive finding, one is invented by an overzealous client-service team. After all, no one wants to tell a client that their study produced no meaningful results.
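To make the sample-noise point concrete, here is a minimal Python sketch of a two-proportion z-test on a hypothetical month-over-month change in brand awareness. The sample size and percentages below are illustrative assumptions, not figures from the study cited above.

from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Return the z-statistic for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    return (p1 - p2) / se

# Hypothetical tracker: 150 respondents per month, awareness "rises" from 42% to 46%.
z = two_proportion_z(0.46, 150, 0.42, 150)
print(f"z = {z:.2f}")  # roughly 0.70, far below the ~1.96 needed for significance at p < 0.05
# The four-point "lift" a client might celebrate is statistically indistinguishable from noise.

With a few hundred respondents per wave, swings of a few points will routinely fail this kind of test, which is exactly the over-interpretation trap described above.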

Reason 2: Big research lacks relevance.
The biggest problem is that most research is just too slow. Many marketing decisions need to be made before a research project can be specified, proposed, approved, designed, fielded, aggregated, processed, reported, and interpreted. Clients often share stories with us about studies they approved, only to receive the results weeks after the decisions they were meant to inform had been made. Joel Rubinson, a thought leader in the research space, says it well: “The cadence of marketing must match the just-in-time agility of consumers, and the cadence of marketing research must match the new cadence of marketing. Strategy needs to be prediction-led and tactics and optimization must move at near real-time speed.” As he says, most big research today is designed for comfort, not speed.

Beyond speed, there is a problem with the research methods themselves; many were developed decades ago. For example, many big-research copy-testing companies use surveys that take 45 to 55 minutes to complete. No one in today’s world has the time or inclination to complete such a survey, which raises the obvious next question: who is actually completing them? If you can’t imagine anyone you know finishing a 45-minute survey, you are likely looking at a representation problem.

Adding to the relevance problem is big research’s stubborn insistence on old norms and trends, which often causes it to miss or misread fast-moving current ones. We recently spent some time with a client whose normative database on ad performance goes back to 1972. What possible similarity is there between how a person responded to an ad in 1972 and how one responds today? Consumers simply don’t react the same way over time. As one of President Obama’s campaign advisors reminded us six years ago, historical data was completely useless during the Democratic primary: you had a woman and an African American competing for the nomination.

Reason 3: Big research doesn’t handle complexity well.
Big research has a tendency to oversimplify complex patterns because of the limitations of its measurement instruments. The attitude is that “if I can’t measure it, it must not be there,” when in truth it is the bluntness of the instrument that is to blame. Big data is inherently complex and involves searching for and identifying patterns in large data sets. Today’s marketing challenges depend on integrating many data elements and trends, some of them conflicting, and big data can dig deeper to identify why those signals are at odds with one another. Couple that with analytical talent on both the software and client side and you can find multiple answers within complex data streams. Too often, however, researchers are paralyzed by just two numbers that contradict each other. Big research is not structured to handle this complexity, yet complexity is the new normal.

Reason 4: Big research’s skill sets are outdated.
Surprisingly, in my experience, traditional researchers are often poor analysts; they lack the fundamental data-analysis and integration skill sets. Part of that is because they still operate under the assumption that data collection is the most important part of their job. The key skills in today’s big-data environments are data integration, triangulation, pattern recognition, predictive modeling, and simulation. Further, big research is inefficient, relying on people to interpret results. Big data, on the other hand, lets machines do the heavy lifting. While people with the right analytical skill sets are still important, much of the trend and pattern identification is already done by the time results reach them.

Reason 5: Big research lacks the will to change.
This lack of will is probably the biggest challenge, because big research simply likes things the way they are. As with many now-obsolete industries, internal resistance to adopting new technologies prevails because of a short-sighted focus on preserving the cash cow. The best analogy is that big research is still selling mainframe computers in a cloud-computing world. One only needs to ask the music and newspaper industries how such stubbornness fares in a digital world. To put it in context, the Washington Post recently sold for $250 million, while Google bought a relatively unknown traffic app, Waze, for $1.1 billion. The difference was data.

To be fair, big data isn’t fully mature either and still exhibits growing pains, both in implementation and in representativeness. Overuse of Twitter feeds, for example, can produce equally erroneous conclusions. Just because a data set is large doesn’t mean it is unbiased. Cautionary stories abound about over-representation of the lunatic fringe, or of loudmouths who have an overly emotional reaction to a particular topic.

We shouldn’t throw all traditional research practices under the big-data bus. Big data has a lot to learn about projection, bias correction, and sampling, which, when applied correctly, could yield even more important big-data insights. But while big data’s issues are fixable, big research’s issues are endemic.

To view the original article, visit Forbes.
