INSIGHT: Getting Results vs. Getting Answers

How often do you find your innovation portfolio under duress because you haven’t yet landed that big idea? Getting rigorous quantitative proof that you have the next $100MM idea is complicated by the complexities of consumer behavior. Simply put, when a consumer decides to buy a product (or choose your new concept), the choice is rarely based on a single factor; many elements shape the final decision. But what happens when the desire to validate an innovation leads to “positioning” the data? There’s a term for that in the scientific community: p-hacking.

Social scientists are learning that getting a result is much easier than getting an answer. Meta-studies attempting to replicate previous social science findings have repeatedly shown that only 50% to 60% of significant results (i.e., p < 0.05) can be reproduced. Reporters at FiveThirtyEight went further and showed that a data set with as many as 1,800 variable combinations could produce over 1,000 statistically significant results (try hacking their interactive data set to see what we mean: http://fivethirtyeight.com/features/science-isnt-broken/#part2). What’s going on in these cases? Is it fraud? And what does it mean for those of us who use quantitative studies to analyze cutting-edge innovations?
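To see why so many “significant” results fail to replicate, consider what happens when you run many comparisons on pure noise. The sketch below (a hypothetical simulation, not FiveThirtyEight’s actual data set) runs 1,000 two-group comparisons where there is truly no difference between the groups and counts how many clear the conventional p < 0.05 bar anyway:

```python
import math
import random

random.seed(42)

def two_sample_z_pvalue(a, b):
    """Approximate two-sided p-value for a difference in means,
    using a normal approximation (reasonable for n >= 30 per group)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    z = (ma - mb) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Run 1,000 "studies" where there is genuinely no effect:
# both groups are drawn from the same distribution.
n_tests = 1000
false_positives = 0
for _ in range(n_tests):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_z_pvalue(group_a, group_b) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons came out 'significant'")
```

Roughly 5% of the comparisons come out “significant” by chance alone, which is exactly the trap: run enough cuts of the data and something will always look like a winner.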

Subjective choice and human nature both play a part in the answer. John Ioannidis, a Stanford meta-science researcher, has stated that “by default, we’re biased to try and find extreme results.” His point is that people want to prove something, and a negative result doesn’t satisfy that craving. It’s hard to let go of a cherished idea, especially one we need to satisfy our portfolio goals. All of us are good at processing new evidence through the lens of what we already believe, and confirmation bias can blind us to the facts: “we are quick to make up our minds and slow to change them in the face of new evidence.” To combat this effect, product innovators have two research/insight paths to consider:

1) Be Prepared to Spend Time Getting Messy - ask a question, conduct a study, get an ambiguous answer, then run another study, and another, continuing to test hypotheses and home in on a more complete answer. This can be a slow-build process, which works well but takes time, or a process of rapid entrepreneurial insight gathering that can be executed both qualitatively and quantitatively in as little as 12 weeks. Click here to learn more about Lightning Strike and how we utilize a structured format of rapid ideation to bring even the most difficult opportunities to life: http://www.mission-field.com/approach/#approach-1

2) Test the Totality of an Innovation, Imperfectly, Sooner - if you and your organization are more risk tolerant, we advise seeking opportunistic and imperfect feedback quickly, putting the idea together as best you can and seeing whether it can stand on its own two legs. This often involves a longer horizon whereby the innovator executes disruptive ideation in research, then quickly conducts rapid prototyping to move the idea into blinded retail-testing scenarios. At Mission Field, we pride ourselves on promoting this methodology to save clients time, resources, and money. Click here to learn how we make that happen: http://www.mission-field.com/approach/#approach-1