At this year’s Insighter Virtual Conference, I hosted a session titled “DIY Insights,” where I shared with attendees how my company leverages DIY research tools to make business decisions.
In case you missed it, check out the video below, or read on to discover some key takeaways from my talk.
https://www.youtube.com/watch?v=jD7ZhFa8uvM&feature=youtu.be
First, a little about me: I served in the US Army for eight years, the last five as an intelligence officer. I later attended Emory University, where I earned an MBA with a focus on marketing. I began my marketing career at Procter & Gamble. Now I work at Lindt & Sprüngli, USA – the number one premium chocolate company in America – as an Associate Brand Manager on Excellence, our line of delicious dark chocolate bars.
Let me start off by being clear: I’m not a full-time researcher. I’m an ABM at a company that delegates research down to its brand teams.
This article will outline how one brand team at a mid-sized food company plans, researches, and shares insights in a do-it-yourself, somewhat austere environment.
We, of course, use third parties to help actually execute our research. Full-time insights professionals can think of this as a consumer interview panel with n=1 😉.
Before I jump in, I want to give you a bit more background about my previous professional life and how it informs my approach towards marketing and consumer insights.
What Market Research Has in Common with the Armed Forces
After I got my bachelor’s degree at UCF (Go Knights!), I commissioned into the US Army, where I spent the last five years of my service as an intelligence officer. I’ve led teams to solve problems on topics ranging from terrorism and narcotics trafficking to Ebola. A slightly different problem set than confections.
An intelligence officer’s job is to support senior leader decision-making by collecting, analyzing, and disseminating information rapidly and accurately. Pretty simple, right?
But it's a very fast-paced environment with a lot of competitive activity and more data than you can ever hope to sort through in time. And in many circumstances, the other team is making sure they hide their sensitive information as well as they can to protect themselves and make your job harder.
Every piece of information you receive as an intelligence officer needs to be assessed for its validity and reliability. If you get an anonymous tip, it can be downright dangerous to act on that single piece of intelligence alone. It could literally be a trap.
In working to answer your commander’s questions to guide his courses of action, the gold standard is multiple sources of reliable information that all agree with each other.
For example, let’s say you’re building a case to take action. You have aerial imagery, radio intercepts, and tips from human sources, all telling the same story. With that, you can confidently inform your commander of your assessment and the actions it indicates. All of this data collection and analysis just sounds like a militarized consumer research department, right?
But that part of my life is well behind me now. I left the military in 2016 to pursue a career in business. Now, I'm at Lindt & Sprüngli, and we have some great brand leaders here, but research is mostly delegated down to us as ABMs to plan, resource, and analyze. So, let’s get into how we get it done.
Begin with the end in mind
When we talk about consumer research in this context, as a small, scrappy brand team, where do we start? It comes down to the old business school mantra of structuring the problem.
If you ask bad questions, you're going to get bad answers. Before we can do anything else, we really need to determine what question we're trying to answer.
At this point, I separate our business questions into two types: exploratory and hypothesis testing.
Exploratory is what we do early on, usually when we’re looking at a new market or product, or when leadership changes and we want to give the new leaders a general overview of the category.

I also like to think of the information I consume daily as part of that exploratory work: reading the newspaper, industry and trade periodicals, and some very interesting recurring surveys that ran during COVID-19. But let’s talk about hypothesis testing.
Test your hypotheses
Depending on the scope of the project, we can have one hypothesis or an astronomical number of them. What’s important is to ensure, A: that they’re falsifiable, testable hypotheses.

And B: that they’re appropriately prioritized, especially if your learning plan involves multiple questions. You have to determine which ones are critical to finding out whether the project is viable and what the financial return could be.
Once you have a list of well-structured, prioritized hypotheses, the next step is to ask, “What would a good answer look like?”
Let’s look at a generic, hypothetical example.
“If we update our display with Tagline ‘B’, it will drive higher purchase intent than our current display in the drug channel.”
We can prove that true or false. A good answer to satisfy this information requirement could be that “Tagline ‘B’ had a weighted top two box PI of 68%, a statistically significant improvement over the current display at a 95% confidence level.”
That would be good. The research will give us the real answer, but you have to think through, hypothetically, what information would be required to satisfy you.
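To make the metric in that example concrete, here’s a minimal sketch of how a weighted top two box score falls out of raw survey responses. The data and column names here are hypothetical; the point is just the arithmetic:

```python
import pandas as pd

# Hypothetical respondent-level data: a 5-point purchase intent scale
# (5 = "definitely would buy") plus a survey weight for each respondent.
df = pd.DataFrame({
    "purchase_intent": [5, 4, 2, 5, 3, 4, 1, 5],
    "weight": [1.2, 0.8, 1.0, 0.9, 1.1, 1.0, 0.7, 1.3],
})

# "Top two box" = respondents answering 4 or 5. The weights scale each
# vote so the sample mirrors the target population.
top2 = df["purchase_intent"] >= 4
weighted_t2b = df.loc[top2, "weight"].sum() / df["weight"].sum()
print(f"Weighted top two box PI: {weighted_t2b:.0%}")  # 65% on this toy data
```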
This hypothesis is not exactly life-changing research. We’re not going to change the world or the market with this one. But this is the average kind of hypothesis that a marketer in a brand function will probably test on any given Tuesday.
Find the answer you seek
Now that we have defined what a hypothesis is and what a good example answer would look like, we need to figure out how we get to that answer.
There's an enormous toolbox full of ways to get information from consumers—especially nowadays, with so many online resources and vendors.
Just a few of the go-to methods that we would think of immediately as marketers in a CPG environment include:
- shelf testing
- live in-market tests
- one-on-one consumer interviews
- focus groups
- online surveys
Once you've identified what some good courses of action could be and the tools to use, you want to start thinking about your resource constraints and data fidelity requirements. Time and money are the most critical resources for nearly any business.
Let's assume for this example that you’ve determined your budget. Now you have to assess these tools against both your budget and the level of fidelity you need for your data.
While live in-market testing would give excellent fidelity, it’s also going to take too much time and could be costly, and this is a pretty low-payout type of research.
Shelf testing will also be expensive, and the ROI is just not going to be acceptable for stakes like this.
Focus groups are great, and it's wonderful to talk to consumers, especially in an exploratory sense, when you're trying to get clarity on your problem. But it's very qualitative, and it's just not going to get to that level of confidence that you would need to make a solid recommendation.
One-on-one interviews are just a variation on the same theme. The depth of qualitative data is better, but it’s still not going to get you to an answer where you can confidently say, “This tagline on this display will move more cases.”
That leaves us with online surveys for this example. They have a quick turnaround time, which seems to get shorter every year. They’re also convenient and relatively low cost compared to the alternatives.
Now that you've determined the method, you’ll need a partner to help you execute your research plan.
Select the right vendor
When you first start looking at vendors, they seem very similar until you begin digging deeper into their capabilities. But when you shop around (with a well-defined research plan in hand), you’ll find that vendors have slightly different methods and capabilities for answering your question.
Get quotes from different vendors who can satisfy your requirements, and weigh their pricing against their capabilities to see which is most suitable for your situation.
You need to make sure you have the right tool for the job. You wouldn’t want to drive a finishing nail with a sledgehammer, after all.
Luckily, most vendors these days also have scalable sample sizes, so you can really dial it in to make sure that you're getting the right ratio of cost to benefit.
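To show what that dial looks like, here’s a rough sketch of the standard two-proportion sample size calculation (generic textbook math, not any vendor’s tool). The lift you need to detect drives how much sample you have to buy:

```python
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate respondents needed per cell to detect a lift from
    p1 to p2 in a top two box proportion (two-sided test)."""
    z_a = norm.ppf(1 - alpha / 2)  # critical value for significance
    z_b = norm.ppf(power)          # critical value for desired power
    p_bar = (p1 + p2) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p1 - p2) ** 2)
    return int(n) + 1

# A subtle lift needs far more sample than a dramatic one:
print(n_per_group(0.60, 0.68))  # roughly 565 respondents per cell
print(n_per_group(0.50, 0.70))  # roughly 93 respondents per cell
```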
On a personal note, the one thing I always scrutinize with any vendor, for a survey or any other consumer outreach, is their sample. I need to be confident that they have valid methods for recruiting and refreshing their sample and that it’s representative of the population at large.

Lindt is a national chocolate brand, so generally, we need respondents who represent the entire US population, which we can later filter down to subpopulations if we need to. But we absolutely want a representative sample to ensure our data validity.
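Vetting a vendor’s panel is mostly about asking the right questions, but here’s a rough, hypothetical spot-check of what “representative” means in practice, comparing a sample’s regional mix to population benchmarks (the figures below are illustrative, not census values):

```python
import pandas as pd

# Hypothetical sample counts by region vs. rough US population shares.
sample_counts = pd.Series({"Northeast": 240, "Midwest": 180, "South": 350, "West": 230})
population_share = pd.Series({"Northeast": 0.17, "Midwest": 0.21, "South": 0.38, "West": 0.24})

sample_share = sample_counts / sample_counts.sum()
report = pd.DataFrame({
    "sample": sample_share.round(3),
    "population": population_share,
    "abs_gap": (sample_share - population_share).abs().round(3),
})
print(report)  # large gaps flag a sample that needs rebalancing or reweighting
```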
Analyze your survey data
Once all that’s lined up, it’s time to program your survey, pull the trigger, and get it out in the wild 🎉. Now that you’ve planned and resourced the information collection, what do you do with it afterward?
Generally, you just make a bar chart in PowerPoint, and you look at the biggest bar, and that's the right answer.
Problem solved, right? I’m kidding. I wish it were that easy.
Once you get the data, you can filter it out as appropriate to look at your target population. In this case, for our example, we'd be looking at people who shop at drugstores.
There’s a demographic question in there, “Do you shop at CVS, Rite Aid, Walgreens, etc.?” And we just use that to filter so we can start comparing results and outcomes.
I do most of my stat analysis in Excel, comparing the data for stat-sig differences, but there are plenty of tools out there; the underlying math is all the same for simple problems like this one.
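For the curious, the underlying math for our example is a two-proportion z-test on top two box purchase intent. Here’s a minimal sketch in Python; the file and column names are hypothetical, and statsmodels stands in for the Excel formulas:

```python
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical survey export: one row per respondent.
df = pd.read_csv("survey_results.csv")  # assumed file name

# Filter down to the target population: drugstore shoppers.
drug = df[df["shops_drug_channel"] == "Yes"]

# Count top two box purchase intent (4 or 5 on a 5-point scale) per stimulus.
counts, n_obs = [], []
for stim in ("current_display", "tagline_b"):
    cell = drug[drug["stimulus"] == stim]
    counts.append((cell["purchase_intent"] >= 4).sum())
    n_obs.append(len(cell))

# One-sided two-proportion z-test: is Tagline B's PI significantly
# higher than the current display's?
z_stat, p_value = proportions_ztest(counts, n_obs, alternative="smaller")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant at 95%
```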
Once you've done the quant analysis, you can put it into a standardized research debrief. At Lindt, I developed this format to share with brand teams and other key stakeholders.
Presentation is everything
The debrief is a white paper document that lays out the context, the business question, the results, and, most importantly, the recommended actions.
It's essential to have this because it forces you to detail the survey structure and stimuli, which provides complete transparency with the business.
Often, you will present your findings to people who are quite senior to you. If they don’t like your conclusions, you very well might have to defend your methodology. So, you have to make sure it's airtight and clearly lay out how you did it.
Tell a story with your data
At this point, the research is complete, and it's analyzed. If it's a lower visibility project, this is often where you just make the call and move on to the next problem.
In our example, Stimulus A is stat-sig better on purchase intent than the other two. That’s probably going to be the call.
But suppose it is a bigger problem that you’re trying to solve at the organization – a shift in strategy, a new product line, a change in your branding or communications. In that case, you need to turn it into a story you can use to convincingly explain the findings to the rest of the organization.
If you say something like, “We found among drugstore shoppers that Tagline ‘C’ had a weighted top two box purchase intent 20% higher than the next best. Even at a 90% confidence level...blah, blah, blah.”

You can explain it all in really cold stat language, which I’m sure you as a researcher will appreciate. You’re like, “man, a 90% confidence level, that’s great.”
But if you say that to a room full of senior management, they'll probably just hand you a box to start clearing off your desk.
What you need to be able to do is tell a story with your data and clarify why these things are happening.
One technique we employ is to always ask for open-ended verbatims at the end. We’re particularly interested in getting to the why of it, because you need to tell the story of what you found without putting people to sleep.
If things are being done well, your prior research should support your current research. Now, you're starting to get airtight. Just like I talked about at the beginning with military intelligence, multi-source intelligence will always be the most reliable.
Whether you have survey data or a deep dive into panel data, supporting it with other forms of data is critical. You can demonstrate an in-market correlation, refer to notes from prior consumer interviews… there are plenty of options. But if you can take that existing data, add in your new research, and make a cohesive narrative, your odds of success get a lot higher.
Successful DIY research outcomes
The consumer research I've commissioned or led has resulted in many great outcomes. Here are just a few.
- Innovation kick-offs - DIY research has given us the green light on whole new product lines, with millions of dollars of investment.
- Flavors and names - Making sure we introduced the best new flavors and found the best ways to communicate them to consumers.
- Fundamentals of consumer behavior - It's reframed the way we think about how consumers interact with our brand and our category.
- Display designs - Whether it’s a simple tweak of language or the overall campaign theme, good research has guided us towards the right answers.
- Television scripts - When you're a part of creating an ad for a consumer product, there's a major swelling of pride when you see it finally come on TV. When you’ve seen the consumer insight you found become the “big idea” behind the ad, it’s even better.
What I’ve outlined above is not necessarily best practice. It’s simply how one brand team in the food industry, acting as its own insights team, does business. Hopefully, it gives researchers some clarity on how and why we make our decisions and how we can better partner together in the future.
Want to learn more about how Lindt’s brand team leverages DIY research? Connect with Nick inside the Insighter community.