Description
In this episode of The Curiosity Current, hosts Molly and Stephanie sit down with Charlie Grossman, Owner of CG Research & Consulting and former Vice President at BASES and Burke, to explore what makes research trustworthy and actionable. With over thirty years in market research, Charlie explains that while technology has made studies faster and cheaper, quality depends on context, consistency, and interpretation. He warns that DIY platforms and automation can sacrifice reliability for convenience, introducing bots, bias, and poor sampling. For Charlie, every number needs history behind it; without benchmarks or parallel testing, data loses meaning. He shares examples where high purchase intent failed to translate into repeat sales and explains how clear decision rules help teams avoid the “novelty trap.” He also highlights how human judgment remains essential even as AI transforms analysis, arguing that machines can scale insight but not replace empathy. True progress, he says, lies in mastering both: the precision of technology and the discernment of human experience. For today’s research leaders, this episode is a reminder that the fundamentals of clarity, consistency, and context still drive the smartest decisions.
Transcript
Charlie - 00:00:01:
Where's the sweet spot when you start creating artificial data? How much error do you want? It can't be completely flippy and random; you don't want that. But if it's too robotic and it's the same exact answer with no thoughts or feelings and no room for bias or influence, you don't want that either. So we're at a point where…
Molly - 00:00:21:
Hello, fellow insight seekers. I'm your host, Molly, and welcome to the Curiosity Current. We're so glad to have you here.
Stephanie - 00:00:29:
And I'm your host, Stephanie. We're here to dive into the fast-moving waters of market research where curiosity isn't just encouraged, it's essential.
Molly - 00:00:38:
Each episode, we'll explore what's shaping the world of consumer behavior from fresh trends and new tech to the stories behind the data. From bold innovations to the human quirks that move markets, we'll explore how curiosity fuels smarter research and sharper insights. So whether you're deep into the data or just here for the fun of discovery, grab your life vest and join us as we ride the curiosity current.
Molly - 00:01:07:
Good morning, good afternoon, and good evening. Today, we are joined by Charlie Grossman, owner of CG Research & Consulting. With over 30 years of experience, Charlie is an expert in helping businesses evaluate market research results and apply them to decision-making.
Stephanie - 00:01:23:
Charlie has worked in both consulting and leadership roles, including at BASES, and has over twenty years of experience leading qualitative and quantitative research. Today, we're gonna discuss how businesses can determine whether their research results are, quote, good or bad and how teams can use these insights to inform strategy.
Molly - 00:01:42:
We'll also dive into how technology has shaped the process of interpreting market research, from DIY tools like SurveyMonkey and Qualtrics and, of course, our very own Skipper from AYTM, to advanced AI analytics, and how Charlie integrates both to help his clients make smarter decisions. Welcome to the show, Charlie.
Charlie - 00:02:01:
Thank you. It's great to be here.
Stephanie - 00:02:04:
Well, Charlie, to kick us off, as we've mentioned, you've worked in market research for a while now. BASES, Burke, and, of course, in your own consulting practice. How have you seen the process of evaluating research results evolve over time, especially with the introduction of new technologies and methods?
Charlie - 00:02:22:
Well, you know, it's funny because there have been a lot of new technologies, and that's a good thing and that's a bad thing. I think a lot of times, new technology is there for convenience, but not necessarily quality. As an example, we all use a cell phone, right? But I'll tell you, when I had a landline, I never had dropped calls, I never had a lot of the issues that I now have with my cell phone, but we accept that because cell phones are obviously much more convenient. And so we're often trading off speed and convenience for quality. So I don't wanna sound like too much of an old timer, but I think with a lot of the technology advances in research, there have been some real degradations in quality. And I think we all know that in research, the quality of data has really, really gone downhill. And now we're playing catch-up, and there are a lot of great companies out there that detect fraud, detect bots, etcetera. But I go back to the day, and I'm gonna date myself, but I guess you guys already have. When I started, you know, research was done either over the phone or maybe in the shopping mall with a clipboard. Mall intercept, you know, might be seen as antiquated, but you know what? The woman with the clipboard never interviewed a bot; she never interviewed the same person six times in a row, you know, she never interviewed someone who was speeding so quickly through the questions that they weren't even giving sensible answers. So there's a lot to be said for the way research has been done. So when it comes to good research, I'd say these days, we're obviously much faster, and we're obviously much cheaper. But, you know, like that old expression: you can get fast, cheap, or good quality, pick two. It's just really hard to get all three. But I will say that I've seen data, especially when I go through open ends, and I see the same exact response three times in a row, or I see, you know, garbage answers. So with technology, Stephanie, I mean, I think you're asking a good question. In a lot of senses, research quality has gone down rather than up. And so, yes, we are playing catch-up. And, yeah, I think when you over-rely on technology, that could be a big mistake. And so I think these days, we still need human beings, and it's gonna really take a long time before we can completely replace human beings with technology that thinks and has a good perspective.
Stephanie - 00:05:02:
That's so interesting. One of the things I wanna pull the thread on that you were just talking about, before we sat down to record today, Molly and I were having a bit of a conversation about good versus bad. I wonder what Charlie's thinking about when he says that, because I think there are a couple of ways to think about what good and bad could be. And, like, one of them is, is this a good actionable result? Or is this a muddy result that is hard for me to interpret? That can have nothing to do with data quality and more to do with the fact that you have undifferentiated stimuli, for instance, right? But it is interesting to know that your focus here, it sounds like, is really about good as in trustworthy data that we can build insights from, versus data that maybe is not data you should be building your insights and your strategy from. Is that fair to say?
Charlie - 00:05:46:
It is fair to say, and I think it's even broader than that. So there's good versus bad data, but then, when you get a number, what does that number mean? Is it a good number or is it a bad number? And, you know, a number kinda has a life of its own. The example I often use when I talk about numbers is, let's say you and I go outside to measure the temperature, and my thermometer says 0 and yours says 32. So you might say, well, your thermometer is, you know, measuring a higher number. It must be hotter. But, you know, mine was centigrade and yours was Fahrenheit, so it's really the same. So, you know, a long way of saying a number is not just a number. And so it's very important when we get data to understand what the number means. And when I talk about good or bad, as an example, I get, let's say, purchase intent, right, because that's a very common metric, and a lot of decisions are based on that. So I test a product, and 80% say they either definitely or probably would buy. Is that good? Is 80% good? What does that really mean? So when I talk about the word perspective and when I talk about good or bad, what I'm really talking about in the big picture is, is my research actionable? Can I make a decision off it? And if I provide you a number, can you do something with that number? Can you go back and say, Oh, this is a good number, and it means I will have good results? That's really the bottom line. So that's really the big picture of what I'm talking about when I say perspective and good and bad.
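To make Charlie's point concrete, that a number only means something once you know its scale and its benchmark, here is a minimal sketch. The Celsius-to-Fahrenheit conversion is standard; the purchase-intent benchmark is a made-up placeholder, not a figure from the episode.

```python
# A minimal sketch of "a number is not just a number": the same temperature reads
# 0 on a Celsius thermometer and 32 on a Fahrenheit one, and a purchase-intent
# score only means something next to a benchmark. Benchmark value is hypothetical.

def celsius_to_fahrenheit(c: float) -> float:
    """Convert a Celsius reading to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

def interpret_purchase_intent(top_two_box: float, category_benchmark: float) -> str:
    """Label a top-two-box score only relative to a category benchmark."""
    if top_two_box >= category_benchmark:
        return "at or above benchmark"
    return "below benchmark"

print(celsius_to_fahrenheit(0))                  # 32.0 -- same temperature, different scale
print(interpret_purchase_intent(0.80, 0.72))     # "at or above benchmark" (illustrative numbers)
```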
Stephanie - 00:07:34:
Got it. So it's a little bit of everything that goes into that, it sounds like.
Charlie - 00:07:37:
It is. I remember a client once said to me, she said, Charlie, you know, when we're buying your research, all we're buying is a number. And if we don't trust that number or the number doesn't help us take action, we've just wasted everything. At least if you buy a home or buy a car and you don't like it, you could sell it. You buy a number and you don't like it, you've wasted everything. So I've always thought about that. Our job as researchers is to provide numbers or, you know, maybe qualitative data, not necessarily numbers, but data and information that has value, and it only has value if you can make a good decision that will help you. So that's where I get back to a number's not just a number. Data's not just data. You have to have perspective when you provide that data.
Stephanie - 00:08:28:
Context. Yeah. Makes a lot of sense.
Molly - 00:08:31:
And that context is so, so important when you're going to your customers, when you're going to the market, when you need to see those business impacts. So I'm curious. I wanna deep dive a bit more into that. What does that process look like? What, in your experience, have been the key factors that businesses and researchers should focus on to ensure that their research results are well contextualized and that they can tell the good from the bad? How are you ensuring that they are actionable?
Charlie - 00:08:59:
That's a great question, Molly, and I think the keyword you said there was contextualized. Numbers in general, especially in our field, don't live on their own. And even when we started the conversation talking about good or bad, those are relative terms. If I say this is a good number, well, it must be compared to something else that's a bad number. So, context is key when it comes to research. So what am I talking about with context? Well, one is history. A good researcher understands history, understands what has worked in the past and what has failed in the past. So, again, if I give a number and say good or bad, I have to say compared to what and also based on what. So it's kind of important to know, with certain, let's say, new products (that's been my field, and those are some of the biggest decisions, right?), that a new product has failed with this number, or a new product has succeeded with this number, or this number is predictive. Well, if a number is predictive, you've gotta have some history. And I don't think people realize the importance of history, but a good researcher is someone who's also a good historian, who knows what has happened in the past, and when I give a number, if I have some perspective, I could say, well, this is what's happened in the past when I've reported the same type of numbers. So that's important for context. But, also, you know, if we're gonna build context, then one of the most important words is consistency. And that's kinda dangerous because we are talking about changing technology. And as I've said, changing technology is wonderful, but in the research world, it could really hurt you because when you change technology, you're often changing methodology, and then you've got apples and oranges, which is an expression we all use a lot in research, right? You're comparing apples and oranges. Then you've only got one apple, and you've got a history of oranges, and you really don't know what you're doing. You have no context. You have no consistency. You know, when I worked with BASES, I worked through various iterations and changes, but whenever we made a change, we parallel tested. That's not something that's done as much these days, and it is very time-consuming, it's very expensive. No one really wants to go through those processes, but it really helped. So the big one obviously is going from the mall intercept, the phone interviews, to online, and you definitely see differences. And there are a lot of differences people don't realize. One is the technology, of course, but behind the technology, it's not just electronics versus face-to-face. There's so much more behind that. So when you're doing a face-to-face interview, you might not wanna insult the person. One thing I remember seeing is when we went online, all of a sudden, dislikes, you know, you'd get likes and dislikes. The likes might be the same, but the list of dislikes would really grow. And people would get angry with their dislikes, and they really let you have it. That never happened with face-to-face interviewing or even phone interviewing. So that changed. And then with technology and context, there's a lot you can't control for that you used to control for back in the day when you were doing old types of interviewing. So, for example, when we did good old mall intercepts, which is, you know, kind of a joke right now, what we found is, here's an example, we would control for weekday versus weekend and day versus night. Who's shopping during a weekday?
Well, it's probably, you know, a stay-at-home mom, or I should say a stay-at-home dad, now, it could be a retired person, an unemployed person, very different from the person who shops at night during the week and different from someone you'd see over the weekend. That you can control for. Now I get an invitation and might do it in the middle of the night. None of that is controlled for. And then there are a lot of things that, you know, I might give this to, maybe my wife will fill out the survey, maybe my kids will fill out the survey. So control is gone, but when control is gone, accuracy disappears. So there are a lot of things that you wanna do to create context, and it is, I think, more difficult today. But getting to the original question, how do we create context? I think consistency is really the answer. So it's not just methodology, it's consistency in your sample, it's consistency in the quality of your sample, it's consistency in the way you ask questions, right? The order of your questions. If I ask you purchase intent for a product right now, but then I tell you more about it and more about it, tell you, oh, here's some more benefits, and I ask you ten minutes later, I guarantee you're gonna get more favorable. Your interest is gonna go up because I've told you more about it, and you're talking about it. You're getting familiar with it. So consistency in everything, type of sample, not just demographics, but user graphics, quality of the sample, questions, time of year, time of day, where you interview in the country, all these types of things.
Stephanie - 00:14:22:
Do you know I was, but I did not know that you would pick up on that. So thank you. I have this question that I've been wanting to ask you about. I think your points about technology and just the sort of historical view are spot on. I really do. I think about these things all the time. I see it in my own research practice as well. But I also, as a counterpoint, think about all the technologies that are out there to do just the things that you're talking about. Like, it's solutions-focused, right? So they're selling a solution, and they're saying, this is the way you should do your packaging test, this is the way you should do your concept testing, so that it's repeatable and programmatic, and you have a normative database that you can then look at. And it provides all those things contextually that I feel like historically you could only get at BASES. Do you see that too in the industry?
Charlie - 00:15:07:
By all means. And to be clear, I just wanna warn people about the downsides of change, but, obviously, there are wonderful things about technology. And I don't know too many people who are doing door-to-door interviews anymore or, you know, doing 50,000 open ends by hand or that type of thing. So, yes, one thing that technology helps us with is bias. And if we're gonna talk about certain areas, you can really have a lot of bias. And that's another word that we as researchers have always been concerned about: bias. So I'll give one example, forecasting. When I've done forecasting at BASES or on my own, I guarantee you that my forecast will always be different from an in-house forecast, and 100% of the time, my forecast will be lower, 100% of the time. And it's not because of different methodologies, it's because every forecast has some judgment, and it's gonna have some bias one way or the other, and that's okay. You have to put judgment into forecasting and into data reporting. You know, that's the human context. But if I put a forecast in the hands of a new product manager who is gonna receive a bonus on whether this product launches or not, I kinda have a feeling that forecast is gonna be really high. And every decision, every lever, every needle is gonna go towards the high side. So one thing that technology does help us with is removing that bias, and, yeah, it helps us with the consistency. And so sometimes removing a human element is good because humans tend to be kinda biased, you know, we're all biased. So it removes that, but at the same time, it also removes good human judgment. So I've yet to meet an AI that has perfect judgment. We're getting there. And we always said when we build a model, the idea is to keep replacing judgment with data. I have a hunch about this. Let's test it out. Let's do 50 parallel tests. If that works, then we'll create an equation, and that equation can be fed into a model, and then the bias is gone. So the good side of technology is that it removes some human elements that we don't want in data interpretation. So in that sense, you know, technology can be very helpful.
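What Charlie describes as gradually replacing judgment with data, running parallel tests and turning the result into an equation fed into a model, could look, very roughly, like the sketch below: fit a simple calibration line between old-method and new-method scores so new results can still be read against historical norms. The scores are invented, and this is a generic illustration, not BASES's actual procedure.

```python
import numpy as np

# Hedged sketch of "replacing judgment with data" via parallel testing: run the same
# concepts through the old and new methodologies, fit a calibration equation, and use
# it to translate new-method scores onto the historical scale. All numbers are made up.

old_method = np.array([52, 61, 58, 70, 66, 74, 49, 63])  # e.g., mall-intercept top-two-box %
new_method = np.array([60, 68, 64, 79, 73, 82, 55, 70])  # same concepts, online panel %

# Least-squares line: old ≈ slope * new + intercept
slope, intercept = np.polyfit(new_method, old_method, 1)

def calibrate_to_history(new_score: float) -> float:
    """Translate a new-methodology score back onto the historical (old-method) scale."""
    return slope * new_score + intercept

print(round(calibrate_to_history(75), 1))  # a new 75 expressed in old-method terms
```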
Molly - 00:17:33:
Yeah. You want the only human element to be the people that you're actually asking questions about, not anything else intervening in that.
Charlie - 00:17:40:
That's right. You know, it was interesting. I was seeing a presentation on bots and artificial respondents, synthetic data. And I think we all, when we first saw the synthetic data, were all a little scared. And then once we got used to the idea, we all felt better. But when humans create synthetic data, there's a lot of judgment in that. So, for example, if you ask me a question, Do you like this product? Then I say yes, and then 15 minutes later, I say no. So we started testing bots, you know, synthetic respondents. And here's the interesting question: like, when we first started doing that in one company, we said we wanna get rid of all the bias by making sure they were 100% consistent. But at the end of the day, 100% consistent is too robotic. So where's the sweet spot? Where's the sweet spot when you start creating artificial data? How much error do you want? It can't be completely flippy and random. You don't want that, but if it's too robotic and it's the same exact answer with no thoughts or feelings and no room for bias or influence, you don't want that either. So, you know, we're at a point where we're trying to figure out how human our technology should be.
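As a toy illustration of the sweet spot Charlie raises, how much inconsistency to build into a synthetic respondent, here is a sketch of an answerer that repeats its earlier answer unless a tunable flip rate kicks in. A flip rate of 0.0 is perfectly robotic and 0.5 is coin-flip noise; nothing here reflects how any real synthetic-data product actually works.

```python
import random

# Toy synthetic respondent: re-answers a question by repeating its earlier answer
# with probability (1 - flip_rate). Purely illustrative.

def reanswer(previous_answer: bool, flip_rate: float, rng: random.Random) -> bool:
    """Return the earlier answer, flipped with probability flip_rate."""
    return (not previous_answer) if rng.random() < flip_rate else previous_answer

rng = random.Random(42)
for flip_rate in (0.0, 0.1, 0.5):
    repeats = sum(reanswer(True, flip_rate, rng) for _ in range(1000))
    print(f"flip_rate={flip_rate}: said 'yes' again {repeats}/1000 times")
```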
Molly - 00:18:59:
So you're talking about changes because of technology, changes of bias or lack thereof, moments where it actually is the market responding in a certain way, where the data that you receive, especially in new product development, is unclear or mixed and needs some interpretation and perhaps maybe more context, like you were just talking about. So what is your approach for guiding your clients through these types of situations when perhaps there is not a really reliable, straightforward answer?
Charlie - 00:19:30:
That's tough. You know, I've always said, I love my research to say this is a great product or a horrible product. That's clear. When it's gray, it's harder, but that's where humans do kinda come in handy. So the question is, how do you handle indecision or gray areas? You know, it's funny. The biggest clients out there, you know, you think of, like, the Procter and Gambles and the Reckitts and all that. One thing they're very good at is developing decision rules and coming out with systems, but it's the same issue that we just talked about with robotic versus human. Sometimes you gotta break the rules too. So the more rigid your system, the more you have to figure out when you can break rules or not. So here's a simple example: one client I worked with had a very hard and fast rule with purchase intent. It had to be in the, you know, the top 20% of the database. So that's good. But then they had an escape valve, an unless, and the unless was: unless it appealed to a subgroup that was very easily targetable, reachable, where it also had really, really high scores. And so if we could target them and if, let's say, they were incremental to the parent brand, right? So this line extension was offering something that the parent brand couldn't offer, but the line extension did, like, now with no sugar, you know, whatever it might be, right? And so we have this target group that, you know, will not eat sugar, but would otherwise have the product. So that was one of the rules. Cannibalization was the rule. Like, if this were highly incremental or targetable or something like that, then we could break the rule about overall purchase intent because it's gonna give us volume, but in a different way. So it's a good question, Molly, you know, what do you do in those gray areas? And the idea, again, is to keep creating good decision rules, especially in big companies where it's harder to be, you know, a maverick. Things have to go through, you know, many, many layers of approval, etc., and it's very hard to move quickly. That's when it's good to create not only decision rules, but secondary rules, and that's where it's important. But the other part is just, you know, have that experience where you could kinda tell that something seems like it's good, but something's not working. I'm thinking of an example with a snack that I tested a long time ago, where the purchase intent was really high. It was at the top, so if anyone was just, like, a one-number person, they would say, Go for it! The other thing that was really high was uniqueness. So now, uniqueness could be really good or bad, right? You could have something uniquely good or uniquely bad, right? And both of those things are real situations. But here was a situation where the purchase intent was really high and the uniqueness was high. So what do you do there? Almost everyone would say, Let's go. But something was bothering me, and I don't know if, you know, if AI was looking at the situation, it would see the same thing I saw. As I looked into the open ends, it just seemed a little funny to me, and I couldn't put my finger on it yet. But I started looking at the data, and one thing that I saw was, and this was just a concept, right? So it was a snack concept. So, you know, they hadn't tried the product yet. So sometimes it's hard to make a decision just about an idea, but that's what they wanted to do. They were going too fast, and they were either gonna launch or not launch based on just an idea.
So I said, you know, the idea is very broadly appealing, and it's unique, so it sounds good, but I was kinda, you know, double-minded about it. And so I looked, and one thing I saw there was that claimed purchase frequency, how often are you gonna buy it, was pretty low. And I said, you know, I'm getting a little worried. And then I started digging into the open ends. And I said, What do you like about this product? And they were saying, Well, it's really cute, it looks fun. And, you know, after a while of looking at it, I said, you know what, this is gonna be big at first, you're gonna get a lot of people to try it, but I don't think it has staying power. It's not gonna sustain. And they launched it, and that's what happened. So the first few months, you saw really high numbers, and then they dropped because there was nothing that could get people to come back and repeat. So, you know, I learned a lesson. I kinda taught myself the lesson that if your only salient feature is being unique, that might not be enough. It might be if it's uniquely beneficial in the long term. But if it's just, hey, this looks like a really fun idea, let me try it, that might not keep someone coming back to try it again. So, you know, that was my suggestion. It wasn't to launch. It just seemed like it had too many, you know, negative factors. It looked like, if you were just gonna hit the market and leave, like, you know, pumpkin spice lattes or whatever, we're just gonna be in for a season, then we're gonna pull back, then it would work.
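The decision logic Charlie walks through, a hard purchase-intent cutoff, an "unless" escape valve for a targetable incremental subgroup, and a red flag when high intent and uniqueness come with low claimed frequency, might be sketched like this. Every threshold and input name here is hypothetical, not the client's actual rule book.

```python
# Hedged sketch of a go/no-go decision rule with an escape valve and a novelty-trap
# check. All percentile cut-offs and field names are illustrative assumptions.

def launch_recommendation(pi_percentile: float,
                          uniqueness_percentile: float,
                          claimed_frequency_percentile: float,
                          subgroup_pi_percentile: float,
                          subgroup_is_targetable: bool,
                          incremental_to_parent: bool) -> str:
    # Novelty-trap check: broad appeal and uniqueness without repeat potential.
    if pi_percentile >= 80 and uniqueness_percentile >= 80 and claimed_frequency_percentile < 40:
        return "caution: possible novelty trap, dig into the open ends before launching"

    # Primary rule: purchase intent in the top 20% of the database.
    if pi_percentile >= 80:
        return "go"

    # Escape valve: a reachable subgroup with very high intent that adds incremental volume.
    if subgroup_is_targetable and incremental_to_parent and subgroup_pi_percentile >= 90:
        return "go (via targetable incremental subgroup)"

    return "no-go"

print(launch_recommendation(85, 90, 30, 60, False, False))  # novelty-trap caution
print(launch_recommendation(70, 55, 60, 92, True, True))    # go via escape valve
```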
Molly - 00:24:47:
But pumpkin spice lattes are every season. I'm there.
Charlie - 00:24:51:
Right. And if it's that good, then maybe it'll have longevity. But this was a case where it didn't have the flavor staying power. It just didn't have it, and it was kind of a flash in the pan, so to speak.
Stephanie - 00:25:04:
I love just the layers of that, though, because I will say, like, I was tracking along, and, like, you were saying, you know, PI alone, not the best, and I agree with that. You layered on uniqueness, and I was like, yeah, there we go. Because I always think about, like, you know, when you cross PI and uniqueness and you make that quad chart, what you're really finding is white space, right? It's like, oh, well, here's white space to pay attention to. But then you went that third click further to say, there is something in here, right, that's, like, telling me that this isn't connoting white space. It's telling me that this is the novelty product, essentially.
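For the quad chart Stephanie mentions, crossing purchase intent with uniqueness, a bare-bones version might classify each concept against a benchmark cutoff on each axis, something like the sketch below. The cutoffs and scores are illustrative, not a real normative database.

```python
# Toy PI-by-uniqueness quad chart: each concept lands in one of four quadrants
# relative to a cut-off on each axis. Cut-offs and scores are hypothetical.

def quadrant(pi_pct: float, uniqueness_pct: float, cutoff: float = 50.0) -> str:
    high_pi = pi_pct >= cutoff
    high_unique = uniqueness_pct >= cutoff
    if high_pi and high_unique:
        return "potential white space (but check for the novelty trap)"
    if high_pi and not high_unique:
        return "broad appeal, me-too"
    if not high_pi and high_unique:
        return "niche or polarizing"
    return "weak on both"

for name, pi, uniq in [("Concept A", 85, 90), ("Concept B", 82, 35), ("Concept C", 30, 88)]:
    print(name, "->", quadrant(pi, uniq))
```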
Charlie - 00:25:39:
Right. And I still don't know if AI would pick that up or not. It might. And I'm sure over time it's gonna keep getting better and better. But those are the type of things to look for. And then other things, Molly, you were asking about, you know, creating context, and I was talking about consistency. Consistency in your promises, in positioning, can make a huge, huge difference, right? Are you overpromising or underpromising? If you're underpromising, it's probably better in the long run. If you're overpromising, you could kill your idea right away. So, concepts and research, you should have a certain amount of promise. Again, Goldilocks is a good example. Not too much overpromise, not too little, but just about right, you know? Because I've seen a lot of products that did very well because they underpromised and did horribly because they overpromised. An example was a silly thing, maybe sounding silly, but a dishwasher detergent, right? Automatic dishwasher detergent. This was when I worked at another company; I didn't work on this project, but I got the free sample. So I took home the dishwasher detergent. I used it. My wife used it. I remember my wife telling me, Get some more of that product. That's the best, you know, it's great. It's so good. Dishes are so clean. Okay. So after the project was over, I went to the guy who was running the project, and I said, I bet that did really good. He said, No, it did terribly. Got really bad product scores. He showed me the product scores. I said, Really? It was so good. It cleaned the dishes. I said, Let me see the concept. He shows me the concept, and the concept says something to the extent of, This product is so good. It even removes your worst stains, even burnt-on grease. Well, you know, the problem was I think people believed it, and they said, Oh, I'll buy it. You know, even baked-on grease, well, you know, usually, you have to get a hammer and chisel to get rid of baked-on grease, right? But that was the promise. And guess what? It didn't remove baked-on grease. It was great, but it wasn't that great. And you just gotta be a little careful not to overpromise. And it's like someone once said to me, he said, Listen, anyone could write a good concept, but you might not be able to deliver. So it's just a long way of saying, you gotta have the right amount of promise. You're better off underpromising. I've done a number of food products that underpromised because they talked about health and nutrition. You know, you can't eat nutrition. You can't eat numbers. You know, low fat and this and that. You can't live on that. Very few people will buy food products that they really hate. But I've had cases where the concept was all about the numbers, the ingredients, and the quality of the ingredients, and it looked unappetizing, but people brought it home and said, This tastes pretty good. I'll buy it again. And I've seen really great repurchase intent because you didn't promise me too much. I was basically buying the numbers. And when you're buying a food product based on numbers, there's a really good chance you're not gonna buy it a second time, you know? You're just buying it once because of the numbers, and that's a promise that, you know, is kinda factual. So, yeah, I'll buy it, but if it tastes bad, that's it. That's one purchase out of, you know, a lifetime of purchases. So I've seen it go both ways, underpromising and overpromising, and both of those things can really flip your numbers at the end.
But I think a big mistake is dialing things up too much at the concept level because the product is the same product, but it winds up really, really being very bad because people expected too much. So managing expectations is important, and companies need to have kinda consistent policies with managing expectations. Do we really turn it up and promise a lot to get them to buy once? That's a great short-term idea. Long term, it's really gonna kill you. So I think usually the answer is somewhere in the middle. The Goldilocks answer usually works best.
Stephanie - 00:29:37:
Well, and that's so funny because the data's not gonna tell you that, right? The data's just gonna be like, this is great, but it's that experience like you're saying, and it bears out at shelf, which is the worst way to find something out, right? Like, that's not when you wanna find out that your product isn't gonna sell or hit the numbers.
Charlie - 00:29:53:
Right. You really have to understand, like, what you're promising and what people are gonna do as a result. And, only mildly related, there have been a couple of elections in the past number of years where the results were a little surprising. And part of that was, you know, going to the right sample and being able to predict things. And in the case of elections, it's likely voters. That's your right sample. So it's almost like two parts to that piece. One is who you're gonna vote for, and I think we're pretty good at that. But the other is predicting who's gonna go to the polls, and we need to do more research on that. I mean, I have my own opinions on who goes to the polls, but I think those are the type of things.
Molly - 00:30:40:
So you said so many interesting things about having to contextualize, so many things that researchers and end users of data are gonna have to balance, what the biases look like, what the context looks like, what these different forecasts could look like. So given all of that, how do you try and find which research method is the best to use to test a product or concept? How can you actually be sure that the methods capture what's truly resonating with customers?
Charlie - 00:31:09:
That's a good question. You know, we used to have an expression. It was, it's okay to be wrong as long as you're consistently wrong.
Molly - 00:31:17:
I love it.
Charlie - 00:31:18:
Yeah. Because it's true, you know, there are so many different research methodologies, but the truth is, if you're consistent and you can create a correlation between research and results, then that's the right research methodology to use. So consistency kinda trumps methodology in that sense, which sounds surprising because, you know, obviously, methodology is important, but you could have three great methodologies that are completely different and give you different results, and then you have no context. So context is still king. And so the other thing I would say along with that is consistency, but also know that you're answering the right questions. And what I mean by that is you need to understand which levers actually give you the results. So, you know, I'll go back to my forecasting examples. There are a lot of different ones. But, here's one, pharmaceuticals forecasting a new drug. How do you forecast a new drug? Who do you talk to? Well, with some drugs, you could talk to the consumer; there's a lot of advertising direct to consumer, right, these days, and the consumer could then go to the doctor and say, Hey, doc, I saw this ad for a drug that will help me with, let's say, baldness, right? I'm going bald, and this drug will help me grow hair. Well, it's probably good to go to the consumer for that kind of research, right? Because the doctor isn't necessarily gonna prescribe, you know, something for baldness to a patient who doesn't ask for that. So it's probably good to go to the patient, but that might not be enough because the doctor has to say yes. So you go to the doctor. But we found pretty early on, the doctor could be all in favor of it and the patient could be in favor of it, but your managed care says, no. That's not gonna be on our formulary. So it could be a great drug. And as a matter of fact, a lot of the new drugs we worked with never made it to formulary, or if they did, it would take a couple of years because managed care companies just weren't ready. So it could be a great new product, but I don't know if we wanna cover it. Let's see what other people do. Let's wait a little while. So you gotta know how to ask the right question. So, you know, what if I went back and said, Hey, consumers love this. Well, so what? That's not gonna drive your sales. I probably shouldn't have even wasted my time with consumers. I probably should have interviewed managed care professionals, the right ones, the people who make the decisions. So you gotta go to the decision maker. There are so many products like that, kids' products, right, where the kid loves it, but the mom says no. Or here's one of my favorites: men's colognes. About 70% of those products are bought by women. So, you know, you could be going to the wrong person. So, what are the levers? What is it that actually drives results? What are the decisions? And then your research should kinda match the decision-making process, and that's where context and history help. Well, how were these decisions made, and what drove the decision? So you gotta ask the right question, and you gotta ask the right people the right questions.
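One way to picture why the formulary gate in Charlie's pharma example dominates the forecast: adoption needs a yes from every decision maker in the chain, so a rough expected uptake is the product of the stage probabilities. The numbers below are invented purely to show the arithmetic.

```python
# Hedged sketch of "ask the right people": uptake requires every decision maker
# in the chain to say yes, so expected adoption is roughly the product of the
# stage probabilities. All probabilities are illustrative.

stages = {
    "patient asks the doctor":            0.60,
    "doctor willing to prescribe":        0.70,
    "managed care puts it on formulary":  0.15,  # the gate that quietly kills the forecast
}

adoption = 1.0
for stage, p in stages.items():
    adoption *= p
    print(f"after '{stage}': {adoption:.3f}")

# Consumer enthusiasm alone (0.60) overstates uptake by roughly an order of magnitude here.
```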
Stephanie - 00:34:33:
Makes a lot of sense. I think too, and I think about this in the context of, like, where it comes up for me is in a concept test where you happen to have more than one concept that's performing well across, you know, several metrics. And for the client, it becomes this decision. And a lot of times, I wanna say don't make the decision based on this because this evidence is not differentiating them, but, like, surely, there must be a difference in how much it costs to produce each of these, where they can be distributed, like, you have so many other things in your context that I don't have in mind that you're gonna need to draw upon in this moment to make this decision.
Charlie - 00:35:10:
Exactly. Yeah. Another example is, a lot of times, a product's success is almost a self-fulfilling prophecy. So if we in the company believe this is gonna be a success, we're gonna put our money behind it, and we're gonna put our collective energy behind it. So I remember, one company I worked with had a promotion, and, you know, let's say it was around the Super Bowl. Well, you know, the whole sales force put their energy in there, and, you know, guess what? It was a very successful promotion, and it wound up having staying power. That's because we all got behind it. And so the product itself wasn't that great, and I'm not sure the promotion was that great, really. But the energy behind it was great, and the push was high. And those are the type of things you can't necessarily predict, or maybe you can, but you gotta talk to the right people. So if all of a sudden your sales force is gonna be incented this quarter by selling this product, by making sure that this display is in the store, whatever it is. Well, that's probably gonna drive things more than anything else. So, yeah, just again, asking the right questions, but understanding what drives decisions. And sometimes, you know, what happens is we're asking the wrong questions, and the data we get doesn't matter.
Molly - 00:36:32:
That's so interesting. Never underestimate the power of sheer force of will to get it across the line.
Charlie - 00:36:38:
That's exactly right. So you really have to understand. And, again, history will tell you what it is that's really made something successful or not, and the right answer to the wrong question usually won't do that. And that's where AI will get there, and people with history will get there: being able to look at history and say, what's worked and what are the drivers? Understanding drivers in the whole decision-making process and the success factors, and also, for that matter, being able to define success when we make decisions. Like, what are we deciding for? Are we deciding for a short-term win or something that's gonna last two, three years in the market? So we have to build the right model of success and the right decision hierarchy model, and then the research will follow all of that, and then we'll have perspective. But it's a whole process, and research is just one cog in that wheel.
Stephanie - 00:37:36:
Exactly. Yeah. That's such a good point. I wanna switch gears a little bit. Still, you know, in the context of technology advances, AI, automation, I am curious how you personally, in your own practice, ensure that human interpretation and empathy stay central to how you do your work and understand your research results, even though I'm assuming that, executionally, you're as interested in leveraging those tools as the rest of us?
Charlie - 00:38:04:
It's very important. A lot of the research I do is quantitative with a qualitative piece. And, you know, oftentimes, the qual really tells you, gives you the color behind the data and really tells you the story that you don't get in straight numbers. And, you know, will you buy it? Yes or no? Is this a good value? Yes or no type of thing. So the qual is important. So far, what I'm seeing is that AI is kinda good, more at the big-picture coding. I could probably go through a thousand, maybe 2,000 responses. After that, I get a little tired, you know, but if you're working with really big data, you have no choice, you have to use technology. And, obviously, technology allows us to do a lot that we could not do in the past, so that's great. But still, what I find personally is just for my own sake as a researcher, I can't really report results unless I've kinda got my hands dirty. And that was true 30 years ago, as it is now. I could never just report results if someone did the work for me and just handed me the report, and I said, Here's the report. So, you know, I kinda feel like you kinda gotta wrap your mind around what's going on. And so the best thing I've always found is open ends. That really gives me the picture. And, you know, right now, we could tell positive or negative sentiment. So we could get a read on sentiment, but when you, like, read them yourself, and maybe you don't have to go through 50,000, a couple hundred will do the job. But you could get a good sense for things like anger, disappointment, and just how deep it was. So, you know, your quant results will tell you, will give you a level, but the true depth can only be measured, I think, by looking at open ends. And so I do think AI is great in just giving you 50% said it was the color, 20% said it was the texture, and 5% said it was the taste. And it could synthesize a lot of data, but still, at the end, you're talking about that human factor. I think it's still a human business, and you do need somebody to kinda really get the feel for what's going on. And it's kinda hard to get that. You're still not getting enough of an empathetic read from AI to really, like, express the depths of people's disappointment with the product, or their elation; it goes both ways.
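The "big picture" coding Charlie says machines handle well, tallying themes and a coarse sentiment count across open ends, might look something like this in miniature. The theme keywords, negative cues, and sample verbatims are all made up; as he notes, reading a couple hundred verbatims yourself is still how you gauge the depth of anger, disappointment, or elation.

```python
from collections import Counter

# Toy open-end coder: tally themes by keyword and count crude negative-sentiment
# hits. Keywords and responses are illustrative assumptions, not a real coding frame.

THEMES = {
    "color":   ["color", "colour"],
    "texture": ["texture", "crunchy", "soggy"],
    "taste":   ["taste", "flavor", "delicious", "bland"],
}
NEGATIVE_CUES = ["disappointed", "angry", "waste", "terrible", "bland", "soggy"]

def code_open_ends(responses: list[str]) -> tuple[Counter, int]:
    theme_counts: Counter = Counter()
    negative_hits = 0
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                theme_counts[theme] += 1
        if any(cue in lowered for cue in NEGATIVE_CUES):
            negative_hits += 1
    return theme_counts, negative_hits

sample = ["Loved the color but the texture was soggy",
          "Terrible taste, really disappointed",
          "Fun color, great flavor"]
counts, negatives = code_open_ends(sample)
print(counts, negatives)  # theme tallies plus a crude negative-sentiment count
```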
Stephanie - 00:40:36:
It is. It's so true. And I think, like, on our platform at AYTM, we have full content analysis that's automated now with AI. And it's great, right? So we'll take an open end and turn it into structured data in a minute, right? Wonderful. I still find in reporting, nothing is as powerful in your storytelling process as a representative quote that just really nails something that has come up among more than one person.
Charlie - 00:41:02:
Yeah. And, Stephanie, I don't know how you present results. A lot of times, you know, clients will just take the results and run with them, and no one has time for a presentation. And that's kind of a shame because there still is nothing like a presentation, and I understand you don't have time. You can't get everyone together in one room and all that. It's hard to do that. When I started, presentations were, like, an hour and a half. Then it was an hour, and then it was thirty minutes, and people would come ten minutes late, and, you know, just get to the bottom line. And so a lot of that's gone. And so, you know, my initial example with a cell phone, you know, with technology, you're really getting efficiency, but you might not be getting the depth of feel and, quite frankly, emotion. And at the end of the day, almost every product is bought based on emotion. I don't care what it is. There is some sort of feeling you have that helped you make a decision, and you get that from the human touch. So I think we all still have some value in this world. We can't be completely replaced. Not yet.
Molly - 00:42:09:
Yeah. And I'm glad that you brought up again the analogy with the cell phone versus the landline. Like, you mentioned that early in our conversation, and that has been resonating in my mind because I think that's a perfect kind of example of what we're talking about. Where's the balancing act here? People need to make decisions fast, but like you said, we keep going back to context, we keep going back to robustness, we keep going back to quotes and open ends. Where, I guess, is that Goldilocks zone? You know, we've mentioned it a few times. Where's the spot to make sure that your clients can go to market fast and capitalize on an opportunity, but also that it is reliable? Where's that spot?
Charlie - 00:42:50:
When I went to, I guess it was a couple of years ago, right after ChatGPT came out, the Quirk's and TMRE conferences, all the presentations, it was like, I don't know, 70% were on AI. And I would say, you know, a pretty good percentage of people attending these seminars were just there to answer the question, Will I still have a job in a couple of years? And I think the answer is the same technology answer. You know, for the past five hundred years, there's always been a question: will technology replace me? And the answer generally has been, no, technology hasn't replaced jobs. It's just shifted them. So when we went from, you know, the hand plough to the tractor, you know, those kinds of things, or when we went from hand manufacturing to factories, economies have done better, not worse, right? Most technology has improved economies. And, you know, when we went to the Internet, yeah, a lot of stores went out of business. But in general, I mean, the stock market went up like crazy when the Internet came out. You know, it really hit around the turn of the century, right? And so a quote that I really liked about AI was, you're not gonna be replaced by AI, but you might be replaced by someone who knows how to use AI.
Molly - 00:44:12:
Yes. It's the invitation to new skills. It's less about I'm gonna sit and I'm gonna write this article versus I am a master at prompting AI about how to write this article. It's a shift of skills.
Charlie - 00:44:26:
Exactly. It's a tool. And so the one who knows how to use new tools is gonna have an advantage. And that's been true with, you know, everything. When Excel first came out, the analysts who really knew how to use Excel and were doing pivot tables and all these things were way ahead of everybody who was just, you know, writing things down on a piece of paper. And that's gonna be the same with AI. People who use AI are gonna be able to get to the bottom line quickly, so that's a skill. But, ultimately, it still is the ability to influence, the ability to understand things, the ability to put things in perspective and communicate that. That's still there, and that hasn't gone away yet. So, yeah, I think that the sweet spot is knowing how to use the tools, but also knowing where they break, and they do break. Knowing where they break, knowing where they could benefit you, and knowing how they could help you as a human to communicate and influence. That, I think, is the sweet spot. So we shouldn't be afraid of AI, but we shouldn't overuse it either because it's not gonna replace people. And the one example, I think, is, you know, when I want customer service: AI will just get you to a certain spot, but if you don't switch me to a human, you're not really making me happy. Like, we all know that.
Molly - 00:45:49:
Listen. I pressed 2. I don't know how much more you want from me. I pressed 2.
Charlie - 00:45:53:
Yes.
Molly - 00:45:55:
Representative.
Charlie - 00:45:56:
Yeah. And then they'll say, for your convenience. No. It's not for my convenience. It's for your convenience. And the companies that are really good are companies like Amazon that are both high-tech and high-touch. Like, if I want a human, I can find a human at Amazon, and they'll actually return my call. I know that secret phone number. But their AI works great for a lot of things, like doing a return and all that. It's wonderful. But if I have a real problem, humans are really gonna help me out a lot, and that's, I think, in our business as well. AI is gonna be a big boost to our productivity, but without humans, we're really not gonna get to that next step of being able to influence and come up with good ideas. And truly, the big word in research is insights. We're not gonna really be able to communicate insights by machine. We still need human beings who understand emotions, who understand the depth of personality and character, etcetera. So, that's the balance, I think.
Stephanie - 00:47:00:
So then to switch gears and really kinda close this out, Charlie, and this has been an awesome conversation. Thank you so much. A question that we like to ask a lot of our guests is: if you think about someone just starting out in market research, particularly in the context of interpreting data and building their insights, what advice would you give them to sort of grow their discernment and interpretation skills and avoid these sorts of common pitfalls that we've talked about today?
Charlie - 00:47:25:
Yeah. Good question. And I think it gets back to everything we've spoken about. When you're new, it's good to have a mentor. Talk to people who've been around. I think one of the things that's probably helped me most in my career, something I alluded to, is understanding pitfalls. Understanding where technology just breaks down, where it doesn't work, where it actually misguides you, I think that's probably helped me more than anything, just having enough experience to see something fail. So I would tell a new person, understand where things don't work and understand history and talk to people who've been around for a long time. And before you just take data and spit it out, understand where it's just not sensitive to reality. So when you're young, you don't have any history. And I would say talk to people who've been around for a long time, who have history, who really understand how technology can help you and how it could really burn you, and learn to gain perspective that way. When machines help you and when they don't, what works and what hasn't, that would be my advice: learn to have your own opinions and develop your opinions by looking back at history and understanding context.
Molly - 00:48:48:
I was gonna say, I feel like this was going right back to understanding context.
Charlie - 00:48:52:
Yeah. It's all about context and consistency.
Stephanie - 00:48:56:
Those are great takeaways. Well, Charlie, thanks so much for your time today. Again, it's been absolutely fabulous to talk to you.
Molly - 00:49:04:
Yes. Such really salient takeaways for any type of audience, so thank you again so much for your time and for sharing your thoughts.
Charlie - 00:49:11:
Thank you very much. It's very nice to meet you and talk with you.
Stephanie - 00:49:15:
That's it for today's episode. Thanks for listening.
Molly - 00:49:18:
The Curiosity Current is brought to you by AYTM, where curiosity meets cutting-edge research. To learn more about how AYTM helps brands stay in tune with their audiences, head over to aytm.com.
Stephanie - 00:49:32:
And don't forget to follow or subscribe to the Curiosity Current on YouTube, Apple Podcasts, Spotify, or wherever you like to listen.
Molly - 00:49:41:
Thanks again for joining us, and remember, always stay curious.
Stephanie - 00:49:45:
Until next time.
Episode Resources
- The Curiosity Current: A Market Research Podcast on Apple Podcasts
- The Curiosity Current: A Market Research Podcast on Spotify
- The Curiosity Current: A Market Research Podcast on YouTube