Description
In this episode of The Curiosity Current, Kate O’Keeffe, CEO and co-founder of Heatseeker, joins Molly to discuss the limitations of traditional market research, the persistent gap between what people say and what they do, and how live market experimentation is reshaping how modern teams make decisions.
Kate shares her journey from building a consumer brand to leading innovation at Cisco and BCG, where she saw firsthand how difficult it was to get reliable customer insight. That experience led her to build Heatseeker, a platform designed to test real customer behavior through live experiments rather than relying solely on surveys or interviews.
The conversation explores why surveys often fail to predict real-world behavior, how marketers can identify true buying drivers, and what it takes to generate insights that actually influence decisions. Kate also explains how experimentation and synthetic audiences can work together, and why the future of market research depends on combining behavioral data with real-time validation.
Episode Resources
- Kate O’Keeffe on LinkedIn
- Heatseeker Website
- Stephanie Vance on LinkedIn
- Molly Strawn-Carreño on LinkedIn
- The Curiosity Current: A Market Research Podcast on Apple Podcasts
- The Curiosity Current: A Market Research Podcast on Spotify
- The Curiosity Current: A Market Research Podcast on YouTube
Transcript
Kate - 00:00:00:
I would like to see the market research industry stop asking people via surveys what they would pay and whether they would buy. And what I would like to see it do more of is help marketers to use the insights that it's generated. I get too sad when I see really great quality research, great quality ethnography, delivered by the true craftspeople, no leading the witness, beautiful interrogation of human frictions and needs. And then I get into a brand, and it's sitting in the drive somewhere, you know, and I can see the last time it was opened, and that makes me really, really sad because, you know, that work should be richly applied, and it is the beautiful part of the work that we do when we're able to grab high-quality work like that and put it in the mouth of a synthetic persona.
Molly - 00:00:53:
Hello, fellow insight seekers. I'm your host, Molly, and welcome to the Curiosity Current. We're so glad to have you here.
Stephanie - 00:01:01:
And I'm your host, Stephanie. We're here to dive into the fast-moving waters of market research where curiosity isn't just encouraged, it's essential.
Molly - 00:01:10:
Each episode, we'll explore what's shaping the world of consumer behavior from fresh trends and new tech to the stories behind the data.
Stephanie - 00:01:18:
From bold innovations to the human quirks that move markets, we'll explore how curiosity fuels smarter research and sharper insights. So, whether you're deep into the data or just here for the fun of discovery, grab your life vest and join us as we ride the curiosity current.
Molly - 00:01:38:
Today on The Curiosity Current, we are joined by Kate O'Keeffe, CEO and co-founder of Heatseeker. Kate has spent more than two decades helping brands make smarter, faster decisions, and she's developed a uniquely hands-on approach to gathering market insights along the way. At Heatseeker, she's building a customer insights engine designed for marketing leaders who need clarity at the speed of culture, using controlled experiments and AI-powered synthetic audiences to pressure test big decisions before real budgets and reputations are on the line. Today, we're getting into the say-do gap, why traditional research cycles are breaking down, and how continuous experimentation and synthetic insight are reshaping the way modern teams make decisions. Kate, we are so excited for our conversation today. Welcome to the show.
Kate - 00:02:24:
Thank you, Molly. Wildly excited to be here.
Molly - 00:02:26:
Let's just dive into this. Your career has moved through so many different worlds between executive advising, innovation labs, and now building Heatseeker. So, before we get into the meat of the say-do gap, tell us a little bit about your journey and, you know, starting your own business. The entrepreneur life is not for everybody, so especially shout out to you there.
Kate - 00:02:48:
Thank you, Molly. Yeah. I spent my 20s building a consumer brand. I was a shoe designer. I had stores in a couple of cities.
Molly - 00:02:56:
Oh, I love that.
Kate - 00:02:57:
Yeah. I used to make shoes predominantly for brides on their wedding day. Yeah. I used to design them myself and get them manufactured. I was wildly excited about that. But it did hit me like a ton of bricks at some point that, like, the whole retail and fashion life, you know, really wasn't for me. And I took a hard left turn and went into deep tech. So, I spent ten years in heads of innovation roles for Cisco in Silicon Valley, which was amazing. And it was a real change, of course, but they were looking for, like, entrepreneurs in residence that had, you know, like, an entrepreneurial background, but something really different than deep tech, someone who thought differently about consumers and customers. So, I spent ten years there. And in the B2B space, you know, getting customer insights is a nightmare. Like, you know, you can't assemble, you know, a panel or put a survey out to, like, a thousand chief security officers in Japan, as an example. So, I used to get customer insights by building bullpens of end users in the middle of Cisco, and we used to do sort of live prototyping where we would put different versions of a product in front of our customers.
Molly - 00:04:18:
Mhmm.
Kate - 00:04:19:
And in a Cisco context, for those of you who don't know, Cisco makes, like, networking equipment and phones. It's not easy to prototype, you know, a networking product. “You over there, you're a firewall. You over there, walk over here, mirror a packet of data.” And I guess what that process really taught me was the importance of prototyping. You know? Like, language often gets in the way. You know? Your understanding of what I mean by a brand, as an example, is completely different. So, by focusing on, like, putting a real prototype in front of, like, an end user, you know, we can find out, you know, such different things about what you do and don't want, what you do and don't respond to. So, yeah, I had this fancy technique, live users in the room, live prototyping, and it was more or less for that technique that BCG, Boston Consulting Group, hired me as a partner, to kind of bring that technique. You know? Bullpens in the room, you know, 40 or 50 users, we'll do live building with them with big brands, and we'll move faster. But then COVID hit. Like, two weeks into that job, COVID hit.
Molly - 00:05:27:
Great.
Kate - 00:05:28:
Yeah. And so my whole, like, what I was famous for, you know, live customers in a room responding to prototypes and stimulus, you know, was dead in the water. So, how are we gonna ship prototypes in front of customers at scale, really, really quickly, in an environment where we were at home, you know, we couldn't have a customer in the room, and maybe we wouldn't be able to for years on end? And so that's really what led me to the live market experimentation technique that drives a lot of Heatseeker now. So, you know, we had a team that, when we would be building a digital attack brand for bigger brands, which was my job there in their Digital Ventures business, you know, when we were kicking off the process, like, when we were deciding which customer to serve, you know, like, what is the most important job to be done? Pain point? What's the most important buying driver or value prop in this space or for this customer? We could get all of those answers by shipping panels of ads live on either Meta or LinkedIn. It's, I guess, like taking that time-honored A/B test that we all use when it comes to creative testing, but applying it to deep, you know, ethnographic customer understanding. And so that technique, like, there was a whole team that used to do that. It was a whole lot of work exporting the CSV files out of Meta, working it all out, making sure the algorithm hadn't favored one variant in the experiment over the other, you know, really, really fiddly. And so when I left BCG, tired and burned out as every good strategy consultant is, I was like, somebody needs to build a platform that allows us to run experiments at scale. So, that was the first gen of Heatseeker.
It was, you know, basically my answer to: why can't brand builders, marketers, and product builders instantly get a view of the customer's world end-to-end, in the same way that, you know, the finance department has a perfectly statistically significant, digitally backed view of what's going on in finance? The engineering department knows exactly what its uptime is. But for us who live in the customer world, we don't have a representation, really, of exactly what is the most important customer job to be done, pain point, buying driver, benefit, or offer that suits that segment or persona at all times. And so the first gen of Heatseeker was to build that through experimentation. The second came when we worked out that there's a stack of experiments that do an amazing job of building a person, building a synthetic view of a person, a clone, if you like, or a digital twin of a market segment. So, the second generation is your ability to talk to your segments for the first time, ask some questions, ask them why they didn't buy that great thing you shipped six months ago, you know, in a way that's, you know, free and easy and accurate. So, we build those digital twins. And finally, now we're able to put those synthetics to work when it comes to supporting you through this new agentic AI-driven go-to-market that we're all beginning to live. You know, we're all already live with workflows and delivery, but what is the mechanism by which we're validating that? So, Heatseeker is covering that end-to-end.
Molly - 00:08:54:
Wow. That's so much. And thank you so much for taking us through that journey. I'm curious too about, you know, in your work at BCG, there's obviously always a moment in a founder's mind where they've said that's the gap. I'm gonna go fill it. Was there a moment that you had where you said, this is the problem? No one is solving it in the way that it should be solved for the end-to-end client, and that really drives, I'm gonna go, I'm gonna go out there and build this. Did you have a moment where you were kind of like, I need to do this because nobody's fulfilling my need?
Kate - 00:09:29:
Yeah. That's a great question. I mean, I think during my time, BCG had, I don't think they have it anymore, this sort of incubation engine called Digital Ventures that used to make brands, little brands for big brands, digital attack brands for bigger brands. And so when I went through that process, I had to deliver, from soup to nuts, new brands, new products, new offers to the marketplace, many, many, many times in the three years that I was there. And they had a very formulaic way of going through that process. And so every time I did that, like, every time, you know, I met my new bank that I was gonna build, you know, a new product for, or, like, I met an insurance company I was gonna take direct to consumer for the first time, like, every time I went through that process, it would hit me like a ton of bricks that there is not a great platform out there that will go out for me into the market, interrogate and segment, you know, who I should be targeting and, like, end-to-end for that customer, you know, what are their pain points, jobs to be done, like, and to a level of stat sig. You know, BCG would do a beautiful job of running ethnographic research, and we would put on a slide, here are your three personas, and it would be based on, at best, 12 conversations that we had in the market, you know? Here is Cautious Carol, and here is Studious Samantha, and we would have, like, two verbatims and a couple of dot points about that. And those personas would enter the nomenclature of the brand about who we were serving, and that slide would be recycled and rehashed. And so, in a world where, you know, I mean, they are a strategy consulting giant, there's so much rigor behind every aspect of what they did. I mean, they are one of the best in the world at building strategies for huge companies and $100,000,000 digital transformations.
And here, something so important, who we're serving, was constructed of, at best, two or three conversations each across a 12-conversation ethnostudy. And so that was something that really stuck in my craw. It's like, why does every other department in a company get first-class, real-time, take-it-to-the-bank data about what we're doing and who we're serving? And yet for those of us who are building brands, serving brands, delivering new products to the market, you know, delivering new offers to the market, we don't have that. And so that really stuck in my craw. Like, that was something I wanted to fix. Like, I believe that those of us who serve customers, those of us who are in marketing, like, we deserve to have a first-class system of record of who our customers are, especially now that we're all rolling out these agentic AI workflows. It's so risky. It's so risky. When we do that at scale, you know, what is the representation of our customers that they're counting on?
Molly - 00:12:57:
For sure.
Kate - 00:12:58:
Like, what is the go/no-go in that workflow that says, you know what? Like, I ran a synthetic validation sequence on this workflow, and this workflow needs to stop because we're serving the wrong segment with this message. I think that, you know, as agentic empowers us in the customer spaces so beautifully, it also puts us at risk if we're not representing that customer appropriately at all times.
Molly - 00:13:23:
Yeah. And the idea of ensuring that the customer has an accurate thing to tell you, I think, is also really important, too, that they are telling you exactly how they would behave, and not saying, this is how I would like to behave, versus how I actually behave in market. And those two things, the say-do gap between what they tell researchers they're gonna do versus what they actually do, can be completely different. So, take us through a bit of what's going on underneath that, and what's sort of the mechanism for solving that and closing that, so the predictions that we are making, that these huge brands are making big financial decisions on, are reliable and predictable.
Kate - 00:14:07:
Yeah. It's such a great point, you know? So many of the tools that we're currently using to get this information, you know, surveys and interviews, anything where we're incentivizing the customer to tell us something, and anything, really, where we're looking at what they say rather than what they do. There have been so many great studies about this. There's something like a 20% to 30% correlation between how people say they will behave and how they actually behave.
Molly - 00:14:40:
Mhmm.
Kate - 00:14:41:
And there's a lot of reasons for that. Nobody really means to lie, like, they don't really believe that they're lying, but often, they just don't know who they'll be later on in the day. I don't know, Molly, if you've ever bought yourself a movie ticket in the morning, and come the afternoon, you're like, you know what? Can we just do Netflix? You know? Like, we often just don't know, like, at the point of action, you know, unless we're right there at that moment. The other thing is, especially when it comes to, like, long-form interviews, which I still love and do, actually. Like, I can sometimes sound like I'm throwing high-quality ethnography under the bus. I absolutely am not. I think your ability to interrogate, you know, a person on the five whys of their drivers as to why they do what they do can give us great answers and can add color to other techniques, you know, like experimentation on Heatseeker. But people wanna be a good interview subject, you know. They know that this is your beautiful product that you love so much. They wanna tell you that; everybody's so kind. Yeah. Oh, yeah. I get why I would, I've got a cousin who would love it. I mean, everybody wants to help you on your journey.
Molly - 00:15:52:
Mhmm.
Kate - 00:15:53:
When sometimes what you need on your journey is for somebody to break your heart and say, absolutely no way. I think a great example of that is Heatseeker serves some really large brands that are in the food delivery business, and they had a really big partnership with a big box store. And we ran an experiment in market where, you know, "order from the big box store, delivered to your home" outperformed "get 30% cash back on your order from that big box store." Yeah.
Molly - 00:16:33:
Really?
Kate - 00:16:34:
Yeah. For the ladies and gentlemen, for those of you who are audio only, Molly has a shocked look on her face. So, that is an example of, like, whoa, that is, like, shockingly counterintuitive.
Molly - 00:16:49:
Yeah. Makes no sense.
Kate - 00:16:52:
If you think about it, and we assume, I don't know for sure, but we assume that when people see, like, cash back, you know, we assume there's a lot of hassle and bullshit associated with getting an offer, and maybe it's just, oh my God, I didn't know that you guys served them. Like, yeah, yeah, I wanna order that. And it's an example of something where, you know, for us as marketers, first of all, we're up against being told by surveys, like, the wrong answers about what people want. Second of all, our own gut feeling can often not represent what happens in the real world. You know, when we run a live experiment for 100,000 people, and we ship two ads that look identical, we strip out the algorithmic influence, and we just put them head-to-head. How's it gonna perform? And unfortunately, for us as marketers, we're often really not, and really expensively not, correct about how the market will respond. And so that's what we're looking to solve for here. So, Heatseeker solves that by a couple of things. First of all, we train synthetics for you using live market experiments, and we usually do that, between you and me, Molly, on, like, a live launch that you've got. So, you've got a live launch, you know, lots riding on it. And so we'll run our stack of experiments on what you're doing for that launch. And it just so happens that what we learn through that process is enough. You know, it's usually 5 or 6 live experiments. The media buy is usually super small. It usually takes us about 4 or 5 days to run this stack live in market, and we usually grab a problem you're already grappling with. And at the end of that process, you have trained synthetics that have responded to you at a level of stat sig you can trust. Like, what is the customer pain point, buying driver, benefit, feature? What's the first thing that they're gonna respond to? You know, when they think about, like, meal delivery late at night, is it, oh my God, 
it's 5 o'clock, and I can't be bothered? Is it, you know, I wanna demonstrate that I'm the smartest mom with my mom hack, you know, and I'm not gonna hit any fees? Is it, you know, like, I want something different, and I'm sick of everything in my fridge? Those are three really different jobs to be done, and what is it that causes each segment to respond? And, you know, really getting granular so we can be personalized in those messages is super, super important. So, we run experiments live to train your synthetics. We often ingest, you know, other market research that you've done. We give you lots of credit for everything you've already done and learned. We will also plug in your performance marketing data so we can see what has and hasn't been responded to by that segment over the last twelve months, and that gives us this incredibly clear picture of what people will respond to so that you can just chat in real time. Is it this or that? Write me three headlines. Write me my creative brief. Oh my God, my creative agency gave me 30 pieces of creative for this next campaign; pick the 6 that will kill, so that I can save time, and I don't have to spend a fortune, you know, training the algorithm on my campaign. Can we just skip to the good part? Can we just skip to the bit where everything works? And that's what we're aiming to do.
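[Editor's note: for readers curious what "a level of stat sig you can trust" looks like mechanically, a head-to-head ad test like the ones Kate describes is commonly scored with a two-proportion z-test on click-through counts. This is an illustrative sketch of that general statistical technique, not Heatseeker's actual implementation; the variant numbers are hypothetical.]

```python
from math import sqrt, erfc

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Compare the click-through rates of two ad variants.

    Returns (z, p_value) for the two-sided test of the null hypothesis
    that both variants have the same true click-through rate.
    """
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that the variants perform equally.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_a - rate_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p from the normal approximation
    return z, p_value

# Hypothetical head-to-head: value prop A vs. value prop B, same creative.
z, p = two_proportion_z_test(clicks_a=120, views_a=5000, clicks_b=90, views_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With roughly 5,000 impressions per variant, a 2.4% versus 1.8% click-through split is just significant at the 5% level; a much smaller audience would leave the same gap indistinguishable from noise, which is one reason live experiments need a meaningful media buy behind each variant.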
Molly - 00:20:19:
Before we dig into the training of the synthetics, which I absolutely want to get into, I wanna pull back a smidge and talk about something that you had mentioned, which was this idea of doing experiments to get you where surveys can't. I feel like there's a lot of research in the blended qual-quant space where quant methods are applied to pull from that qual data. So, you know, when you're thinking about this whole insights engine that you're looking to produce, what's something that maybe you find in an actual experiment that is not captured in a survey, and how do those interplay together?
Kate - 00:21:04:
I mean, the first thing that happens in an experiment that does not happen in a survey is the say-do gap, like, the level of inaccuracy that comes from a survey. That's the first thing that's missing. The next thing is my favorite experiment to run on Heatseeker, and by far the most popular still, despite the fact that we have pricing and we have all this other shit that people like. What they really love is what we call a buying driver test. Like, push comes to shove, when you're doomscrolling next to your partner at midnight in your jammies, when that segment sees this value prop versus that value prop, what causes them to click on an ad and buy? And no survey can give you that, because no survey is an ad that connects to a buying journey. And so it's an example of something that an experiment can give us. Like, push comes to shove, you should know that when you express your value prop this way, or when you talk about this feature, or when there's this offer, it does or does not cause your audience to buy. It might be interesting. It might help you with awareness. But does it actually cause them to buy? Which is what we get when we actually run a live, you know, ad-driven, stimulus-driven market experiment. So, that is an example of something that we just can't get from other forms of research. But there are also things, obviously, where the experiments are not the full picture.
Molly - 00:22:42:
Mhmm.
Kate - 00:22:43:
I still love long-form ethnography. I still love "talk to me about the frictions of your day." You know, I was on a project at BCG where we built the first-ever fitness app for women in Indonesia that was, you know, super thoughtful and careful about the modesty traditions of that market. And that was an example of something where, you know, had I just gone with a stimulus option, would I have missed the subtleties of the kinds of exercises that you like to engage in when you need to feel like you're being modest around your house? So, that's an example of bringing in some of your other data sources, you know, the existing qualitative interviews that you've done, to complete the picture. And to your point, you know, listening to those interviews as a source for the experiments you run is super, super powerful.
Molly - 00:23:46:
Yeah. And what you were saying earlier about the doomscrolling in bed next to your partner, how we behave even as human beings in a survey environment versus being in that situation. The say-do gap is something that maybe, to your point about not wanting to lie, they're not even aware of, because they are not in that mindset, right? Like, they're not in the mindset of, I'm taking a survey. Of course, I would respond to this. But then I don't, my brain turned off at midnight. I don't wanna think about it. I want flashy colors. I want funny things. And it's hard to report on that.
Kate - 00:24:20:
Totally. People think they take vitamins for longevity. They'll tell you that all day long. I take my vitamins so I live longer. They don't. What we have discovered through our experiments is that what actually makes them buy vitamins is that they wanna look pretty. The longevity is important, but push comes to shove, I wanna look healthy. I wanna be fit and hot, so I'm more attractive right now. And I tell myself I'm doing it for longevity reasons, but, really, I'm doing it because I wanna look hot. It's like saying I wanna lose weight because I wanna be there for my family, and I want more energy. Push comes to shove, we do it because we wanna look really, really pretty. And we tell these lies to ourselves. We tell these lies to ourselves.
Molly - 00:25:10:
To ourselves. To ourselves. Yeah.
Kate - 00:25:12:
To ourselves. But push comes to shove, you know, "are you gonna have a better quality of life at 86" does not make you click on anything at midnight. But, you know, "am I gonna be more successful next time I'm on a date" will absolutely make you do that in the short term. And if we think about that difference, you know, and there's so many examples of that. If we think about that difference, like, the say-do gap, it's this cliff that we're all falling into as marketers. Like, I just need the truth. I just need what actually makes people respond. Like, I need to know that. I need to know that now. I need to know that every time I'm live in market. I need it at a level of statistical significance I can trust, especially now that I've got AI go-to-market going down, and I've got workflows that are pulling product, you know, shipping it straight into performance marketing, you know, when I'm driving my consideration marketing. Like, what is the process of validating that, aligned to those questions, so that I can count on the answer hitting the right customer at the right time, and I'm not just engaging in, like, more AI slop?
Molly - 00:26:20:
How do you think about that in terms of building synthetic audiences? Is synthetic data able to make the distinction of those nuances? And where does that kind of blend together? There's still the need for the human, because we're complex, weird creatures. And to your point, we lie to ourselves, and that obscures the true feelings behind behaviors that we may not even know we have. So, if we don't even know ourselves, how are we training synthetic data to predict for those things?
Kate - 00:26:49:
Yeah. Great question. Okay, so live in the market right now, synthetic data, synthetic customer data, synthetics, is really driven by, like, two sources. One, and the most common, is synthetics trained on survey data. And it'll be right there on the website. They'll say our synthetics are 80% or 90% or 95% correlated with survey responses. And we are a massive survey market research giant in the marketplace. We do more surveys than anybody else. We do 7 trillion survey responses a day, so you can count on this. And as you can tell, I don't think we should be doing surveys to make important decisions, let alone synthesizing those surveys at scale. So, there's that. And, you know, I could talk all day about the fact that an MIT paper recently estimated between 40% and 50% of survey respondents now are, like, bots from the LLMs. It's like, well, you know, if there are bots in the surveys, it doesn't surprise me you've got great correlation on the bots, because now the LLM is doing the synthetics, and it's responding to half these surveys. No wonder it's so perfectly correlated. Congratulations, everybody. So, a lot of synthetics are relying on surveys. A lot of synthetics now are saying that they will pull their customer data from your data. So, they will listen to your first-party data and then use that data to construct your synthetics, which is really smart, because if it's first-party data, it is behavioral. It is based on how your customers are currently responding to the stimulus you're putting in the market. The problem, the gap there, is that your first-party data is based on what is currently on your truck. By its nature, it is customers responding to what you already have in the marketplace, or what you had in the marketplace in the past. So, your ability to get great answers on your new launch, your new product, the space you haven't been in, is zero. It's very, very limited.
And the minute we start getting into, like, speculating about spaces where we are untrained, like, we're back to zero when it comes to level of training; you may as well riff with ChatGPT. So, that's where a lot of the synthetics are coming from at the moment. So, this is the space that we've stepped into. Synthetics are really important, actually critical. It's more than just, you know, low cost; the fact that your synthetic is available in real time is a real game-changer, Molly. It's like, you and I are arguing, you know, the launch goes out in the morning, should it be this or should it be that? Your ability to get a real-time answer is a game-changer. You know what I mean? It's not just about, oh, you know, it's lower cost or whatever. It is having real-time answers at your fingertips, especially in, like, you know, an agentic marketing universe, that is critical. And so we believe it's the blend of your first-party data and experiment data, like, quant-based qual, you know, our ability to bring in that qualitative aspect from a quant perspective, segment by segment, so you can drive a truly personalized response to your customers, that is critical. So, the quality of your synthetic training matters, you know, and you should have someone on your team who is responsible for keeping your synthetics correlated. So, every time our customers run a live market experiment, we quietly run a synthetic in the background so they can see how correlated it is. It's not a black box. You don't have to trust anybody, you know, you don't have to, like, oh, look at the website. How correlated is this? I know I can count on it because I ran a live market experiment, you know, last week, and I could see that the answer I got was, like, 95% correlated to that answer.
And when it drops down, great, do I need to do another data hookup, like, you know, to my call center, or do I need to run an extra live experiment or two, which will take me a couple of days?
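[Editor's note: the background correlation check Kate describes can be thought of as a simple drift monitor: compare the synthetic audience's predicted response rates for each variant against the rates observed in the live experiment, and flag the synthetic for retraining when the correlation falls below a threshold. The sketch below is a generic illustration under those assumptions; the data, the 0.9 threshold, and the function names are hypothetical, not Heatseeker's API.]

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

def needs_retraining(synthetic_rates, live_rates, threshold=0.9):
    """Flag a synthetic persona whose predictions have drifted from live results."""
    return pearson_r(synthetic_rates, live_rates) < threshold

# Hypothetical per-variant click-through rates from one live experiment:
synthetic = [0.021, 0.034, 0.018, 0.027, 0.040]  # what the synthetic predicted
live      = [0.019, 0.031, 0.020, 0.026, 0.043]  # what the live market did
print(f"correlation: {pearson_r(synthetic, live):.2f}")
print(f"retrain? {needs_retraining(synthetic, live)}")
```

The design point mirrors the transcript: the check runs alongside every live experiment, so trust in the synthetic is continuously re-earned rather than asserted once on a website.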
Molly - 00:31:20:
Yeah. That's always my concern when I think about the implementation of synthetics: if we're relying on past market data, how are we able to think about analytics for the future? If we have a brand new trend, I mean, it's not a trend, but COVID is an instance where at no time in our lifetimes had we experienced something like that. There were a lot of unknowns; we had no idea how humans were gonna react. So, there was a lot of, you know, hoarding toilet paper and figuring out how to, you know, bake sourdough bread. There were a lot of things I didn't have on my bingo card for how humans would react to, you know, a worldwide pandemic. So, I'm always curious, and it's fascinating that you say you use that data to continuously train the synthetics. But how are you thinking about utilizing synthetics when talking about capitalizing on, let's say, a new trend? If a brand is saying, there's this TikTok trend that's happening, I wanna capitalize on this as fast as we can. I wanna have messaging for how I respond to this. I wanna figure out how we position products for this micro trend. Synthetics, tell me how to do that. Is that reliable? Is there more that's needed? Is it kind of just figuring out what works?
Kate - 00:32:35:
Yeah. I mean, the short answer is you shouldn't. So, my favorite part of Heatseeker right now is that routinely, all day long, Heatseeker will say, I don't know. You know? It will say, my best guess based on everything I know about your customer so far is this, but you should run a live experiment. It'll take us four or five minutes to set up together. I'll publish it live for you on TikTok if TikTok is where you're running this campaign. And in three days, we will have an answer to a level of statistical significance that you can trust, and then that is part of the overall training data, and everybody in your organization can get a better view of that customer and use it. So, the most powerful thing is often saying, I don't know. And, you know, Anthropic's Claude won't do that, and ChatGPT won't do that. They'll make something up, but yes.
Molly - 00:33:26:
So confidently wrong.
Kate - 00:33:28:
Yeah. And it'll tell you how smart you are. Oh my God, Molly. You are so smart and handsome today with that answer. Oh my God. And let me give you something. That is complete nonsense.
Molly - 00:33:39:
Or, like, my favorite is always the, actually, Claude, you are, like, super wrong. It's actually this, and it's like, yeah, you are absolutely a genius to point that out.
Kate - 00:33:49:
Thank you for pointing that out to me. Yeah. So, you know, it's about having an AI that is programmed not to hallucinate, and when we give an answer, we'll give you the provenance of that insight. So, for every answer you get from Heatseeker, it will be: here's the winning variant from the last three experiments that touched on this question. Here are two ads that you've run that were successful that hit that same pain point. Here is an example of a competitor ad that hit that same pain point.
Molly - 00:34:20:
Mhmm.
Kate - 00:34:21:
And here are two qualitative interviews, you know, with Susie and Anna from last month that you uploaded, where they touched on it. Here are the verbatims that you should give to the agency if they're touching this pain point, because they really explain the stories in their lives that touch on that pain point. So, being able to give you the provenance of the insight, as well as say, I don't know, I don't have enough data. This is a new partnership for you. Partnerships are a really good example where your synthetics can't help if it's a brand new partnership. You know, you have your customers, they have their customers. Who the hell is this customer that wants both of you to have a baby? You know? Like, who's coming to the baby shower? Right? Who knows? So, that's an example of something where, well, let's go out and get you that data. It'll take us three to four days to run four or five experiments, and then we'll have that picture, and then you'll be able to interrogate that persona in real time.
Molly - 00:35:20:
Yeah. And then that's something that you add to the data warehouse, too; it becomes another reference piece if you encounter something similar in the future. I feel like the idea of real-time synthetics is super new just in general, and it's still learning what human nuances are, but it's only going to improve the more data it gets. And so I feel like what you're saying, too, is that we're at this point where it still needs more input. It still needs that human element, specifically of qual, in order to contextualize the data that it's trying to give you.
Kate - 00:35:53:
Yeah. Or live market experiments that find a way to ask questions of the marketplace, you know, quickly at scale, so that you can get those answers, quant-based answers to a level of stat sig that you can trust.
Molly - 00:36:07:
Absolutely. Well, Kate, I feel like we could riff about this all day, but we are getting close on time. I appreciate your transparency in talking about where things are, what we don't know about the future, what systems don't know, and how we're working to fill those gaps as an industry. I wanna switch gears a little bit to a recurring segment that we have on the show called Current 101, where we ask all of our guests the same question: what is something that you would like to see the market research industry stop doing, and what's something that you would like to see more of?
Kate - 00:36:47:
Yeah. So, I would like to see the market research industry stop asking people, via surveys, asking people what they would pay and whether they would buy.
Molly - 00:37:00:
Mhmm. Mhmm.
Kate - 00:37:02:
And what I would like to see it do more of is help marketers to use the insights that it's generated. I get too sad when I see really great quality research, great quality ethnographic work delivered by, like, true craftspeople, no leading the witness, beautiful interrogation of human frictions and needs. And then I get into a brand, and it's sitting in the drive somewhere, you know, and I can see the last time it was opened, and that makes me really, really sad, because that work should be richly applied. And it is the beautiful part of the work that we do when we're able to grab high-quality work like that and put it in the mouth of a synthetic persona, with the stories from the qual, with the spicy, you know, nuance and language that we can sometimes hear from customers. I love it that we can give that to synthetics, Molly. So, those are the things that I'd love us to do more and less of.
Molly - 00:38:02:
Yeah. Absolutely. And you mentioned something, too: I feel a lot of teams don't utilize the true value that rigorous research offers. For the leader or somebody who's just not seeing research as essential to the process and is thinking of it more as a nice-to-have, how do you kind of earn table stakes with them and show them that, no, this is something that's essential to your decision making? How do you work on that kind of conversation?
Kate - 00:38:35:
Yeah. We've never rolled out Heatseeker, I mean, the beauty of Heatseeker is that we deal in what's real. We deal in studies of hundreds of thousands of people that we ship live on social media, and it's impossible to argue with those answers. Often, I find, when we run a survey, or even with high-quality ethno, the first thing you hear from the board is, well, how many people did you speak to? And even if you spoke to a lot, even if you did 30 or 40 interviews, you know, is that representative? Is that really representative of everybody? So, I always say, you don't wanna show up with a knife to a gunfight if you're gonna talk to a marketer that feels, you know, they're in that role because they feel passionately about the brand, and they've got really strong views. And so if you wanna pivot that, as an insights person, let's start with: what will be compelling enough for the CMO, the CEO, or the board to pivot what they're doing? What is the kind of evidence that I would need? You know? I mean, one of the very first banks that Heatseeker served, they were building the whole brand around this idea of human-centered support. Like, everybody's sick of bots. Everybody's sick of, so, for our brand new neobank, human-led, human customer support is gonna be what we build this whole brand around. And this bank has a very passionate founder. He loves people, it really fits his personal culture, and he had a brand new CMO who wasn't there when they made some of those decisions. And that CMO was like, oh, shit, I bet that human-led customer support is not gonna win, for them or for me in my new role. And so he's like, what am I gonna do? You know, how am I gonna, so, this launch is important. And I love CMOs. You know? They're never that long in role. They've got two or three years to make it big and, like, level up.
So, you don't have time for, like, a launch that bombs, especially if it's your whole bank. And so what is the quality of evidence that you need to pivot, you know, a tanker like a bank? He was staring down the barrel. It was, like, four or five days to launch. They got a launch around…
Molly - 00:41:05:
Four or five days?
Kate - 00:41:06:
Sorry, four or five weeks to launch. Yeah. They're going out with this baggy prop. What the hell am I gonna do? And Heatseeker was able to run an experiment in, like, five or six days that demonstrated that human-led customer support, against 12 other buying drivers, came second to last or dead last in every market they were entering.
Molly - 00:41:33:
Oh, no. No.
Kate - 00:41:34:
Yeah. And well, yeah. But they were able to pivot. They had time, and they launched instead with "Open an account in less than ten minutes." It was wildly successful. But if you think about, like, that conversation that CMO had to have, this is why I really feel for the role that we have as marketers. Like, he had to look at this super charismatic founder and say, sorry, sweetie, you know, like, no. And he wouldn't have been able to go in there and say, "Look, I did a survey." He went in there and said, I did a live study, head to head. Look at this ad, where we emphasize opening an account in ten minutes. Look at this ad that's all about human-led customer support, and this one came dead last when I ran this to hundreds of thousands of people in our exact target market, you know, versus this one. And that is why I have to make this huge change in the strategic direction of this bank. And so that's the kind of data that people need in real time.
Molly - 00:42:36:
Yeah. It's that, I think this is the right thing, and I'm super passionate about it, and my gut says this, and then you run into hard data. It's always a challenge. Like, it's always a challenge to say, actually, I know you're in love with this idea. However, nobody else is. That is never an easy conversation.
Kate - 00:42:55:
Exactly. And they always attack us as insights people, right? They're always like, your method was wrong. I don't believe it. I don't believe who you spoke to. You didn't speak to enough people. So, a technique that says: we shipped a live prototype into the market, we did this micro study, 30,000 people saw it, and this is how they responded, to a level of statistical significance that is irrefutable. That's the kind of data that marketers need to move faster with confidence.
Molly - 00:43:23:
Yep. Absolutely. Absolutely. Well, thank you so much, Kate, for your time today and the passion that you bring to your work and the passion that you've brought to our show today.
Kate - 00:43:32:
We deserve better.
Molly - 00:43:33:
Oh, absolutely. I mean, there are so many things that you touched on here that I think are incredibly valuable for identifying gaps when you're going out there and wanting to start a business. And the way the industry is moving, I hope it's not toward an overreliance on synthetics that would prevent that human-centered element from continuing to shine through, which is what makes our work so fascinating and interesting. It's this intersection of marketing and psychology and human behavior and sociology that makes every day unlike the previous day. So, again, thank you so much, Kate, for joining, and for your time, and for sharing all these cool nuggets with us today. I had a great time with you.
Outro - 00:44:15:
The Curiosity Current is brought to you by aytm. To find out how aytm helps brands connect with consumers and bring insights to life, visit aytm.com. And to make sure you never miss an episode, subscribe to the Curiosity Current on Apple, Spotify, YouTube, or wherever you get your podcasts. Thanks for joining us, and we'll see you next time.