Description
In this milestone 50th episode, Shanon and Lev take a step back to reflect on the journey so far: what it has taken to build the podcast, what they’ve learned from their guests, and why the project has become one of the most meaningful initiatives they’re involved in.
Across 50 episodes, the show has evolved into more than just a content channel. It has become a platform for thoughtful conversations with founders, marketers and leaders who are shaping their industries. Shanon and Lev reflect on the consistency, commitment and collaboration required to sustain a podcast over time, especially in the B2B space, where attention is hard won and trust is built gradually.
They discuss the importance of giving guests a space to share real insights rather than surface-level commentary, and how those conversations have influenced their own thinking. Patterns have emerged across episodes: the value of long-term strategy over short-term wins, the power of clear positioning, and the role of meaningful storytelling in building brands that last.
The episode also shines a light on the team behind the scenes, acknowledging the production and coordination effort that underpins every release. For Shanon and Lev, the podcast represents more than marketing output. It’s a long-term commitment to dialogue, learning and community-building.
As they look ahead to the next 500 episodes, the focus remains the same: continue elevating expert voices, deepening conversations, and creating content that delivers lasting value.
Episode Resources
- Lev Mazin on LinkedIn
- Shanon Adams on LinkedIn
- Molly Strawn-Carreño on LinkedIn
- aytm Website
- The Curiosity Current: A Market Research Podcast on Apple Podcasts
- The Curiosity Current: A Market Research Podcast on Spotify
- The Curiosity Current: A Market Research Podcast on YouTube
Transcript
Shanon - 00:00:00:
In a lot of ways, it really is a return to that empathy and storytelling, you know, to where the humanity is, and it really makes quite a difference. So, it's spending your time in different ways on higher-value tasks, things that only a human being can do with all that context and experience. So, I think that's really exciting. I mean, being, like, an operator in a business, having been a marketer, I spend so much of my time inside of numbers now instead of creative projects. It's incredible to be able to free up time for creative work, to free up time for storytelling, for narrative, for thinking beyond what is just, that's the number, to why is the number that number?
Molly - 00:00:42:
Hello, fellow insight seekers. I'm your host, Molly, and welcome to The Curiosity Current. We're so glad to have you here.
Stephanie - 00:00:50:
And I'm your host, Stephanie. We're here to dive into the fast-moving waters of market research where curiosity isn't just encouraged, it's essential.
Molly - 00:00:59:
Each episode, we'll explore what's shaping the world of consumer behavior from fresh trends and new tech to the stories behind the data.
Stephanie - 00:01:07:
From bold innovations to the human quirks that move markets, we'll explore how curiosity fuels smarter research and sharper insights.
Molly - 00:01:16:
So, whether you're deep into the data or just here for the fun of discovery, grab your life vest and join us as we ride the curiosity current.
Molly - 00:01:27:
Today on The Curiosity Current, we're doing something a little different. This is our 50th episode. And instead of bringing on a guest, we're bringing on two very special cohosts, our CEO and cofounder, Lev Mazin, and our president and COO, Shanon Adams. Over the past year, we've talked with researchers, innovators, psychologists, and strategists who are all wrestling with the same fundamental question: how do we stay human in an increasingly AI-powered world? We've heard about AI shifting from tool to strategic agent, data quality becoming an existential crisis, insights teams finally earning their seat at the table as growth engines. But we also heard something deeper: that the pendulum is swinging back from big data to deeply human insights, that storytelling matters now more than ever, and that empathy is the new competitive edge. Today, Lev and Shanon are gonna walk us through the nine major themes that defined 2025, talking about what surprised them, and discussing what this means for the future of insights work.
Shanon - 00:02:28:
Great. Thank you so much, Molly. So, wow, it's been quite a year of The Curiosity Current. 40-plus amazing guests. We've had so many incredible logos. I really want to shout out Molly and Stephanie for their hard work over the last year. We've had such insightful conversations with brands like StarKist, Microsoft, Helion, Coca-Cola, LinkedIn, and Visa, to name just a few, and wonderful marketing influencers like Rand Fishkin. So, it is an honor to be within those ranks, and to have a wonderful conversation with Lev about what we've seen happen this year. I wanna thank everyone who has helped contribute to this podcast over the last year and to its success. So, let's jump in, Lev. Big-picture takeaways from the year. What stuck with you? What surprised you in 2025?
Lev - 00:03:19:
It really feels, Shanon, that 2025 felt different from 2024, in the sense that the industry not only got past the confusion of what AI is doing to our industry, but started looking at it more optimistically, with more ways to integrate it into daily life and more applicable recipes being shared and applied to the daily job of consumer insights.
Shanon - 00:03:55:
I agree. I mean, I think it's so interesting when you gain that context in your real life, when you have that light bulb moment of, like, this isn't just a piece of technology or a tool that someone's telling you to use, it actually has that impact. You know, I often think about the moment that happened for me, when I realized how much I could automate, how much data I could correlate together, and get to at least that place of synthesis so quickly. So, being able to unlock that kind of speed and efficiency, I think, really is that spark. And I know you and I have talked a bit about the spark, the thing that gets somebody to finally go, oh, okay, this is something real that I could use. I have context for this. And then they start really coming up with use cases. So, it's been wonderful to see that progression over the last year, in both people's professional and personal lives, unlocking that value and having those light bulb moments.
Lev - 00:04:51:
Many people that were on our wonderful podcast, and many people in my network and within our company, had this precious first experience of vibe coding, of feeling accelerated by the technology in ways that weren't really available before. And I've been bitten by this vibe coding bug myself; it really is a fantastic tool of self-expression, to get the often confusing and contradictory ideas out of your head and see them realized in real material. One person said that it's a godlike feeling.
Shanon - 00:05:34:
I think back to, like, when I was building websites a very, very long time ago, and I know you were doing the same thing. And when you were able to, like, code HTML or you had, like, Dreamweaver to be able to have, like, UX overlay, and you felt like, wow, I can do this thing that felt either really intimidating or overly technical to just code, but do that in a user interface that was really, you know, accessible. And I think that's one of the most incredible things about a lot of the AI tool sets and things like Claude with the accessibility and the democratization of these technical tools. And they're so easy to use without having to have that sort of developer-level knowledge, that engineer knowledge. And I think for creative types like you and I, that is really quite a big unlocker because getting it out of our brain onto the screen is a whole piece of work unto itself. And then there's the piece of how do I get it to work and function, and how will that feel to be able to, like, iterate that quickly and produce the things you've produced this year? Just being your partner in this has been incredible to be able to realize those visions so quickly. So, that speed is so incredible. And also, just the quality of what comes out of that is cool. And to do it without having to necessarily code is pretty incredible.
Lev - 00:06:54:
Sure. And thinking wider than just coding, I feel that there are two major categories in how AI is changing the lives of people in our industry. One is taking care of the chores. In many conversations that I've had, I see the fear of just being drowned by the volume of transcripts to read through, or open ends to go through and code, or even wrap your head around. And that part of the job is very native to large language models. They do an absolutely wonderful job condensing text or categorizing it, helping humans quickly interact with it and wrap our heads around it, with some, you know, fair amount of hallucination, but still not more than how the human brain often imagines things that are not there after a certain time. But this vibe coding, and seeing AI systems being able to do what didn't even occur to us as something we could do ourselves, I put in a completely different category, right, versus accelerating obvious things that we can do but that just take a lot of time. When you feel like you can do something that you never could before, without a whole team of people who are specialists in it, that is something magical, I think, that more and more people are discovering these days. And I feel so grateful that we live in this period of time, when more and more magical things are appearing almost every month.
Shanon - 00:08:36:
So true. And I think as we think about that through the lens of our customers, thinking about how that's changed their jobs and their day to day, how it's changed what researchers do, how it's changing what marketers do, how they work together, and the kind of jobs that they're doing, we're watching that change alongside them, not just in our own use cases here internally. And really, it's kind of incredible when you work across many different types of clients in many different industries. You also get to watch which ones are embracing AI, which ones are, you know, going at it in that fearless way, and which ones really do work through the lens of, I don't know how this is gonna change day to day, and maybe go about it a little bit slower. I think some of the most interesting stories come out of the research side, especially when marketing and research start partnering together and realizing the superpowers that they each have, both in the approach to research, knowing what the right questions are to ask and when to ask them, and then being able to take that data and use it strategically for storytelling, you know, working more in tandem as partners inside the technology together, rather than being over that walled garden where research produces this output and then marketing consumes it. I think breaking down those barriers is so exciting. As a marketer myself, the ability to be more empowered to do those things and to get to those insights myself is really valuable. And for researchers, watching those that have really embraced it, people that now have that time, not programming surveys, but actually thinking about what the most important questions are to ask. What are the things that really matter?
And, you know, having counterparts that can then take that and move it further into the organization, the kind of people that previously wouldn't be able to do that work because they were maybe busy doing other things. So, I am really curious to hear more about your perception of how that's changing the dynamic inside of our customers and how best we can help them with that transition.
Lev - 00:10:59:
I think that there are a couple of trends that I'm observing that we can help with, and that other companies like aytm can help with. One trend is clarification of terminology, clarification of what it is that we call AI. There are so many different perceptions of what it does and how it does it. There's a lot of magical thinking around it, especially when you get into the territory of synthetic data. I love a number of episodes that we produced on this podcast that went into great detail on what data you should trust and what you shouldn't. Because everything sounds and looks so plausible and deliverable, and is expressed with such confidence; you gotta admire that confidence from those agents, saying something straight and convincing with a lot of justification. And then when you point out that it can be wrong, they're like, you're absolutely right, it can be. I love the simple experiment that one of our guests suggested: in order to check the validity of any source of information, you can ask the same question several times. And if you're getting different results, you should pause and see what is causing it. Or if you ask the same question in a different language, your out-of-the-box language models will produce completely different answers, because those answers are rooted in different datasets in each language. But I see that on that first trend, we as an industry, going through this transition in the wild west of new territory and new technology, are getting a little bit better. I think last year, we saw that. We're getting a little bit better at educating more people on what you should pay attention to and how you should use those tools to be successful, and not jeopardize the validity and quality of your data, not jeopardize your own personal reputation. And it's so interesting, it's happening in other industries. I know a few stories, including paralegals and attorneys being fired because they provided a brief to a judge with references to cases that never existed.
And in our industry, it's also very easy to, you know, make a claim that has no substance behind it and get in trouble, or make your organization make wrong choices. So, I think it's great that we're getting more focused on what is true and what is not, what the different types of synthetic data are, how they are produced, what can theoretically be predicted and what cannot be predicted from the data you're going on, and how you increase the quality of that source data. On all those things, I'm learning a ton from this podcast. In fact, every time I'm listening to an episode, I find myself in the same state as when I'm flying back from a conference, remembering things that I learned. It's just such high quality, and an even better format for me, because at conferences, usually the presentations are shorter, the majority of them are salesy, and you really cannot go into depth on any particular topic in the conversational way that podcasts offer. So, I'm such a big fan of this initiative and how well it's flourishing.
Shanon - 00:14:48:
I think what you're referencing reminds me of something David Evans from Microsoft said; he coined the term 'research theater': using AI to generate synthetic responses with the idea of trying to move faster, which is, you know, something we all wanna do. It looks like research, but in reality it really destroys statistical validity. And I think that's really where it comes down to the research persona and the role they play in making sure that we're still using solid survey methodologies behind the things that we're doing, that things are set up in the right formats, that this is gonna pass statistical muster. And also putting in rules for how we review that data, to make sure that we're not going down the wrong path purely to unlock speed. Because that's how you hedge that risk with AI: through human judgment. I mean, the humanity of it all is the measure that keeps us from making dumb decisions or, you know, assuming things. So, there can be that speed, but there's also, I wouldn't say a slowness in it that's key, but a humanity in it that's so key. And that's the bit where we're keeping the check and balance in place, and, you know, it can't all be that we hang our hat on getting the quickest answer in the fastest possible way, because that's not always necessarily the right way. And, you know, I think model training is like training anything else: making sure it understands the parameters and what's important to you, and then revalidating that those things are accurate so that you can trust the output. Just putting those measures in place to make sure that we're not just trying to be fast, we're also trying to be accurate.
Lev - 00:16:39:
Absolutely. I feel that there is good job security in that, because as everything becomes more similar, we're going to progressively live in an ocean of insights that all look plausible and all look similar, but it takes lifelong experience to tell apart what is true and what is not. And I think that many of the consumer insights professionals with that experience will find themselves at more important junctures and tables and discussions at their organizations, and there will be new waves of professionals coming into their orgs looking for that guidance and the safety of their experience.
Shanon - 00:17:26:
In a lot of ways, it really is a return to that empathy and storytelling, you know, to where the humanity is, and it really makes quite a difference. So, it's spending your time in different ways on higher-value tasks, things that only a human being can do with all that context and experience. So, I think that's really exciting. I mean, being, like, an operator in a business, having been a marketer, I spend so much of my time inside of numbers now instead of creative projects. It's incredible to be able to free up time for creative work, to free up time for storytelling, for narrative, for thinking beyond what is just, that's the number, to why? Why is the number that number? You know? And I think that return to storytelling is such an exciting thing for really anyone in this day and age working in technology, but especially for researchers, who really do own that part of the process, and that's where they provide the greatest value. I don't think the greatest value was ever in the ability to program and launch a survey. It was always in knowing the right questions to ask and then doing something really interesting with it. So, how do you think teams are balancing that empathy and depth with the fact that these businesses are demanding speed? That's the trap, right? As soon as people get used to instant gratification, to being able to get to the data quicker. I mean, we saw it as clients came over into doing online research; they were just absolutely floored with how quickly they could get their responses, and watching them come in real time, to the point where we get kind of hooked on speed and don't stop and think, is speed always the right answer? And so, what I always wonder about is that balance of making sure that we're still doing important, complex work, because it should be complex, and not just trying to simplify everything to lean on speed.
What's your thought on that, as the demands inside of organizations are, like, why can't you just go and give me that answer right away? Just ask the bot. Just ask the bot, the bot will give us the answer. How do you think we balance that?
Lev - 00:19:31:
I think we need to get smarter about what use cases we should apply to what tools and situations. For example, sometimes there's nothing wrong with asking GPT or Perplexity or Gemini a question and feeling reasonably good about the validity of that answer, especially when it's something very well researched and well known; you know, I'm not going to question my calculator when I'm doing some math or converting miles into kilometers. On the other hand, if I'm asking those generalist agents to predict the outcome of a survey, even if I ask them what they think people in a specific target audience would answer on those questions, I should not be trading quality for speed. Because, yes, I will get speed, but those answers are going to be so generalized and so variable that I'm not likely to get smarter or better off putting any importance into them. Also, what was fascinating about the predictive power of those models was that you really need to take your time to develop those things and to be very specific and intentional about the dataset you're training on. Because if you train it on things like Reddit or Wikipedia or Twitter, you will get the general sense, encapsulated from that period of time, of the chatter of various people, sometimes with some strange winds of prevalence and bias from certain communities that went into the training of that model, right? But if you're really trying to predict the flavor of a savory food, you cannot extrapolate from a database trained on just any food, or on sweet food. You need to be very purposeful, very patient, and discerning in what goes into the model and in how you verify that it's doing well, and move incrementally, expanding the set of situations where you know that technology works because you've tested it, and not ask it for something it cannot do a good job of out of the box because it wasn't designed for it.
Shanon - 00:22:15:
You know, it brings up something for me that we do talk about a bit, which is, people have so much data. Customers have so much data. They're trying to figure out how to mine all of this data so that they can essentially build an answers engine. They don't wanna have to, you know, explicitly ask the same questions over and over again. And I think there's something really incredible in that, being able to unlock that huge asset that they have, all that intelligence. But also, how do you continue to layer on additional data and make sure you're revalidating it? Consumer sentiment changes at such a rapid pace now. You have new challenger brands coming into market all the time; you know, what people value changes constantly. Being able to keep up with that evolution while mirroring it with the assets that you've created, I think, is really the golden goose. It's like, there are questions you shouldn't have to ask as frequently. But there's also helping us figure out what the right questions are to ask by looking at past data and continuing to iteratively feed new, interesting, and contextual examples into that model, rather than just being like, wow, I have this massive amount of data. It doesn't necessarily mean that all the data is actionable to you, still useful, still valid, still accurate, or ever was, you know? So, being able to sort through all of that and then continue to build towards it by staying current, because there's definitely a freshness and an aging to data.
Lev - 00:23:40:
Absolutely. There are a couple of things that come to mind. One, recently, I think this week or last week, we had a question in our internal Slack about the specification of our APIs and what we can and can't do, and while waiting on several teammates who are specialists and could provide a good answer, I decided to query our Slack chatbot and find answers to those 10 questions. And first of all, Slack refused. Slack said, "No. It's too much information, too much work. I'm not going to." Then I said, "Okay, I understand. I'm conserving calories, you're conserving tokens. That's understandable." So, I broke it into smaller chunks and asked, "Okay, can you help me with the first two, pretty please, with a cherry on top?" And it went on and found some answers, and then continued and continued until it provided answers to all 10. And then I asked our technical folks, and they were like, oh, this looks almost correct, and many of the answers were correct four years ago. And that's what you get when you ask for all the depth of information in a database going back years. There will be answers, and some of them will be correct for the time when something like that was said or answered in the past. So, it's very important, I think, as an example for querying our corporate data lakes and data warehouses, when we get an answer, to take the time, follow up, and see where that answer was generated, what's behind it, whether it's still relevant, whether we're comparing things that should be compared or not, and which human could corroborate those things, if time allows for that.
Shanon - 00:25:34:
Yeah. Definitely validating that by going back out into market and seeing if that sentiment or that selection would still hold. I think it's like a new age of experimentation, where you're experimenting to see how similarly current respondents respond compared to prior responses for the same demographic. And I think it'll be such an interesting study to see how sentiment shifts like that so quickly. And what are some things that are evergreen in data? And what are some things that are just constantly changing all of the time? And I do think that researchers need to use that asset as wisely as they can, validate it consistently, and feed their models with new data all of the time. And that's what I think about when I think about what researchers should be looking for in market to do their jobs better using AI. I think it really is all about being able to do all of those things in a really smart way. You know, being able to use your past data effectively, validate that information with new data, and constantly feed it with many different data sources. And I think it was Michael from Visa who talked a bit about the idea of, you know, that 360 view, that full view of data. And that, you know, really is quite meaningful. And beyond just what people say they're going to do, like, what their intent is, which is a lot of times what a person says inside of a survey when they answer a question. Like, yes, I would buy that kind of Charmin or whatever. I don't know if I can say Charmin on the spot. I will buy that kind of toilet paper. But did they? I think that's also just a really interesting question. So, marrying that transactional data along with, you know, survey question answers and then seeing how that changes over time. It's such an exciting time in data.
It feels so much more accessible than it ever did before for us to move beyond, like, the theory that something might be that way and, very quickly, into knowing that it is the right answer by marrying all these different data sources. And I think it's just so exciting. Like you said, it's a very magical time to be in research and in tech in general.
Lev - 00:27:57:
I think the trio that we see as a through line in those wonderful 50 episodes, that successful consumer insights professionals are preaching and using in their daily lives, are curiosity, technical savviness, and critical thinking. Those three things can unlock so many doors and make their research really shine.
Shanon - 00:28:23:
Absolutely. I mean, we're in the business of curiosity here, and I really think it is such a superpower, and it is the most human element of all. You know, models can, like you said, correlate data, they can help you build this incredible picture, but it really is human curiosity that drives it all. And I think that's really what sparks all of this.
Lev - 00:28:47:
Speaking of corporate data lakes, what's interesting and ironic about the technology of today is that language models are very good at querying, summarizing, and answering questions rooted in unstructured data, such as a collection of PDFs, transcripts of in-depth interviews, open ends, and literature in blog posts and forums and tweets and websites. But, unexpectedly, this particular type of artificial intelligence really struggles with structured data. It's counterintuitive, right? Because you would think that humans excel at unstructured associations and intuition, and computers are designed to be native at working with databases and numbers, but it turns out that getting structured data properly queried and analyzed on the fly is a tremendously more complex technical challenge than just reading a bunch of verbatims and getting an answer. That, I think, is the holy grail of the next generation of systems that many companies, including ours, are busy at work solving.
Shanon - 00:30:16:
Yeah. I think that's a really good point. Let's talk a little bit about data. I don't think we can talk about AI without talking about data quality. I mean, if there were a word cloud for our Slack and for our work over the last year, I'd see data quality in such large, bold letters. And not just for us, but for so many of our guests that came on. We had multiple guests that referenced bad data rates hitting 30% to 40% in some studies. And we do often see customers coming in and talking to us about, you know, what they view as bad data, and being able to define what good data is and what bad data is, through which lenses, and what in context is going on there. Because certainly, I don't think it's all bad respondents, and I don't think it's all bad survey design; it's such a complex mix of things that lends itself to poor quality, some of which humans play a big role in, and some of which, to a greater extent, systems and tools do. It's been framed as a crisis. And maybe there's just this part of me that has lived through so many of these so-called crises that I don't really ever look at it that way. I just see it as evolution. We just have to be smarter about how we think about it. We have to invent new, better ways to identify it. We have to learn to adapt. And I don't think of it as much as a crisis. What do you feel about that word? I know we tend not to be too defeatist or absolute about things, because I think there's just so much nuance in it.
Lev - 00:32:10:
Yeah. I can see how it is perceived as a crisis, because it is a reality for a lot of our colleagues and my friends in the industry who are struggling. They have to spend extra effort and time to weed out that 40% plus of bad responses and replace them, and it takes real cycles to do that, and headaches and reputation costs, and it is a problem. It is also a matter of framing for ourselves, how we look at it and how we solve for it, because when you see that everything is on fire, it's very difficult to be creative and to find solutions and to create something better. So, I feel blessed and grateful for the opportunity to look at new angles of attack, as an antivirus company would. Antivirus companies wouldn't exist if there were no viruses, and their job is to be on the lookout, to understand the new threats and figure out protections, sometimes before they even become available and possible. So, I feel that while it's totally real, the crisis and the proliferation of bad actors, AI-supported and often AI-driven, is manageable if you put enough creativity, curiosity, and effort into managing it. And I love what we're doing with the data quality reports that we're putting together, being more transparent and continually thinking about data quality not as a toggle that's either good or bad, but as a process, along with our clients, partnering and figuring out what data quality looks like today. It's definitely different from what it looked like yesterday. What we all need to pay attention to, and what the consequences are, and how we are staying ahead of the curve in an ever-changing world from this perspective, because the stakes are higher. You know, there's always a risk of making poor business decisions based on a survey, or a series of surveys, that were filled out by less-than-genuine respondents sharing their life experiences.
But now there is another risk that you will take all that data, and you will put it into training models, and it is incredibly difficult to untrain the model. You have to start over, and we know that computationally, it's super expensive to do that. So, new risks, new angles of attack, but also new solutions, new ways to spot it using the same technology to make the data better.
Shanon - 00:35:16:
And it's never done, right? The technology is never done being built. It's in a constant state of evolution, learning, and growing. I think the parallel you made to the antivirus world is so apt because the job is never done. There are always going to be new, interesting things going on in market, new ways to manipulate it. Technology will accelerate, and in some cases so will human creativity; it's amazing what people will invent when motivated to do something. It really has more to do with how we put ourselves in the mindset to anticipate what they might do, and how we build technology that is able to adapt that quickly. And then the other part, which we all saw, is that all kinds of things make it easier for this fraud, and for bots and other things, to work. The human portion of this is that we do play a role in making sure there's diligence and good tools, that we're vetting those tools, and that we feel comfortable through transparency with our providers. Like Lev said, we're interested in being transparent here. We want to be able to show evidence that we have good quality, and we want to have incredible technology that catches things no one else would, but it's a constant learning in partnership with our customers. It's not a one-and-done thing, and it's not a one-size-fits-all thing. The other part of this, which we especially see as we dig into data quality when those issues are flagged, is the impact that design has on quality and on respondent experience. Years ago, Lev made a decision that I thought was really interesting when I first got here: we built this panel, and it's a really high-quality panel, but the only place anyone can buy it is through our platform.
And that was really quite unheard of. The reason for doing it was that we wanted to preserve respondent experience. It started from a place of data quality. The entire platform was built through the lens of how we create great experiences, because great experiences create great outcomes and good data, and I think that still sits with us today, with a lot of other pieces of technology layered in. It really does start with how we help researchers build better surveys, ask the right questions, create better respondent experiences, and get better outcomes and better data, instead of asking respondents to slog through really, really long surveys and then wondering why they're not passing attention checks, why they're speeding, and why they're doing all those things. It's still the human condition at play. And I think, if anything, in a world of instant gratification and distraction, people's time is increasingly limited. We're going to continue to see that condensed, so we have to design great experiences for respondents. The reason our panel has been able to maintain such high quality is that our respondents can count on having good survey-taking experiences here, and I think that's been fundamental, such a major difference between us and many other places. In a lot of ways, it was walking away from monetizing that panel in other places, but it was done through the lens that we cared about the quality of that experience first and foremost. And I think researchers have to care about that, especially as we have new personas of users, marketers designing surveys who've never designed surveys before. We've built that into Skipper and into the platform, to help them make really good choices and understand when the way they're formulating questions, or the approach they want to take, is going to potentially produce poor data quality.
I think there is an onus on us, and on them, on the human element of all this, to think about how to get the best data. And that starts with how we design the environments that human beings interact with as respondents. Sometimes it's easy in our industry to use all kinds of phrases that describe human beings in an inhuman way, you know, respondents and sample and panel and all these things, but they're human beings with opinions. They also want to save time, to have good experiences, and to be able to spend more time doing creative things. So designing better experiences for them, and learning how to do that better, whether they're using our technology together with us or not, will ultimately also produce great data quality when you combine it with incredible technology and other things too.
Lev - 00:40:11:
Applying empathy. Apply empathy to the respondents in the same way you're applying empathy to the consumer behavior you are researching, right? Look at the survey through their lens. Would you ask your mom to take the survey, especially when it's 30 minutes long and full of abbreviations and technical terms that are so native to you but foreign and strange to them? The good news is that Skipper and other similar technologies can convert your intention, your research objectives, and your research questions into easy-to-answer questions for the target audience. And that translation can be literal, in language, which used to be really expensive, a slowing factor in your study, and a real pain in the neck. Now it has transformed into something it's almost impolite not to include, because it costs you nothing in time. You can prepare a survey in your native language and translate it into every language your target audience would appreciate taking it in. They will like it, they will open up, they will be clearer in sharing their opinions, and the same technology can translate their responses back into your research language. But it also means translating from one generation's lingo to another. You can ask these systems to rephrase a question, or an entire questionnaire, in a way that resonates with Gen Zers, Millennials, baby boomers, or other generations. We all exist in the same language sphere, but we speak quite different languages, starting with our use of emojis and ending with how we answer open-ended questions and how we act online. There are so many tools to be more intentional, more empathetic, and to improve the data quality. And we are doing our best to keep an eye on the successful patterns and collect feedback from respondents.
We are working on survey friendliness metrics that we will be sharing with our users, and it will take all of us to improve the ecosystem, to make it healthier, friendlier, and more fun, and in the end, to improve the data quality. But empathy goes a long way, right? When we are fearful of data fraud, we say these are the worst things that can be: these are not humans, they're fraudsters, they're survey farms. Yes, there are survey farms, and yes, there are fraudsters, but there are also people in less fortunate countries for whom taking a survey while pretending to be someone else can mean whether their kid has a meal that day or not. So every time you're struggling with data quality, think abundantly, think about those people on the other end, and think about how you use technology, and your humanity, to get through the obstacles to the answers to your research objectives and research questions with grace and with kindness.
Shanon - 00:43:30:
So, I want to talk about a topic I've always been very passionate about. We're really lucky here; we've got incredible educational partners. I believe Marcus from UGA was on the podcast, and we work with Michigan State as well. One of my favorite things has been going in and meeting young researchers who were really ready to jump in: digital first, tech first, AI first in market, embracing tools. It's been so interesting to watch, over the last year, the transition in that sort of researcher, in who they are and what sorts of jobs they're doing, which are very different. Where I've found that spark, especially in the next generation of researchers, is that they're not necessarily straight out of school. They're people coming into research less as a function and more as a job to be done, as a component of their job. They're coming from outside that traditional research background and discipline, and some of the things they bring into the mix are creativity, curiosity, and empathy. The way they think about solutioning is so different, because it doesn't come from "this is the way we've always done it"; it comes from "I'm really trying to get to this answer. What's the best way to get there? What are the tools at my disposal?" And I think we're going to see such interesting and novel ways of approaching solutions, working with this next evolution of researchers. We use these words here at aytm: we call them insight seekers, because an insight seeker isn't a person who has a research title. An insight seeker is anybody who is seeking an answer, seeking an insight.
And they can come from so many different places within an organization, bringing lots of context, lots of creativity, lots of curiosity, and loads and loads of empathy, because they interact with different parts of the life cycle. I'm really excited to see how these insight seekers use great technology, like what we're building here, to do something really novel and really cool within this year. It's been such a story arc over this last year, watching that democratization happen, watching that transition, and also watching researchers evolve, people who have been in research and insights functions. So, let's talk a little bit about why hindsight and real-time insight are no longer really enough.
Lev - 00:46:11:
I think it's a hallmark of the time we're living in. Everything is changing so quickly that, for all intents and purposes, the future is here, it's just not evenly distributed, and we cannot think about insights without being futurists, all of us. We have to imagine what the next form of our society, our technology, and our industry will look like, because it's happening so fast, right in front of our eyes. GPT debuted two years ago, and it feels like ages ago, right, with all the things that have happened since then, how they have affected all the industries, our organizations, and our clients' organizations, and it's only accelerating. So, foresight becomes the center of gravity of the other two, I think, more and more. And it becomes harder and harder to plan for 5 or 10 years, because it's anyone's guess what will happen in the next year, let alone 10 years from now. It's super intriguing, super interesting, and it makes our job more complex, but more exciting.
Shanon - 00:47:24:
I think you really do have to be courageous in this new world. You have to know how to gather as much data and insight as you can and be able to act as quickly as you can, even when that doesn't feel as complete as it might have felt previously, because things move so very fast now. Enabling people to move courageously through this market dynamic is so vital, and I think that is what helps our customers and clients really move faster and feel the level of confidence needed to be courageous. So what do we believe is the biggest barrier to building foresight capabilities today? Because it's really easy to say everyone needs to be a futurist and everyone needs to be courageous, but you're also trying really hard not to make bad choices, to bring great products to market, and not to waste money on marketing campaigns. How do you really embrace that futurist way of thinking, with foresight, in the reality of the environment you're in?
Lev - 00:48:34:
I don't know. It might take us on too long of a tangent.
Shanon - 00:48:37:
So Lev, I mean, you being a futurist at your core, if you had to pick one of these themes we've talked about today that you think is really gonna deeply define the next five years of insights, which would it be and why?
Lev - 00:48:50:
I would say that the future, in my mind, is about many miracles. But for our industry, for the more immediate future, it's about data, past data, as a prerequisite, not as a luxury, not as something we need to request from our colleagues, but as something that will be more and more accessible, with a greater level of clarity and rigor, a greater level of dependability, so that we can ask a question of everything the organization knows, everything I know, and get the answer just like that. We saw a little bit of that with our photo archives, with our photo libraries. Before, it meant larger and larger hard drives, and it was very difficult to go find that photo from that birthday 10 years ago. Now I can just ask Google Photos or Apple Photos, and that photo will be found in a split second, and I can show it to someone. That feels like magic, like a superpower. Now, use that as an analogy for finding any data in our organizations, any answer to a question rooted in real numbers, real observations, and what matters. That, I think, is going to be a game changer for us: to make better decisions, to ask better questions, to better advise the research so that we're not wasteful, not going in circles, paying money for something we already know. Getting that data into insights, making it sing, making it reusable, and finding the gems of the future that are so needed, and needed in a timely fashion, for us to be futurists and see what's around the corner, because what's around the corner is rooted in what we already know and what is already happening.
Shanon - 00:50:51:
And then also, in that embracing of foresight, learning from all of it and being able to do something really actionable with it. For me, the theme of where human power is most valuable, the superpower of curiosity, empathy, and storytelling, it's incredible to see the growth of humanity in this, getting back in touch with some really core human functions that we had adapted into very technical things over the last 30 years. Getting back into that space, I think, is really valuable. I think we're going to see better products coming to market that meet better needs for consumers, products that are novel, interesting, exciting, and creative. We're going to watch that innovation life cycle speed up in such an exciting way and, hopefully, flourish, as people in that environment, growing in our careers professionally, doing interesting things, and also being well served by it as consumers. That's such an exciting thing to think about over the next 5 years. And I do think what you said about the accessibility of data plays into that ability to have a better output, a better outcome, one that should hopefully have a really positive impact on humanity, consumers, our customers, and ourselves, because we are all just people.
Lev - 00:52:20:
Yeah, we are. And I think it's very natural and very expected that something handcrafted, something human, something real will be in higher demand the more we proliferate the technology. The more customer support is replaced with AIs, the more precious it will be: oh my God, I'm speaking with a real person! Are you real? Are you really taking a piece of your life to help me with my problem? I'm so grateful, I'm delighted. In a similar way, the better we are at building synthetic data, the more human opinion will be in demand. Authentic, real human time will become a greater commodity. And I think the trend we saw of sample prices feeling downward pressure will come to a U-turn, and real human sample will be in greater and greater demand as we solve the data quality problems and augment less critical research studies with faster synthetic fillers.
Shanon - 00:53:28:
So, what do you think there wasn't enough conversation about this past year?
Lev - 00:53:33:
It's hard to see what is not there.
Shanon - 00:53:36:
Very insightful. We don't know what we don't know. I mean, I think it's wonderful how the industry works: going to conferences, people getting up on stage and talking about AI in some abstract ways, or in some tactical ways with some use cases. But we also speak about it in terms that don't feel 100% realized. To me, it feels like I'm looking through, what is it, like a peephole on a door. I can only see so much; I can't see what's in the periphery. And I think embracing that ambiguity, living with it head on, realizing that it is a part of life now, matters, because things are going to move so fast. We don't always 100% know where they're going to go, and we don't have the time for our brains to catch up the way we used to. I haven't heard as much in the industry of people getting up on stage and saying, you know, I just don't know. I really don't know. I have a theory, I think this might happen, but I don't know, and I could come back in six months and say that what I thought was going to happen was completely wrong, and that's okay. It's okay to not always know, to work through that ambiguity, and instead of flexing the muscle of "I come from this industry, I know what's going to happen, and I can anticipate it," to use the muscle of "I have theories, and these are my experiments, and this is what I observe, not what I know." Using that language just feels so much more realized to me than people getting up on stage and saying AI is going to change everything. It's going to change some stuff, other stuff it's not, and it's going to change things we couldn't anticipate. And that's okay.
Like, it's totally fine to embrace that ambiguity; you just have to figure out how to thrive there. And we're all looking for answers, because that's the kind of people we are, but in some cases, we're not going to know where things will even be in a year, let alone five.
Lev - 00:55:51:
Thank you. I think what maybe hasn't been said in a very clear way this last year is the realization that things are changing so fast that my job, your job, any job is going to transform into something completely different, not in the next generation, not in 10 years, but much sooner. That realization, and coming to a place of comfort with it, is something we need to embrace and talk more about, I think. I like the thought experiment: if I had seven lives, what other professions, what other things, what other lives would I live? Well, guess what? We are all going to be given the opportunity to live completely different lives in the near future, because of the quick pace of changing technology.
Shanon - 00:56:48:
And I think there's an onus on us, as operators of businesses, to look at our people and our teams, look at the work they're doing, and figure out how we also help them evolve and transform, because human beings inherently aren't used to having to transform this quickly. They're not used to this kind of pressure for evolution. Enabling that is such a key thing in helping people move toward the most valuable things they could be spending their time on, and it really is quite universal across the board for any business, whether it's ours in technology or our clients', in how they're trying to run their businesses. It means really thinking about what those gaps are and what that next phase looks like for someone. One of the pathways we think about here at aytm is that people can move into completely different kinds of functions where their foundational knowledge is best utilized, because maybe a portion of the work that would have filled 40-plus hours a week is no longer necessary, since a lot of it is automated. Instead, they go off and contribute in new ways, and it sits with us to figure out how we help that person adapt and what skills they need: mapping those gaps, seeing them head-on, and creating a pathway for that instead of just letting it happen. Because there are going to be folks who really do need more of a guided movement. And I think there are brilliant researchers out in the world hungry for what comes next for them, eager to figure that out and looking for that bit of enablement. I would just love to see the industry embrace that so much more as a reality. There's certainly lots of shifting going on, teams changing, layoffs happening; that's just the reality of the environment. But it's not that there's less work to do. It's that the kind of work to be done is changing.
If you want people to be able to do that different kind of work and contribute, to help us rapidly evolve and move so much quicker, it takes some investment in helping them make that transition. And I don't think our industry, or really most industries, is quite as prepared as it needs to be to help that evolution along.
Lev - 00:59:05:
Indeed.
Shanon - 00:59:06:
So, what's one unresolved question you're carrying with you into 2026? We only get to pick one, which just does not seem really fair at all.
Lev - 00:59:14:
Well, 2026 is going to be gone before we can say December. I feel like our road map is so packed with amazing things that there's not going to be enough daylight to realize all of them. I have more unresolved questions than I can count, but each of them is such an interesting intellectual challenge to be had. I recently heard this very simple formula: every time you're facing something you would rather not do, reframe it as "I get to do it" rather than "I have to do it." From that perspective, I'll gladly take all the challenges and problems and unresolved questions, because I have an opportunity to try to make things better and to participate in solving them. I think that will be true this year and in the years ahead.
Shanon - 01:00:12:
Oh, yeah. Absolutely. For me, it really all has to do with moving from "this is speeding things up and creating so much capacity" to "but how much?" Right? How much more can we be getting done? How much more can we produce? It's really about reaching that point of unlocking the maximum value, and I'm really eager to see that moment happen and for it to become something we can anticipate. Because we plan things like our product roadmap and our strategic initiatives, just like our clients plan out product innovation and R&D and the things they're going to work on. But it's not fully resolved for me: can we do two road maps' worth of things in one year? Can we produce 2X, 3X more products and bring them to market faster? Can we reduce the life cycle from 3 months to 1 month to a week? How fast can things really move once that capacity opens up, so we get a sense of what the new normal is? What is that new normal? Hopefully, over the next year, we'll get a far more quantitative, instead of qualitative, sense of that, because I think it's unlocking tremendous amounts of opportunity.
Lev - 01:01:33:
Well said.
Shanon - 01:01:34:
Sure. Well, thanks so much, Lev. As always, it's just such a pleasure to chat with you. I'm really lucky because I do get to talk to you pretty much most of the day. And it's great to get on the podcast here and share some of these thoughts that you and I talk about a lot together. And thank you again for your great insight and just being such a great partner.
Lev - 01:01:55:
Thank you, Shanon. The pleasure was all mine, and I wanted to shout out again to all our guests in the previous 50 episodes and the following 500 episodes and to Stephanie and Molly for doing such a wonderful job. I think it's one of the most meaningful projects that we have going on.
Outro - 01:02:15:
The Curiosity Current is brought to you by aytm. To find out how aytm helps brands connect with consumers and bring insights to life, visit aytm.com. And to make sure you never miss an episode, subscribe to The Curiosity Current on Apple, Spotify, YouTube, or wherever you get your podcasts. Thanks for joining us, and we'll see you next time.