Description
In this episode of The Curiosity Current, Stephanie and Molly sit down with David Evans, a social psychologist turned research leader who has spent more than a decade shaping Microsoft’s approach to AI, brand strategy, and user experience. David entered tech as social media was reshaping behavior and quickly saw how teams were obsessed with what technology could do, while he remained anchored in what humans needed. That tension still drives his work today. David talks about the “bottlenecks” in human attention, memory, and motivation, and why these limits are not obstacles but protective filters that help people focus on what matters. He shows how this lens guided real product decisions at Microsoft, from easing the transition from Windows 10 to 11 by addressing user fears head-on, to designing Together Mode in Teams to strengthen belonging during remote work. David draws a clear line between using AI to accelerate valuable work, like surfacing insights in historical data and deepening qual analysis, and using it to cut corners. He calls out the rise of “research theater,” especially when teams fabricate synthetic survey data and attempt to generalize from it. They further examine how conversational interfaces shift UX into a social space where memory, tone, and continuity matter as much as UI patterns. David also broadens the conversation on privacy, emphasizing that protection must include behavioral traces and predictions, not just what people choose to share. He posits that AI can support thinking, but it cannot replace the human work of reasoning, writing, and carrying insight across the finish line.
Transcription
David - 00:00:01:
Because the role of a graphical user interface is downplayed, the psychology at play is less perceptual and cognitive and more social. It's interpersonal. A huge difference between two AI designs is: does it remember what you said? And if it does, then it's much closer to being truly a companion, a human-like agent that you can work with and grow with, versus if it doesn't remember what you asked it one minute ago or one day ago. It's far less social.
Molly - 00:00:34:
Hello, fellow insight seekers. I'm your host, Molly, and welcome to The Curiosity Current. We're so glad to have you here.
Stephanie - 00:00:41:
And I'm your host, Stephanie. We're here to dive into the fast-moving waters of market research, where curiosity isn't just encouraged, it's essential.
Molly - 00:00:51:
Each episode, we'll explore what's shaping the world of consumer behavior from fresh trends and new tech to the stories behind the data.
Stephanie - 00:00:59:
From bold innovations to the human quirks that move markets, we'll explore how curiosity fuels smarter research and sharper insights.
Molly - 00:01:09:
So, whether you're deep into the data or just here for the fun of discovery, grab your life vest and join us as we ride The Curiosity Current.
Stephanie - 00:01:19:
Today on The Curiosity Current, we are joined by David Evans, Director of Market Research at Microsoft.
Molly - 00:01:26:
David has been at Microsoft for over a decade, blending market research with psychology to create brand strategies as well as shape products like Copilot and other AI tools.
Stephanie - 00:01:37:
He's also a behavioral scientist and a social psychologist like me, so we'll have lots to talk about today, David. He's a long-time lecturer at the University of Washington as well, and author of the book Bottlenecks, which argues for aligning UX design with user psychology.
Molly - 00:01:52:
So today, we'll be talking about AI's impact on market research, the role of psychology in tech, and how David's work helps brands connect with consumers on a deeper level.
Stephanie - 00:02:02:
David, welcome to the show.
David - 00:02:04:
Thank you. It's so great to be with the two of you.
Stephanie - 00:02:06:
Yeah. We're excited. To kinda kick us off here, David, we always love to see psychologists who make their way into the market research world. I think this can be such an interesting industry for those of us who are just kind of hungry to see our work applied rather than sitting in a journal devoted to foundational research. So to kick us off, can you kinda take us through your career journey and how that kind of culminated in your current role at Microsoft?
David - 00:02:33:
Yeah. I'd love to. And I still start by representing myself as a psychologist. I'm just so committed to that form of geekery, but I was a new PhD and a professor of social psychology when social media really kicked off. And so I was just entranced by the idea of social behavior being deconstructed and reconstructed online. So I moved from teaching at a small liberal arts college in Upstate New York to Seattle. And pretty soon after that, I was doing user and market research for Microsoft and Amazon and the state government, and a bunch of things like that. And I think what that immediately revealed was that, you know, engineers, the ones I was working with, the ones whose designs and code I was testing, they're really very tech-centered. And I was very human-centered, and I kind of assumed everyone would be human-centered. But still to this day, I mean, I knew I'd be the only psychologist in the room, but I didn't realize I sometimes still am the only person taking the perspective of the user, you know, really kind of looking through their eyes. And I think as I saw other behavioral designers get very successful in social media, it really came to the point where all of us actually had to look at the ethics of what we were doing and make sure that we were aligning with human nature, you know, not exploiting it, so that my teaching at the University of Washington turned from the Psychology of UX to the Ethics of the Psychology of UX. And that's actually culminated in an animated short movie that we just recently produced. Maybe I'll talk about that a little bit later. It's an anime I did with a student of mine, and she's absolutely brilliant. But at Microsoft, you know, I have always done product research. And, actually, it was much more market research like you, Molly, you know, and brand research. Probably my favorite was working on Microsoft Teams during the pandemic, when we needed to support remote work. There was just real real-world impact with that. And then we come up to the new platform shift, moving to the age of AI.
Molly - 00:04:26:
I know I have a question about AI, but I have to ask about the anime movie. I just want a high-level look at that, because that absolutely caught my attention. I've not heard of something like that before.
David - 00:04:38:
Well, it's called Alice versus the Dark Patterns, and it features Alice. And, you know, after all this thinking and reading books about how a junior person successfully argues against an unethical decision, we sort of asked, like, how do we teach new people in industry to use research, to do the right thing, and, you know, prove that you can do well by doing good? The only way to really kinda teach that was to make it into an anime. And I had a student, Badmaraj Badjarjal. She was in my class, and she was doodling. And I looked down, and a light went off, and I'm like, we need to turn that into an anime short. She did. And it's just out. It's on YouTube, Alice versus the Dark Patterns. So, basically, it's a how-to guide for ‘argue the right thing, and keep your job.’
Stephanie - 00:05:25:
That is such an important skill, though. Like, it's critical, but I love that you said the only way it could be taught clearly was anime. I don't think I've ever heard anybody say that, and I love it.
Molly - 00:05:38:
I don't know. I wish that that was the way that I could absorb material when I was early in my career, right?
David - 00:05:44:
Well, and it kinda brings us to AI. Badmaraj, whom I worked with, was really great at hand drawing, but to get that animation, AI actually was a big help. She was a one-person studio on it. But, you know, in the end, I've been Alice. I've been in situations where I caught people wanting to do something kinda sketchy. And some arguments you lose and some you win. The best thing about winning an ethical argument is you get to forget about it, because you never forget the ethical arguments you lose, you know? But, anyway, you know, I mean, we gotta keep everything here. We gotta keep humans, psychology, research, and ethics all on the table at the same time.
Molly - 00:06:21:
I think that does play well into talking about AI, because those are all things we have to consider now with this additional, incredibly powerful tool and how we use it. So, as we all know, AI promises all this boundless data at the incredible speed needed today. But as a long-time researcher, where do you kinda draw that line between what's an actionable insight and what's just kind of AI-fueled clutter? And what about the, for lack of a better term, intellectual laziness, where people just, you know, plug some stuff into AI and have it kinda spit out something for them, sacrificing the rigor and, you know, the importance of the insight for just speed and ease?
David - 00:07:02:
You know, a lot of us are looking at AI transforming research. I mean, I think what you're driving at, Molly, some people say that with research, we all want it to be fast, cheap, and good, and we all know that maybe you can pick two of those because you can't have all three. And I definitely believe that, and I've seen it in action. Now, laziness, or what I call it when I'm being vindictive, is research theater, not research. You know, that's when I'm criticizing it. It's been around since before AI. And when you're in business, and you're trying to get empirical insights to affect decisions in business, there's never enough time, and there's always way too much at stake, right? So, I mean, I think in some ways, that's kinda why I love doing research in for-profit business, because of those constraints. There is a certain creativity to trying to stay rigorous under those pressures. But, I mean, when you look at, okay, now AI, you could really just go way too far with: let's remove all friction, let's remove all delays, and let's just have AI make up an answer. And I think that that's at the very, very end of a very long spectrum of integration. What we're finding at Microsoft is that the things that slow us down really come before the data collection starts. Like, as researchers, a lot of times, we all have to learn about a brand new product or market. And one of my favorite uses of AI is that we may have done a bunch of research on this, we may have done $3,000,000 over six years of research on this, and I don't wanna act like that never happened. So, AI accelerating the pre-fieldwork phase is awesome. We can finally do literature reviews. We don't have to tell anyone to wait. And then I also think that AI helping you to go persuade change post-research is a big help. A lot of times, you know what the story is, and you know the mountain you gotta go move, you know the ethical decision you've got to make, and AI can help you get that persuasive message done. So it's right there in the data collection, maybe, Molly, where I am finding myself saying, let's slow down and think, because we have statistical assumptions we need to meet. We have to ask what even is empiricism. This is the voice of the customer, not the voice of the language model. And, actually, what we're finding across our whole group is that there are good implementations of AI in the data collection, but a lot more of it is really speeding up the pre- and the post-phases. What are you guys experiencing?
Molly - 00:09:28:
I wanted to go in on that a little bit, the voice of the customer and not the voice of the language model. Because let's just say, you know, at Microsoft, you're synthesizing a massive amount of social listening data to get really granular qual insights. When you have a 10,000-person synthetic sample from an AI model that tells you one thing, but you have a focus group with two people, and they tell you something completely different, what voice do you trust, and how do you kind of reconcile those two competing things?
David - 00:10:00:
Well, obviously, large samples bring more confidence, and that's just a statistical fact. That's why we make the investment in large samples. So you're really gonna have to kind of think, well, 10,000 versus 2. Anytime you're comparing those in the way you've set it up, though, you're probably not really approaching it right. The two people, they should deepen your empathy, not give you more statistical confidence in what the market as a whole is doing. So it goes right back to the standards of sampling. We take samples to learn about populations. The larger the sample, the more confident we are about the population. But I have been sitting in on so many synthetic sample talks, seriously dozens, in just the last few weeks. And really, it comes down to this. To put it really quickly, first of all, when people use synthetic sample, they mean about a dozen different things. And about 11 out of the 12 of them are fine. They're like, create a persona, impute one piece of missing data, and possibly even, like, ask a novel question you never did in the survey or the call. Now, the one thing that I'm still stuck on, because, again, it violates everything I taught my students in statistics, is that you can't have a language model read your quant survey written in Word and then just create any number of rows in a spreadsheet from that. And why am I still stuck on that one, when I'm usually not ready to declare hard do's and don'ts? It's because significance testing is based on your sample size. So if you can generate any sample you want, then significance testing can no longer be used. It's not replicable, you don't know the population to which you're generalizing, and then, you know, more importantly, the data are probably not even independent observations. Is any of this recalling, like, stats?
Stephanie - 00:11:44:
Of course, it is. Yeah. All of the assumptions are being violated. Yes.
David - 00:11:48:
They're all being violated. So you can't just say, “Oh, we didn't quite get under point-oh-five, so let's have AI develop 80 more cases and get there.” But I really do wanna say that that one use case is probably the worst among the many use cases people mean when they say, “Oh, I'm doing synthetic.” Let me now share my favorite thing with AI in research. You and I both know, Stephanie, Molly, we've collected, like, 6,000 people and maybe, you know, 60 questions in a quant survey, or maybe we had two-hour qual conversations. And we have this massive amount of data, and then we have one little insight that kind of moves forward to executives who want five bullet points, right? So you've got all these insights in previously collected data that have just never been extracted. Well, if your AI can quickly go do that, we've always talked about the value of reanalysis of existing datasets in the research industry, and that's just wonderful. It's also, if you think about it, respectful to people, because you sat them down and you made them talk to you for two hours. Why would you only use that one thing they said about the logo? Right? It's respectful to people. It's good science. It's a good use of resources. So that kind of excites me.
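To make David's sample-size point concrete, here is a minimal sketch in Python, using entirely hypothetical numbers and assuming NumPy and SciPy, of why "develop 80 more cases" breaks significance testing: padded rows are not independent observations, but the t-test's standard error still shrinks with n, so a chance difference between two groups drawn from the same population gets manufactured into "significance."

```python
# Minimal sketch (hypothetical numbers): padding a survey with non-independent
# rows manufactures statistical significance out of pure sampling noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups of real respondents drawn from the SAME population (no true effect).
group_a = rng.normal(loc=5.0, scale=1.5, size=40)
group_b = rng.normal(loc=5.0, scale=1.5, size=40)
print("real data, n=40 per group:    p =", stats.ttest_ind(group_a, group_b).pvalue)

# "Synthetic padding": resample each group's own rows until n looks impressive.
# The new rows are not independent observations, but the t-test assumes they
# are, so the standard error collapses and a chance difference between the
# two group means is reported as "significant."
padded_a = rng.choice(group_a, size=4000, replace=True)
padded_b = rng.choice(group_b, size=4000, replace=True)
print("padded data, n=4000 per group: p =", stats.ttest_ind(padded_a, padded_b).pvalue)
```

With almost any seed, the padded comparison reports a tiny p-value even though no real effect exists; the same logic applies to rows invented by a language model, whose variability reflects the generator rather than the population.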
Stephanie - 00:13:07:
Yeah. And, you know, when you asked us earlier where we're seeing it, it's exactly that, I think, for me. Because we're a supplier-side company, our use cases are gonna be a little bit different. But I'm gonna speak from the perspective of our clients for a minute. I've heard clients say, “You know, before we had the ability to use tools like Copilot with our data lake, for instance, we were basically declaring historical research bankruptcy, right?” It's like they had no way to access all of that stuff. They had no way to search it or ever look at it again. It was this massive data lake that they really couldn't utilize in any way, shape, or form. And AI is giving them the tools to go back in and, as you said, reanalyze that data, learn what they can, and be a lot more targeted in the new questions they're asking, or use it as a jumping-off point for all kinds of things. I think that's super exciting.
David - 00:13:59:
I do too. I do too. And let me just emphasize one more time: doing a literature review even before you collect data is just so important. And I've actually found a lot of value in the jobs and the roles that I have because I'm the guy who goes and does a lit review. I'm the authentic intelligence, right, who goes and reads everything we've ever done on this. It's so valuable. And for AI to be able to offer that to anyone and do it at the speed of business, more literature reviews, how could that be bad? There's another thing I'd like to see: more theory. Let's go back to Kahneman, let's go back to Lewin, let's go back to all the great social psychologists, right? And we need to, otherwise we just collect data anew, atheoretically imposing on thousands of respondents and not truly valuing that resource. So both of these, you know, they're in the pre and the post. But making up data, I'm sorry. I'm a social scientist. I'll answer that question based on values, not even based on stats. I'm here to represent the voice of the customer. I'm human-centered, not tech-centered.
Stephanie - 00:15:02:
I do love, though, how much your psychology background informs the way you think about things, simply because even calling that a literature review is straight from your background, right? I'm gonna switch gears for a moment if we can, and I have a two-parter. I like to warn you because we're gonna be talking a lot here. I wanna talk about your book, though. So in your book, Bottlenecks, you advocate for understanding user psychology as this powerful way to design digital experiences that, you know, really work with the bottlenecks in human nature, both to the benefit of the user and to the benefit of the business. Are you able to walk us through, like, a practical example of a UX innovation that was driven by, you know, an understanding of user psychology applied to UX or UI design? I figured since you wrote a book, you might have some examples you could share publicly.
David - 00:15:50:
Well, just sort of fleshing out that UX bottlenecks metaphor again. We always lament the limited attention span. We always lament that people don't form a habit, and that limited motivation. We always lament that they don't share it with their friends, and they don't even remember it. Then, after studying those so much, I'm like, look, these constrictions wouldn't have evolved unless they were useful, and really, what are they doing? Those things we lament in tech and design and marketing are adaptive because they help people focus on what's meaningful and suppress the noise. And instead of, you know, complaining about the attentional bottleneck, if you just go past that and be really whole-human and say, “I need to be a lot more meaningful with my offerings. I need to be a lot more valuable. I need to have better timing. I can't just break through this human nature, these constrictions, these bottlenecks, in sort of an aggressive way.” I think really good things happen when you start realizing how your tech survives neurological bottlenecks. So here's an example: every few years, we do a new Windows version, right? And we sunset our old one, and we just got done with the end of service for Windows 10. Windows 10 is now in extended service; we're not supporting it anymore. We really wanna get a lot of people over to Windows 11. This is not the first time this has happened. It's happened since Windows 95, right? Well, there's a lot of bottlenecks because, you know, you have a pop-up on the Windows desktop that says, “Look, we need you to do this little journey. We need you to go through this flow.” What we'd been doing was just selling the sizzle of how good Windows 11 is. We just kept trying to sell it with the positives. And when they brought me in, we did some research, and we realized we also need to dispel people's concerns. People have an appetitive desire to go to Windows 11, but they also have a lot of aversive concerns, and addressing those is called omega persuasion. So I said, “Look, you've gotta dispel those concerns. I don't wanna break my machine. I don't want this to take forever. You know what I mean? I don't wanna lose all my data.” And now, when you see these updates, you see us saying things like “your machine is compatible,” because before, in marketing, they're like, don't even admit there's a problem. And I'm like, well, that's not good psychology. You gotta go up against what people fear. And so that's an example I'm proud of. Not my work, but another really good thing that just really shows bottlenecks: I don't know about your guys' car, but in my new car, I have a few graphics that are projected on the windshield, like, right above the steering wheel. Speed, for one. This is another just wonderful example of a bottleneck, because if my foveal acuity is pointed out the windshield, it doesn't matter how big your monitor is down there. I don't see any of it. And in Teslas, I don't know, have you seen how big those monitors are? They're just giant. Well, it doesn't matter, no matter how big you make it. This is just a wonderful example of how the retina is built with this tiny little bottleneck. We only have the acuity to read symbols right there, not in our periphery. So I don't know, to me, again, it just helps with understanding the metaphor, but I see memory as that same kind of little restriction. Some stuff is stored; most is not. I see your sense of self as also a tiny little restriction.
Some stuff is for me, most stuff isn't. You ask my children, you know, “Is Microsoft for you, or is TikTok?” And guess what gets through that bottleneck? So you see where I'm going with that. But, you know, maybe one other example that I'm proud of is with Teams. I did a lot of work for a lot of years with Microsoft Teams. One of the things I'm very proud got built was Together mode. I don't know if you've seen this, but, you know, you don't just have to see everybody's camera feed in those sort of Brady Bunch-like little squares on the screen. We started to place those feeds, with mixed reality and artificial intelligence, you know, on a roller coaster or sitting in a movie theater. It's not used all the time, but during the pandemic, when people really needed a sense of belonging with each other, it just turned out to be a really powerful feature for a short period of time, and I thought that was psychological.
Stephanie - 00:19:58:
Absolutely. I love that. Here's the second part of my question then, David. Thanks for that. Fast forward, you know, eight years since you've written your book, probably ten years since you started writing it. UX isn't just websites and apps now, right? We're designing AI tools that talk back to us. And I'm curious, from your perspective, what stays the same and what changes when you're designing AI-based experiences versus traditional digital products? It feels like, and this is just my layperson's understanding, the visual interface can be simpler, but the real UX is in that interaction and how the assistant responds, the tone, the pacing, those things. Is that UX for AI tools, or how do you think about it?
David - 00:20:42:
Well, you know, you're right, Stephanie. You're a social psychologist. And what you're saying, well, you know, is how I feel as also a social psychologist, which is: because the role of a graphical user interface is downplayed, the psychology at play is less perceptual and cognitive and more social.
Stephanie - 00:21:01:
Yeah. It's like interpersonal.
David - 00:21:03:
Interpersonal. A huge difference between two AI designs is: does it remember what you said? And if it does, then it's much closer to being truly a companion, a human-like agent that you can work with and grow with, versus if it doesn't remember what you asked it one minute ago or one day ago. It's far less social. I'm excited because I think the later bottlenecks, maybe the last half of the book, are starting to come into play, thinking about designing personality. I've been working with a lot of linguists. We look at how we talk to AI and how AI talks to us. Do we treat it like an insight or not? You know, you start to think about, when you get social, keep in mind that we didn't start studying new interfaces with AI when ChatGPT came about. We had been working on conversational interfaces with voice agents. We'd been working on robotics. There are great old books on the social psychology of designing good robots. Cynthia Breazeal, I think, is the name of the author on that. But I guess I just wanna say, I don't have an answer right now. Everybody's working on this. It's in progress. I think we are right in framing the question, which is that the psychology is far less cognitive and much more social. And that's really only the beginning of it. The best UI for AI might be an embodied robot, but tech succeeded in putting a small-screen smartphone in everybody's hands on planet Earth. And so that's what we've got to work with hardware-wise.
Stephanie - 00:22:33:
It's true. We've got good coverage that way, right?
David - 00:22:35:
Yeah. Good coverage that way, but then the best UI for AI there looks like it will be chat. And I think it's an exciting time to answer those questions. And I think the most important thing is we have to continue to be scientists and realize that we gotta try things, and we could be wrong.
Stephanie - 00:22:51:
Totally. Because these are empirical questions, right? Like, they're not just answered already.
David - 00:22:56:
Unfortunately, the market with so much hype and so much investment is not really looking too kindly on experimentation. But the fact is the experiments are being run on the world right now. Some will fail. I mean, think of all of the tech that has taken decades to even really get traction, like, have you guys been in the metaverse lately? I haven't.
Stephanie - 00:23:21:
No. I haven't either.
David - 00:23:22:
Have you had a phone call that was voice only lately and you didn't even use video? Video calling is actually the slowest to be adopted technology in the world. It was available to the public in the Nixon administration in the 60s.
Stephanie - 00:23:36:
Are you serious?
David - 00:23:38:
Yes. Video calling is the slowest technology to be adopted ever. When you look at conversational interfaces, right, look, we've had Alexas and Siris now for a while. We also have to admit that there are a lot of scenarios where you can't talk. I can't talk at work in a group work situation. I can't use a conversational interface on a bus. So, maybe wrapping this up, it's a great question, so much so that this is where I'd come up with a four-hour lecture, right? But wrapping it up, it's sort of like, I believe that AI needs a UI. I think where we stand now, with this ‘ask me anything’ prompt, ‘prompt me for anything, I'll give you anything’, that's not UI. It's not good UX. And I actually think something will arise that will combine AI and a graphic interface that will do a lot better at things like creating affordances and expectations and mental metaphors, and it'll just be a lot easier to use.
Molly - 00:24:33:
I feel like that aspect of AI needing a UI is why there's a plethora of courses on just prompt engineering, that you can't actually use it until you develop another skill that shows you how to use it properly.
David - 00:24:49:
Yeah. And there are people who lean into prompt engineering, and they're gonna get good at it. But we promised that AI would mean we didn't have to code. We said that AI meant low code or no code. But then you start to look at how advanced that prompt engineering gets...
Molly - 00:25:01:
It's 100% coding, which actually segues perfectly into my next question. I don't have a social psychology background, but I have a background in being a human being. And my background in being a human being tells me that humans are not perfect. We are often irrational. We make weird choices. We like weird things. We don't like other weird things for other strange reasons. And the promise of AI was a flawless prediction of human behavior, but can we ever actually truly predict human behavior? So in that context, what to you is the single most important human quirk, human nuance that perhaps AI is not yet able to capture? And how do you account for that in reports and datasets and actually crafting insights for decision making?
David - 00:25:48:
You must have had fun writing that question. I mean, you know, there you go. That's a trillion-dollar question. Human quirks. Wonderful. I've listened to a lot of real scholars, and one of the things they're saying that I think we should all meditate on is: generative AI is human-like, but not human. And if you meditate on that, I think here's where I'm at. This is very much my opinion, and all of the opinions in this podcast are not necessarily those of my employer, right?
Molly - 00:26:14:
We have you on the show, David. We don't have Microsoft on the show.
David - 00:26:18:
I think the word that I would use to answer your awesome question, without just dodging it completely, is the word reasoning. I think that what, you know, a generative AI puts out is really a prediction of what one would answer when asked that question, based on the data that you have. It's fundamentally a prediction, that text response that we get. That is really far more language-like than it is rational-like. It doesn't necessarily have the kind of reasoning that geometry, science, and legal theory have brought the human species to, over decades of the humanities and so forth. The one class I make my college kids, my own children in college, take is logic, because that is just not something you pick up going through your life as a human. And I honestly think that we need to demand and hold a really high bar for the AI. We might hear claims from companies, great companies, companies as great as mine, saying it's reasoning. And as consumers, we need to just demand better and better reasoning, and I honestly don't think it can really be called reasoning yet. It's a prediction based on language. I think AI does incredible linguistic things.
Stephanie - 00:27:37:
Just a question from us, you know, to follow that thread a little bit, right? Because you're talking about mechanism right now, really, more than anything. And I guess I'm wondering, from an outcome perspective, what differences do we see as a function of the fact that humans use reasoning, whereas our models are using prediction?
David - 00:27:55:
It's kind of a joke, but not really a joke, that the one other trait humans get to keep, that AI will never have, is accountability. You can't blame the AI when it goes bad. So accountability and responsibility. And some people in tech might think, oh, that's a real bummer, or in business, I'd like to blame the AI. But I don't look at it that way. I'm like, “Hey. That accountability is the one thing that's gonna keep us in the loop.”
Stephanie - 00:28:18:
That's your job.
Molly - 00:28:19:
Yeah. It's still the tool. You have to tell it what to do.
David - 00:28:22:
But maybe these two things are connected, and I'm just kinda playing jazz here now, Stephanie. Why would the human be accountable? Because the human can do more than the AI. The human can use not just that reasoning and not just the linguistics, but also think about the larger social context, think about the ethical context, and so forth. So maybe there's a relationship between these two things. It's not yet reasoning, and I don't know if it's really taking into account broader and broader context for its output. Humans can do that, and that means we are accountable for doing that. And maybe there's something there. It'll keep us in the loop for a while to say there's gotta be a throat to choke. And it isn't a fun job, but we're gonna do it.
Stephanie - 00:29:07:
Yeah. That's a great point. Well, I wanna talk to you a little bit about something else I know that you, an area that you play in, which is around privacy. And, you know, in this same context of what we're talking about with AI, privacy is a big topic. I sometimes think of generative AI as kinda changing the conversation, changing privacy from being, like, what I choose to share to what can be inferred about me based on the questions that I ask and the things that I say. And I can imagine ways in which that's uncomfortable and even anxiety-provoking. And so I guess I'm wondering psychologically, like, what kind of product features or product actions tend to be most successful in increasing a user's trust in an AI tool or system, given that kind of weird privacy element that arises?
David - 00:29:57:
It's a great thing to bring up, and Microsoft has always said that we consider privacy to be a fundamental human right. And that's a place where, you know, it's a non-negotiable. It's a value. I have waded into privacy, again, not as a lawyer, not as an engineer, but as a psychologist, because I don't think we understand enough about this and how it's experienced. And the privacy paradox is proof of that. When you go sell privacy to people, they say they wanna buy it, and then they don't. But in your question, there are a lot of different data types that need to be private. Not just what I author, not just what I write or the pictures I take. That needs to be protected as private. But my behavior and my behavioral residue and what I consume, you know, and what I watch, that also needs to be protected as private. And now you kinda go to the next one, which is, well, what about predictions about me? Predictions about what I will watch next or read on Reddit, or how I will vote, or what I will buy. I absolutely agree with you that, you know, the user would also want that to be protected as private. You might predict that I'm gonna fall and need shoulder surgery in the next three months. By the way, I had shoulder surgery two weeks ago. If you predict that about me, I don't want you to sell that prediction to every single marketing agency. So, I absolutely do think that it's a very good expansion of how we think about privacy to expand not just to authored and observed data, but also to predicted data. That's moving in the right direction. Then there's the part of your question about how we really get people to trust AI, especially when it comes to this idea of protecting data flow in the ways that humans would expect of each other, especially intimate others. And, you know, this is another way where the AI that really wins with people will be very social psychological. It will follow the norms that we expect of our friends and family. And there are a lot of designers who have no idea what those norms are, so we need you, Stephanie.
Stephanie - 00:31:50:
Heading back to UX, guys.
David - 00:31:52:
But for me, boiling it down: we've treated people for too long as helpless, passive participants in observing and scraping data about them and then targeting ads to them. And there have been, like, little examples with graphic interfaces, like, well, what kinds of things do you wanna be advertised? I mean, honestly, can you imagine any machine learning model that wouldn't want that data? You know? What do you want advertised to you? What are you in the market for? What are you buying? What are you thinking about right now? What are your interests? I mean, like, my machine model has trillions of data points. Well, maybe you only need the two or three kinds of things people would tell you if you actually treated them as active participants in targeted advertising. Now that we have this AI that is already in a companionate conversation, we have a better chance than ever of giving people agency and making them an active participant in targeted advertising. And if we don't take it, I think, one, we're squandering a very profitable opportunity, and, two, we are just willfully treating people as less than whole human. So I think, basically, it comes down to this. I hope your AI asks you what kinds of things you're in the market for, and I hope that that conversational interface makes a better UI than ever to figure out, like, what do you wanna be sold?
Stephanie - 00:33:13:
I really like that because you're right, there's no getting away from the fact that personalization is the thing that counterbalances the desire for privacy, right? It is that thing that can get you over that hump. But it's the way you implement it and the level to which you involve that person in sharing that personal information that can make all the difference in the world.
David - 00:33:34:
And we do know that some people say, “I don't wanna have this conversation”, and then we're like, “Okay. I get it. You've got better things to do, right?” So, do you want me to just know nothing about you, or do you trust me to just sort of hit the button and go ahead with inference? You know, you let people do that too. I do wanna push another recent publication of mine called Toward a More Human-Centred View of Privacy, and it's not talking about privacy in legal and regulatory ways or encryption and data protection. And, oh, I've studied them all, but it's really just trying to get more conversation around why people need privacy. And, you know, there are a lot of really cool new kinds of nuances in there. Well, what about their sense of ownership of intimate possessions, or their spaces, or their decisional autonomy, or even their relationships? You can't maintain relationships without privacy. And I think we forget that, you know? Not to mention, you know, your reputation. But even reputation; I wanna elevate it, like a psychologist, and say, “Look, I get to pursue my identity and a preferred definition of self.” So, I don't know, lately, I'm like, the reason we can't sell privacy is maybe that we need to start selling the needs that privacy enables. Maybe that word privacy alone, I don't know, has gone a little bankrupt. And maybe it is a fundamental right, but maybe it's not an intrinsic value in itself, but an instrumental value that leads you to a lot of things that maybe are more visceral and more emotional, you know, for you and stuff like that. I'm glad you brought up privacy. And the last thing I'll say about privacy and AI is: your favorite AI companion, I hope it's Copilot, might actually help you understand the privacy policies you're agreeing to all the time on a deeper level than you ever have.
Stephanie - 00:35:14:
That's such a good point.
Molly - 00:35:15:
I always think of, when I think of the impact of data tracking and the fact that people wanna learn more things about themselves, personal things, especially personal things that reflect their identity, my mind always goes to, like, the Spotify Wrapped at the end of the year, how they've taken gathering all this data on listening usage and all these different types of things and made it into a package that everybody looks forward to at the end of the year, when it's really just showcasing the extensiveness to which they're gathering information about your music consumption.
David - 00:35:45:
No. It's a huge point, but what a subtle flip of a switch. It could be creepy as heck, but they have found a way to survive the bottleneck of value and motivation. And instead of being turned off, a lot of people are turned on. I just got mine. It's amazing.
Stephanie - 00:36:02:
I did too just yesterday, so it was just top of mind. Yeah.
David - 00:36:06:
Real quick. What are your top artists of 2025? Just by name?
Stephanie - 00:36:10:
Mine were like Blood Orange, Wilco. It was weird, and my musical age was 35. What was yours? I'd never seen that before.
David - 00:36:18:
Yes. My musical age was pretty on target. I'm happy about that. I'm 56, but people in my family say you're 70 years old. Molly, who were your top artists?
Molly - 00:36:27:
I have a child, so I think my top sound was white noise and some random other song. 24hrs fan was, I think, my top artist.
Stephanie - 00:36:39:
24hrs fan? Haha.
Molly - 00:36:42:
Thank you so much, David, for all of these incredible insights that you've shared in a multitude of different ways. I wanted to wrap us up here with a quick round of the segment we call Current 101, where we ask all of our guests, across a variety of experiences and titles and companies, the same question, and that is: in your experience, what is one trend or practice in market research that you would like to see stop, and what's one thing that you would like to see more of?
David - 00:37:11:
That's great. So, more of: inside quant surveys with large samples, we've always asked open-end verbatims, and to have a little AI agent do a couple of moderated follow-ups, I'd love, I'd like to see more of that. I do really think, again, you just collect such nuance, such depth, and we've never really pulled as much out as we could. Pulling a few quotes is what we've always done, and we knew we could do better. So I do like AI summaries of deep qual, even long summaries. And then I am an educator. So, for students listening: I heard a speaker say to think of AI as your sparring partner or a thought partner, not some kind of butler who's gonna do it all for you. And I've been repeating that. Now, things I would like to see stop. Again, I think that if you write a survey, this is a quant instrument, if you write it in Word and then you have your LLM just generate an Excel file, I think that should stop. You don't know what the source of the variance is. The source of the variance might just be the temperature setting, you know, in the AI model. It's not human beings. And then I think that, for every person, there are some forms of writing that are a means to an end, and there are other forms of writing that define you, define your contribution as a researcher, as a thinker, as a consultant, as a contributor. The last thing I wanna see stop is skipping the struggle that is writing, because it's beautiful, beneficial; it's good friction, not bad friction. If you just have AI summarize the data and hand it over, and you didn't go in there yourself, you lose something. You lose something for yourself trying to grow a career as a researcher and empirical consultant, and you also lose something for the people you care about and the people you're supporting, clients, stakeholders, whatever. But writing and that piano behind me are the only two art forms I've really done in my life, so you're gonna get a little bias there, which is: writing is joy, even if it's tough. So, there are some do's, some go's and no-go's from me.
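David's temperature remark is worth unpacking: in a language model, the sampling temperature rescales the output distribution before an answer is drawn, so the spread across AI-generated "respondents" can be a dial on the generator rather than anything about human beings. A minimal sketch, with hypothetical logits for a five-point survey item (not any particular model's API):

```python
# Minimal sketch (hypothetical logits): sampling temperature, not respondents,
# can be the source of variance in LLM-generated survey answers.
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into sampling probabilities at a given temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Hypothetical model scores for the five points of a Likert item.
logits = [2.0, 1.0, 0.5, 0.2, 0.1]

for t in (0.2, 1.0, 2.0):
    print(f"temperature={t}:", np.round(softmax_with_temperature(logits, t), 3))
# Low temperature piles almost all probability on one answer; high temperature
# flattens it toward uniform. Either way, the "variance" across generated
# rows reflects a model setting, not a population of people.
```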
Stephanie - 00:39:23:
I love that. I would say that writing is far more of a struggle than a joy, but I really believe that writing is how we make sense of things and where we build a perspective that we can defend. And I think that, you know, when we are not doing that work, I think it calls back to Molly's comment about intellectual laziness. It just contributes to a kind of intellectual laziness, I think, when we start to lose that skill.
Molly - 00:39:48:
I'll do a shout-out for writing. Writing is absolutely a joy. Marketing in my personal life, I think the way that AI has sort of changed that art form and made it more attainable, or less attainable, I'm not exactly sure how to distill the entire way I think about it into one thing, but I'll do a shout-out for writing. I think it's the best thing.
Stephanie - 00:40:10:
Well, I think that wraps us up here. David, this has been an absolutely fascinating conversation with you today. I don't usually leave with, like, a set of resources that I wanna go watch and then listen to, but I have done both today. So I really, really appreciate that.
David - 00:40:25:
These have been great questions, and, you know, you just helped me evolve my own thinking. You know, we need to use AI as we should, not as we could. And you guys have also really kind of reaffirmed for me that we're not gonna stop being empirical social scientists. So thank you. Thank you for this great conversation.
Stephanie - 00:40:40:
It's been great. Thanks, everyone.
Stephanie - 00:40:43:
That's it for today's episode. Thanks for listening.
Molly - 00:40:47:
The Curiosity Current is brought to you by aytm, where curiosity meets cutting-edge research. To learn more about how aytm helps brands stay in tune with their audiences, head over to aytm.com.
Stephanie - 00:41:00:
And don't forget to follow or subscribe to The Curiosity Current on YouTube, Apple Podcasts, Spotify, or wherever you like to listen.
Molly - 00:41:10:
Thanks again for joining us, and remember, always stay curious.
Stephanie - 00:41:14:
Until next time.
Episode Resources
- The Curiosity Current: A Market Research Podcast on Apple Podcasts
- The Curiosity Current: A Market Research Podcast on Spotify
- The Curiosity Current: A Market Research Podcast on YouTube