Description
In this episode of The Curiosity Current, Stephanie and Molly are joined by Howard Fienberg, Senior Vice President of Advocacy at the Insights Association. They discuss how AI is reshaping market research, the challenges of keeping up with evolving regulations, and the ethical responsibilities that remain constant despite rapid technological change.
Howard explains that both policymakers and the insights industry are navigating a mix of optimism and uncertainty when it comes to AI. As tools become more powerful, expectations around transparency, consent, and data handling are becoming more important. The conversation explores what transparency looks like in practice.
They also unpack the complexity of the regulatory landscape. The discussion touches on the Insights Association’s updated code of ethics, the importance of keeping humans involved in research processes, and how to maintain respect for participants in an increasingly automated environment.
The episode closes with a broader look at the census, data quality, and the importance of advocacy in shaping the future of the insights industry.
Episode Resources
- Howard Fienberg on LinkedIn
- Insights Association Website
- Stephanie Vance on LinkedIn
- Molly Strawn-Carreño on LinkedIn
- The Curiosity Current: A Market Research Podcast on Apple Podcasts
- The Curiosity Current: A Market Research Podcast on Spotify
- The Curiosity Current: A Market Research Podcast on YouTube
Transcript
Howard - 00:00:01:
Both policymakers and the insights industry are sort of in the same headspace. I think all of us are kind of grappling with a certain amount of excitement and optimism about all the AI tools and the possibilities, but there's also skepticism of what are we actually gonna get, what are we losing in the process. I have a coworker that always talks about AI as Skynet. So, you know, there's apocalyptic things that people have in mind, which I don't expect. But there's kind of a broad range, but we're all looking at a lot of similar things and having some similar, you know, both positive and negative feelings. And, frankly, that's informing both how the industry is approaching it in many respects, but also how the legislators and regulators are looking at it.
Molly - 00:00:45:
Hello, fellow insight seekers. I'm your host, Molly, and welcome to The Curiosity Current. We're so glad to have you here.
Stephanie - 00:00:53:
And I'm your host, Stephanie. We're here to dive into the fast-moving waters of market research where curiosity isn't just encouraged, it's essential.
Molly - 00:01:03:
Each episode, we'll explore what's shaping the world of consumer behavior from fresh trends and new tech to the stories behind the data.
Stephanie - 00:01:11:
From bold innovations to the human quirks that move markets, we'll explore how curiosity fuels smarter research and sharper insights.
Molly - 00:01:20:
So, whether you're deep into the data or just here for the fun of discovery, grab your life vest and join us as we ride the curiosity current.
Stephanie - 00:01:28:
Today on The Curiosity Current, we are joined by Howard Fienberg, Senior Vice President of Advocacy at the Insights Association.
Molly - 00:01:37:
Howard spends his time at the intersection of public policy and the insights industry. He works directly with lawmakers, regulators, and research leaders on issues like consumer privacy, AI regulation, research ethics, and policies that shape how insights work actually happens in the real world.
Stephanie - 00:01:54:
Howard's background is interestingly rooted in both policy and data. His father was a statistics professor and advisor to the Census Bureau, and that connection sparked an ongoing interest in research, data, and the public policy that impacts them.
Molly - 00:02:08:
So today, we are going to explore a tension that a lot of researchers are feeling right now. The tools are getting more powerful and faster than ever, but the ethical and regulatory frameworks around them are continuing to evolve. What does trust look like in that environment, and what does this mean for the people who are running research today?
Stephanie - 00:02:26:
This is gonna be a good one. Howard, welcome to the show.
Howard - 00:02:30:
Thanks so much for having me on. I appreciate it.
Molly - 00:02:32:
Howard, I've known your work, and I followed you throughout your career in the insights industry so far. And what really got me thinking about specifically having you on the show was something that you talked about when you spoke at the IA Ignite AI event recently in Los Angeles. There's this sort of tension that's happening right now where, like I just said, AI is making research more powerful and scalable, but at the same time, it's raising a lot of questions in a couple of different directions. Can we trust this data? And, also, can we trust the AI systems with the type of sensitive work that we're doing? And from where you sit, talking to policymakers and researchers every day, where do you think the industry's head is sort of at on that question right now?
Howard - 00:03:16:
So, interestingly, I think both policymakers and the insights industry are sort of in the same headspace. I think all of us are kind of grappling with a certain amount of excitement and optimism about all the AI tools and the possibilities, but there's also skepticism about what are we actually gonna get, what are we losing in the process. I have a coworker that always talks about AI as Skynet. So, you know, there's apocalyptic things that people have in mind, which I don't expect. But there's kind of a broad range, but we're all looking at a lot of similar things and having some similar, you know, both positive and negative feelings. And, frankly, that's informing both how the industry is approaching it in many respects, but also how the legislators and regulators are looking at it. And on both our side and the policymakers', I think sticking to long-standing principles is gonna be the biggest key, and this is why you'll hear me hit on transparency over and over again during our discussion, because it's one of the most important things out of our own ethical codes and professionalism in the insights industry, but also something that is being hammered pretty frequently in legislatures and in Congress when it relates to data and AI especially. But, you know, I think that that's a reasonable thing to focus on in most issues that we might come into contact with.
Stephanie - 00:04:40:
That makes a lot of sense. And related to that, I keep coming back to this idea that trust in research has always depended, to your point, on transparency. Respondents, you know, they typically know someone, usually a brand, is asking the questions. They know why they're there to answer the questions, provide their information and their evaluation, and then they generally know how their information is being used. I wonder, though, as we're layering on all of these technologies that respondents may not see sometimes, may not understand sometimes, you know, AI moderators, automated analysis, synthetic data that's built off of their data to some extent. So, it makes me wonder, what does transparency look like in an environment like this that's becoming increasingly complex? I guess, I wonder, with everything moving so rapidly, do you think about transparency differently at all?
Howard - 00:05:34:
I think you have to, well, we're talking about it in similar kinds of terms, but there's transparency that you need within the industry itself and with clients. But there's also the transparency that's required when dealing with research subjects or potential research subjects. So, the principles are similar, but, you know, they'll be exemplified in different ways. And, you know, when it comes to dealing with a research subject, legislatures are already charging ahead on certain things. Like, if you deal with an AI chatbot, for instance, California, New York, and Maine already have laws on the books that require you to notify someone who is using a, you know, sometimes it's a companion chatbot, sometimes they're thinking, you know, just broadly of any kind of conversational AI, that the AI is going to remind the human that they are not human. Sometimes they need to provide that warning not just once, but, you know, like, regularly during the interaction. Sometimes it requires inclusion of all sorts of other things, like protocols against suicidal ideation, all sorts of complicated things. But the transparency is the starting point there, and sometimes, again, just a reminder, because sometimes the conversation can feel kind of natural. But we want people to understand at the end of the day that they are not talking to another human, and that's sort of a base transparency that one would expect in any interaction. And as it relates to the business side within our industry, whether it's between vendor and client, between, you know, brand and a research provider, anywhere within the chain, that transparency is not just about disclosing lots of things, because, I mean, our contracts and our terms of use, they get really lengthy because there's a lot of stuff to cover. There's a lot of stuff to go over. Contracts, you know, even when they're looking relatively standard, there's a lot of stuff going on on the legal side. A good point to remind people: I'm not a lawyer. This isn't legal advice.
Stephanie - 00:07:43:
Gotcha.
Molly - 00:07:44:
I gotta have the disclosure.
Howard - 00:07:46:
Absolutely. You need to disclose that. It is important for people to communicate, and I think that's one of the most important fundamentals that I'm, at this point, reminding people of: when they're charging into the AI space or even just dipping their toes, clients need to know what's going on with their information and what might be happening during the research process that may be different than they expect. And, you know, that's part of, yeah, we're gonna get into it a little later, what the Insights Association Code of Standards and Ethics has to say about it. We're hitting on base principles of honesty and communication between all the partners in the research chain, so that everybody has a reasonable understanding of what's happening, what's gonna go on with client information, and what they should expect on the other end in terms of results. These are all things that you need to set a good standard for in your own discussions with each other and expectations. So, it's not just about what you think everybody knows, because what everybody knows is constantly changing, and it's hard to keep up.
Stephanie - 00:08:57:
Sure. Yep.
Howard - 00:08:58:
So, things are on the move, and so if you're not talking to each other about it, stuff will get lost, and that's what will lead to someone getting really mad at you or vice versa. And we wanna avoid that.
Stephanie - 00:09:09:
We do.
Howard - 00:09:10:
And, you know, you don't wanna be surprised by that sort of thing. It's okay to be surprised by results because that can happen in research, obviously, but you don't wanna be surprised by each other in a bad way.
Stephanie - 00:09:22:
Right. Yeah.
Molly - 00:09:24:
You said something a little bit ago where you had said that it's important to have that transparency in the disclosure to let participants know that they're speaking to an AI, and they're not speaking to a human. And there's some interesting things that happen there in just our own work. aytm ran some research recently about weight loss and GLP-1 use, which can tend to be a more sensitive topic. And we found that, interestingly enough, respondents tended to be a bit more candid with their responses and a bit more forthcoming with what they had to say when they knew they weren't being observed by a human being. And you mentioned something too at QRCA recently, that people may open up to an AI faster, but they also get much angrier when something in that system goes wrong, again because they know they don't have to have that filter of talking to a human. So, there's a lot of really interesting, you know, psychological things that are going on here in just how people respond to these systems. What do you think is sort of happening here? Can you break that down for us?
Howard - 00:10:30:
Well, again, recognizing I'm not an expert on that sort of thing necessarily, but I look at it sort of as an extension of what we learned in the social media space, where people will say and do things much more freely sometimes than they would in the real world because, in that case, it's just that you're not face to face with somebody, or sometimes you're not talking to people who you know in any real context. Like, they're just, you know, digital representations at best. Or maybe you have no idea who you're talking to at all, and you have no expectation you'll ever encounter them outside of that space. And that doesn't necessarily bring out everybody's best. And so in the case of dealing with an AI system, you know, maybe you are more free with your id. That won't necessarily, I mean, I guess it depends to some extent what the client wants to understand, because hitting the id unfiltered can be useful to the research, but isn't always. And it doesn't necessarily get you a reasoned response. It might get you an emotional one. Maybe that's what you're going for. Maybe that's, you know, so that could be to your benefit. I mean, there are all sorts of ways in which this could play out that deserve a lot more study, and I think we're learning a lot of it in real time and, you know, keep building from there.
Molly - 00:11:51:
Yeah. Absolutely. Well, I wanna switch gears into the nitty-gritty that I'm super excited about, which is the regulatory landscape, which we've heard from a lot of researchers can sometimes feel impossible to track. I mean, even for someone like yourself, it could be difficult to keep up with all of the different things that are required in all of the different jurisdictions, because there are now 20 U.S. states with comprehensive privacy laws that are currently in effect, and each one is a little bit different. So, if someone were trying to run the same or similar study across multiple states utilizing an AI system or AI moderation, where would they start in understanding the space? And, like you mentioned, you don't want people mad at you. So, what's something that, you know, well-intentioned researchers could potentially miss that may actually cause problems down the road?
Howard - 00:12:47:
Well, first off, certainly, the place to start is you should make sure you're an Insights Association member. Yeah. From the perspective of being able to access all of our compliance information on, say, for example, all the 20 different state privacy laws that are in effect, but also lots of other policy issues that are impacting the insights industry, sometimes quite directly, sometimes tangentially. And, you know, further to that, you get to be a part of the advocacy for a federal privacy law that would hopefully preempt a lot of this fractured mess of 20 state laws, which, unfortunately, is likely to be more than 20 state laws by the end of this year.
Stephanie - 00:13:23:
Sure.
Molly - 00:13:24:
Yeah.
Howard - 00:13:25:
That's just the way things go, and I could talk about lots of different steps that you can take, but I think among the things you're most likely to miss when you're working on this is checking all of your policies, your notices, and your contracts against your actual procedures, your internal policies, and how you do stuff. Because that disconnect between what you're putting out into the world and what you're actually doing can get you into massive legal trouble. You know, certainly trouble with your clients and with research subjects when things don't match up.
Stephanie - 00:14:03:
Right.
Howard - 00:14:04:
But that's really one of the most important things, and I think it is missed by a lot of people. And for years, early on, I mean, I've been doing this, 19 years now at IA and its predecessor organizations. I spent years trying to explain to people, no, you can't just copy and paste a privacy policy from another company because it might have no bearing on how you operate.
Stephanie - 00:14:27:
Right. Right.
Howard - 00:14:28:
I'm sure it looks good. I mean, I was reviewing a master services agreement yesterday from a platform, not a research platform, a tech platform, and I'm going through it like, wow, this is awesome. Like, I wish more people did this. But then I look at it like, I don't think a lot of the other platforms that I deal with, you know, the smaller companies and such, they can't operate internally this way, so it wouldn't make sense. So, along with all the steps that we recommend for people in dealing with all these different laws and regulations and expectations, you know, reviewing your privacy policies and the notices you're supposed to provide and how you handle consent and what you're putting into your warranties and your contracts, all these different clauses, how you respond to people making privacy-rights-related requests. All of that is gonna come back to how you actually operate. So, it's not just about the forward-facing. It's about the back end. And with that, I would also point out that there are great benefits to ISO certification. It's something that I kind of sneered at for years because, seriously, I looked at ISO and said, wow, that's a whole lot of bureaucracy. That sounds like a huge pain. And you know what? It's not an easy thing to certify to ISO. It's
Stephanie - 00:15:47:
It's not. Yeah.
Howard - 00:15:48:
So, we have two that IA helps people with. One is ISO 27001, which is focused on data security.
Stephanie - 00:15:56:
Yep.
Howard - 00:15:57:
And another is ISO 20252, which is for market research.
Stephanie - 00:16:02:
Yeah.
Howard - 00:16:03:
And, you know, certainly, I can tell you for the data security one, that's something to make sure you're well-positioned for dealing with data security laws in the United States. And there are a bunch of states that give liability protection against data breaches if you're ISO certified for data security. It's Connecticut, Iowa, Ohio, Utah, and Texas. And the market research one, in its own way, also helps you get a handle on data in a broader sense, because you have to understand your own internal processes and procedures: what data you have, where is it, where did it come from, where is it going, what state is it in, not U.S. state, I mean, how is it kept, in what format, and so forth. All those things help you comply with all the data-focused laws and regulations and put you in a much better position. It's not a guarantee of anything, but it puts you in the right spot to really have a handle on your own practices and policies.
Molly - 00:17:05:
I just have a clarifying question on when you say that there's different states that will offer protection, is this the state that the company is operating in or the state in which they're conducting research in, because especially for online, where they're in all those different places?
Howard - 00:17:19:
Yeah. It's about the state where the research subject is located.
Stephanie - 00:17:22:
Gotcha.
Molly - 00:17:23:
Yeah. That makes it hard.
Howard - 00:17:25:
Yeah. Well, absolutely. And that's the same thing with the state privacy laws.
Molly - 00:17:29:
With the privacy laws. Yeah.
Howard - 00:17:30:
Like, it's not about where you are. You know, you could be located in Timbuktu, but if the research subjects you're working with, you've got, you know, hundreds of thousands of them in California, you've got to worry about California. If you've got, you know, more than a handful in Montana, you need to pay attention to Montana, you know.
Stephanie - 00:17:51:
Well, and GDPR in Europe, I mean, yeah.
Howard - 00:17:54:
Yes. Absolutely.
Molly - 00:17:57:
That's a whole other beast, those privacy laws for Europe.
Stephanie - 00:18:02:
Well, to stick on the topic of regulatory frameworks, because they're riveting, I do find it fascinating right now, and I would love for you to kind of unpack this for us. It seems to me from where I sit that AI is rarely being regulated as its own category right now; instead, lawmakers seem to be kind of weaving it into existing privacy and consumer protection frameworks. And I'm curious, like, is that the right approach? Does it leave gaps anywhere that, you know, researchers, suppliers, brand side, should maybe be paying attention to if we're not going to be building these very specific AI-related frameworks and instead just build AI into existing frameworks?
Howard - 00:18:46:
I think it's a useful approach, certainly right now when, I mean, getting policymakers to agree on how to define AI is not the easiest thing. So, how are they gonna charge in and regulate it as its own separate thing when, you know, they can't even get to grips with some of the basics of it? So, like, you know, there are states, California being the leader in it, trying to restrict the use of what they call automated decision-making systems, which, you know, we fought for years with them about, because they originally started with a definition that was less about AI and more about really any kind of automation. So, Excel spreadsheets. They were gonna basically regulate those out of existence. And so it was a hard back and forth. So, we're gonna keep struggling to get our heads around what AI is, both on the industry side and for the lawmakers. So, trying to apply long-standing principles like we were talking about earlier makes the most sense. Yeah. And there's been discussion even in Congress about, you know, why do we need a new AI regulator that specializes in AI? We have existing government authorities that have a certain expertise, and they don't all have expertise in everything that relates to it. But the idea that you would task, say, the National Institute of Standards and Technology, NIST, under the Commerce Department, with looking at practices and, you know, very technical aspects of AI and security and other things, it's a wonky area. You want your very, you know, tech-heavy, scientific-focused people focused on that. And if you're worried about consumer protection and, you know, things like notice and transparency, how the data is gonna be handled as it relates to individuals, you're talking about the Federal Trade Commission, which has decades of experience relating to those issues. And, yes, are they all gonna be experts on the latest things with AI? No. Not necessarily. But they're in a good position to be able to understand it and grow their understanding over time because they're focused on those kinds of issues and that kind of scope. So, I think it's the same sort of thing for researchers, perhaps, to look at this as, yes, it's a new tool. There are new capabilities, but you still need to come back to first principles and what you're strong at and what you're weak at, and, you know, try to adjust that way.
Stephanie - 00:21:25:
That's so interesting. So, it's almost like, at least in this nascent period of AI, when people are not even in agreement about exactly what it is and it's rapidly changing, our policy and our regulations should be focused around the areas of impact, because there's already infrastructure there where people have a deep understanding of their subject matter and can layer these sort of AI regulations over top.
Howard - 00:21:49:
Correct. And, like, you can see it even with California, of course, being the leader in regulating, they have mostly turned to their privacy regulator, the California Privacy Protection Agency, to take the lead on a lot of these things because that's where they see the big impact. You know, people are gonna fight about the energy use of data centers, they're gonna talk to their energy and resource departments, you know, as a good example. And so I think it makes sense that we would be approaching things in a similar way within our own businesses.
Stephanie - 00:22:22:
Yeah.
Molly - 00:22:23:
I'm curious, on that, do you think that's going to stay like that forever? Or as AI becomes more embedded into everyone's everyday lives, perhaps that could change?
Howard - 00:22:34:
I mean, of course, it could change, but it's hard to predict what it's gonna look like. Because, certainly, the idea that I'm sitting here, you know, making use of ChatGPT professionally at least once a day, I would not have thought that I would be doing that. Certainly, a year ago, I would have laughed at the idea. Oh, this stuff is stupid. But yeah, there's some usefulness to it. And it's creeping into all sorts of tools and all sorts of platforms and for all sorts of purposes. So, who knows?
Molly - 00:23:05:
I just did a guest lecture for a grad class on market research at Cal Poly Pomona, and I mentioned that a year ago, I did the same talk, and I have my little cards here, and it would say, AI is going to be part of your life. Now it's like, forget that. If you are going into the market research space, you need to know AI, not just as a shiny thing. It needs to be an intrinsic part of the way you operate. Because even for us, you know, at aytm, I work in marketing. And at my last gig, I wrote SEO articles for 60% of my job, and now I can just ask Claude to do that. I can ask any of those systems to do it for me, and then I can actually move on to other things. So, it'll be an interesting thing to watch for us. Well, you touched on this at the beginning, so let's dive into the IA Code of Standards and Ethics. You recently helped to lead a lot of the updates, and the 2025 version now includes specific provisions as it relates to AI. So, walk us through the practical side of that. What new things came out in that document that perhaps were different, and what recommendations are you offering now that perhaps you didn't in the past? And what do researchers need to understand from those guidelines today?
Howard - 00:24:27:
So, I'll point out, I think the new parts of the code that are AI-specific are not new in concept. They're just new in their specifics. Because, yeah, AI is, again, constantly changing, becoming more pervasive. So, you've got to roll with it and, you know, figure out what you need to focus on next. So, one of the points that was added is something that to me seems straightforward, which is, you know, I would say the respondent's anonymity is essential across the data life cycle. So, if personal data is being used for an AI training dataset without informed consent, you know, researchers need to make sure that any personally identifiable information can't be reverse-engineered by AI inference. This is just going back to basic principles of privacy and security, with research subjects' data. It's not a new thing. I mean, we're talking about new and, you know, more easily doing interesting things with data, but the base concept hasn't changed.
Molly - 00:25:27:
Yeah. That makes sense.
Howard - 00:25:29:
And let's see. The second one was, if AI tools are selected, then their use, the purpose, the technique, the accuracy of the model, where the data came from, whether it's primary, if it's a secondary data source, if it's synthetic data, you need to disclose that. So, AI-generated data, whether it's predictive or generative, you need to clearly distinguish where the data came from, whether it came from human research subjects or somewhere else. Again, these are basic principles of transparency with business partners, yeah, with clients and between vendors and providers, making sure that everybody in the chain knows what is happening and, yeah, there are no negative surprises. Right? Going back to base principles. And the third one that's specific was that no AI system used in research should operate exclusively without human judgment embedded in its life cycle. And this is something that's likely to end up in legislation in one form or another, requiring human involvement when it comes to decision-making relating to anybody. And, you know, at its most basic, that kind of concern is focused on decision-making for the big-picture things, you know, the major stuff like employment, health care, insurance, getting a job, you know, getting a loan. We're talking about something that's at a very different level. But the basic principle still applies if you are going to be treating a research subject with respect. It also involves making sure that they're not gonna be abused in the process, that, you know, there's a human that knows what's going on in some capacity. And that's also about the respect for your business partners, that someone knows what's going on inside the black box. And you can't know everything that's going on within the black box, but there's got to be some kind of human involvement.
Stephanie - 00:27:30:
I'm so glad that you're talking about that because I really think that is the part of the code that I look at, and I certainly am a fan of and agree with, especially where AI is today. I just, you know, I do enough with AI that I know where, you know, the softnesses are, right, in its performance, let's say. And so I love that conceptually, and I think a lot of researchers would really resonate with this idea that, like, the human in the loop, essentially. But it does raise the question of, like, what does that mean in practice? And I think you kind of hit on two parts of it. One is certainly in the design. Right? And that goes back to, well, two things. One, that you're doing right by your client, but also that you're creating a survey instrument that is, you know, gonna be respectful of your respondents. And then also in the analysis, but specifically, the sort of distillation into these key insights and ensuring that, like, what we learn is vetted by a human to say, I can see all the evidence for this, right? Because the AI has laid them out for me, and I can look at this and be confident that this is what the data says. And that's my sort of human oversight of that, looking over that stuff and saying, I see how you got there, and so I can trust this and put my stamp of, like, you know, human expert approval on it. Is that kind of how you think about it?
Howard - 00:28:53:
Yeah. I think that it'll depend on the study and the tools where human judgment is needed. But, yeah, it is needed somewhere in there, and it's gonna, it'll vary depending on the situation, but humans still have a role to play, and we shouldn't chuck them out the window.
Stephanie - 00:29:13:
Yes. Let's not.
Molly - 00:29:16:
Yeah. I mean, that whole thing of just human in the loop is a whole other concept that we've gotten into a couple of times on the show, talking about ensuring that humans are, at a base level, still collecting data from human beings to inform things that will impact human beings.
Howard - 00:29:34:
And yeah, I should have mentioned this from a legal context. California has new regulations, as of late last year, I believe, or early this year, you know, going back to automated decision-making. But because they've drafted it really broadly, it also includes research subjects when they receive incentives, because they become independent contractors. And at the moment, it's mostly focused on notice as it relates to the big-picture interaction with the research subject in that context, and that's, you know, the hiring and firing. And in our case, that's, you know, automation as it's involved with bringing them on to a research study, choosing them for a sample.
Howard - 00:30:17:
Yeah. Onboarding, bringing them into a panel or kicking them off a panel, which seems really basic, but it's something that I think a lot of people are not looking at yet. And so it is a useful reminder, not just to go look up our information about that regulation, which we have on the IA website, but also just to think about it in terms of your own respect for the research subjects, even when we're not happy with them, when we're kicking them off a panel or rejecting them from a study. You know, whether or not they're actually a real person, you know, those are other fights for another day, but maintaining a reasonable expectation that they're human and that we're human will help both sides.
Molly - 00:31:03:
I was watching your face there, Stephanie. Had you heard about any of this?
Stephanie - 00:31:07:
I did not know. I'm so, like, fascinated by this idea of if I'm hearing you right, Howard, you're saying that, you know, by law in California, that research subjects in that relationship are designated as independent contractors, essentially?
Howard - 00:31:22:
If you're receiving an incentive for participation in a research study, unless you're an employee of whoever's doing the research…
Molly - 00:31:31:
Yeah.
Stephanie - 00:31:32:
Yeah. Yeah.
Howard - 00:31:33:
You are an independent contractor of that company or organization.
Stephanie - 00:31:36:
I think the reason it fascinates me so much is because it really butts up against our data quality initiatives as well, right, where it's like there's so much pressure to make sure that our panels are clean and full of people who are ready to be attentive and serious about answering questions. But at the same time, when you start to think about these as independent contractors, what do we owe our respondents in this sort of, you know, relationship where everybody has a stake in it? It becomes a lot thornier. It's fascinating.
Howard - 00:32:06:
A longer conversation there. But certainly one that I think is worth more people thinking about.
Stephanie - 00:32:14:
Yeah.
Howard - 00:32:15:
Because there are other, there are all sorts of basic legal issues that come up, and it's one of the things I lobby on all the time, trying to make sure that we can continue to treat them as independent contractors and not have to turn them into employees.
Stephanie - 00:32:29:
Sure. Yeah.
Molly - 00:32:30:
Yeah. That's a whole other can of worms.
Howard - 00:32:32:
Yep.
Stephanie - 00:32:34:
Wow.
Molly - 00:32:34:
All of our research participants now need health insurance.
Howard - 00:32:38:
So, since you bring it up, I mean, there are all sorts of, there are bills in, I think I've counted, like, 25 states this year that are trying to look at legislation that would allow for benefits to be provided to an independent contractor without them automatically being considered an employee. And, again, like a lot of stuff related to independent contractors, it is about gig workers. They're not thinking about research subjects, obviously.
Stephanie - 00:33:05:
Right. Right. Right.
Howard - 00:33:06:
But it would impact us. And does that put pressure suddenly on the Insights industry to say, do we need to start, you know, allowing and providing, you know, contributing to a benefit structure for our panelists? I would hope not, but it is something that makes me nervous.
Stephanie - 00:33:24:
Brings new meaning to the phrase professional survey taker, doesn't it?
Howard - 00:33:28:
Right. Yes. And that's a huge piece of it: the presumption under these laws that someone needs to hold themselves out as a professional research subject to be considered an independent contractor. That's crazy.
Molly - 00:33:41:
Yeah.
Stephanie - 00:33:42:
Yeah. That's wild.
Molly - 00:33:44:
I think this gets us into the next part of our conversation, which is that there's a legal side, but then there's also a human side and an ethics side. You know, and this is not a new conversation, but we've all been competing for the same finite pool of respondents who are willing to take surveys, to join panels, and to donate their time and participate in these studies. So, I wanna touch on something that I don't think is new, and it's relatively simple, but it's important, which is: what does genuine respect for a research participant look like, especially in the age of AI, where we can perhaps more quickly reach audiences or we're tightening timelines, all of those things? How do we maintain that genuine respect for their time, given all these new advancements in the industry?
Howard - 00:34:35:
Oh, we could go down all sorts of rabbit holes. We could talk about surveys that are too long.
Stephanie - 00:34:41:
Yep. Yeah.
Howard - 00:34:42:
Yep. Research studies that collect way too much information, including, like, insanely detailed demographic information that's not actually relevant to the results of the study. Like, all things that add time and burden on the respondents and create greater risk around anything that happens with their data. You know, the more cognizant you are of that, the greater respect you're gonna have for the research subject that's taking their time and energy and putting them towards your project. So, again, it's things like that. I mean, respect and transparency, they're key aspects of the professional code of ethics for IA, and really, in most cases, for most industries. So, why shouldn't you apply it here? But, frankly, that's only one side of it. We can go down lots of different areas. Did you guys wanna talk about the research subject bill of rights?
Stephanie - 00:35:37:
Yeah. Go for it.
Howard - 00:35:39:
Participant bill of rights? Yeah. So, yeah, as part of the Global Data Quality Initiative, folks developed a participant bill of rights, and, you know, I think there are, like, 16 different pieces to it. And some of them are very simple. You know, like, I have the right to communication that is easy to read and understand. Well, you know, you look at that and say, well, I hope so.
Stephanie - 00:36:01:
Yeah.
Molly - 00:36:02:
Right?
Howard - 00:36:03:
Oops. If you're failing that one, yeah, that's a real problem, and not just with the research subject. But there are ones that mean a lot to me just from my own experience here at IA, like where it says I have the right to know how to contact the company that invited me to the research study. For years, dating back to when this was CMOR, CMOR had a program in which we would provide, you know, background and information for companies to share with research subjects about what research is, you know, sort of a generic introduction to what is research, how does it work, etc., and there was a phone number that went along with that for a while that people could call to learn more of the basics of research. And some companies, having seen that, thought, hey, that's great: instead of letting somebody know how to contact me if there's ever a question about the study that they're involved in, that they've been contacted about, sometimes accidentally in the middle of the night or in some poor fashion, we're gonna plaster on that phone number so they can contact CMOR, and we don't have to deal with it. So, I took angry phone calls and emails for years from people that were confused about what was going on in a research study and didn't have a way to contact the research company that was running it. So, to me, you know, one of the most important things in the bill of rights is this very anodyne piece in there. But, you know, some of them, again, are straightforward, like the right to be free from harassment or intimidation when it comes to joining or continuing a research study. Yeah. I have the right to know how I can leave a study at any time. What are some other good ones? Oh, I have the right to know if I will receive an incentive for my time, in what form, its value, how, and when I will receive it. Being clear with that sort of thing seems a good idea to me because it avoids all sorts of pain and suffering later when people, again, don't have the right expectations, and they're expecting that the incentive will arrive in, you know, their mailbox, or be in their hands before they've even finished the study, for example. Yeah. You wanna have everybody on the same page. What else? Oh, I have the right to not be sold anything or asked for money as part of a research study.
Stephanie - 00:38:21:
It's a good one.
Howard - 00:38:22:
Principles of not mixing your marketing and your market research.
Stephanie - 00:38:25:
Marketing with your research. Yeah.
Howard - 00:38:26:
Yes. Well, again, whole other discussion. We have lots of fun things there. But, obviously, in an ad tech world, the bright lines get blurry, and not because of us, but because of what brands wanna do with information and the goals that they have. But, you know, it's important for us, when we are conducting a study, designing them, working and interacting with platforms and vendors, to make sure that people understand that that's something that needs to be relayed to research subjects all the time: this is for research purposes, we're not gonna be using this to sell to you. And, frankly, principle-wise, if it's gonna be, you know, used for marketing in some capacity, you're making sure you're letting them know, getting some form of consent for that, because it's a change of purpose in what's happening in the interaction and what's gonna happen with their data.
Stephanie - 00:39:21:
Well, and I think in the context of, you know, and these are not new, but all of the DIY platforms, a lot of this kind of thinking and decisioning used to sit with people who were very experienced at thinking about these kinds of things, you know, supplier-side researchers who were pretty steeped in it. On a DIY platform, you've got a lot of people in a lot of different roles coming in and running a survey. They might be a marketing intern. They might be a product manager, right? And they're not gonna have that background. And so it sets up a very interesting tension of, like, how much can we build into our guardrails for them with the tech? You know? And that's a huge part of our responsibility to the respondent experience and to the brand. But some of it, you can't do for them. And, you know, we've certainly had times, and it's been a few years, where I've caught, like, a survey in field where someone's kind of doing a soft sell in the survey, and just catching it and being like, whoa, whoa, whoa, that's not what this is. We can't do that. You know? And these are new rules for people who are outside of market research, and it can be really challenging. So, the more I think that tech companies can do to help those DIYers, the better it will be for everyone.
Howard - 00:40:36:
Definitely. The scope of people involved is always growing.
Stephanie - 00:40:40:
Yeah. It is. You know, to switch gears on us again, let's get into the census if everyone's cool with that. Howard, you are, among other things, co-director of The Census Project, which is, from my understanding, a coalition advocating for a strong and accurate 2030 census. I think most researchers probably don't think a lot about the census on a day-to-day basis, but, obviously, a lot of the sampling frames that we rely on for U.S.-based surveys and population research are absolutely rooted in census data. So, my question for you really is, why should researchers be paying attention to the 2030 census at this particular point in time? And I have a little bit of a pointed follow-up on that from my own perspective, which is that I know that there's a lot of political debate about, like, whether to count non-citizens in the next census, which we have historically done, right? And so that's a big change, and I imagine it matters or doesn't based on what you use the census for. But in market research, I mean, non-citizens have buying power, so it feels like a fundamental change that could have an impact on market research that's potentially negative. How do you think about it?
Howard - 00:41:58:
So, yeah, well, there's a lot of issues that go into this. You know, our prime concern is support for the decennial census and also the American Community Survey, the ACS, which used to be the census long form but now goes on all decade long. That data fuels every piece of quantitative research in the country, but also how people put together even their approach in qualitative research. And that's not just in the private sector, but also in every other government study. It all comes back to a frame that comes from the upcoming 2030 census and results out of the ACS, so we're focused on trying to make sure it is as complete and accurate as possible. So, that means, year to year, we're battling for funding, but also focus. And that means trying to keep the Census Bureau focused on these core constitutional responsibilities instead of doing what they've been up to since roundabout 2019, 2020, when they started very specifically trying to build their probability-based online panel to compete directly with our industry, which is something they did really badly and continue to do so badly that they don't have anything to show for it. But they've spent money, and keep spending money on it, and they wanna put more into it and make it the source for a lot of things. Instead of just buying this research off the shelf, or the services off the shelf, even from our members and from our industry, you know, they're looking to compete with us, which is ridiculous. You know, including from the fact that we spend all this energy trying to support them in both advocacy and otherwise. So, that's one piece of it. And the other is just generally trying to make sure that the process that leads to the decennial is coherent and well-researched, well-founded, well-tested. And, frankly, the debate over the citizenship question being in the decennial census in the last decade, for the insights industry, was mostly concerned with: are you going to test it? Are you gonna put it into place early enough in the process that you'll have time to test it, figure out what the real impact is gonna be, and then, you know, adjust for any, you know, fallout from that? So, in the last decade, we were joined with lots of other groups who had all sorts of different motivations, all coming together to try to kill the addition of that question, because it was being done in a haphazard way, against regular procedure and testing, late in the decade. And, you know, we helped win on that at the Supreme Court, and everybody's happy. But, you know, it doesn't all shake out the same way in the next decade, because right away, there was already discussion earlier in this decade about adding a citizenship question. And this time around, at the beginning of a decade, it's not as big a deal. Again, as long as you're committed to a sensible process of figuring out what you're gonna do and testing it, determining what the pitfalls are, what the upsides are, not just assuming it one way or another, and following procedure, including, in this case, as it was framed in the decision out of the Supreme Court. So, we're not so focused on it. And I think a lot of the focus at, you know, the federal level, among people that want to not count non-citizens, is mostly on the apportionment count. So, it's not that they would not be counted in the census count as a whole, but that it would not matter for purposes of apportioning congressional districts.
Stephanie - 00:45:45:
Gotcha.
Howard - 00:45:46:
Or, yeah, redistricting of state legislative districts, for example, an issue that we don't have a personal stake in as an industry. As long as we still have access to the information, an extra question is not necessarily a bad thing. You know, we don't wanna jam tons of questions into it, because then you're gonna get fewer people responding, but adding a question is not inherently a bad thing.
Stephanie - 00:46:11:
So, it's not that the census just goes to citizens. It's that there's a citizenship question, and it becomes like a parsing or filtering tool, but we wouldn't have to do that. We would still have access to the total population level estimations.
Howard - 00:46:23:
Correct.
Stephanie - 00:46:24:
Gotcha.
Howard - 00:46:25:
And there will still be fallout concerns. Like, if you add this, does that mean you have less people responding because they're concerned about it being there and what might happen with that information?
Stephanie - 00:46:33:
Having to indicate that. Yeah.
Howard - 00:46:35:
Compared to the rest of it. I think some of that concern is overblown, but some of it is real. And, you know, again, as with most things, more research needed? Absolutely.
Stephanie - 00:46:47:
Yeah. Well, Howard, we have a recurring segment that we do on The Curiosity Current called Current 101, where we ask our guests the same set of questions. The first is, in your experience, what's a trend or practice in market research that you would like to see stop, just end as quickly as it can? And then what's one thing that you would like to see more of in market research?
Howard - 00:47:13:
So, the one thing I'd like to see end immediately, and this is the government's fault, is how the U.S. Office of Management and Budget approaches research policy for the U.S., and they demand a ridiculous response rate. And I'm talking, it's not a specific thing that they have in their own rules. It's how they've allowed it to be interpreted across all the federal government agencies, that they should see, you know, like, a 70%, 80% response rate on any survey for it to be considered good or acceptable, which is absolutely insane, and this is across all kinds of methodologies. It's almost unachievable in almost any context, but what results from that is that they are ruining the research subject experience for everybody else. Because if you are working on a research study for the federal government, you end up being told that you're gonna have to do dozens of, for example, if it's a phone survey, you're doing dozens of callbacks in the course of a short period of time, and that is harassment. The government is requiring harassment of potential research subjects, and it is offensive beyond all measure. And I wish they would just cut it out.
Stephanie - 00:48:30:
I would like that to stop too. Yes.
Howard - 00:48:33:
And, certainly, things I would like to see more of, I go back to transparency. I wanna see more communication between, you know, the business partners, and I wanna see more communication with the public. People should know what's going on; it shouldn't be black boxes that they're dealing with on either side. I mean, we don't have to open up the doors to everything, but let's open it up a little bit more at least.
Molly - 00:48:55:
Yeah. So, Howard, thank you so much for taking the time and chatting with us today. I feel like this is something that underpins every single piece of our activities that we do every single day, but not something that I feel gets the spotlight as often as it should.
Stephanie - 00:49:10:
True.
Molly - 00:49:11:
So, let's say there's somebody listening to the show, and they really want to triple-check everything, cross the t's, dot the i's, and make sure that they are on top of everything that we've discussed today: the regulation, the ethics, the census being part of that. What's the first thing that they should do? What's the best starting point, and where should they go from there?
Howard - 00:49:33:
So, top of the checklist is joining the Insights Association. Yeah. It represents the whole industry, and you should be a part of it, obviously. But along with that, there is an insane amount of information on our website, insightsassociation.org, lots of useful information there on compliance, on ethics. You have the code there, which anybody can, you know, review, and should be looking at regularly anyways. Most of the information is behind the members-only wall when it comes to legal and compliance issues, which is, again, why you should be a member. We've got a daily electronic newsletter that highlights all sorts of things happening across the industry, but also what's going on on the advocacy and compliance sides. If you follow us on LinkedIn, we do regular IA on the Hill posts about what's going on on Capitol Hill and on the advocacy side, and even a monthly Fighting for You newsletter that I do, just rounding up everything that's gone on in a given month, because there's a lot going on every month as it relates to advocacy.
Stephanie - 00:50:33:
I get that every month. I love to look at it. Yep.
Howard - 00:50:36:
Yeah. And then, yeah, we even have specific things like the general counsel and privacy officer forums for our company and department-level members, very candid discussions on all sorts of issues, several times a year. Like, there are a lot of opportunities in which you can learn a lot, sometimes very quickly, and get up to speed, but also you can be a part of something very important in advocating for your own industry. We're gonna be launching a grassroots tool very shortly that will allow anybody to contact their policymakers on behalf of the industry in support of industry interests, as it relates to everything from a data privacy bill in, you know, a given state, to support for the census, to the latest regulation on AI. Everything's gonna be on the table, and you wanna be a part of it.
Molly - 00:51:27:
Amazing. I think I have that checklist, same thing as Stephanie. I get all your newsletters all the time, and they're super helpful context to always be aware of. Well, thank you again for joining us today, Howard. We really appreciate your time and for spending the time to share your knowledge with us and with the industry.
Stephanie - 00:51:45:
Absolutely.
Howard - 00:51:46:
I really appreciate you guys taking the time to talk to me because there's always lots going on. Take care.
Stephanie - 00:51:52:
Thanks. Bye.
Outro - 00:51:53:
The Curiosity Current is brought to you by aytm. To find out how aytm helps brands connect with consumers and bring insights to life, visit aytm.com. And to make sure you never miss an episode, subscribe to The Curiosity Current on Apple, Spotify, YouTube, or wherever you get your podcasts. Thanks for joining us, and we'll see you next time.