A note from our CEO: Finding opportunity in the data quality challenge

Posted May 14, 2025
Lev Mazin

The headlines confirm it: our industry faces a significant data quality challenge. Reports of fraud and unreliable data understandably cause concern, shaking trust in the insights that should guide crucial decisions. It's not a new issue, and many have learned to view it as a problem to be managed defensively, a cost center to be minimized, or noise to be filtered out.

At aytm, we care about insights: our brand clients depend on us to get them right. We're not agnostic on the issue. As a panel provider, we've always had a perspective on, and an intimate stake in, data quality. We don't just observe these challenges from a distance; we experience them firsthand, and we take them personally.

As researchers, we're trained to recognize that strong signals—even unexpected or challenging ones—aren't anomalies to simply discard. They are vital indicators demanding our attention, our curiosity, and our exploration. Data quality issues are precisely those kinds of powerful signals. Shying away from them, treating them merely as errors, or hoping they simply disappear means willfully ignoring crucial information about our methods, our respondents, and our ecosystem.

Our philosophy is to turn towards these signals with rigor and even enthusiasm. We believe that deeply investigating the 'why' behind quality deviations doesn't just lead to tactical fixes; it fuels fundamental understanding, sparks necessary innovation, and guides us to build stronger, more resilient systems for generating trustworthy insights. This exploration isn't a detour from our work—it is the work that leads to progress and more reliable discovery.

It starts with people: Our respondent philosophy

Many treat data quality as a clean-up job at the end of the line. We believe it starts at the source: nurturing the relationship with the respondent—an inadequate label we use to describe the people who choose to spend their precious time reading and answering our questions. This perspective isn’t just sentiment; it’s the bedrock of PaidViewpoint, our proprietary panel.

We built PaidViewpoint on principles of respect, fair value exchange (real cash for quality time), and genuine engagement. That’s why we have respondents who stay with us for over a decade, why our panel grows organically through trusted referrals, why we’ve been named the #1 survey site by SurveyPolice.com for 10+ years, and why we can make promises we keep.

Critically, we don’t overuse our panel or treat it as a volume-based commodity. Access to PaidViewpoint is exclusive to aytm clients—we never resell it to other providers. This isn’t just a business choice; it’s a data integrity safeguard. Be cautious when working with a provider that doesn’t use its own panel. If they can’t vouch for the people answering your questions, how can they vouch for the data?

This foundation of goodwill isn’t a byproduct; it’s proactive data quality management. When people feel valued and heard, they provide better, more thoughtful insights—a stark contrast to treating respondents as a commoditized resource procured from opaque corners of the internet.

Design matters: The shared responsibility for better surveys

Because we operate an integrated platform, not just a sample source or a survey tool, we recognize that quality extends to the research instrument itself. Let's be frank: monstrously long, boring, or confusing surveys are invitations for poor data, regardless of respondent intent.

Our commitment to better design is multifaceted:

  • Championing smarter design for all users: We actively champion smarter survey construction with all our users, from those leveraging our full-service expertise to dedicated DIY researchers. This means advocating for shorter, more focused instruments; clearer, localized language; and seamless mobile experiences.
  • Building an intuitive platform for DIY quality: Our platform itself is built to be an ally in this quality quest, with intuitive tools and built-in guidance that steer DIY users towards creating more effective and respondent-friendly surveys right from the start.
  • Empowering researchers with AI-assisted design: AI agents like Skipper Draft further empower all researchers to craft precise, engaging, and human-centric surveys faster—another way we proactively foster quality before a single answer is collected. Likewise, Skipper Modify can quickly wordsmith your questions for clarity and conciseness.
  • Advocating for best practices beyond our tools: This dedication extends beyond the platform. Through our engaging brand communications, educational content in our Lighthouse Academy, thought leadership shared via podcast, webinars, social media, and other channels, and proactive, excellent support, we champion these design best practices widely.
  • Investing in the future of quality research: Furthermore, our investments in education at leading Master's programs at UGA, MSU, Uark, and more aim to instill these quality-centric design principles in the next generation of researchers, fostering best practices from the ground up.

Layered defenses: Technology with purpose

Even with a strong foundation, robust and evolving technological defenses are essential in today's complex ecosystem. But again, our approach is integrated and intentional:

  • The gatekeeper known as deduplication: Before a respondent even sees a survey, our sophisticated Deduplication Server, leveraging graph theory and multiple vectors, acts as a vigilant gatekeeper. Based on deep pattern analysis, it blocks a significant amount of suspicious traffic and denies entry to known bad actors. It also stops well-intentioned respondents from entering the same survey through more than one invitation from different panels (a minimal sketch of the idea follows this list).
  • The response-cleaning engine we call Data Centrifuge: Like everyone, we work with trusted external partners to reach niche audiences. There, we don't have the same direct relationship we have with PaidViewpoint, which is precisely why we developed Data Centrifuge: an advanced, AI-powered cleaning system that analyzes response patterns post-collection, essential for maintaining quality across all sample sources regardless of origin. We've invested years and significant resources into its multi-vector analysis and machine learning capabilities. It examines a wide array of behavioral and textual signals, such as excessive speeding, random or contradictory response patterns, gibberish in open-ended answers, paste detection, and, increasingly, the likelihood of LLM-generated text (a simplified scoring sketch also follows this list). We see Data Centrifuge as an evergreen project, constantly evolving to meet new challenges.
  • A library of quality assurance questions: Deploying in-survey questions designed to check respondent attention and surface bad actors is a tried-and-true way to ensure integrity. We developed a series of question types, validated their efficacy in identifying poor-quality responses, and integrated them directly into survey construction. These questions go beyond the traditional red herring, using researcher-backed design to effectively identify inattentive respondents and bad actors.
  • Built for an evolving landscape: No system battling adaptive threats can be perfect. When ours misses something, we don't see it as failure; we see it as a new challenge that requires analysis, adjustment, and innovation. We actively invite platform users and partners to show us where the system can improve, making its evolution a collaborative process built on transparency. This requires continuous investment, and we are committed to it.
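
To make the deduplication idea concrete, here is a minimal Python sketch of graph-based identity resolution, under assumptions of our own: identity signals observed in a session (a device fingerprint, a hashed email, a network address) become graph nodes, each session links its signals together, and connected components reveal when two "different" entrants are the same person. Every name and structure below is illustrative, not our production Deduplication Server.

```python
# Minimal sketch of graph-based deduplication. All names are illustrative;
# this is not the production Deduplication Server, just the core idea:
# identity signals form a graph, and connected components expose duplicates.
from collections import defaultdict

class IdentityGraph:
    """Union-find over identity signals (device IDs, hashed emails, IPs)."""
    def __init__(self):
        self.parent = {}

    def find(self, signal):
        self.parent.setdefault(signal, signal)
        while self.parent[signal] != signal:
            self.parent[signal] = self.parent[self.parent[signal]]  # path halving
            signal = self.parent[signal]
        return signal

    def link(self, signals):
        """All signals seen in one session belong to one identity."""
        roots = [self.find(s) for s in signals]
        for r in roots[1:]:
            self.parent[r] = roots[0]

graph = IdentityGraph()
completed = defaultdict(set)  # survey_id -> canonical roots already admitted

def admit(survey_id, session_signals):
    """Admit a respondent only if no linked identity already entered."""
    graph.link(session_signals)
    root = graph.find(session_signals[0])
    # Re-canonicalize stored roots: earlier components may have merged since.
    completed[survey_id] = {graph.find(r) for r in completed[survey_id]}
    if root in completed[survey_id]:
        return False  # same person arriving via another panel invitation
    completed[survey_id].add(root)
    return True

print(admit("survey_42", ["device:abc", "email#1f9d", "ip:203.0.113.7"]))  # True
print(admit("survey_42", ["device:abc", "email#77aa"]))  # False: shared device links the two entries
```

The same structure also supports blocklisting: flag one node in a fraud ring's component, and every identity linked to it inherits the flag.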
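
And in the spirit of Data Centrifuge's multi-vector analysis, the sketch below scores each response against a few behavioral and textual signals: speeding, straight-lining, and gibberish open ends. The signal names and thresholds are simplified assumptions for illustration; the production system layers many more vectors, including paste detection and LLM-likelihood models.

```python
# Simplified sketch of multi-signal response scoring, in the spirit of
# Data Centrifuge. Signals and thresholds are illustrative assumptions,
# not the production system's rules.
import re
from statistics import median

def speeding_flag(duration_sec, all_durations):
    """Flag completions faster than half the median completion time."""
    return duration_sec < 0.5 * median(all_durations)

def straightline_flag(grid_answers):
    """Flag rating grids answered with one repeated option."""
    return len(grid_answers) >= 5 and len(set(grid_answers)) == 1

def gibberish_flag(open_end):
    """Crude open-end check: too short, or vowel-starved keyboard mashing."""
    if len(open_end.split()) < 2:
        return True
    vowels = len(re.findall(r"[aeiou]", open_end.lower()))
    return vowels / max(len(open_end), 1) < 0.15

def quality_score(resp, all_durations):
    """0 = clean, 1 = queue for human review, 2+ = remove."""
    return sum([
        speeding_flag(resp["duration_sec"], all_durations),
        straightline_flag(resp["grid_answers"]),
        gibberish_flag(resp["open_end"]),
    ])

responses = [
    {"duration_sec": 95, "grid_answers": [3, 4, 2, 4, 5],
     "open_end": "I liked the flavor but the price felt high."},
    {"duration_sec": 21, "grid_answers": [5, 5, 5, 5, 5],
     "open_end": "asdf jkl"},
]
durations = [r["duration_sec"] for r in responses]
for r in responses:
    print(quality_score(r, durations))  # 0, then 3
```

Note the design choice in this sketch: a single flag routes a response to human review rather than automatic removal, echoing the checks-and-balances role our research team plays downstream.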

The human element: Indispensable guardians of quality

In the rush towards automation, it’s tempting to believe technology alone can solve quality. We fundamentally disagree. Our experienced research professionals and panel operations team aren't just support; they are crucial integrators—providing methodological oversight, interpreting nuances technology might miss, and ensuring the 'why' behind the data remains clear.

Looking ahead, this human element becomes even more critical. As AI generates increasingly plausible outputs, who will validate them against reality? Who will ensure automated models don't perpetuate human biases or simply hallucinate? Experienced researchers, statisticians, and data scientists—acting as guardians of quality, performing checks and balances—will be essential. We believe this provides enduring purpose and meaning, ensuring technology serves truth, rather than obscuring it.

Validating the synthetic

This perspective informs our view on synthetic data, too. While incredibly intriguing, generated data is only as good as its ability to accurately describe the world and predict the answers of ever-changing consumers. Before it can be called a solid insight generation method, it must pass rigorous validation against real-world human responses, through continuous processes involving expert human judgment. Trust requires verification.

Enthusiasm builds trust

When clients ask us about the industry's data quality woes, they often expect defensiveness. Instead, we meet them with enthusiasm: a genuine passion for discussing the intricacies of our systems, the philosophy behind PaidViewpoint, and the ongoing evolution of Data Centrifuge. We've seen firsthand how this approach transforms conversations, replacing skepticism with collaborative trust.

Treating data quality not as a burden, but as a core driver of innovation and integrity, changes everything. It requires investment, attention, and a commitment to transparency. It means leaning into the signals, even when uncomfortable.

This is aytm's commitment. We invite our clients, our partners, and the industry at large to join us—not just in acknowledging the challenge, but in embracing the opportunity to build a more trustworthy, insightful, and ultimately more valuable future for research, together.

Read more about data quality at aytm
