Research Slop
“The absolute epitome of AI slop,” then “change AI to Research.” Generated with Gemini Nano Banana.
AI slop has been inescapable online and in our conversations over the past year or so. It has entered the mainstream discourse as we grapple with what happens when people can generate low-to-semi-adequate content at virtually zero cost with unlimited distribution. The questions around AI slop are numerous, substantial, and complex - IP rights, quality definitions, the meaning of creativity, the politics of taste, and more - and they range across disciplines, cultures, and technical implementations. Far too many to cover in one post.
Here, I want to zoom in on one specific manifestation of this phenomenon, one close to my heart, happening within companies: research slop.
I’m referring to the rise in mass-generated “research reports” or “insights” enabled by the new wave of AI-powered research platforms and tools. They’re characterized by surface-level analyses that highlight unsurprising themes in the data and get minimal filtering or editing before being distributed across the organization. These tools are used to conduct UX or customer research within companies, i.e., activities where organizations try to learn how people use their products, who they might build for in the future, and general sense-making about people, products, and markets. They are used by people with “researcher” in their titles but also by the broader group of “people who do research” across organizations.
No one asks for AI slop or intends to produce it. What they’d like is to feel like they’re “rapidly exploring ideas” or making “a quick and dirty prototype” or they’re part of an organization with “continuous learning” at its core. They want real progress with compounding learning. All legitimate and laudable goals. But what they’re producing instead are blurry, AI-shaped objects that don’t actually further these goals.
People rightfully want to understand users better. The problem is that research slop does little to further that goal. Instead, it gives the appearance of understanding, like looking at high-level metrics dashboards: relevant as part of the process, but knowing the numbers or headline insights isn’t the same as a thoughtful understanding of people and data.
Now, I’m not against these platforms and tools in any way. Quite the opposite, actually. My current team has adopted them and we’re expanding our use of them. I’m about as bullish on AI as a transformative technology as you can get. Applied well to research, these tools will help us ask better questions, scale our understanding of the people who use our products, and build better products over time.
But there are incentives driving the adoption of AI-powered research tools toward creating research slop instead of deeper understanding of people, continual organizational learning, and improved products and services. To ensure research is elevated and evolves with AI tools instead of racing to the bottom, we need to understand these incentives, what they produce, why that’s a problem, and how to resist them.
The current research context
AI slop isn’t entering research in a vacuum. Research itself has been grappling with questions about its direction and future for years, well before AI-powered research tools arrived. Three questions in particular create the current context, enabling the production and distribution of research slop.
1. Who does research?
In my professional world, UX researchers, obviously. But also marketers, data scientists, designers, PMs, even engineers. People who do research (PwDRs). Research is a mindset and an approach to problems, not just a title. The line has always been blurry.
AI tools promise to “democratize” research—more people in more roles using these tools to ask questions and learn from respondents. This is happening now, not in the future. Whether it meets my, your, or other people’s definitions of “research” or not, vastly more people will use these tools going forward to conduct research and research-like activities.
2. What counts as research?
At the extremes it’s obvious. Chatting with friends who say exactly what you want to hear to save your feelings isn’t research. Getting published in Nature probably involved proper, rigorous capital R research. Between those extremes is a vast spread of activities ranging in rigor, representativeness, and validity.
We’re seeing an ontological collapse around the concept of “research” in tech. The Lean Startup, “get out of the building,” and “you are not the user” ethos has driven a positive trend toward more contact with users. That’s a good thing. At the same time, not all of that contact is research. The distinction matters because we need to know how much weight to put on each input into product decisions. I won’t weigh a comment overheard at a party the same as a representative, adequately powered study using validated measures, analyzed by an expert with time to reflect on and contextualize the findings.
3. How long should research take?
No one outside of research has ever told me, “I think research should take longer.” This sentiment long predates AI tools.
But expectations for how long anything should take in product development have dramatically contracted over the past two years, with research caught in the net. Twenty-four-hour turnarounds for evaluative studies are becoming the norm. AI-powered tools hit global scale rapidly, so you can spin something up Friday, collect completes over the weekend, and have results on Monday. They satiate a hunger for “something” as quickly as possible over “the right thing” a bit later.
Why this opens the door for research slop
These three trends converge: who does research and what counts as research both expand while time expectations contract. This sets the incentives and conditions for rapid expansion of AI tools in product research, but also for widespread generation of research slop.
Now, I do want to balance this by saying more people having more contact with users is fundamentally good—it’s the foundation of user-centered product development. But it also creates conditions for flooding organizations with low-quality research slop that crowds out high-quality work.
AI slop and how it manifests in research
AI slop is high-volume, low-effort, unreviewed output optimized for algorithms and scale. It’s content people don’t take pride in or attach ownership to (except when people claim entirely AI-generated work as their own!). It floods communication channels, crowds out quality and true expertise, and reduces signal-to-noise ratios in any information ecosystem it inhabits. AI content doesn’t need all these characteristics to be slop, but they typically co-occur due to incentives and technical implementations.
Let’s walk through these characteristics of AI slop and how they manifest as research slop.
Low effort
Efficiency prioritized over quality. Little or no planning before prompting. Short, low-context prompts. Going with first-response outputs. Using AI like a vending machine rather than a thinking partner.
Research slop: The low-effort nature gets touted as an advantage. Tool demos usually show chat interfaces where you “talk to your data” with one-line questions: “What were the top issues for our North America customers last month?” No reflection on what questions to ask. No paring down options to ensure focus on the right question. No context engineering. Looking for and excited by the first-response insights the model outputs. This is using AI like a vending machine—punch buttons, get snack—versus like a thinking partner to ask better questions.
High volume
With production costs near zero, slop is defined by the volume in which it appears, leading to information overload and epistemic clutter.
Research slop: Looking across the industry, it’s a safe bet that most teams do far less research and user understanding than they should. That amount should go up. But we want it to go up with quality. With higher volumes of research slop, you increase the need to reconcile conflicting findings to form an informed point of view and strategy. Otherwise you’re just flooding the zone with…stuff…while also making it easier to manufacture justification for nearly any product change.
Banal realism
Real-ish but in the blandest way. More “not wrong” than “right” or “good.” Rarely malicious, instead marked by an omission of caring. Being proximate to truth is good enough.
Research slop: This may be the biggest risk: the appearance that it’s not slop. Where first-level insights appear as depth. Where technically nothing is wrong, so the information gets treated on par with rigorous approaches to deeper understanding and empathy. It “will do” and is probably “the best we have,” so why not use it? I don’t think this is typically malicious. At best it’s people who genuinely want to understand customers better and see this as a path. At worst it’s trading caring for speed.
Surface-level “insights” show up here again. Not technically wrong or way off the mark, just not really helpful. The worst part is slight-to-mid inaccuracies—ones not worth fighting over. Sure, we need to “ensure the experience is personalized,” but that’s such a broad statement I can’t do much with it. Also, were we planning NOT to personalize it?
A vague sense of being “off”
In text, there are tells like recurring words, phrasings, patterns, grammatical approaches, alliterations. The same structures repeat across outputs regardless of appropriateness. In imagery and video, tells include artifacts in images, lack of object consistency, and subtle motion in backgrounds and textures.
Research slop: This can be hard to pinpoint when it comes to research. Something you can’t quite name. The findings look reasonable, quotes support them, graphs aren’t obviously wrong, but something is off. Insights that are redundant or obvious with only cursory understanding of the product or use cases. Falling back on circular truisms: “Users want things to be simple.” Yes, got it, thanks for that “aha moment.” This is especially pernicious for terminology that sounds impressive or official or very “number-y,” creating a sense of false certainty. Like when people say we need “quantitative” data to make a decision when they really mean “we need a representative sample at a scale allowing small margins of error around point estimates.”
It doesn’t cohere. There are claims and statements but not sense-making. No narrative. No point of view emerges.
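To make the “margins of error” point above concrete, here’s a minimal sketch (my own illustration, assuming a simple proportion estimate and a 95% confidence level) of how sample size translates into precision around a point estimate:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion, using the normal
    approximation. p=0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# What different sample sizes actually buy you in precision
for n in (30, 100, 400, 1000):
    print(f"n={n:>5}: +/- {margin_of_error(n):.1%}")
# n=   30: +/- 17.9%
# n=  100: +/- 9.8%
# n=  400: +/- 4.9%
# n= 1000: +/- 3.1%
```

The exact formula matters less than the habit: when someone asks for “quantitative” evidence, translating that into a sample size and the precision it affords is what separates a defensible claim from a number-y one.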
Contextually adrift
Lacks depth, nuance, or originality. Exists detached from a relevant context. Disconnected from culture on large and small scales. This makes its way through to the final outputs because there are straight pipelines from production to consumption without review, or only minimal sanity checks. A lack of accountability or pride.
Research slop: In theory, AI tools and platforms can incorporate all your corporate data as context to make the insights more targeted and aligned with company strategy. However, most of the relevant context exists outside the company’s files and employees entirely. And the context that does live inside an organization is distributed across people, teams, and documents. Some of it is digitized; most of it resides within people. That’s a feature, not a bug. Context is constructed through multiple perspectives; it is not a singular, unevolving body of information. Without this context, insights and recommendations are misaligned with the wider product context and have a high potential to lead product teams down the wrong paths.
Time pressure incentives drive a lot of this lack of contextualization. Consideration, reflection, and deep thinking about the implications of a finding take longer than their opposites. Review becomes very difficult when reviewers don’t have the necessary context to know when something feels off or might be a mistake. Researchers typically gain this through their presence in interviews, designing surveys and metrics themselves with their research goals in mind, and having interacted with dozens, hundreds, or thousands of participants to build intuition about data and analyses.
Displaces quality content
Pushes out work not optimized for speed, scale, and engagement. Brute-force attacks on content requiring time, care, and consideration.
Research slop: This is when things get really bad. Research slop creates a never-ending siege on attention. Siphoning it away from deep, thoughtful, highly contextualized research. It changes what people consider research in the first place. It moves incentives from understanding users better than anyone else to creating the illusion of certainty as quickly as possible.
It is a kind of “attention bait.” Most research circulates internally, so this shows up as sensationalized claims or headlines to garner teammate attention. Something shocking and lacking nuance that you want to click on when it appears in Slack, Teams, or email.
Cascading effects
A meta issue emerges as these problems cascade into each other: there is lots of content; it seems legitimate but carries some doubt, so investments in its recommendations get hedged; suspicion sets in around any reported “insights”; the decay in clear signal erodes trust in “research” of any kind; and teams retreat to relying on intuition alone. Over time, these effects can deteriorate an organization’s ability to act on research of any kind.
A different approach
Ok, breather time.
After all that you’re probably assuming (again) I don’t think AI has a place in research (even after the disclaimer above).
Nope. Not at all. I’m incredibly bullish on AI in research. Used correctly, it will help us ask better questions, come up to speed in new areas quickly, catch biases and mistakes, scale data collection after we’ve built confidence, assist with first-level organization like transcripts and tagging, and provide feedback on communicating results.
But having AI help us research in these ways requires deliberate choice given incentives pulling toward slop. Here are some choices we can make to get the most from AI research tools without ending up in a slop swamp.
Own it
Anytime you use AI as part of research, you’re responsible for the final output. If you wouldn’t put your name and reputation on it, don’t publish it. Don’t ship it. Hesitation before hitting publish is all the signal you need to take another pass and scratch that nagging itch in the back of your brain.
Impact over volume
This is true whether we’re using AI or not. And we all know it. Volume isn’t the goal. Answering the right questions to substantially improve people’s lives through the products we build is the goal. Creating organizational rituals and rhythms around learning and acting on those learnings is the goal. Done at cadences that lead to sustained learning that becomes a durable POV.
Always conduct some sessions yourself
Even when using AI tools for scale and speed, conduct some sessions yourself. Use these to build up intuition and a feeling for how conversations and the data will likely go. This lets you spot issues in any AI-augmented outputs.
Adversary and accelerant, not author
Use AI tools to find your potential biases and blindspots, identify edge cases, cluster pieces of data, help you find and retrieve specific quotes or behaviors, translate, and help brainstorm ideas. Require a dissent pass before finalizing any artifacts. Use it to challenge yourself. Get in an argument with your findings. Then you edit, own, and sign off on the final versions.
Always take a confidence pass
Models are well known for communicating with inflated confidence. This is especially pernicious for research where we’re painstaking in our attention to what levels of claims different levels of evidence afford. Do a dedicated pass through any reports to ensure claims are stated with appropriate confidence relative to supporting evidence. Link to all sources.
Reproducible by default
Catalogue the tools, prompts, and context you use. Include them in the appendix of your reports. Disclose how you used AI in your work. This is what transparency looks like now.
Incorporating these approaches into our workflows will help us all get the best out of using AI, not slop.
Be curious about people. Let your research approach show your care. Build your design intuition through empathy. Use AI to help you ask better questions. Use AI to scale your ability to ask the right questions. Use AI — not its slop — to scale learning, care, and empathy.
Let’s head off research slop before it becomes endemic.
---
Thank you to Camille Basilio, Chris Monnier, Krystle Murphy, and Liz Danzico for their deeply helpful feedback on early drafts.
Use of AI disclosure
AI was used to look across published articles on AI slop after the initial idea for this post was conceived. The author used back-and-forth discussion with AI to create a first draft of ideas of how AI slop manifests in research in conjunction with the author’s own experience. Along with human reviewers and feedback on earlier drafts, AI was used to generate additional suggestions for improving early drafts. All final content was reviewed and approved by Jess Holbrook.


