stopsloppypasta.com

Stop Sloppypasta

slop·py·pas·ta  n.  Verbatim LLM output copy-pasted at someone, unread, unedited, and usually unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and forwarded without further thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

You just got an unread message notification.

Maybe it arrived in Slack. Maybe it was in an email, or dropped into a meeting doc. It is several paragraphs long. It has headers. It covers the general topic, more or less. And you are now expected to read it.

The person who sent it probably spent about ten seconds on it. They asked a chatbot, got a response, and forwarded it. They did not read it themselves.

That is what makes it rude. Not the use of AI, which is fine. The rudeness is in the asymmetry: they spent nothing and handed you the bill. Reading takes effort. When someone forwards text they have not read, they are asking you to do work they chose not to do.

There are a few common patterns. You may recognize yours.


The four types

❌ Example
#strategy
The team is mid-discussion about whether to expand into the German market.
sarah:
I asked ChatGPT about this! Here's what it said: Expanding into the German market presents both significant opportunities and notable challenges. Germany is the largest economy in the EU and offers a robust consumer base. However, businesses should consider the following key factors:

1. Regulatory Environment
Germany has strict regulations including GDPR compliance, local labor laws…

[6 more sections follow. None mention your company, your product, your budget, or the discussion already underway.]
⚠ unrequested · unread · generic · buries the live discussion

Type 01

The Eager Helper

Someone spots a conversation and wants to help, so they ask a chatbot and share whatever comes back. The intention is good. The effect is that a wall of generic AI text lands in a live discussion that had its own specific context.

The thread now has to make room for an AI essay before it can continue. The response feels helpful to send and creates work to receive.

The tell: the AI's response addresses the general topic, not the specific situation, because it did not have the full context. A response tailored to the actual conversation requires reading the actual conversation first.

❌ Example
you:
Does anyone know why our email open rates have been dropping? We changed the subject line format last month.
tom:
ChatGPT says: Email open rate declines can be attributed to several factors. These include changes in subject line strategy, sender reputation issues, list hygiene problems, and deliverability concerns. Here are the most common causes:

1. Subject Line Fatigue
If subject lines have become too similar or predictable, subscribers may stop engaging…

[Continues for 5 more generic sections. Does not mention your subject line change, your audience, or your platform.]
⚠ generic · not specific to the question asked · no context applied

Type 02

The Oracle

A specific question gets asked. Someone puts it into a chatbot and shares the response, expecting that to count as an answer. The AI has produced a general essay on the topic, which is a different thing.

"ChatGPT says:" is the LLM-era equivalent of replying with a Google search link. It points toward an answer without providing one.

The tell: the response would be identical regardless of who asked, what platform they are on, or what context surrounded the question. An answer that fits everyone fits no one in particular.

❌ Example
dan:
Hey team — I did some research on our competitors this week. Here's a summary:

Competitive Landscape Overview
The market is highly competitive, with several established players and emerging challengers. Key competitors offer distinct value propositions across pricing tiers…

[900 words follow. No dates. No sources. No live pricing. Based on a model with a knowledge cutoff from before this quarter. Presented as original research.]
⚠ presented as personal work · may be months out of date · hallucinated details possible · no one knows to check
✓ Done right
dan:
Used AI to pull together an overview on competitors, then verified pricing on each vendor's actual website this morning. Main thing: both Acme and Globex raised enterprise pricing since last year, so we are cheaper at the mid tier by about 20%. Worth discussing whether that is still the right positioning. Notes in the doc.
✓ AI-assisted · personally verified · specific · opens a real conversation

Type 03

The Ghost Author

AI output goes out under the sender's name as their "research" or "writeup," with no indication a chatbot wrote it. The recipient has no reason to question it, and may act on information that is out of date, incomplete, or simply wrong.

The sender's name carries trust. When unverified AI output travels under that name, the trust gets extended to something that has not earned it. If the information turns out to be wrong, so does the sender.

The fix is simple: use AI to get started, verify the output yourself, and say so. "AI-assisted, verified by me" is transparent and still useful. Claiming "here is my research" when you did neither creates a gap between what the recipient expects and what they actually received.

❌ Example
recruiter:
Hi [First Name],

I came across your profile and was genuinely impressed by your work at [Company]. Your experience in [Relevant Field] stood out to me, and I'd love to explore how your background aligns with an exciting opportunity we have…

[The "personalization" is all template variables that were never filled in. The rest is AI-generated boilerplate. Total engagement with your actual profile: zero.]
⚠ template variables unfilled · no engagement with the actual profile
❌ Also counts
applicant:
Dear Hiring Manager,

I am writing to express my strong interest in the Senior Designer position at Acme Corp. With over seven years of experience crafting user-centered solutions and a passion for meaningful design, I believe I would be a valuable addition to your team…

[Reads like every other cover letter. Contains no specific observations about Acme. Could have been sent to any company hiring for this role.]
⚠ no specific observations about this employer · reads the same as every other application

Type 04

The Hollow Hello

AI writes the outreach: a cold email, a cover letter, a LinkedIn note. It looks personal on the surface but contains nothing specific to the recipient. Sometimes the template brackets are still there. This is personalization theater: the appearance of effort rather than the thing itself.

The asymmetry is what makes it frustrating to receive. It took seconds to generate and minutes to read. The recipient spends more time on it than the sender did.

The tell: remove the name and company from the message. If it still reads just as well, it was not written for the person receiving it.

Why it matters

For most of human history, text implied effort. If someone put words in front of you, you could assume they had at least read them. Writing carried a built-in proof-of-thought: someone spent time on this.

AI has made generating text essentially free. Reading it still costs time and attention. When AI output gets forwarded without being read first, that cost shifts entirely to the recipient.

A useful test, from Simon Willison: did producing this cost you less than reading it will cost them? If the answer is yes, it is worth pausing before sending.

"I think it's rude to publish text that you haven't even read yourself. I won't publish anything that will take someone longer to read than it took me to write." — Simon Willison
"Whenever you propagate AI output, you're at risk of legitimizing it with your good name, providing it with a fake proof-of-thought." — Alex Martsinovich, It's rude to show AI output to people

Simple guidelines

Read it before you send it.

If you have not read the output, you do not know whether it is correct, relevant, or current. Reading it is the minimum bar.

Cut it to what matters.

LLMs pad. They hedge. They produce six sections when one sentence was needed. Distilling it to the useful part is your job.

Verify before forwarding.

LLMs produce outdated facts, wrong figures, and plausible nonsense. Anything you forward carries your implicit endorsement, so it is worth checking anything consequential.

Own it or disclose it.

Read it, edited it, verified it? Send it as yours. Sharing a raw AI log? Say so explicitly. Passing it off as your own work without doing any of that is where it goes wrong.

Do not send what was not asked for.

Dropping unsolicited AI output into a conversation hands cleanup work to someone else.

Make it specific or do not send it.

If the message reads equally well addressed to anyone, it was not written for the person receiving it.

Using AI to think, research, draft, or summarize can be genuinely useful. The question is not whether to use it, but what you do with the output before passing it on.

Use AI to improve what you send. Do not use it to replace the thinking behind what you send.

Read it. Trim it. Check the parts that matter. Make it yours. That is all it takes for AI-assisted work to land well.