Here I’ll delve into why I see this text as a turdlet, the provided prompt (see the PDF) as useless skibidi, and whoever wrote this text as a functionally illiterate and assumptive-as-a-brick person.
First off. The very premise is wrong. The text assumes style defines “slop”. So if you detect stylistic elements you can find “slop”. And if you remove those elements the output is “deslopped”.
Right? Well… no. Here’s the catch: what makes large model output “slop” is not style, but content — those models vomit [what humans would interpret as] assumptions, lies, self-contradiction, and so on.
Mind an example? Here, fresh from the oven:
[screenshot: a chatbot’s declension table for the Latin word “citrus”, with transcriptions, plus two short phrases]
Since it’s just a table and two phrases, you won’t see any of the red flags listed by the text — and yet anyone who knows basic Latin (or checks the Wiktionary entry for this word) can see it is slop, because it’s shitting a lot of wrong info.
Incorrect info in this screenshot:
- The prompt requested phonetic transcriptions (concerning raw sounds) and the output provides phonemic transcriptions (concerning abstract units). This is fairly minor but it’s already bad.
- Even if we disregard the above, the transcriptions are a fucking mess. The “o” in “citrōs” is /oː/ [oː], not /o/ [ɔ]. Also note that the phonemic notation for the short vowel is /o/, not */ɔ/: it’s simply the short counterpart of /oː/, and [ɔ] only belongs between phonetic brackets.
- Latin almost never stressed the last syllable; stress typically falls on the second- or third-to-last (see the sketch after this list for the actual rule). Now look at the ablatives and datives.
- Long vs. short vowels are all fucked up: both genitives, the nominative plural and the vocative plural all got their lōōōng vōōōwels wrong. Doing this shit in Latin is as bad as confusing “bit” and “beet” in English.
- The vocative singular is “citre” /ki.tre/ [ˈkɪ.tɾɛ]. I can’t rule out that “citrus” popped up somewhere as a vocative, but at least the default form should be listed.
- The word is not masculine, but feminine, like most Latin tree names in -us.
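For the curious: the stress rule I mentioned is mechanical enough to fit in a few lines. Here’s a minimal sketch in Python, assuming the word is already split into syllables with long vowels marked by macrons; the syllabification is done by hand, and the function names and example words are mine, not taken from the screenshot:

```python
# A minimal sketch of the classical Latin stress rule (the "paenultima law").
# Syllabification is assumed to be done already; long vowels carry macrons.

LONG_VOWELS = set("āēīōūȳ")
DIPHTHONGS = ("ae", "au", "oe", "ei", "eu", "ui")

def is_heavy(syllable: str) -> bool:
    """Heavy = long vowel, diphthong, or closed (consonant-final) syllable."""
    if any(v in syllable for v in LONG_VOWELS):
        return True
    if any(d in syllable for d in DIPHTHONGS):
        return True
    return syllable[-1] not in "aeiouy"  # ends in a consonant -> closed

def stressed_index(syllables: list[str]) -> int:
    """Index of the stressed syllable."""
    if len(syllables) <= 2:
        return 0                   # mono- and disyllables: stress the first
    if is_heavy(syllables[-2]):
        return len(syllables) - 2  # heavy penult: stress it
    return len(syllables) - 3      # light penult: stress the antepenult

# Dative/ablative plural "citrīs", genitive plural "citrōrum", plus "amīcus":
for word in (["cit", "rīs"], ["cit", "rō", "rum"], ["a", "mī", "cus"]):
    print("-".join(word), "-> stressed syllable:", word[stressed_index(word)])
```

Note that `stressed_index` can never return the final syllable of a word with two or more syllables; that’s exactly why a table full of end-stressed datives and ablatives is a giveaway of wrong content.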
It’s worth noting that I don’t really expect most people reading my comment to know enough Latin to check whether what I said above is accurate. However, odds are you have some relatively uncommon knowledge, right? Perhaps about banking procedures, or Skibidi Toilet, or the depths of the semantics of six seven, or the influence of maize kernels on the sexual life of ants. Whatever it is, test this out: ask a bot for in-depth information about a topic you know by heart, then look for the errors. Then you’ll see the slop.
[whataboutism] But humans also output wrong info lol lmao.[/whataboutism] You could argue that’s “human slop”. But it doesn’t magically make the above less slop.
In other words. If you want to find AI slop, focus on the content.
But let’s talk about style. Something we’ve been observing is that large models are indeed predisposed towards certain stylistic features, so the author isn’t completely in the wrong when associating them with AI writing. However, they don’t work as red flags, because all of them are things you’ll find humans doing — the large models didn’t come up with them ex nihilo, they’re replicating textual patterns they were fed. If you try to detect AI based on a bunch of “style red flags”, you’re bound to get
- a lot of false positives — things written by humans that you assumed to be AI.
- a lot of false negatives — things written by AI that you assumed to be written by a human. (A toy demo of both failure modes follows this list.)
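Here’s that toy demo: a flag-counting detector in the spirit of the criticized text. The patterns, the threshold and the sample texts are all invented by me for illustration; nothing below comes from the actual prompt.

```python
import re

# Toy "style red flag" detector. Patterns and threshold are made up
# for illustration; the failure mode is the point, not the implementation.
RED_FLAGS = {
    "em dash": re.compile("—"),
    "corrective antithesis": re.compile(r"\bnot\b.{1,40}\bbut\b", re.IGNORECASE),
    "dramatic pivot": re.compile(r"here'?s the (?:catch|thing)", re.IGNORECASE),
    "soft hedging": re.compile(r"it'?s worth noting", re.IGNORECASE),
}

def looks_like_ai(text: str, threshold: int = 2) -> bool:
    """Count how many 'red flags' fire; call it AI past a threshold."""
    hits = [name for name, pattern in RED_FLAGS.items() if pattern.search(text)]
    return len(hits) >= threshold

# False positive: a human professional writer, em dashes and all.
human = ("Here's the catch — it's worth noting that good prose is "
         "not decoration, but communication.")

# False negative: confidently wrong output with zero stylistic tells.
# (Both claims about "citrus" below are false; see the list earlier.)
bot = "The Latin noun citrus is masculine and its vocative singular is citrus."

print(looks_like_ai(human))  # True  -> flagged as AI, written by a person
print(looks_like_ai(bot))    # False -> passes as human, yet it's slop
```

The “human” sample trips the detector because professional writers genuinely use all those devices; the “bot” sample sails through because none of the flags measure whether the content is true.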
Do you want an example of that? You’re reading one. So far I’ve made sure to include a lot of those red flags across this comment. You might not like it (I do use some of those things when writing, but I kind of forced it here), but it’s really hard to argue that I’m a bot, or that I wrote this piece with a language model, even if you focus on style alone. Why is that?
Because, see, humans have a natural tendency to guess what’s inside other people’s heads: their thought processes, their experiences, their personality, what they mean. It’s what they call “theory of mind” — you have a mind and expect others to have one too. And even unconsciously, you’ve been using this to analyse what I say, perhaps even giving me attributes (that may or may not be true — you don’t know).
But this natural tendency breaks once you handle AI output. It feels off, as if the “author” of the text is neither trying to say something nor rambling. What the author calls “dramatic pivot phrases” (actual name: discourse markers) will be there, but they’ll often point to nothing worth marking in the first place, since the bot isn’t “trying” to say anything.
So if you want to tell human from AI by style alone (you shouldn’t, see the first section), that’s what you should look for: “What’s the point of this utterance? Why is this element here? What is the author trying to do with it? Is this contextually justified? If this doesn’t make sense, could I picture the author simply rambling, or getting distracted?”. I’m aware it’ll leak back into semantics, but that’s unavoidable.
Here’s a further analysis of the list, already taking into account that humans do all of those things.
1. Em dashes: people who write professionally often use them. Including myself. Some also spam them, but that’s a stylistic issue (for most people, at least), not a sign of AI.
2. Corrective antithesis (not X, but Y): people use this device a lot because it helps to highlight contrasts. If you must use this to detect AI, look for pointless contrasts.
3. Dramatic pivot phrases (“here’s the catch”, “but here’s the thing”): this shit is not added for drama. It’s a bunch of discourse markers, helping with the flow of the text. Check if what follows deserves attention.
4. Soft hedging language (“Something we’ve observed”, “This is where X really shines”, “It’s worth noting that”): those phrases add meaning, unless you’re functionally illiterate (like the author, apparently). For example, “something we’ve observed” highlights that what follows is based on experience, not on a logical conclusion. “This is where X really shines” adds the author’s attitude towards the object. “It’s worth noting that” is a discourse marker; like #3, it helps with the flow of the text, redirecting the reader’s attention.
5. Staccato on repeat (short and repetitive sentences): see what I said under #1 (stylistic issue).
6. Cookie-cutter paragraphs (repetitive paragraph structure): stylistic issue.
7. Gift-wrapped endings (summary at the end): no, good writing does not have to “trust you to remember what you just read”, and a recap at the end is not the same as gullibly assuming the reader needs help parsing the text. Plus, the assumer is implying “bad text, thus AI text”?
8. Throat-clearing (“let’s explore”, “let’s unpack”, etc.): see #4.
9. Perfect punctuation: is the author bloody serious???
10. Copy-paste metaphors (repeating the focus of the metaphor multiple times): again, a style issue you’ll often find human beings doing.
11. Overexplaining the obvious: have you ever been on social media? If you don’t overexplain the obvious, some assumer is bound to assume you mean the opposite, and screech at you. Welcome to the internet, 2026, colourised.
12. Generic examples: empty words aren’t exactly something new, right? Check what any politician says. (Even in pre-ChatGPT times. Fuck, check what some Roman politicians said, and you’ll find plenty of instances of empty praise.)
And something must be said about the author babbling about “great writers”. It’s both assuming “bad writing → AI slop” and treating something that is ultimately subjective (good for whom?) as if it were objective (true/false).
> Obviously, one or two red flags on their own is not an admission of slop. But when you see five or more at the same time, you can be pretty sure AI has been here.
Or, alternatively, the author is vomiting certainty and re-eating their own vomit. Oopsie. (NB: before Grok, I’d have argued crude language is a sign you’re dealing with human beings. Now, not so much.)
> We combined all 12 red flags with a killer prompt that works very well. Upload it to ChatGPT, Claude or any other AI and it’ll /deslop every one of them.
I tested the prompt. It doesn’t work anywhere near as well as the author is bullshitting it to, at least not as long as you look for the actual signs you’re dealing with slop instead of assuming it from the presence or absence of “red flags”.


