
3 Telltale Signs Text was Written by ChatGPT

Lessons from the Chase · May 18, 2023
In Lessons from the Chase, we take you behind the scenes of building the world's first interactive experience platform. Members of the Goosechase flock let you in on business problems we're tackling, how we're thinking about them, and what solutions we put to the test.

After a few recent conversations about “ChatGPT stink” in written content, what started out as an annoying side effect has started to feel like one of the more interesting games of “whack-a-mole” we’ll be playing over the coming years. Was a piece of text written by a human? An AI? A combination of both?

My thesis is that with the cost of producing decent content going down drastically, we’ll soon see a premium placed on high quality, opinionated text that feels like it was clearly written & crafted by a human. And on the flip side, text that has the "ChatGPT stink" will be quickly dismissed and ignored, kind of like we ignore all the similar cold outreach emails that hit our inbox each week.

3 Telltale Signs of AI-generated Text

So what is that stink?

These days there are ChatGPT checkers, plug-ins, and all sorts of AI text detectors.

But let's consider some old-fashioned options - here are 3 easy signs I look for to detect ChatGPT usage in written text:

1. Inoffensive, overly neutral language - not quite academic, but not far off. The copy is just a bit TOO descriptive & vanilla, lacking that human roughness, personality & directness.

2. Tone & point-of-view changes - in my experience, this often happens when instructions from the prompt are copied or parroted a bit too directly, creating a feeling of inconsistency in the final piece. Almost as if the AI is trying to stitch two different voices into one, but not quite succeeding.

3. Too many reasons provided - when giving supporting evidence, you’ll often see two or more reasons included instead of a single strong one. Lists in particular seem heavily overrepresented, almost as if the model doesn’t trust itself to land on the single correct answer, so it hedges by including more.

There are definitely others, but I’ve found myself looking for these in particular when reading a body of text for the first time, and it’s proven to be quite effective.
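These tells are ultimately human judgment calls, but the first and third lend themselves to a caricature in code. Here's a toy sketch of a "stink score" - the hedging-phrase list, the 40% list-density threshold, and the scoring itself are all my own illustrative assumptions, not a real detector (tone shifts, the second tell, are too subtle for a heuristic like this):

```python
# Toy "ChatGPT stink" heuristic - illustrative only, not a real detector.
# Phrase list and thresholds are made-up assumptions for the sketch.
HEDGING_PHRASES = [
    "it is important to note",
    "in today's fast-paced world",
    "in conclusion,",
    "additionally,",
    "furthermore,",
]

def stink_score(text: str) -> int:
    """Return a rough 0-2 score: one point per tell detected."""
    lower = text.lower()
    score = 0

    # Tell 1: inoffensive, neutral filler phrasing
    if sum(phrase in lower for phrase in HEDGING_PHRASES) >= 2:
        score += 1

    # Tell 3: list-heavy structure (high share of bulleted/numbered lines)
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    listy = [line for line in lines
             if line[0] in "-*•" or line[:2].rstrip(".").isdigit()]
    if lines and len(listy) / len(lines) > 0.4:
        score += 1

    return score
```

A blurb stuffed with filler phrases and bullet points would score 2; a blunt one-liner with an actual opinion scores 0. The point isn't accuracy - it's that the defaults are consistent enough that even a crude heuristic catches something.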

That’s not to say these tells aren’t solvable - in fact, most of them can already be mitigated quite well with more detailed prompts. But the fact that these three indicators still ring true gives a pretty good insight into where we are right now with the default state of AI content.

And I’m not anti-AI at all - I actually expect it to have quite a significant impact going forward. But it’s feeling more and more clear to me that we’re still at the beginning of the disruption curve, at least for written content.


Andrew Cross

Co-Founder & CEO of Goosechase