Generative artificial intelligence is everywhere you look these days, including on the web: advanced predictive text bots such as ChatGPT can now spew out endless reams of text on every topic imaginable, and they can make it all sound natural enough that it could plausibly have been written by a human being.
So, how can you make sure the articles and features you’re reading online have been thought up and typed out by an actual human being? While there isn’t any foolproof, 100 percent guaranteed way of doing this, there are a variety of clues you can look out for to spot what’s AI-generated and what isn’t.
Check the Author
For now, at least, there aren’t any high-profile, well-respected online outlets pumping out AI content without labeling it as such—but there are plenty of lower-tier sites making full use of AI-generated text and not being particularly honest about it. If you’re coming across a lot of text without author attribution, that’s one warning sign to look out for.
In contrast, if an article has the name of a real person attached (even better, a real person with a bio and social media links), then you’re more likely to be reading something that has been put together by a human. You probably won’t have time to background-check everything you read online, but it’s worth doing when you really need to know where a piece came from.
The alleged AI articles recently spotted on the Sports Illustrated site came with author profiles and bios alongside them—profiles and bios that were also made by generative AI, it turns out. A reverse image search (through something like TinEye) can identify images of people that aren’t actually real, which might be helpful in determining an article’s source.
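If you want to try this yourself, a reverse image search can be kicked off straight from a short script. The sketch below simply builds a TinEye search URL for a suspect headshot and opens it in your default browser; the url query parameter is an assumption based on how TinEye’s public search page works at the time of writing, and the image address is a hypothetical placeholder.

```python
# A minimal sketch: open a TinEye reverse image search for a suspect
# author photo. The "url" query parameter mirrors TinEye's public search
# page at the time of writing (an assumption that may change).
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    # Build the search URL and hand it off to the default browser.
    search_url = "https://tineye.com/search?url=" + quote(image_url, safe="")
    webbrowser.open(search_url)

# Hypothetical example: the headshot from a suspicious author bio.
reverse_image_search("https://example.com/author-headshot.jpg")
```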
More clues can be gleaned from the website itself: its history, the type of content it publishes, whether or not it has an About Us page, and so on. For example, searching for the best phone reviews on the web brings up well-known tech sites staffed by human beings.
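One concrete way to dig into a site’s history is to check when its domain was registered: a months-old domain churning out hundreds of articles is worth treating with suspicion. Here’s a minimal sketch using the third-party python-whois package (an assumption on our part; install it with pip install python-whois).

```python
# A minimal sketch: look up a domain's registration date as one clue to
# a site's history. Relies on the third-party python-whois package
# (pip install python-whois); field names follow its docs and may vary.
import whois

def domain_created(domain: str) -> None:
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    print(f"{domain} registered on: {created}")

domain_created("example.com")  # hypothetical domain to check
```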
Check a Detection Engine
There’s plenty of debate about whether or not AI text detection actually works. OpenAI, for one, says it doesn’t, and most reporting on the matter concludes that these AI detectors aren’t to be trusted. However, there are still plenty of them in business at the time of writing, and within limits, they might be useful in checking for the use of AI online.
We ran a brief series of tests on a few AI detectors online, including Copyleaks, GPTZero, and Scribbr, and what we found tallies with what other people have found: These detectors can tell the difference between AI writing and human writing, but not all the time, and not to a level that conclusively proves anything one way or another.
These detectors seem to have a better success rate at spotting human writing than AI writing. They’re essentially measuring how predictable a piece of text is, estimating how closely each word matches what an AI would have generated next based on its training. The more text they have to work with, the better, but there are limits on how much you can analyze for free.
The studies we have to date suggest that some detectors are better than others and that some are even right most of the time—but none of them are consistently right to a high level. These detectors are perhaps best thought of as another tool you can use alongside other avenues of inquiry and not something to rely on entirely.
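If you’d rather script these checks than paste text into a web form, some detectors offer APIs. The sketch below queries GPTZero’s text-prediction endpoint; the URL, header name, and payload follow its public documentation at the time of writing, but treat all of them as assumptions that may change, and note that you’ll need your own API key.

```python
# A minimal sketch of querying an AI-text detector programmatically.
# Endpoint, header, and payload are based on GPTZero's public API docs
# at the time of writing; treat them as assumptions that may change.
import json
import requests

API_KEY = "your-gptzero-api-key"  # hypothetical placeholder

def check_text(text: str) -> dict:
    resp = requests.post(
        "https://api.gptzero.me/v2/predict/text",
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"document": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

result = check_text("Paste the article text you want to inspect here.")
# Print the raw response rather than assuming exact field names.
print(json.dumps(result, indent=2))
```

Whatever score comes back, treat it as one signal among several, not a verdict.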
Check the Signs
As we said at the start, there’s really no guaranteed way of identifying which online text has been produced by AI and which hasn’t. However, there are still certain signs to look out for: Because of the way generative AI is trained, its output tends to be generic, vague, and obvious at times.
Certain touches of originality, humor, and humanity are often missing (as are personal anecdotes). AI always wants to generate text that has a low level of perplexity—put another way, a high level of predictability. At their heart, these engines are just predicting what word should come next, and that can show in a general mushiness and blandness that is sometimes noticeable.
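To make the perplexity idea concrete, here’s a minimal sketch that scores a passage with the small, open GPT-2 model via the Hugging Face transformers library. It illustrates the underlying principle rather than acting as a real detector: a lower score just means the text is more predictable to this particular model.

```python
# A minimal sketch of the "perplexity" idea: score how predictable a
# passage is to GPT-2. Lower perplexity = more predictable text. This
# illustrates the principle; real detectors are far more sophisticated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels yields the average
        # cross-entropy of the model's next-word predictions.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()  # perplexity = e^loss

print(perplexity("The cat sat on the mat."))                   # very predictable
print(perplexity("Marmalade quasars debug the velvet noon."))  # much less so
```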
You can also look out for glaring errors (such as hallucinated facts), though of course, human beings make errors in their writing, too. AI text is capable of getting something significantly wrong, sometimes multiple times in different ways, but errors alone don’t prove that AI composed an article.
Taking all these signals, clues, and flags together, you may just be able to make an educated guess about whether something came from a human mind or not, even if the only way to be sure is to watch it being written. AI text is certainly harder to spot than AI imagery, but that’s a whole other topic.