Recently, our team at Intellitect launched AIS Images, a curated library of AI-generated visuals. The platform gives designers easy access to high-quality, professionally styled images built for real-world use in presentations, campaigns, websites, and more. Read more HERE.
To promote the launch, we ran a fun challenge on social media:
We shared a set of images, told people that two were AI-generated and one was made by a human, and asked them to guess which was which.



It’s a simple idea, but it taps into a much bigger question:
Can you still tell the difference between what’s real and what’s AI?
What I Did
We’re told there are three images: two from AIS Images and one created by a person. So the goal isn’t to prove which are fake, but to find the odd one out. That reminds me of a trick I’ve used to spot fake accounts on social media.
It’s called Error Level Analysis (ELA). You can learn more about it at FotoForensics, but the short version is: ELA highlights how different parts of an image compress. Highlighted areas often indicate where an image may have been edited. AI-generated images will often be more uniform and show fewer highlighted areas.
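The core idea behind ELA is simple enough to sketch in a few lines. The snippet below is a rough illustration using the Pillow library, not FotoForensics' actual implementation, and the quality setting of 90 is an arbitrary choice: re-save the image as a JPEG at a known quality, then diff the result against the original. Regions that compress differently light up.

```python
# Rough sketch of Error Level Analysis (ELA) with Pillow -- an
# illustration of the idea, not FotoForensics' implementation.
import io
from PIL import Image, ImageChops

def ela(image: Image.Image, quality: int = 90) -> Image.Image:
    """Return the ELA difference image; brighter pixels = higher error level."""
    rgb = image.convert("RGB")
    buf = io.BytesIO()
    rgb.save(buf, "JPEG", quality=quality)  # re-save at a known quality
    buf.seek(0)
    resaved = Image.open(buf)
    # Pixels that survive recompression cleanly diff to near-black;
    # edited or inconsistent regions show brighter differences.
    return ImageChops.difference(rgb, resaved)
```

In practice the raw difference is faint, so tools usually brightness-scale it before display.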
I uploaded all three images to FotoForensics:
- Image 1: FotoForensics – Analysis 1
- Image 2: FotoForensics – Analysis 2
- Image 3: FotoForensics – Analysis 3
But even simpler: look at these three results side by side and try to pick the odd one out.

Even without advanced training, you can usually see patterns. One image had significantly more edge data than the others. That alone isn’t proof of anything, but it helped me make an educated guess. Since we’re just trying to find the non-AI image, not prove authorship in court, that was enough. Unsurprisingly, the numbers in the corners are also highlighted, suggesting that area of each image was edited for the original post.
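My "more edge data" call was an eyeball judgment, but you could crudely quantify it. The sketch below (again using Pillow; the mean-intensity score is just one possible heuristic I'm assuming here, and the file names are hypothetical placeholders) scores each image by how much its ELA output lights up, then flags the highest scorer as the odd one out:

```python
# Crude heuristic for comparing ELA outputs: the mean pixel intensity
# of the difference image. Higher scores suggest more regions that
# compress inconsistently. A rough signal, not proof of anything.
import io
from PIL import Image, ImageChops, ImageStat

def ela_score(image: Image.Image, quality: int = 90) -> float:
    """Mean intensity of the ELA difference image, averaged over RGB bands."""
    rgb = image.convert("RGB")
    buf = io.BytesIO()
    rgb.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    diff = ImageChops.difference(rgb, Image.open(buf))
    return sum(ImageStat.Stat(diff).mean) / 3.0

# Usage sketch (file names are hypothetical placeholders):
# scores = {name: ela_score(Image.open(name))
#           for name in ("image1.jpg", "image2.jpg", "image3.jpg")}
# odd_one_out = max(scores, key=scores.get)
```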
Some Context
Of course, there are easier tells in AI-generated images: distorted fingers, strange reflections, shadows that don’t make sense. But ELA adds another layer. It is even used in research, often combined with machine learning models, to identify synthetic media. In my case, I just used my [slower, more fallible] human brain.
If you want to go deeper, here are a couple of links worth checking out:
Final Thought
For a simple challenge, this turned into an interesting exercise. Tools like FotoForensics won’t give you a yes or no answer, but they can help you think more critically about what you’re seeing. That’s especially useful in a world where AI visuals are only getting more convincing.
And sometimes, that’s all you need.