Fake Images Are Everywhere Now—Here’s How to Spot Them
How can you discern in this age of misinformation if the images you’re seeing are real or fabricated?
As artificial intelligence (AI) continues to develop, it’s getting harder and harder to tell the difference between real photographs and AI-generated images.
AI can create everything from stunning landscapes to lifelike portraits in a matter of moments, and at first glance these images may appear perfectly legitimate.
Fortunately, there are subtle clues that can help us tell the difference, including inconsistencies in texture, anomalies in human features, and garbled writing. By understanding these telltale signs, you can better navigate digital information and assess the authenticity of the images you encounter.
Hands and limbs are surprisingly complex structures, and AI often struggles to replicate them accurately. Look for extra, oddly shaped, or misaligned fingers; hands may also be positioned in an unusual way or have improper proportions.
In the above image, the children’s hands and feet look unnatural, with misshapen fingers and toes as well as misaligned sandal straps.
AI has difficulty rendering small details, so you might also see objects and elements being subtly merged together in unnatural ways. These imperfections occur because AI relies on pattern recognition, which can fail when handling intricate or nuanced details.
So if you’re uncertain if a crowd photo is legitimate, take a look at the details. For example, background faces are often blurred or have soft, poorly defined characteristics.
In this AI-generated image, the man in the water has a blurry face and fingerless hands, while the young man on shore has a transparent leg that appears to merge into the background.
I feel queasy every time someone uses an acronym as a subject. More so with this thing.
Regardless of the origin of manipulated pictures, I have noticed that people still have not given up their confirmation bias: when they see a picture they wish to be true or false, they automatically believe or disbelieve it. It's amazing how fast the rationalizations kick in.
I'm far from having been cured of this problem. But I am astonished every day that so many people seem to operate under the delusional thought that they are not manipulable but other people are.
There is really a quick and easy fix for this, if the people call for it. Everything that is Ai generated (a small "i" for intelligence) should have a watermark on it so we can tell the difference. I imagine that not having one is going to cause all sorts of problems with our interpretations of pictures; we're just not going to be able to tell whether some of them are real or not. Sometimes I can tell straight away, but in the pictures shown, those details are so subtle that they would still not help us determine that they are Ai generated. Why not have a watermark on them? Corps do that to stop copyright "infringement" (copying is a right), so it can be done on Ai-generated work as well. That is, if and only if, the people demand it.