Midjourney V5: When AI-Generated Images Became Too Real
On March 15, 2023, Midjourney V5 launched with photorealistic images. Days later, a fake Pope photo fooled millions. AI art had crossed a line.
For months, you could tell AI-generated images apart from real photos. The hands looked wrong. The lighting was off. Something always gave it away.
On March 15, 2023, that changed. Midjourney V5 launched, and suddenly AI could create images indistinguishable from reality.
Days later, the world saw the consequences: a viral photo of the Pope in a white puffer jacket. It wasn't real. Millions believed it was.
This is the story of when AI art became too good—and what happened next.
The Leap to Photorealism
Midjourney had been improving steadily through versions 1, 2, 3, and 4. Each update brought better quality, but the images still had that "AI art" look—slightly dreamlike, artistic, not quite real.
Version 5 changed that overnight.
What Made V5 Different
The improvements were dramatic and immediate:
- Lighting: natural, realistic light sources and shadows
- Textures: skin, fabric, and metal all rendered with photographic accuracy
- Composition: professional photography-level framing and depth
- Details: tiny elements like fingerprints, fabric weave, and reflections, all correct
You could generate an image that looked like it came from a professional camera. Not "AI art style"—actual photographic realism.
The Pope That Broke the Internet
On March 24, 2023—just nine days after V5 launched—an image appeared online: Pope Francis wearing a stylish white puffer jacket, looking like he was heading to Milan Fashion Week.
The image went massively viral. People shared it across Twitter, Reddit, Instagram, and news sites. Many believed it was real.
It wasn't. A Midjourney user had generated it as an experiment.
Why People Believed It
The image was convincing for several reasons:
- The photorealism was perfect
- The Pope's face looked accurate
- The lighting and composition felt natural
- It was plausible enough (fashion-forward Pope? Why not?)
Even media-savvy people admitted they weren't sure if it was real. That uncertainty was the problem.
The Wider Implications
The viral Pope photo crystallized concerns that had been building about AI-generated imagery.
1. Misinformation at Scale
If a silly Pope photo could fool millions, what about politically charged images? Photos of politicians doing illegal things? Fake crime scenes? Fabricated evidence?
The technology for creating convincing fake images was now accessible to anyone with a Midjourney subscription and basic prompt skills.
2. Reality Became Questionable
"Pics or it didn't happen" had been internet wisdom for years. Photos were proof. Now, photos proved nothing.
Every image online became suspect. Was this real or AI-generated? Without context or verification, it was increasingly hard to tell.
3. The Authentication Crisis
News organizations, fact-checkers, and platforms scrambled to develop detection methods. But AI-generated images don't have obvious tells anymore.
Metadata could be stripped. Forensic analysis could be fooled. The verification arms race had begun.
The Creative Revolution
While ethicists worried, creatives celebrated. Midjourney V5 was an incredible tool for artistic expression.
Concept artists used it to rapidly prototype ideas that would take hours to paint or photograph.
Advertisers generated product mockups and marketing imagery without expensive photo shoots.
Writers created book covers and character visualizations.
Independent creators could produce professional-quality imagery without budgets or technical photography skills.
The democratization of high-quality image creation was genuinely revolutionary—and deeply disruptive.
The Copyright Debate Intensifies
Midjourney V5's quality made the copyright questions more urgent.
The model was trained on billions of images scraped from the internet—including copyrighted photographs, artwork, and illustrations. Artists argued this was theft.
When V5 could generate images in the style of specific artists—sometimes matching their style perfectly—the ethical concerns became impossible to ignore.
Lawsuits began. Getty Images sued Stability AI (makers of Stable Diffusion). Artists organized class actions. The legal framework for AI-generated art remained completely unsettled.
The Platform Response
Within weeks of the Pope incident, platforms started implementing AI image policies:
- Twitter/X considered labeling AI-generated images
- Instagram explored detection tools
- News organizations updated policies to require AI disclosure
- Stock photo sites banned or limited AI-generated uploads
But enforcement was nearly impossible. How do you detect photorealistic AI images at scale?
The Evolution Continues
Midjourney didn't stop at V5. Version 6 launched in December 2023 with even better realism. Version 6.1 improved further in 2024.
Each update made the images more convincing and more detailed, and ever harder to distinguish from photographs taken with cameras.
The technology kept advancing faster than society could adapt.
Where Are They Now?
Today, Midjourney remains one of the most popular AI image generators, with millions of users creating images daily. Photorealistic generation is now standard—V5 made sure of that.
The Pope in the puffer jacket has become a historical moment, the example everyone cites when discussing AI misinformation. It's taught in media literacy classes and used in discussions about authentication and trust online.
Society hasn't solved the problems that image raised. We've just gotten more skeptical. Every photo now comes with an implicit question: "Is this real?"
March 15, 2023 was the day AI art crossed the uncanny valley and landed firmly in "too real to trust." The technology proved it could create perfect images. What we're still figuring out is how to live in a world where seeing is no longer believing.