AI content is becoming so ubiquitous that it may soon be more practical to fingerprint real media than fake media.

It’s no secret that AI-generated content will take over our social media feeds in 2025. Today, Instagram head Adam Mosseri made it clear that he expects AI content to overtake non-AI images, and that this shift has significant implications for the platform’s creators and photographers.

Mosseri shared his thoughts in a lengthy post about the broader trends he hopes will shape Instagram in 2026. And he offered a particularly frank assessment of how AI is shaking up the platform. “Everything that made creators important — the ability to be real, to connect, to have a voice that couldn’t be faked — is now available to anyone with the right tools,” he wrote. “Feeds are starting to fill up with synthetic everything.”

But Mosseri doesn’t seem particularly concerned about the change. He says there’s “a lot of amazing AI content” and the platform may need to rethink its approach to labeling these images by “fingerprinting real media, not just looking for fake ones.”

From Mosseri (emphasis mine):

Social media platforms will come under increasing pressure to identify and label AI-generated content as such. All major platforms will do a reasonable job identifying AI content, but their ability will deteriorate over time as AI improves at mimicking reality. There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media. Camera manufacturers could cryptographically sign images at capture, creating a chain of custody.
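Mosseri’s proposal resembles existing provenance efforts such as C2PA, where a signing key embedded in the camera vouches for the image bytes at the moment of capture, and any later modification invalidates the signature. Here is a minimal sketch of the idea; real systems use asymmetric signatures and hardware-protected keys, so the symmetric HMAC and the `DEVICE_KEY` below are illustrative stand-ins, not an actual camera protocol:

```python
import hashlib
import hmac

# Stand-in for a private key held in the camera's secure hardware.
DEVICE_KEY = b"demo-device-secret"

def sign_capture(image_bytes: bytes) -> dict:
    """Produce a provenance manifest for an image at capture time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image still matches what the device signed."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(
        expected, manifest["signature"]
    )

photo = b"\x89PNG...raw sensor data"        # placeholder image bytes
manifest = sign_capture(photo)
print(verify(photo, manifest))              # True: untouched capture
print(verify(photo + b"edit", manifest))    # False: bytes changed after signing
```

A “chain of custody” extends this one step further: each editing tool would append a new signed manifest referencing the previous hash, so a verifier can trace every transformation back to the original capture.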

On one level, it’s easy to see why this looks like the more practical approach for Meta. As we have previously reported, technologies intended to identify AI content, such as watermarks, have proven unreliable at best. They’re easy to strip out and even easier to ignore entirely. Meta’s own labels are far from clear, and the company, which has spent tens of billions of dollars on AI this year alone, has admitted that it cannot reliably detect AI-generated or manipulated content on its platform.

The fact that Mosseri so readily admits defeat on this issue is telling. AI won. And when it comes to helping Instagram’s 3 billion users understand what is real, that should largely be someone else’s problem, not Meta’s. Camera makers, meaning phone makers as well as traditional camera manufacturers, should come up with their own system, one that looks a lot like a watermark, to “verify authenticity at capture.” Mosseri offers few details on how this would work or how it could be implemented at the scale required to make it feasible.

Mosseri also doesn’t really address the fact that this risks alienating the many photographers and other Instagram creators who are already frustrated with the app. The executive regularly fields complaints from that group, who want to know why Instagram’s algorithm doesn’t reliably show their posts to their followers.

But Mosseri suggests these complaints come from an outdated view of what Instagram is. The feed of “polished” square images, he says, “is dead.” He says camera makers are “betting on bad aesthetics” by trying to “make everyone look like a professional photographer from the past.” Instead, he says more “raw” and “unflattering” images will let creators prove they’re real, not AI. In a world where Instagram serves up more AI content, the implication is that creators should prioritize images and videos that intentionally make them look worse.
