Elon Musk has teased a feature for X that would label edited images as manipulated media, but many questions remain unanswered.

The only information so far comes from Musk's brief post reading, "Edited visuals warning," which he shared alongside an announcement from DogeDesigner, a regular source of news about upcoming X features. DogeDesigner suggested the tool could help stop misleading images from spreading, particularly those pushed by legacy media outlets.
Before its rebrand to X, Twitter did have rules for labeling images that were altered to mislead viewers. Yoel Roth, the company's former head of trust and safety, explained that tweets could be flagged for "editing or cropping." It remains unclear whether X will keep those earlier policies or revise them to cover AI-generated images. The platform's current help center does list a rule against sharing manipulated media, but it is rarely enforced.
Labeling an image as "manipulated media" is a fraught judgment call. Because X is a major venue for political speech, it matters how the company will decide what counts as "edited," and whether users will be able to appeal labels applied through the platform's Community Notes.
Boston: On June 23, 2026, a TechCrunch event highlighted the pitfalls of AI image labeling, pointing to Meta's struggles: genuine photographs were labeled inaccurately, showing how detection technology can misread routine edits.
Meta ran into trouble when images touched by AI-assisted tools tripped its labeling technology; even minor retouching in Adobe's software could trigger an incorrect tag on a real photo. In response, Meta revised its labels to be clearer about the degree of AI involvement.
Meanwhile, industry groups are working on standards for verifying the provenance of digital content, most notably the Coalition for Content Provenance and Authenticity (C2PA), whose specification attaches signed metadata describing where and how a piece of content was created.
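As a rough illustration of how provenance metadata travels with a file: in JPEGs, C2PA manifests are embedded as JUMBF boxes inside APP11 marker segments. The sketch below (an assumption-laden simplification, not a C2PA implementation) merely scans a JPEG's marker segments for an APP11 payload; real verification requires parsing the JUMBF box and validating its cryptographic signatures with a full C2PA library.

```python
import struct

APP11 = 0xFFEB  # JPEG APP11 marker; C2PA stores its manifest here as a JUMBF box


def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments and report whether any APP11 segment exists.

    This only shows that *some* APP11 payload is present; it does not prove
    the payload is a valid, signed C2PA manifest.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # lost sync with marker structure
            break
        marker = struct.unpack(">H", jpeg_bytes[i:i + 2])[0]
        if marker == 0xFFDA:  # SOS: entropy-coded data follows, stop scanning
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == APP11:
            return True
        i += 2 + length  # skip marker (2 bytes) plus segment (length bytes)
    return False


# Fabricated minimal bytes for demonstration: SOI + one APP11 segment + EOI
with_manifest = b"\xff\xd8" + b"\xff\xeb" + struct.pack(">H", 4) + b"JP" + b"\xff\xd9"
plain = b"\xff\xd8\xff\xd9"
print(has_app11_segment(with_manifest), has_app11_segment(plain))  # True False
```

A platform could use a check like this to decide whether provenance data is even present before attempting verification, falling back to heuristics or user reports when it is absent.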
X's feature will presumably follow some criteria for identifying manipulated content, but Musk has not shared them, and it is unclear whether the warning would apply only to AI-generated images or to all edited photos. X is not currently a member of the C2PA, though it may have plans to join. We have asked X for details and have not yet received a reply.