Media attribution and provenance have become pressing issues in recent years, especially on social media. Your feeds are inundated with AI-generated content: voices that sound just like public figures, videos of fabricated events that appear to come from reputable sources, and even entire personas created solely from pixels. While some of this content is amusing and seemingly harmless, some of it can have serious real-world consequences.
A crucial standard now in development could profoundly shape the future of digital content: C2PA. It is gaining momentum across the industry as a way to protect media attribution and provenance information.
The Challenges We Are Facing
We are losing the ability to trust the content that we view and hear online faster than anyone could have predicted.
Granted, people have been generating and circulating fake photographs since the 1800s. Advanced editing and manipulation tools have existed for more than three decades, and misinformation circulated on social media platforms long before the arrival of generative AI. What makes this moment different is simultaneous advances in three key areas:
- Generating synthetic media is far cheaper. Tools that were once expensive and tightly restricted are now widely accessible and can run on laptops and phones.
- The quality of synthetic output has advanced to the point where the average person struggles to distinguish authentic media from synthetic media.
- Distribution channels are now ubiquitous and instantaneous, allowing fake or manipulated media to reach millions of people before anyone can question its origins.
For example, in a well-known incident in Hong Kong in early 2024, a finance employee at a multinational firm joined a video conference call with the company’s CFO and several colleagues…or at least that’s who they appeared to be. The “CFO” instructed the employee to wire millions of dollars to a series of accounts. It was later discovered that every individual on that call, including the CFO, was synthetically generated. The convergence of social engineering and generative AI represents one of the most critical security risks the enterprise has ever faced, and it is expected to grow rapidly in both scale and sophistication.
As the public becomes more aware that deepfakes and fabricated content exist and are growing more convincing, malicious actors gain a new tool: they can claim that real evidence is fake. The very skepticism and defenses that people develop in response to synthetic media can be exploited.
Should We Invest in Detection Technology?
If AI can create falsified media, then surely AI can detect it. Investing in detection technology may seem like the obvious answer, but it is the wrong one. Here’s why:
First, generative models are improving faster than detection models can keep up. Every technique a detector learns for spotting synthetic media is fed into the next round of training data for the generators. It is an arms race in which offense holds structural advantages over defense, and the gap keeps widening.
Second, even world-class detection fails at scale. If a detector is 99% accurate but synthetic media circulates in the millions, the 1% that slips through is still an enormous absolute number, carrying considerable risk and substantial consequences for the public.
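As a rough back-of-the-envelope illustration (the volume and accuracy figures below are hypothetical, not measured):

```python
# Hypothetical illustration: even a highly accurate detector
# misses a large absolute number of items at scale.
daily_volume = 10_000_000  # synthetic items screened per day (assumed)
catch_rate = 0.99          # detector accuracy (assumed)

missed_per_day = daily_volume * (1 - catch_rate)
print(f"Items slipping through per day: {missed_per_day:,.0f}")  # -> 100,000
```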
Third, and most importantly: detection only works after the content has been distributed. By the time something is flagged as fabricated, the media piece has already been seen, shared, and absorbed into public consciousness.
No single solution, detection included, will suffice. Successfully combating the risks of falsified media will take a coordinated effort of detection, signing, tagging, validation, and (most importantly) education.
The Bigger Issue at Stake
Detection is a defensive posture, and no matter how good it gets, it cannot make people un-see or un-hear things. If anything, the better detection gets, the more credible a liar’s claim that detection failed becomes. The very awareness of the problem now serves as the basis for the lie. Consider the following:
Courts depend on evidence. Journalism relies on sources. Insurance runs on documentation. Elections depend on a shared record of what each candidate has actually said. In every one of these institutions, there is a baseline of evidence that everyone can agree exists, even when they disagree about its meaning.
But remove that baseline, and misinformation is compounded by a society in which any inconvenient fact is deniable and any uncomfortable truth can be dismissed by falsely claiming “That’s AI.” The consequence extends far beyond clickbait on social media: it is the slow erosion of the foundations that global institutions rely on to function.
More Value Beyond Truth
Proving what’s real, by contrast, is an offensive posture. Once an asset is created, you attach a record of who made it, where it came from, which tools were used, and how it has been altered since. That record travels with the file; anyone can verify it and see whether it has been tampered with.
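To make the mechanism concrete, here is a minimal sketch of the core idea: hash the asset, wrap the hash in a claim about its origin, and sign the claim. This is not the actual C2PA manifest format; the claim fields, key handling, and use of Python’s cryptography library are illustrative assumptions.

```python
# Minimal sketch: bind an origin claim to an asset's bytes with a signature.
# NOT the C2PA manifest format; fields and key handling are simplified.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in practice, a key tied to a CA-issued certificate
asset_bytes = b"...example image bytes..."  # stand-in for a real file's contents

claim = {
    "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    "creator": "Example Newsroom",  # hypothetical provenance fields
    "tool": "ExampleCam 1.0",
    "actions": ["captured"],
}
payload = json.dumps(claim, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Verification: with the signer's public key, anyone can confirm the claim
# is intact and that the asset still matches the recorded hash.
signing_key.public_key().verify(signature, payload)  # raises InvalidSignature if tampered
assert hashlib.sha256(asset_bytes).hexdigest() == claim["asset_sha256"]
print("Provenance record verified")
```

In C2PA itself, the analogous record is a signed manifest embedded in (or referenced by) the file, with the signer’s certificate chaining back to a trusted authority.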
Provenance has always been the foundation of authenticity. Courts use it when establishing a chain of custody for evidence. Art galleries and dealers use it to separate a real Picasso painting from a forgery. The value of a baseball with Mickey Mantle’s signature is not in the ball itself but in the paper trail that proves he actually signed it.
While provenance isn’t new, making it work for digital media at scale is. That is the technical challenge to be solved over the next several years. Provenance does not declare content to be true (a signed record from a legitimate news organization can still describe events incorrectly), but it offers something arguably more useful: reliable information about origin. Viewers remain free to absorb the content and judge its credibility for themselves. For digital trust, we must be able to attest to how media was created or modified.
The Arrival of C2PA
C2PA stands for the Coalition for Content Provenance and Authenticity.
Since launching, participation in C2PA has grown significantly beyond its founding members. Major media organizations, generative AI platforms, hardware manufacturers, and certificate authorities, including SSL, the first publicly trusted CA in the C2PA ecosystem, now all take part. SSL issues the certificates that signers use to bind verifiable provenance to the media they create or modify.
If you operate or build systems where trust matters, let’s connect. Our team can help you make informed decisions about how content authenticity solutions, such as SSL’s C2PA and CAWG certificates, fit into your work.
This article is abridged and adapted from SSL EVP of Technology Dustin Ward’s original blog post; you can read the full version here. It is the first entry in a series centered on C2PA.