AI shook up the photography world in 2023, from generated images fooling judges to win photography competitions to the proliferation of deepfakes with real-world ramifications, such as a fabricated image of an explosion outside the Pentagon (which never happened) that caused markets to briefly dip.

Deepfakes can be amusing (the Pope in a puffer jacket, anyone?) but they can also be used to spread misinformation, and as AI image generators become more powerful, it's becoming increasingly difficult to tell the real photos we see online from fake ones.

Now, according to a report by Nikkei Asia, leading camera brands Sony, Canon and Nikon are looking to address this issue by building tech that can verify the authenticity of images into new cameras. 

Last year, the Leica M11-P became the first camera with Content Credentials built in – a digital signature that authenticates the time, date and location an image was taken, and who it was taken by, as well as indicating if any edits have been made post-capture.
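For the technically curious, here's a minimal sketch in Python of the general principle: the camera signs a hash of the image data and its capture metadata with a private key held on the device, and anyone with the matching public key can later check that neither has changed. This is purely illustrative (the real Content Credentials format embeds a much richer, standardized C2PA manifest), and the use of the third-party cryptography package and an Ed25519 key here is an assumption for the sake of the example.

```python
# Conceptual sketch only - not the actual Content Credentials / C2PA format.
# Assumes the third-party 'cryptography' package (pip install cryptography).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In a real camera, the private key would live in tamper-resistant hardware.
camera_key = ed25519.Ed25519PrivateKey.generate()


def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Sign a digest of the image plus its capture metadata (time, place, author)."""
    digest = hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    return camera_key.sign(digest)


def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes,
                   public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the image and metadata are unchanged since capture."""
    digest = hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


photo = b"...raw image bytes..."
meta = {"time": "2024-01-02T10:00:00Z", "location": "hypothetical", "author": "Jane Doe"}
sig = sign_capture(photo, meta)

print(verify_capture(photo, meta, sig, camera_key.public_key()))              # True
print(verify_capture(photo + b"edited", meta, sig, camera_key.public_key()))  # False: any edit breaks the signature
```

The real system is more involved (certificates identify the camera, and edits made in supporting software append new signed entries to the record rather than simply breaking it), but the core promise is the same: tampering is detectable.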

The anti-AI Leica M11-P launched last year was the first camera with Content Credentials built in (Image credit: Future)

Not many people have an $8,000 / £7,000 Leica in their bag, though, and now Sony, Canon and Nikon are set to introduce their own authentication tech.

We don't yet know which new cameras will hit the shelves with Content Credentials-type tech built in – though we have rounded up the 12 most exciting cameras for 2024, which might give us an idea. At the Sony A9 III launch, Sony announced that it will update that camera, along with two other pro models, the A1 and A7S III, with C2PA support, though we don't know when or how it plans to do this. (The C2PA, or Coalition for Content Provenance and Authenticity, is a cross-industry coalition co-founded by Adobe that developed the Content Credentials standard and counts the likes of Nikon and Getty among its members.)

The Nikkei Asia report says future cameras will provide the information needed to authenticate an image via the Content Authenticity Initiative's (CAI) free, publicly available Verify tool.

Analysis: are in-camera digital signatures enough?

Increasing the number of cameras with authenticity tech that can mark images with the official Content Credentials stamp is a big step in the right direction, but is it enough?

Even with three of the biggest camera brands implementing pro-authenticity / anti-AI features, the early indications are that these will be reserved mainly for pro cameras, which are typically in the hands of journalists.

While large media organizations will be able to implement protocols to fact-check and authenticate the origin of images through the Content Credentials Verify tool for enabled cameras like the M11-P, images from the majority of cameras won't be verifiable – including those from the ubiquitous smartphone cameras made by the likes of Apple, Samsung and Google.

Not many people have a Leica M11-P to hand (Image credit: Future)

The bigger challenge, which these verification measures don't address, is websites and social media platforms that host misinformation, with fake images potentially being seen and shared by millions of people.

I'm encouraged that camera brands are set to introduce digital signatures in future cameras and potentially update selected existing cameras with this tech. But it looks like it'll be some time before these features are rolled out to cameras and phones on a wider scale, including devices used by those looking to spread misinformation with fake images.

It's also unclear if bad actors will find ways to sidestep digital signatures, whether for AI-generated images or for real images that have been edited. And what about in-camera multi-image sequences like composites? Hopefully these questions will be answered as camera brands put these authentication measures into practice.

For now, the long fight against deepfakes is just beginning. 
