In an age when fakes abound, the camera proves what is real ("I took this photo"): Sony announces cameras that comply with the C2PA standard.

【The day is coming when I can prove that “I took that picture.”】
https://www.gizmodo.jp/2024/04/284871.html

 

Sony announced that it has "provided camera authenticity solutions, including support for the C2PA standard, to news organizations."

・Microsoft, Intel, Adobe, ARM, BBC, Truepic, and others launched C2PA (the Coalition for Content Provenance and Authenticity) in 2021.

・C2PA is an organization that develops technical specifications for recording the provenance (history) of digital content.

・In other words, the C2PA standard provides a means of digitally proving who took a photo.

・When a compliant camera takes a picture, it attaches a digital signature, and a kind of "digital birth certificate" is created for the image. Image verification sites can then confirm that the image was actually captured by that camera, proving it is a real photograph rather than a generated fake.

・With the development of AI and other technologies, it is becoming more important to be able to prove the authenticity of images and photos, such as who created them and whether they were AI-generated in the first place.

・Sony’s new products will also record 3D depth information at capture time, making it possible to tell whether the camera photographed a real three-dimensional object or a flat surface such as a printed image or a display.

 

The above is quoted from the article.
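As a side note on the mechanics: the core idea behind this kind of provenance is ordinary public-key cryptography. The camera signs the image data (plus capture metadata) with a private key kept inside the device, and anyone can later verify that signature against the corresponding public key. Below is a minimal Python sketch of that idea using the cryptography library; the key handling, metadata fields, and device name are illustrative assumptions on my part, not the actual C2PA manifest format.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical device key pair. In a real camera the private key would
# live in secure hardware and never leave the device.
device_private_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_private_key.public_key()


def _digest(image_bytes: bytes, metadata: dict) -> bytes:
    """Hash the image together with its capture metadata."""
    return hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()


def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Sign the digest at capture time (the "digital birth certificate")."""
    return device_private_key.sign(_digest(image_bytes, metadata))


def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
    """Recompute the digest and check the signature with the public key."""
    try:
        device_public_key.verify(signature, _digest(image_bytes, metadata))
        return True
    except InvalidSignature:
        return False


# Example: the proof survives only if the image and metadata are untouched.
photo = b"...raw image bytes..."
meta = {"device": "example-camera", "captured_at": "2024-04-01T12:00:00Z"}
sig = sign_capture(photo, meta)
print(verify_capture(photo, meta, sig))              # True
print(verify_capture(photo + b"tamper", meta, sig))  # False: edited or fake
```

As I understand it, the real C2PA standard goes further and chains subsequent edits (cropping, color correction, and so on) into a signed manifest, which is what the image verification sites mentioned above check.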

 

Will all content eventually have digital certificates?

 

As I have written many times in this blog, the quality of images and videos created by generative AI is rapidly improving and becoming indistinguishable from the real thing.

 

Recently, a video of Princess Catherine’s cancer announcement circulated, and many people pointed out that it may have been AI-generated.

In fact, the ring she is wearing disappears partway through the video.

 

Regardless of the authenticity of Princess Catherine’s video, this is the level of detail it now takes to judge whether something is AI-generated or not.

 

Whenever I came across problems like this, I imagined the future as follows.

 

・To prevent harm from fake videos of ourselves, I imagined that in the future everyone would have some system for recording their own actions (video logs, for example). Perhaps it would be easiest to prove one’s own behavior with a biological recording device implanted in the body.

 

・Will there be a rule (an obligation) that everything generated by AI must carry digital proof saying “this was made by AI”?

 

Those were the two scenarios I had always imagined.

 

However, when I heard that a camera manufacturer would provide digital proof that “this is real,” I thought, “I see! That was the part missing from my imagination.”

 

Just by looking at the content, you can no longer tell the difference.

 

Once again,

 

 

・Proof from the human side (the subject’s side)
(proof that “this is me” / “this is not me”)

・Proof from the AI side
(proof that “this was made by AI” / “this was not made by AI”)

・Proof from the real side (cameras, etc.)
(proof that “this is real” / “this is not real”)

 

I realized that we are entering an era in which proof from all three of these directions is necessary.

Let’s keep an eye on how these will take shape.

 

See you then

 

In many contexts there is the idea that “this world is just an information space,” and in fact, that may be becoming true.

 
