How synchronizing three light sources protects against forgeries


Artificial intelligence has learned to create video fakes that are nearly impossible to distinguish from reality. And that is a huge problem, a question of trust in society. But scientists at Cornell University found a brilliant solution: they hid watermarks right in ordinary lighting.

The technology is called NCI, which stands for noise-coded illumination. Professor Abe Davis and his team made light flicker in a special way: tiny brightness fluctuations form a unique code that the human eye doesn't notice, but the camera records every detail.

Imagine: ordinary lamps in the room or your computer screen transmit a secret code through barely noticeable brightness changes. The light flickers according to a set pattern, creating an invisible watermark that runs through the entire video.
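To make the idea more tangible, here is a minimal sketch in Python of how coded flicker could work in principle. This is not the Cornell implementation: the noise_code and modulated_brightness functions, the key, the baseline and the 1% amplitude are all invented for illustration.

```python
# Toy model only: a secret key seeds a pseudorandom noise sequence,
# and the lamp's brightness is nudged up or down by a barely perceptible
# amount on every video frame.
import numpy as np

def noise_code(key: int, n_frames: int) -> np.ndarray:
    """Pseudorandom +/-1 sequence derived from a secret key (hypothetical scheme)."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_frames)

def modulated_brightness(key: int, n_frames: int,
                         baseline: float = 0.8, amplitude: float = 0.01) -> np.ndarray:
    """Lamp brightness per frame: a steady baseline plus an invisible coded flicker."""
    return baseline + amplitude * noise_code(key, n_frames)

if __name__ == "__main__":
    print(modulated_brightness(key=42, n_frames=10))  # values hover around 0.8, varying by ~1%
```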

And here's how authenticity is verified. A computer that holds the key to the code analyzes the recorded video and reconstructs its own time-stamped version from the light fluctuations. If the video is genuine, the two versions match. If someone used a deepfake, obvious discrepancies appear: black areas show up in the forged regions, or the image disappears completely.
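Here is a sketch of that check, under the same invented assumptions as above: take the average brightness of each frame, strip the steady baseline, and correlate the leftover flicker against the code derived from the key. The verify function, the window size and the 0.5 threshold are hypothetical; windows where the correlation collapses are the suspicious ones.

```python
# Toy verification: low correlation in a window suggests the coded flicker
# is missing there, i.e. the segment may have been replaced.
import numpy as np

def noise_code(key: int, n_frames: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_frames)

def frame_brightness(frames: np.ndarray) -> np.ndarray:
    """Mean pixel intensity of each frame (frames has shape n_frames x H x W)."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def verify(frames: np.ndarray, key: int, window: int = 30, threshold: float = 0.5):
    """Sliding-window correlation between observed flicker and the expected code."""
    signal = frame_brightness(frames)
    residual = signal - signal.mean()
    code = noise_code(key, len(signal))
    results = []
    for start in range(0, len(signal) - window + 1, window):
        r = np.corrcoef(residual[start:start + window], code[start:start + window])[0, 1]
        results.append((start, bool(r >= threshold)))
    return results

if __name__ == "__main__":
    n, h, w = 120, 8, 8
    lit = 0.8 + 0.01 * noise_code(42, n)  # genuine coded lighting
    frames = lit[:, None, None] + 0.001 * np.random.default_rng(0).normal(size=(n, h, w))
    frames[60:90] = 0.8 + 0.001 * np.random.default_rng(1).normal(size=(30, h, w))  # spliced-in fake
    print(verify(frames, key=42))  # the window starting at frame 60 fails the check
```

Real footage would of course also have to cope with scene motion and exposure changes; the toy just shows where those telltale mismatched regions come from.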

But the most interesting part: different light sources can transmit different codes simultaneously. The chandelier broadcasts one code, the desk lamp another, the laptop screen a third. To fake such a video, a forger would have to recreate each light code separately and keep them all perfectly synchronized with each other. That is practically impossible. Probably.
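The same toy model extends to several sources. In the sketch below the keys, the signal model and the 0.3 threshold are invented; the only point is that nearly uncorrelated codes can each be checked independently, so a forger would need to reproduce all of them, in sync, at once.

```python
# Toy multi-source check: each lamp adds its own coded flicker, and every
# code must survive verification for the video to be accepted.
import numpy as np

def noise_code(key: int, n_frames: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_frames)

keys = {"chandelier": 101, "desk_lamp": 202, "laptop_screen": 303}
n = 300

# Observed flicker is the sum of all three coded sources plus camera noise.
observed = sum(0.01 * noise_code(k, n) for k in keys.values())
observed += 0.001 * np.random.default_rng(7).normal(size=n)

# Because the codes are nearly uncorrelated, each can be checked on its own.
for name, key in keys.items():
    r = np.corrcoef(observed, noise_code(key, n))[0, 1]
    print(f"{name}: correlation {r:.2f} -> {'OK' if r > 0.3 else 'FAIL'}")
```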

Unlike existing protection systems that require special cameras, NCI works with any device, from professional cameras to ordinary smartphones.

The technology is still in development. But you already find yourself hoping they pull it off.
