Anthropic won the case: judge allowed AI training on books without authors’ consent

Anthropic has won its case against a group of book authors: US federal judge William Alsup issued a truly landmark ruling, recognizing the use of published books for training artificial intelligence models, without the authors' permission, as lawful. This is the first case in which a court has backed the technology companies' position that the fair use doctrine applies to training neural networks on copyrighted materials.

The decision is a serious blow to the authors, artists, and publishers who have filed dozens of lawsuits against OpenAI, Meta, Midjourney, Google, and other companies. Although the verdict doesn't guarantee that other judges will follow Alsup's example, it lays a legal foundation for future rulings in favor of technology companies.

Notably, the court still ordered a separate hearing on the issue of Anthropic's so-called "central library". The company downloaded 7 million copyrighted books from pirate sites to build this database, which is clearly illegal. The judge noted, and I quote: "That Anthropic later bought a copy of a book it had previously stolen does not absolve the company of liability for the theft, but it may affect the size of the fine."

Interestingly, the basis for these court proceedings is the interpretation of the fair use doctrine, a section of copyright law that hasn't been updated since 1976, long before the advent of the internet and the concept of generative artificial intelligence.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.
