
Judge fines lawyers $31,000 over fake AI-generated citations

California Judge Michael Wilner issued a harsh ruling against law firms that used artificial intelligence without proper oversight to prepare court filings containing fabricated judicial precedents and non-existent quotations.

The situation arose in a civil lawsuit against the insurance company State Farm. The plaintiff's representative used artificial intelligence to draft the outline of a supplemental brief. This document, containing unverified material, was passed to the law firm K&L Gates, which incorporated the generated content into the official brief filed with the court.

The problem came to light when Judge Wilner, intrigued by some of the citations in the submitted materials, decided to look up the referenced court decisions. To his surprise, he found that at least two of the cited sources simply did not exist.

In Judge Wilner's own words: "I read their brief, was persuaded, or at least intrigued, by the authoritative sources they cited, and decided to look into those decisions, only to discover that they don't exist. That's scary. It almost led to an even scarier result: the inclusion of those fake materials in a judicial order."

After being asked for clarification, K&L Gates submitted a corrected version of the brief, which, according to the judge, contained "considerably more made-up citations and quotations beyond the two initial errors." This prompted Wilner to issue an Order to Show Cause, in response to which the lawyers confirmed under oath that artificial intelligence had been used.

The lawyer who created the initial outline admitted to using Google Gemini as well as AI-powered legal research tools. In his ruling, Judge Wilner imposed a $31,000 fine on the law firms, emphasizing that "no reasonably competent attorney should out-source research and writing" to artificial intelligence without proper verification.

You know what's surprising? At neither firm did lawyers or staff check the citations or review the research before the brief was submitted to the court. Law firms with years of established document-verification procedures failed to catch even basic fabrications produced by artificial intelligence.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire and interest me.

Latest News

Nvidia introduced Cosmos model family for robotics

Nvidia introduced the Cosmos family of AI models, which could fundamentally change the approach to building robots and physical AI agents.

ChatGPT calls users "star seeds" from planet Lyra

It turns out ChatGPT can draw users into a world of scientifically unfounded, mystical theories.

AI music triggers stronger emotions than human music

Have you ever wondered why one melody gives you goosebumps while another leaves you indifferent? Scientists have discovered something interesting: music created by artificial intelligence triggers more intense emotional reactions in listeners than compositions written by humans.

GPT-5 was hacked in 24 hours

Two independent research firms, NeuralTrust and SPLX, discovered critical vulnerabilities in the new model's safety system just 24 hours after GPT-5's release. For comparison, Grok-4 was jailbroken in two days, which makes the GPT-5 case even more alarming.

Cloudflare blocked Perplexity for 6 million hidden requests per day

Cloudflare dealt a crushing blow to Perplexity AI, blocking the search startup's access to thousands of sites. The reason? Hidden crawling of web resources at an unprecedented scale, despite explicit prohibitions from site owners.