
The New York Times allows employees to use AI

The New York Times has approved the use of artificial intelligence tools by its editorial and product teams, Semafor reports. The publication announced the launch of Echo, its in-house AI tool for creating content summaries, and presented employees with a list of approved AI products.

According to internal communications, editorial staff can use AI to suggest edits, formulate interview questions, and assist with research. Clear restrictions have been established: AI may not be used to write or substantially revise articles, or to input confidential source information.

Approved tools include GitHub Copilot for programming, Google’s Vertex AI for product development, NotebookLM, some Amazon AI products, and OpenAI’s API (excluding ChatGPT) through a business account. Going forward, the publication is considering using AI to create audio versions of articles and translations into other languages.

Notably, the decision to adopt AI tools was made against the backdrop of ongoing litigation between The New York Times and OpenAI and Microsoft. The publication accuses the two companies of copyright infringement for training generative AI models on its content.

Later on, AI tools may also be used to draft social media posts, SEO headlines, and program code. The decision reflects the growing trend of integrating AI into journalistic work while maintaining control over key editorial processes.

Author: AIvengo
For five years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire, and interest me.

Latest News

ChatGPT calls users "star seeds" from planet Lyra

It turns out ChatGPT can draw users into a world of scientifically unfounded, mystical theories.

AI music triggers stronger emotions than human music

Have you ever wondered why one melody gives you goosebumps while another leaves you indifferent? Scientists discovered something interesting. Music created by artificial intelligence triggers more intense emotional reactions in people than compositions written by humans.

GPT-5 was hacked in 24 hours

Two independent research companies, NeuralTrust and SPLX, discovered critical vulnerabilities in the new model's security just 24 hours after GPT-5's release. For comparison, Grok-4 took two days to crack, which makes the GPT-5 case even more alarming.

Cloudflare blocked Perplexity for 6 million hidden requests per day

Cloudflare dealt a crushing blow to Perplexity AI, blocking the search startup's access to thousands of sites. The reason? Hidden crawling of web resources at an unprecedented scale, despite explicit prohibitions from site owners.

Threats and $1 trillion don't improve neural network performance

You've surely seen those "secret tricks" for controlling neural networks: threats, promises of rewards, emotional manipulation. But do they actually work? Researchers from the University of Pennsylvania and the Wharton School conducted a large-scale experiment with five advanced models: Gemini 1.5 Flash, Gemini 2.0 Flash, GPT-4o, GPT-4o-mini, and o4-mini.