
How Claude works: a leaked system prompt reveals details of Anthropic's AI

The artificial intelligence community is discussing an unexpected event: the publication of the system prompt for Anthropic's Claude model. This document, which defines the model's operating principles and behavior, appeared in the public domain and has drawn a strong reaction from experts and users.

The published prompt is striking in its scale: roughly 16,700 words and 24,000 tokens. For comparison, a similar document from OpenAI contains only about 2,200 words. This difference in volume points to very different approaches to configuring the companies' artificial intelligence systems.
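
As a rough sketch of how such word and token counts can be reproduced, the snippet below reads a locally saved copy of a prompt and counts tokens with the open-source tiktoken tokenizer. The file name is hypothetical, and tiktoken only approximates the count; Anthropic's own tokenizer may differ.

```python
# Approximate word and token counts for a locally saved prompt file.
# "claude_system_prompt.txt" is a hypothetical file name; tiktoken's
# cl100k_base encoding is used only as a rough stand-in tokenizer.
import tiktoken

with open("claude_system_prompt.txt", encoding="utf-8") as f:
    prompt = f.read()

words = len(prompt.split())
tokens = len(tiktoken.get_encoding("cl100k_base").encode(prompt))

print(f"~{words} words, ~{tokens} tokens")
```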

The document describes many aspects of Claude's functioning in detail, from formatting responses to specific procedures for solving problems; for example, it contains explicit instructions on how the model should count letters in words. A significant part of the prompt is devoted to interaction with external systems: server integration, search algorithms, and mechanisms for updating information past a certain date. This points to the complex architecture of modern artificial intelligence systems, which goes well beyond the language model itself. A link to the full prompt is in the description.
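
For readers unfamiliar with how such instructions reach the model, here is a minimal sketch of supplying a system prompt through the Anthropic Python SDK. The instruction text is purely illustrative and is not taken from the leaked document, and the model identifier is an assumption.

```python
# Minimal example of passing a system prompt to Claude via the Anthropic SDK.
# The system prompt text below is illustrative only, not the leaked document.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

system_prompt = (
    "You are Claude. Format answers in Markdown. "
    "When asked to count letters in a word, spell the word out letter by "
    "letter and number each letter before giving the total."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model identifier
    max_tokens=512,
    system=system_prompt,
    messages=[{"role": "user", "content": "How many letters are in 'strawberry'?"}],
)
print(response.content[0].text)
```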

Andrej Karpathy, who previously served as director of artificial intelligence at Tesla and was part of OpenAI's founding team, suggested treating the leak as a catalyst for discussing a fundamentally new approach to improving models. Instead of the traditional method of fine-tuning a neural network's weights, he put forward the idea of manually editing prompts, by analogy with how a person keeps notes to improve their skills. In his view, such an approach could help artificial intelligence systems adapt to context better and remember effective strategies for solving problems.
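
To make the idea concrete, here is a toy sketch, under our own assumptions rather than Karpathy's actual proposal, of "learning" by appending human-edited notes to the system prompt instead of updating weights. All names are hypothetical, and ask_model stands in for any chat-completion call.

```python
# Toy illustration of prompt editing as an alternative to fine-tuning:
# instead of changing model weights, a human-curated list of "notes"
# (strategies that worked) is appended to the system prompt on every call.
# ask_model is a placeholder for any chat-completion API.
from typing import Callable

BASE_PROMPT = "You are a helpful assistant."
notes: list[str] = []  # persistent, human-editable "memory"

def build_system_prompt() -> str:
    """Compose the system prompt from the fixed base plus accumulated notes."""
    if not notes:
        return BASE_PROMPT
    return (
        BASE_PROMPT
        + "\nLessons from earlier tasks:\n"
        + "\n".join(f"- {note}" for note in notes)
    )

def solve(task: str, ask_model: Callable[[str, str], str]) -> str:
    """Answer a task with the current prompt; a human may add a note afterwards."""
    return ask_model(build_system_prompt(), task)

# After reviewing an answer, a human records what should be done differently:
notes.append("When counting letters, spell the word out one letter per line first.")
```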

However, not all experts share this view. Critics point to potential problems: autonomously edited prompts can introduce confusion into the model's behavior, and without continued training the effect of such modifications may prove temporary and limited.

In the end, the leak of Claude's system prompt shows that modern artificial intelligence systems are governed not by abstract algorithms but by specific, detailed instructions written by humans, which makes their behavior more predictable, yet at the same time more constrained by the bounds of those instructions.

Author: AIvengo
For five years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire, and interest me.