
How Claude works: a leak reveals details of Anthropic's AI
The artificial intelligence community is discussing an unexpected event: the publication of the system prompt for Anthropic's Claude model. This document, which defines the principles of the system's operation and behavior, appeared in the public domain and caused a wide response among experts and users.
The published prompt is striking in its scale: 16,700 words and 24,000 tokens. For comparison, the equivalent document from OpenAI contains only about 2,200 words. Such a difference in volume points to the companies' different approaches to configuring their artificial intelligence systems.
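To see why the word and token counts differ, here is a minimal sketch that measures both for a text file. It assumes the tiktoken package is installed, the file name is hypothetical, and an OpenAI tokenizer is used only as a rough stand-in, since Anthropic's own tokenizer is not public.

```python
# Minimal sketch: comparing word count and token count for a prompt file.
# Assumptions: `tiktoken` is installed, the file name is hypothetical, and
# cl100k_base is only a rough proxy for the tokenizer Claude actually uses.
import tiktoken

with open("claude_system_prompt.txt", encoding="utf-8") as f:  # hypothetical file
    text = f.read()

encoder = tiktoken.get_encoding("cl100k_base")

print(len(text.split()), "words")           # naive whitespace word count
print(len(encoder.encode(text)), "tokens")  # tokenizer-dependent token count
```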
The document describes many aspects of Claude's functioning in detail, from formatting responses to specific procedures for solving problems. For example, it contains explicit instructions on how the model should count letters in words. A significant part of the prompt is devoted to interaction with external systems: tool and server integration, search algorithms, and rules for handling information beyond a certain cutoff date. This points to the complex architecture of modern artificial intelligence systems, which goes well beyond a pure language model. A link to the full prompt is in the description.
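As an illustration of what such a letter-counting instruction amounts to, here is a minimal Python sketch of the deterministic procedure the prompt reportedly asks the model to imitate: spell the word out character by character and tally the matches. The function name and the example word are assumptions for illustration, not text from the leaked prompt.

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of `letter` in `word` by stepping through each character."""
    count = 0
    for ch in word:  # spell the word out one character at a time
        if ch.lower() == letter.lower():
            count += 1
    return count

print(count_letter("strawberry", "r"))  # 3
```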
Andrej Karpathy, who previously served as director of artificial intelligence at Tesla and was a founding member of OpenAI, suggested treating the leak as a catalyst for discussing a fundamentally new approach to improving models. Instead of the traditional method of fine-tuning a neural network's weights, he proposed manually editing prompts, by analogy with how a person works with notes to improve their skills. In his view, such an approach could help artificial intelligence systems adapt better to context and remember effective problem-solving strategies.
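To make the idea concrete, here is a minimal sketch of what "editing the prompt instead of updating the weights" could look like: a store of lessons that gets prepended to the system prompt on every request. All names and the base prompt text are hypothetical; this illustrates the concept and is not Karpathy's or Anthropic's implementation.

```python
# Hypothetical sketch: improving a model by editing its prompt rather than
# fine-tuning its weights. Lessons accumulate as plain-text "notes".
BASE_PROMPT = "You are a helpful assistant."  # placeholder base instructions


class PromptMemory:
    def __init__(self) -> None:
        self.lessons: list[str] = []

    def add_lesson(self, lesson: str) -> None:
        # Record a strategy that worked, instead of adjusting any weights.
        self.lessons.append(lesson)

    def build_system_prompt(self) -> str:
        if not self.lessons:
            return BASE_PROMPT
        notes = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"{BASE_PROMPT}\n\nLessons learned so far:\n{notes}"


memory = PromptMemory()
memory.add_lesson("When counting letters, spell the word out character by character.")
print(memory.build_system_prompt())
```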
However, not all experts agree with this perspective. Critics point to potential problems: manually edited, ever-growing prompts can introduce confusion into the model's work, and without continued training the effect of such modifications may prove temporary and limited.
In the end, the leak of Claude's system prompt demonstrates that modern artificial intelligence systems are governed not by abstract algorithms but by specific, detailed instructions written by humans. That makes their behavior more predictable, but also more constrained by the bounds of those instructions.