
How Claude works: leak reveals details of Anthropic's AI operation

The artificial intelligence community is discussing an unexpected event: the publication of the system prompt for Anthropic's Claude model. This document, which defines the principles of operation and behavior of the AI system, appeared in the public domain and caused a wide resonance among experts and users.

The published prompt impresses with its scale: 16,700 words and 24,000 tokens. For comparison, the equivalent document from OpenAI contains only about 2,200 words. Such a difference in volume points to the companies' different approaches to configuring their artificial intelligence systems.

The document describes many aspects of Claude's functioning in detail, from formatting responses to specific problem-solving algorithms. For example, it contains explicit instructions on how the model should count letters in words. A significant part of the prompt is devoted to interaction with external systems: server integration, search algorithms, and mechanisms for updating information after the training cutoff date. This points to the complex architecture of modern AI systems, which goes well beyond a pure language model. A link to the full prompt is in the description.
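The letter-counting instruction reportedly tells the model to work through a word character by character rather than guessing. A minimal sketch of that procedure (the function name and code are illustrative, not taken from the leaked prompt):

```python
def count_letters(word: str, target: str) -> int:
    """Count occurrences of a letter by spelling the word out
    one character at a time, as the prompt reportedly instructs."""
    count = 0
    for ch in word:          # step through each character explicitly
        if ch.lower() == target.lower():
            count += 1
    return count

print(count_letters("strawberry", "r"))  # 3
```

The point of such an instruction is to replace a single intuitive guess with an explicit, verifiable sequence of steps.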

Andrej Karpathy, who previously served as director of artificial intelligence at Tesla and was part of OpenAI's founding team, suggested treating the leak as a catalyst for discussing a fundamentally new approach to training models. Instead of the traditional method of fine-tuning a neural network's weights, he put forward the idea of manually editing prompts, by analogy with how a person works with notes to improve their skills. In his opinion, such an approach could help AI systems better adapt to context and remember effective problem-solving strategies.
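The idea can be sketched in a few lines: rather than a gradient update to the weights, "learning" becomes an edit to an instruction document the model reads on every request. The class and method names below are hypothetical, purely to illustrate the notes analogy:

```python
class PromptMemory:
    """Hypothetical sketch of prompt editing as a substitute for
    weight fine-tuning: lessons accumulate as editable text notes."""

    def __init__(self, base_prompt: str):
        self.base_prompt = base_prompt
        self.notes: list[str] = []   # strategies the system "remembers"

    def add_note(self, note: str) -> None:
        # A manual (or model-driven) edit of the instructions,
        # not a gradient step on the network's weights.
        self.notes.append(note)

    def render(self) -> str:
        # The prompt actually sent to the model on each request.
        if not self.notes:
            return self.base_prompt
        return (self.base_prompt + "\n\nLessons learned:\n"
                + "\n".join(f"- {n}" for n in self.notes))

memory = PromptMemory("You are a helpful assistant.")
memory.add_note("When counting letters, spell the word out character by character.")
print(memory.render())
```

The trade-off the critics raise maps directly onto this sketch: the notes persist only as long as they stay in the prompt, and an ever-growing list can start to interfere with itself.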

However, not all experts agree with this perspective. Critics point to potential problems: such prompt modifications can introduce confusion into the model's work, and without continued training their effect may prove temporary and limited.

Well, it turns out that the leak of Claude's system prompt demonstrates that modern AI systems are governed not by abstract algorithms but by specific, detailed instructions written by humans. This makes their behavior more predictable, but at the same time more constrained by the framework of those instructions.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.

Latest News

OpenAI fires safety experts and reduces tests to days

Alarming changes at OpenAI: the company is laying off engineers responsible for protection against leaks, data theft, and other critically important threats.

Open-source model RoboBrain 2.0 will become the foundation for humanoid robots

The AI model RoboBrain 2.0 can now combine environment perception and robot control in one compact system. Specialists already call it the foundation for the next generation of humanoid robots.

Tinder launched double dates: AI assembles teams of 4 people

The Tinder app has launched a double-date feature that allows users to team up with friends to find matches. You can now invite up to 3 friends and browse profiles of other such teams that have at least 1 match in individual preferences.

New benchmark showed AI failure in Olympic programming tasks

A new benchmark, LiveCodeBench Pro, has appeared for evaluating the programming capabilities of artificial intelligence (link in description). It includes the most difficult and recent tasks from popular competitions such as the International Olympiad in Informatics and the World Programming Championship. The tasks were annotated by winners and medalists of these competitions themselves.

Data up to 2022 became "pre-nuclear steel" for AI training

Artificial intelligence, intended to be the locomotive of technological progress, is beginning to slow its own development. According to The Register, generative models have filled the internet with so much synthetic content that it creates a real technological dead end.