Google’s AI recommended promoting an employee Brin hadn’t noticed


Imagine this: artificial intelligence analyzes a work chat and recommends promoting an employee you hadn’t even noticed. This is exactly the case Sergey Brin, Google’s co-founder, described in an interview. He said that inside Google there is a Slack-like system with built-in artificial intelligence, capable of processing the entire contents of a chat and answering complex questions about it.

Brin experimented with this technology, giving it tasks like “Summarize the discussion” or “Assign tasks to employees.” He copied the AI’s responses back into the chat, and his colleagues didn’t even realize they were talking to a machine. When Brin asked the system, “Who in this chat should be promoted?”, the AI chose a woman whom Brin himself had barely noticed and who wasn’t particularly active in the discussions.

Intrigued by this choice, Brin spoke with the employee’s manager, who agreed: “I think you’re right, she really works hard and has done a lot.” As a result, the employee was promoted. Such deep analysis, however, requires a large context window, and according to rumors a version of Gemini with practically unlimited context already exists inside Google.

Such technology opens up tempting prospects, for example loading an entire project’s codebase into a single context window and asking the AI to continuously improve it. Brin noted that for any unusual idea like this, Google has about five internal projects; the main question remains the same: how well do they work?
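To make the idea concrete, here is a minimal sketch of the same pattern using the public google-generativeai library rather than Google’s internal system: dump a long chat transcript (or a codebase) into one long-context prompt and ask a question over it. The model name, file name, and prompt wording are illustrative assumptions, not details from Brin’s account.

```python
# Sketch: feed a whole chat transcript into a long-context model and ask a
# question over it. Uses the public google-generativeai package as an analogue
# of the internal tool described in the article.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, supply your own key

# Hypothetical export of a team chat; could just as well be a codebase dump.
with open("team_chat_export.txt", encoding="utf-8") as f:
    transcript = f.read()

# A publicly available long-context Gemini model (illustrative choice).
model = genai.GenerativeModel("gemini-1.5-pro")

prompt = (
    "Here is a full team chat transcript:\n\n"
    f"{transcript}\n\n"
    "Summarize the discussion and suggest who has contributed the most."
)

response = model.generate_content(prompt)
print(response.text)
```

Whether this scales to “constantly improving” a whole codebase is exactly the open question the article ends on.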
