DeepSeek launches AGI talent hunt and builds out its team
Chinese startup DeepSeek, which recently surprised the market by creating an AI model comparable to OpenAI’s developments, has launched a large-scale campaign to hire specialists in artificial general intelligence (AGI), demonstrating the company’s growing ambitions.
Over the weekend, the company posted listings for at least six AGI-related positions, from data specialists and deep learning researchers to a head of its legal department. Notably, the company is willing to pay interns $70 per day. Most positions are open in Beijing, with some in the startup's hometown of Hangzhou in eastern China.
AGI, which technology company executives, including OpenAI CEO Sam Altman and SoftBank Group Corp. founder Masayoshi Son, call a kind of industry “Holy Grail,” represents artificial intelligence capable of understanding, learning, and applying knowledge across various tasks. Unlike specialized models like ChatGPT, AGI should possess general cognitive abilities comparable to human ones, raising concerns about the possibility of AI surpassing human intelligence.
Among the posted vacancies is the position of legal department head, who will be responsible for creating an AGI risk management system and interacting with government bodies, regulators, and research institutes.
For the deep learning researcher position, the company gives preference to candidates with “good results in industry competitions.” Even for interns, priority is given to those who have published papers at AI conferences or participated in open-source projects.
Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.