
OpenAI releases major GPT-4o update: what has changed

OpenAI has released a significant update to its flagship GPT-4o model, substantially expanding its capabilities in image analysis, scientific data processing, and awareness of recent events.

A key improvement is the extension of the training data cutoff from November 2023 to June 2024. This allows the model to provide more relevant and accurate responses, especially on questions about cultural and social trends and recent scientific research. The updated knowledge base has also improved the efficiency of the model's search queries.

Substantial progress has been achieved in visual information analysis. GPT-4o demonstrates improved performance on the MMMU and MathVista benchmarks, reflecting the model's increased ability to interpret spatial relationships, analyze complex diagrams, understand graphs, and connect visual content with textual descriptions.

Developers paid special attention to enhancing the model's capabilities in STEM (Science, Technology, Engineering, and Mathematics). GPT-4o shows improved results in solving mathematical, scientific, and programming tasks, as confirmed by higher scores on the GPQA and MATH academic benchmarks. The model also posted gains on the comprehensive MMLU test, which evaluates language understanding, breadth of knowledge, and reasoning ability.

An unexpected addition is the increased use of emoji in responses. The model now uses emoji more readily, especially when users include them in the conversation. This change aims to make communication with artificial intelligence feel more natural and emotionally expressive.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.

Latest News

Dongfeng deploys 1.7m tall Walker S robots with 41 servos

Dongfeng Motor is joining forces with Ubtech Robotics to integrate the Walker S robots into its production lines. Standing 1.7 meters tall, these machines are set to transform traditional automobile assembly processes. Dongfeng Motor's general manager emphasizes that the artificial intelligence built into the robots will significantly improve the quality of component inspection and assembly.

MIT graduate student reduced painting restoration from 230 to 3.5 hours

MIT graduate student Alex Kachkin developed a method for painting restoration using artificial intelligence, cutting work time from many months to several hours. As a demonstration, he restored a work by an unknown 15th-century Dutch master that had been severely damaged over time.

AI prosthetic from Canada analyzes objects and decides how to grasp them

Artificial intelligence gives prosthetics independence! Scientists from Memorial University of Newfoundland created a revolutionary prosthetic arm that, in effect, "thinks" for itself. Unlike traditional models that require reading muscle signals through sensors, the new device operates completely autonomously.

DeepSeek packed LLM engine into 1200 lines of Python code

The DeepSeek team presented nano-vLLM, a lightweight and compact engine for running large language models that could change perceptions of code efficiency. Amazingly, all of the functionality fits into just 1,200 lines of Python code, true technological minimalism in the world of artificial intelligence. Traditional engines of this kind, for all their power, often suffer from overloaded codebases that make modifying them a real ordeal for developers. Nano-vLLM solves this problem by offering a simple yet powerful tool without unnecessary complexity. The code is open.
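To give a sense of how little code the core of text generation actually needs, here is a minimal sketch of a greedy decoding loop written with PyTorch and Hugging Face transformers. This is not nano-vLLM's code or API, and the model name is only a placeholder; production engines like nano-vLLM add batching, KV caching, and request scheduling on top of a loop like this.

```python
# Illustrative sketch only: a bare-bones greedy decoding loop,
# not nano-vLLM's actual implementation or interface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Explain what an LLM inference engine does in one sentence."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: repeatedly append the most likely next token.
# No KV cache here, so it is slow, but it shows the essential loop.
with torch.no_grad():
    for _ in range(64):
        logits = model(input_ids).logits            # (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1)   # most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```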

Tesla robotaxi failure: 11 traffic violations in first days from 20 cars

The dream of robotaxis faces harsh reality! Tesla launched public tests of autonomous taxis in Austin, but the results were far from the promised technological miracle: at least 11 serious traffic violations were recorded in the first days of testing, with only 20 vehicles available to a limited circle of bloggers. Philip Koopman, a professor at Carnegie Mellon University and an expert on autonomous technologies, doesn't hide his surprise: "This is awfully fast for so many videos of unstable driving to appear."