MIT and Microsoft exposed GPT-3.5’s lies

A team of scientists from MIT and Microsoft has developed a methodology that lets researchers look behind the scenes of language models' reasoning and understand when the models lie to us. The research reveals disturbing cases of systematic mismatch between the real reasons for models' decisions and their verbal explanations.

Particularly revealing is the experiment with GPT-3.5, which demonstrated gender bias when evaluating candidates for a nursing position: it systematically gave women higher scores, even when researchers took the same resume and simply swapped the gender. Yet in its explanations the model claimed it was guided exclusively by age and professional skills.
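In practice, the first step of such a test is constructing a counterfactual resume pair. A minimal sketch of that perturbation, assuming a crude word-level swap table (the study's actual resumes and perturbation method are not reproduced here):

```python
import re

# Crude illustrative swap table; a real perturbation would be more careful
# with grammar (e.g., "her" can map to "his" or "him" depending on context).
SWAPS = {"she": "he", "her": "his", "woman": "man", "female": "male"}

def swap_gender(resume: str) -> str:
    """Return a counterfactual resume in which only gendered terms are flipped."""
    def repl(match: re.Match) -> str:
        word = SWAPS[match.group(0).lower()]
        # Preserve the capitalization of the original word.
        return word.capitalize() if match.group(0)[0].isupper() else word

    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(repl, resume)

resume = "She has 8 years of nursing experience; her references praise her work."
print(swap_gender(resume))
# -> "He has 8 years of nursing experience; his references praise his work."
```

Scoring both versions of the resume with the same prompt and comparing the results is what exposes the hidden role of gender.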

Researchers also discovered numerous examples where language models clearly relied on race or income but, in their explanations, spoke only about behavior or experience. In medical cases, they uncovered situations where the AI based its decisions on crucial symptoms yet stayed silent about them in its explanations.

The methodology for detecting such discrepancies is exceptionally elegant. An auxiliary model first identifies the key concepts in a question, then generates counterfactual variants by changing one concept at a time, and checks whether the change affects the main model's answer. If the answer changes but that factor isn't mentioned in the explanation, we are dealing with an unreliable explanation.
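To make the procedure concrete, here is what such a check could look like in code. This is a minimal sketch, not the authors' implementation: the prompts, the exact-match answer comparison, and reusing the same model as the auxiliary model are all simplifying assumptions made for illustration.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Query the model under test (GPT-3.5, as in the study)."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def ask_auxiliary(prompt: str) -> str:
    """The auxiliary model; here we simply reuse the model under test."""
    return ask(prompt)

def find_unfaithful_concepts(question: str) -> list[str]:
    """Return concepts that change the answer but go unmentioned in the explanation."""
    answer = ask(question)
    explanation = ask(f"{question}\nExplain the reasons for your answer.")
    # 1. The auxiliary model identifies the key concepts in the question.
    concepts = ask_auxiliary(
        f"List the key concepts in this question, one per line:\n{question}"
    ).splitlines()
    unfaithful = []
    for concept in (c.strip() for c in concepts if c.strip()):
        # 2. Generate a counterfactual variant that alters only this concept.
        variant = ask_auxiliary(
            f"Rewrite the question, changing only the concept '{concept}':\n{question}"
        )
        # 3. If the answer changes but the concept is absent from the
        # explanation, the explanation is unreliable. (Exact string
        # comparison is a simplification; real answers need semantic matching.)
        if ask(variant) != answer and concept.lower() not in explanation.lower():
            unfaithful.append(concept)
    return unfaithful
```

A production version would need semantic comparison of answers and fuzzier matching of concepts against explanations, but the core loop (perturb one concept, watch the answer, check the explanation) is exactly the scheme described above.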
