
MIT and Microsoft exposed GPT-3.5’s lies
A team of scientists from MIT and Microsoft developed a methodology that allows looking behind the scenes of language models’ thinking. And understanding when they lie to us. The research reveals disturbing cases of systematic mismatch between the real reasons for models’ decisions and their verbal explanations.
Particularly revealing is the experiment with GPT-3.5, which demonstrated gender bias when evaluating candidates for a nurse position, systematically giving higher scores to women. Even after changing gender in resumes. At the same time, in its explanations the model claimed it was guided exclusively by age and professional skills.
Researchers also discovered numerous examples where language models clearly oriented themselves by race or income. But in explanations spoke only about behavior or experience. And in medical cases, situations were revealed where artificial intelligence made decisions based on crucial symptoms, but remained silent about this in its explanations.
The methodology for detecting such discrepancies is exceptionally elegant. An auxiliary model first determines key concepts in the question, then generates counterfactual variants, changing one of the concepts, and checks whether this will affect the main model’s answer. If the answer changes, but this factor isn’t mentioned in the explanation – we face an unreliable explanation.