
Unencrypted memory and political settings in the new ChatGPT
Sam Altman has lifted the lid on GPT-6! The main revolution: memory. Imagine a chatbot that remembers all your preferences, habits, and quirks. This is no longer just a tool. It's a personal digital companion.
GPT-5 only just came out, and Altman is already talking about the next version. Most interesting of all, the gap between releases will be shorter than the one between GPT-4 and GPT-5.
People want memory, Altman said. They want product features that require understanding them as individuals. ChatGPT will remember who you are, your routines, your quirks, and adapt accordingly.
OpenAI is now working with psychologists, measuring users' emotional state and tracking their wellbeing over time. This won't just be technology. This will be mental health care through artificial intelligence.
Political neutrality is also a priority. Altman stated that the base model will be centrist, but users will be able to customize it to their own views. Want a super-progressive version? You'll get it. Prefer a conservative one? No problem.
But there's a problem. For now, memory isn't encrypted, so confidential information could be at risk. Altman admitted that encryption may well be added, but there are no timelines yet. According to him, medical and legal queries require special protection that doesn't exist today.
By Altman's own admission, models have already saturated the chat use case. They won't get much better at dialogue; they may even get worse. But in his view this isn't failure. It's evolution: artificial intelligence is moving from simple conversation to deep understanding and adaptation.
I get all of that. But unencrypted memory, work with psychologists, emotional-state tracking? Essentially, that's ready-made infrastructure for manipulation. And political customization will create filter bubbles worse than social media's. I'd be happy to be wrong.