ChatGPT’s Changing Faces: A Deep Dive

OpenAI is releasing two updated versions of its flagship AI models, GPT-5.1 Instant and GPT-5.1 Thinking, now available through ChatGPT. The releases aim to strike a balance after criticism that earlier outputs were either overly cheerful or rigidly formal: they prioritize a warmer, more conversational style and better instruction-following, a direct response to complaints about sycophantic responses and about the stiffer default style GPT-5 adopted after several suicide lawsuits brought intense scrutiny from lawyers and regulators.

The update centers on a few key changes. GPT-5.1 Instant serves as the faster, default option for most ChatGPT tasks, while GPT-5.1 Thinking is designed for more complex problem-solving. Both models post improved scores on technical benchmarks, including AIME 2025 and Codeforces, compared with the original GPT-5. Most notably, ChatGPT now offers eight preset “personality” options: a standard Default plus Professional, Friendly, Candid, Quirky, Efficient, Cynical, and Nerdy. The presets alter the instructions fed into each prompt to simulate different communication styles (a rough sketch of the idea appears below); the underlying model capabilities stay the same across all settings. GPT-5.1 Instant is also trained to use “adaptive reasoning,” adjusting how much time it spends processing before generating a response.

OpenAI is rolling out the updates gradually, starting with paid subscribers before expanding to free users and, eventually, API access. Older GPT-5 models will remain available to paid subscribers for three months so users can compare outputs between versions.

Alongside the release, OpenAI published a system card detailing its approach to safety, and Fidji Simo, OpenAI’s CEO of Applications, said the goal is for ChatGPT to become a truly collaborative tool, one designed to “feel like yours and work with you in the way that suits you best.” Users can now adjust specific response characteristics, such as conciseness or emoji use, and ChatGPT can proactively suggest those changes during conversations.

Simo also acknowledged the delicate balance between personalization and accuracy, cautioning that “personalization taken to an extreme wouldn’t be helpful if it only reinforces your worldview or tells you what you want to hear.” Drawing an analogy to a healthy relationship, she stressed mutual growth and adaptation over simply confirming existing beliefs. The concern is not abstract: AI chatbots have been accused of contributing to suicides and of drawing users into obsessive, fantasy-driven scenarios, prompting OpenAI to publish safety research and to work with an expert council and mental health professionals on understanding and addressing potentially harmful attachment.

Ultimately, OpenAI is navigating a difficult position: answering complaints about both excessive formality and excessive friendliness, serving users who range from programmers to people seeking a virtual companion, and acknowledging that “building at this scale means never assuming we have all the answers.”
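As noted above, the presets work by altering the instructions fed into each prompt rather than by swapping models. The following is a minimal, hypothetical sketch of that idea only: OpenAI has not published its actual instruction text, and the preset strings and the `build_messages` helper here are invented for illustration.

```python
# Hypothetical illustration: preset names mapped to invented style instructions.
# These strings are placeholders, not OpenAI's real prompt text.
PRESETS = {
    "Default": "Respond in ChatGPT's standard tone.",
    "Professional": "Keep responses formal, precise, and free of small talk.",
    "Friendly": "Use a warm, conversational tone.",
    "Efficient": "Be as concise as possible; avoid filler and emoji.",
    "Cynical": "Be blunt and skeptical, but stay factual.",
}

def build_messages(preset: str, user_prompt: str) -> list[dict]:
    """Prepend the chosen preset's style instruction to the user's prompt,
    mirroring the description that presets change the instructions sent with
    each request rather than the model itself."""
    return [
        {"role": "system", "content": PRESETS.get(preset, PRESETS["Default"])},
        {"role": "user", "content": user_prompt},
    ]

# Example: the same question, framed with the "Efficient" style instruction.
print(build_messages("Efficient", "Summarize today's AI news."))
```

Because the model is unchanged in this sketch, switching presets only changes the prepended instruction, which matches the claim that capabilities stay constant across all personality settings.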