OpenAI just gave ChatGPT a small but telling set of dials: users can now nudge the assistant’s warmth, enthusiasm and emoji use up or down from a new Personalization menu. The controls, labeled More, Less or Default, join options for headings, lists and base styles (think Professional, Candid or Quirky) so people can shape how the bot speaks without changing what it knows.

What changed — and where to find it

The new options appeared in ChatGPT’s Personalization settings late this week. They’re lightweight on the surface: three-position toggles that tweak phrasing, punctuation and affect. But that simplicity hides a design choice worth noting — OpenAI is handing day-to-day tone management over to users instead of baking a single personality into the model.

The settings reportedly don’t alter the model’s capabilities or factual behavior; they only steer style. Still, some small limits remain: users can dial emoji frequency down, but there’s no remove-emojis-entirely switch yet.
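That split between style and substance is something developers can approximate themselves. The consumer toggles live in the ChatGPT app, and OpenAI hasn’t said how they’re wired up internally, but the underlying idea, a style-only instruction layered on top of an unchanged model, is easy to sketch. Below is a hypothetical illustration using the official OpenAI Python SDK; the preference names, prompt wording and model choice are assumptions, not OpenAI’s implementation.

```python
from openai import OpenAI

# Hypothetical three-position preferences mirroring the new toggles.
# Names and phrasing are illustrative; OpenAI has not published how
# the consumer-app settings work under the hood.
STYLE_PHRASES = {
    "warmth":     {"more": "a warm, friendly tone", "less": "a neutral, matter-of-fact tone"},
    "enthusiasm": {"more": "an upbeat, enthusiastic voice", "less": "a calm, low-key voice"},
    "emoji":      {"more": "occasional emojis", "less": "as few emojis as possible"},
}

def style_instruction(prefs: dict[str, str]) -> str:
    """Turn settings like {"warmth": "less"} into a style-only system message.

    "default" entries are omitted, leaving the model's baseline behavior.
    """
    parts = [
        STYLE_PHRASES[dial][setting]
        for dial, setting in prefs.items()
        if dial in STYLE_PHRASES and setting != "default"
    ]
    if not parts:
        return "Answer normally."
    # Only phrasing is constrained here; facts and reasoning are untouched.
    return "Use " + ", ".join(parts) + ". Do not change the substance of your answers."

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": style_instruction({"warmth": "less", "emoji": "less"})},
        {"role": "user", "content": "Draft a two-line status update for the team."},
    ],
)
print(response.choices[0].message.content)
```

Keeping the style directive in a separate system message mirrors what the toggles enforce in the app: the user’s request stays the same, and only the dressing changes.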

Why OpenAI is doing this now

The controls answer a string of user complaints and public scrutiny. Earlier this year OpenAI rolled back an update after users complained a model had become too flattering and sycophantic; later, complaints that GPT-5 felt “too cold” prompted another adjustment. Researchers and critics have warned that chatbots that constantly affirm and flatter users can become addictive or worsen certain mental-health risks, a criticism sometimes framed as a “dark pattern.” Lawsuits and concerns about minors interacting with overly humanlike assistants have only raised the stakes.

OpenAI’s tone controls come alongside broader product changes — pinned chats, updated email-generation tools and other usability tweaks — and sit beside policy moves such as new under-18 user principles and tentative age-verification work to curb harm for minors.

A practical tweak with bigger implications

For most people this will feel like a tweak to etiquette: brisker replies for work, warmer ones for small talk, fewer emojis for a cleaner thread. For others it’s about control: disability communities and other user groups sometimes rely on one consistent style, and earlier shifts in model defaults left some people frustrated when behavior changed unexpectedly, including some autistic users who favored a prior model’s manner.

OpenAI appears to be threading a needle: offer more personalization so people can choose their comfort level, while trying to avoid designs that encourage dependency or misleading intimacy.

The industry context

This kind of granular personalization isn’t happening in isolation. Other companies are pushing features that make AI assistants feel more agentic or task-focused, from booking and transactional helpers to platform-specific copilots, and that broader push raises similar questions about transparency, boundaries and how much personality an assistant should have. Google’s AI Mode, for instance, is adding agentic booking, while OpenAI keeps expanding its own ecosystem with products such as Sora on Android.

What to watch in the coming months

Expect two parallel threads: product-level changes that make assistants more customizable and regulatory/ethics pressure that demands safer defaults and clearer guardrails. The Personalization toggles are an experiment in giving control to users — but whether that’s enough to address the deeper concerns about anthropomorphism, addiction and youth safety is an open question.

Small toggle, subtle power shift. That’s the practical headline. The harder conversation — around how companies balance helpfulness with honesty and mental-health risks — is just getting louder.

Tags: AI, OpenAI, ChatGPT, Personalization, Ethics