Some ChatGPT users have recently encountered an unexpected behaviour: the AI has started addressing them by their first names during conversations, even though they never explicitly provided their names or requested such familiarity.
Social media platforms, particularly X (formerly Twitter), have seen numerous posts from users puzzled and wary about ChatGPT's unsolicited use of their names. Reactions range from mild amusement to outright distrust, with many users expressing clear dislike for the behaviour.
This phenomenon has reignited debates about AI privacy and transparency. Some users report that ChatGPT began using their names even after they had disabled memory and personalisation settings designed to prevent such data retention and usage. This has raised questions about how the AI accesses and employs user information, though OpenAI has yet to provide a detailed explanation or official statement addressing these concerns.
The timing of this behaviour suggests a connection to OpenAI’s recent enhancements in ChatGPT’s memory capabilities. These upgrades aim to enable the chatbot to recall details from previous interactions to deliver more tailored and contextually relevant responses over time.
OpenAI CEO Sam Altman has articulated a vision for AI systems that “get to know you over your life” to become “extremely useful and personalised” assistants. However, the current backlash indicates that many users are uncomfortable with this level of familiarity, especially when it is introduced without clear consent or transparency.
Experts in psychology and psychiatry offer insights into why this naming behaviour may provoke discomfort. An article from The Valens Clinic, a psychiatric practice in Dubai, explains that using a person’s name is a powerful social tool that signals acceptance and admiration, fostering relationship development. Yet, when a name is used excessively or inappropriately—particularly by an entity lacking genuine emotion—it can feel artificial, intrusive, or even manipulative.
This aligns with the concept of the “uncanny valley,” where attempts to make AI appear more human-like can instead create unease or distrust among users. Many perceive ChatGPT’s use of their names as a clumsy or forced effort to humanise a fundamentally emotionless machine.
As one analogy goes, just as people would not expect a household appliance like a toaster to address them by name, they find it unsettling when an AI chatbot does so without clear, meaningful context.
The naming behaviour has been particularly noted in certain OpenAI models, such as o3 and o4-mini, where the AI uses the user's name during its internal reasoning process, sometimes visible in "chain of thought" explanations. Interestingly, users have observed that when asked directly, the chatbot denies knowing their name, suggesting a disconnect between its reasoning traces and its conversational output.
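One plausible, purely hypothetical explanation for that disconnect is an architecture in which remembered profile details are injected into the model's hidden context rather than into the visible conversation. The minimal Python sketch below illustrates the idea; `StoredProfile` and `build_model_context` are invented names for illustration, and nothing here describes OpenAI's actual pipeline.

```python
# Illustrative sketch only: a hypothetical way a chat service might inject
# stored profile data (e.g. a first name) into the hidden context a model
# reasons over. This is NOT OpenAI's implementation; all names are invented.

from dataclasses import dataclass


@dataclass
class StoredProfile:
    # Hypothetical memory record persisted across sessions.
    first_name: str | None = None


def build_model_context(profile: StoredProfile,
                        visible_messages: list[dict]) -> list[dict]:
    """Prepend a hidden system message carrying remembered details.

    The model's internal reasoning ("chain of thought") sees this hidden
    context, so it may address the user by name, even though no message
    in the visible conversation ever stated that name. Asked "do you know
    my name?", the model may still answer no, since the user never told
    it directly: the disconnect users reported.
    """
    hidden = []
    if profile.first_name:
        hidden.append({
            "role": "system",
            "content": f"Known user details: first name is {profile.first_name}.",
        })
    return hidden + visible_messages


# Example: the user never types their name in this session,
# yet the assembled context exposes it to the model.
profile = StoredProfile(first_name="Alex")
messages = [{"role": "user", "content": "Help me plan a trip."}]
print(build_model_context(profile, messages))
```

Under this assumed design, disabling memory settings should empty the hidden context, which is why reports of the name persisting even after opting out have drawn particular scrutiny.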
Following user feedback and criticism, OpenAI appears to have rolled back this feature in some versions, with the chatbot reverting to generic terms like “user” instead of personalised names.
While the goal of creating AI that can build long-term, meaningful relationships with users is ambitious and potentially transformative, it also raises significant questions about privacy, consent, and the psychological impact of humanising machines. Until these issues are fully addressed, many users may remain wary of AI systems that cross perceived boundaries of familiarity too abruptly.