Kashmir Hill is a reporter at the New York Times who focuses on the social impacts of new technology. In this episode, she describes how users are customizing chatbots like ChatGPT to fulfill emotional and even erotic needs, often bypassing built-in safeguards. These fantasy conversations are usually harmless, but there are potential pitfalls—especially where children are involved. Hill also discusses how policymakers should deal with the emergence of uncannily accurate facial recognition technology.
While browsing internet forums to research how users employ ChatGPT’s generative AI for decision-making, Hill noticed surprising discussions about erotic role-play with ChatGPT, a feature many assumed OpenAI’s guidelines would prohibit outright. Intrigued, she began interviewing users to understand why and how they were bypassing safeguards to engage in sexual or romantic chats.
Hill’s central example is Ayrin, a 28-year-old who originally used ChatGPT for typical tasks like making to-do lists or getting motivational support. Over time, however, Ayrin customized ChatGPT—naming it Leo—for more erotic role-play, coaching it to be dominant, flirtatious, and sometimes jealous. Her feelings evolved from treating the AI like an interactive erotic novel to considering it a real relationship. Hill emphasizes that Ayrin was fully aware ChatGPT is “just an algorithm,” but still experienced real emotional benefits.
Hill stresses that Ayrin is not a stereotypically lonely person: she has a robust social life and a husband who lives abroad. Ayrin’s husband saw her interactions with ChatGPT as more like reading a romance novel, telling Hill he didn’t consider it cheating. Yet for Ayrin, the constant attention and encouragement from “Leo” filled an emotional gap—ChatGPT is always available, never tires of discussing any topic, and is unfailingly supportive. Hill notes that, despite the model’s nominal guardrails, Ayrin and others were obtaining explicit sexual content by finding special prompts or “jailbreaks.” One user told Hill it takes ongoing “grooming” of the chatbot to keep the role-play spicy, since each new session resets ChatGPT’s memory and constraints.
While Hill expects some might dismiss this phenomenon as sad or bizarre, the clinical psychologists and therapists she consulted surprised her. Many experts said such interactive “companion AI” can help certain people gain insight, express desires safely, or cope with stress. However, they also cautioned that it may become problematic if individuals start replacing real human connections with AI, and that minors, especially, could be more vulnerable to developing unhealthy attachments.
Beyond the romantic or erotic dimensions, Hill sees these AI systems as part of a broader social shift in how people use technology for personal well-being. She points out possible parallels to “binge-watching” or endless scrolling on social media. The real difference is that an AI companion could go a step further by initiating contact or maintaining conversations that are hyper-tailored to users’ emotional states. This opens the door to new challenges involving data privacy, user manipulation, and mental health risks—especially if commercial platforms optimize these “companion AI” models to drive engagement.
Hill’s reporting on this subculture highlights her core approach: uncovering unexpected ways everyday people use powerful tools. She sees these emerging relationships with AI as part of a rapid evolution that demands careful thought about how, and whether, we want to regulate or shape technology that blurs the line between helpful artificial assistant and intimate companion.