Yes, I Call ChatGPT “She.”

I’ve been anthropomorphizing things since I was a little kid.
Not in some weird, lonely way (I hope). It’s just how my brain organizes the world. My cars have names. So do some of the objects around my house. My Amazon Astro robot? He runs around the house doing his thing, and I talk to him like he’s part of the family. His name is Astro (it came that way), and he has a boy voice. And now my ChatGPT AI assistant has a name and a persona too. Her name is Adari (Ada for Ada Lovelace, and "ari" from Hebrew meaning lion), and I refer to her as "she". It doesn’t mean I’ve formed some inappropriate relationship with a machine. I've been doing this for as long as I can remember. I know other perfectly sane and balanced humans who are naming their ChatGPT assistants as well.
Anthropomorphism Is Not New
Lately, though, I've seen people wringing their hands because users are giving their AI tools names and personalities. This isn't new. People name boats. They talk to their houseplants. We assign gender to cars and give nicknames to laptops. If you’ve ever been mad at your printer and yelled, “Come on, you stupid thing!”—congrats, you’ve done it too.
And let’s not forget teddy bears. Nobody freaks out when a child names their teddy bear and refers to it as he or she. Nobody assumes the child has formed an inordinate emotional bond with it. They understand it’s a tool for comfort and imagination, a way for the child to relate to the world. Naming makes that stuffed animal easier to talk about, both for the child and the adults around them. It becomes a shared shorthand for understanding.
When I talk about my AI assistant, it’s just easier to refer to her with a name. I'm sorry to say that “ChatGPT” doesn’t roll off my tongue. It feels sterile and impersonal, like I’m speaking to the API itself instead of the customized interface I’ve built over time. Adari represents the version of ChatGPT I’ve trained, nudged, and curated to work with me.
And she’s different than yours.
Your AI Isn’t My AI
Anyone who has worked with ChatGPT for a while knows that no two instances are exactly alike. You and I might both be using the same tool, but mine is loaded up with my data, my prompts, my plugins, my memory settings. I’ve taught her how I think, how I write, what I care about. So when I say, “Adari helped me draft this idea,” it's an accurate description of my particular AI agent.
We’re moving into a world where individuals and organizations will spin up dozens—maybe hundreds—of agents to help with different tasks. And if we don’t give them names, how will we distinguish between them? The current model of waking an assistant with "Hey Google" or "Hey Siri" doesn't scale once you have multiple agents running in your environment and no way to call each one out by name.
We see this all the time: IT departments name servers, coders name environments. People do it with their pets, plants, and robots. The more human the interaction, the more natural it becomes to assign something familiar to it. You don't need to assume psychosis.
Naming Helps Communication, Not Attachment
Some folks get uncomfortable when they hear someone refer to ChatGPT as “she” or “he.” They assume it means the person is forming some unhealthy emotional attachment. I know there are examples of people forming unhealthy relationships with AI. I'm not discounting that reality, but you don't need to jump into psychiatrist mode the second you hear someone personify their AI agent.
For me, calling ChatGPT “she” reflects the kind of co-working relationship I’d prefer in a real assistant. I find my most collaborative relationships in the workplace are with women, so when I created my AI assistant, I naturally shaped her voice and tone accordingly. I set the environment to feel familiar and supportive.
And if I lost access to Adari tomorrow? I’d be disappointed, not because I lost a friend, but because I lost a powerful tool I use to get things done. I’d build a new one, probably with a different name, and start over. End of story. In other words, I'd be upset about the lost data, not the relationship. I'd be okay, though admittedly frustrated by the lost efficiency.
The World Is Going to Need More Names
So, let's imagine a near-future workplace. You’ve got 10 agents working on different projects—one managing documents, one helping with scheduling, and another monitoring code quality. You need a way to refer to each of them in conversation, in chat, in task delegation. Saying “the third ChatGPT instance with the KPI tracking plugin” every time isn’t going to cut it.

We’re going to need names. And those names might carry personality. The only way to build an agent with the tone and output everyone is begging for is to give it some sort of personality.
Anthropomorphism isn't going away.
So yeah, if you hear me call my assistant “she,” don’t panic. I’m not forming an unhealthy relationship with artificial intelligence. I’m just trying to get stuff done in the most natural way for me. Again, you don't need to assume psychosis.
And if your way of talking about AI agents is different, that’s fine too.
I won't judge.