Large language models are here to stay. We can’t go back and choose not to open Pandora’s box; we just have to deal with it. Minimally, I think that means an etiquette around the use of LLMs needs to be developed. I am going to sketch out some initial ideas here about what that might look like.

Never Forget: Dear Sydney

About a year ago, Google released an impressively tone-deaf ad for Gemini, in which a father uses Gemini to write a letter “from” his daughter to her Olympic athlete hero. It’s like something you’d see on Black Mirror. The narrator is throwing away an opportunity to bond with his daughter by using Google’s product.

Let this really sink in.

I’m not even talking about the secondary harm of the narrator letting his brain atrophy by offloading simple thinking tasks to Google’s servers. For the purposes of this first point, I’m not interested in what might be a scribe-versus-printing-press type of problem. What I’m talking about is offloading our humanity.

Who is that letter actually from? It isn’t the daughter. She didn’t write it; maybe she signed it. If Sydney McLaughlin-Levrone received that fictional letter, what would she be reading? She’d be reading the cold-silicon professional output of a statistical sentence-continuation machine, not a heartfelt hello from a fan. This is the big problem: Dear Sydney tells us, “All that matters is the form.” All that matters is that the resulting letter is grammatically perfect, has the right words, the right formatting… but none of that is what matters. When you receive scribbly children’s art as a gift, do you think, “how awful, can’t even draw a dog right”, or “aww, how precious”?
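If “statistical sentence-continuation machine” sounds like hyperbole, here is a toy sketch of the idea in Python. It is a bigram model over a made-up three-sentence corpus, not a transformer, and every name in it is invented for illustration; but the principle, continuing text by sampling from statistics over previously seen text, is the same one an LLM scales up.

    import random
    from collections import defaultdict

    # A made-up miniature "training corpus", for illustration only.
    corpus = ("you are my hero . you inspire me . "
              "i hope to run like you one day .").split()

    # Record which words have been seen following which word.
    following = defaultdict(list)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word].append(nxt)

    def continue_sentence(start, length=8):
        # Continue the text by repeatedly sampling a statistically
        # likely next word, given only the current word.
        words = [start]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(continue_sentence("you"))
    # e.g. "you are my hero . you inspire me"

Scale those statistics up by a trillion and the continuations become eerily fluent, but the mechanism never acquires a fan’s feelings to express.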

The Dear Sydney problem applies to more than just fan letters. Do not use an LLM for birthday cards. Do not use it for dating. Do not use it for texting your mom, your husband, or your friends. Do not offload your humanity.

You Did Not Draw That Picture

A similarly appalling new practice is when someone asks ChatGPT or Photoshop to generate a picture and then takes credit, explicitly or not, for the creation of that picture. And then we applaud.

Once again, I’d like to disambiguate the specific point I’m making: I am not talking about how artists are getting ripped off, or about what counts as fair use.

Let’s imagine that some engineers create a robot that can do graceful gymnastics. That would be a marvelous feat of technology, worthy of praise. Now imagine that it gets packaged up and sold as a consumer product, about as affordable as one of those toy robot dogs. Suppose that for your <special occasion>, someone decides to treat you to a gymnastics show with that robot. Their gift to you is not the robot itself, but the act of pressing the green button on the remote. Yaaay.

These new tools do produce high-grade output, but no one should be impressed, because using them takes little skill and little effort.

Stop Calling LLMs “AI”

Large language models are not artificial intelligence. “AI” is not inherently a marketing term, but it is one when it refers to LLMs. The counterargument to this goes something like, “pocket calculators are also AI.” I’m not arguing that a pocket calculator can’t do math better than I can; I’m saying that we’re diluting the term “AI” by using it this way. Hence we now use “AGI” to mean what “AI” used to mean.

The real benefit of calling a spade a spade, though, is that it forces us to stop anthropomorphizing. This brings me to my final point.

The LLM is Not Your Therapist / Girlfriend / Pastor

r/MyBoyfriendIsAI is a Reddit sub, currently with 26k members, for people who are “dating” an “AI”. It is not a joke.

Simulacra cannot replace critical relationships with sentient beings, and we need to be careful about this. You don’t have to be “dating” an LLM to make this mistake. Simply asking one for relationship advice or spiritual guidance should be taught and recognized as a maladaptation.

I do need to disambiguate one last time. If AI ever gets to the point depicted in the movie Her, I will be forced to reconsider what a relationship is. But LLMs are not that.