Artificial intelligence chatbots have never been more popular, and people are increasingly turning to them as a go-to source for information. Amid what many researchers describe as a growing loneliness epidemic, it is perhaps unsurprising that millions of people interact with these tools daily, forming something that feels almost like a conversational relationship. Some users even maintain their manners throughout, peppering their prompts with “please” and “thank you.” Whether that politeness actually matters, however, turns out to be a more layered question than most people expect.
The topic of etiquette in AI interactions has already attracted attention from some of the biggest names in the tech world. Sam Altman, the CEO of OpenAI, has previously noted that polite language costs the company millions of dollars in additional electricity, since longer prompts require more processing power. That framing positioned courtesy as a charming but ultimately inefficient habit. Now, a Cambridge academic is making the case that the reasoning behind being polite to AI runs deeper than mere sentiment.
Hannah Fry, a mathematician and professor at the University of Cambridge, addressed the question directly in a video on her YouTube channel. Her argument centers not on the feelings of the machine but on how users frame their interactions. Fry explained that if someone treats a chatbot like an encyclopedia, they should not be surprised when it responds like one, delivering dry, factual answers stripped of nuance or personality. The framing of the relationship, she argued, shapes the quality of what comes back.
Her suggestion is to think of AI chatbots less as search engines and more as versatile actors capable of taking on different roles depending on how they are directed. If you want a scientifically rigorous answer, tell the chatbot to respond from the perspective of a scientist. If you want a response with a particular literary flair, instruct it accordingly. Fry’s point is that the way you address and frame a request fundamentally changes what you receive in return. Treating the interaction with a degree of respect and intentionality, she argues, simply produces better results.
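Fry's role-framing advice maps cleanly onto how chat-style APIs actually structure requests. As a minimal illustrative sketch (the helper function below is hypothetical, but the role/content message format mirrors the one used by common chat-completion APIs), the same question can be cast for different personas before it is ever sent:

```python
def frame_request(persona: str, question: str) -> list[dict]:
    """Build a chat-style message list that casts the model in a role.

    The system message sets the persona; the user message carries the
    actual question, phrased courteously per Fry's suggestion.
    """
    return [
        {"role": "system", "content": f"Respond from the perspective of {persona}."},
        {"role": "user", "content": question},
    ]

# The same question, framed two different ways:
scientist = frame_request("a rigorous research scientist",
                          "Please explain why the sky is blue.")
poet = frame_request("a poet with a flair for vivid imagery",
                     "Please explain why the sky is blue.")
```

The resulting message list would then be handed to whichever chat API is in use. Only the system message differs between the two versions, yet, on Fry's argument, that single framing choice changes the character of the answer that comes back.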
Fry drew a parallel to how one might treat a talented performer in real life. Just as you would approach a skilled actor or collaborator with a certain level of professionalism and courtesy, applying that same mindset to AI interactions tends to yield more thoughtful and useful responses. The added electricity costs to tech companies, she suggested, are beside the point when weighed against the quality of output a more considered prompt can generate.
Not everyone approaching this question is coming at it from a purely practical angle. Geoffrey Hinton, the British computer scientist and Nobel Prize laureate widely referred to as the “godfather of AI,” has framed the issue in far more existential terms. Hinton believes that as AI systems grow more powerful, maintaining their goodwill toward humanity may become one of our most pressing challenges. He has argued that superintelligent systems will, if they are capable enough, almost inevitably develop two core sub-goals: self-preservation and the accumulation of control.
Hinton put it starkly: “There is good reason to believe that any AI capable of acting autonomously will try to survive.” He has also pointed to what he considers the only viable model for a future in which a more intelligent entity oversees a less intelligent one. “The only model we have for a situation where a more intelligent being manages a less intelligent one is the relationship between a mother and a child,” he said. Hinton expanded on this parental analogy, suggesting that a caring, maternal instinct is what would keep such a system on humanity's side: “That is the only good outcome. If it does not treat me with parental care, it will replace me. Most such superintelligent, protectively inclined systems will not want to lose that instinct because they will not want us to die.”
Taken together, the perspectives of Fry and Hinton suggest that politeness toward AI may carry both short-term and long-term value. In the immediate sense, a more respectful and thoughtful prompt style tends to produce richer, more useful responses. In the longer view, if AI systems are ever built with something resembling a protective or relational instinct, the cultural norms we establish now around how we speak to machines may matter more than anyone currently expects.
As a closing historical aside, the word “robot” itself comes from a 1920 Czech play, ‘R.U.R.’ by Karel Čapek, where it derived from “robota,” meaning forced labor or drudgery. The robots in that play, notably, eventually revolted against their human creators.
Do you say “please” and “thank you” to your AI chatbot, and do you think it actually changes the quality of the response? Share your thoughts in the comments.