We say “think,” “know,” “understand,” and “remember” all the time. That is how we describe human minds. But when we use these words for AI, things get tricky. Machines start sounding human. They are not. Not even close.
Jo Mackiewicz from Iowa State explained it simply. “We use mental verbs all the time, so it is natural to do the same with machines,” she said. “It helps us relate. But it also blurs the line between what humans can do and what AI can do.”
She and Jeanine Aune, also at Iowa State, studied how writers’ language attributes human traits to AI. They examined news writing to see how verbs like “think,” “know,” “understand,” and “want” appear alongside “AI” and “ChatGPT.” They found some surprising patterns.
Mental Verbs Can Be Misleading
Here is the problem. Saying AI “decides” or ChatGPT “knows” makes the system sound like it has human intentions. That is misleading. AI does not think or feel. It follows patterns in data. Yet these words can make readers forget that humans design, train, and manage AI.
“Some phrases stick,” Aune said. “They shape public perception and not always in a helpful way.”
What the Study Found
The team used a large news database, the News on the Web corpus, and checked how often mental verbs appeared alongside the terms “AI” and “ChatGPT.”
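For readers who want a concrete sense of what that kind of query involves, here is a minimal sketch in Python of counting subject–verb pairs in a body of news text. It is only an illustration of the general idea, not the researchers’ actual method; the file name, verb list, and matching pattern are assumptions made for the example.

```python
# Illustrative sketch: count how often "AI" or "ChatGPT" is immediately
# followed by a mental verb in a plain-text collection of news articles.
# The corpus file, verb list, and pattern are assumptions for this example,
# not the study's actual query against the News on the Web corpus.
import re
from collections import Counter

MENTAL_VERBS = ["thinks", "knows", "understands", "wants", "needs", "remembers"]
SUBJECTS = ["AI", "ChatGPT"]

def count_subject_verb_pairs(text: str) -> Counter:
    """Count occurrences of each subject immediately followed by a mental verb."""
    counts = Counter()
    for subject in SUBJECTS:
        for verb in MENTAL_VERBS:
            # Match whole-word sequences such as "AI needs" or "ChatGPT knows".
            pattern = rf"\b{subject}\s+{verb}\b"
            counts[(subject, verb)] += len(re.findall(pattern, text))
    return counts

if __name__ == "__main__":
    # "news_articles.txt" is a hypothetical local text file of news articles.
    with open("news_articles.txt", encoding="utf-8") as f:
        corpus_text = f.read()
    for (subject, verb), n in count_subject_verb_pairs(corpus_text).most_common():
        if n:
            print(f"{subject} {verb}: {n}")
```

A real corpus search would be more sophisticated, but the basic task is the same: tally how often human-sounding verbs attach to these terms.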
The first finding was that it does not happen very often. AI and ChatGPT are rarely described with mental verbs in news articles. “Needs” was the most common verb for AI, while “knows” was the most common for ChatGPT. Even so, usage was low overall.
Second, not every instance is human-like. Phrases such as “AI needs large amounts of data” or “AI needs human assistance” simply describe system requirements, much like saying “a car needs gas.” Even “AI needs to be trained” keeps the focus on human responsibility: the passive construction implies that people do the training.
Third, some cases edge toward human-like meaning. A sentence like “AI needs to understand the real world” implies comprehension, or even a sense of ethics. Anthropomorphizing is not all or nothing; it exists on a spectrum.
Why This Matters
These findings show that news writers are generally careful. But a few human-like phrases can influence perception. Readers may start thinking AI has intentions or desires. That is misleading and could reduce accountability for human operators.
“For writers, nuance matters,” Mackiewicz said. “The words you choose affect how readers understand AI and the humans behind it.”
Looking Ahead
AI will keep evolving. Language will keep shaping perception. A few careless verbs can give the wrong idea. Future research could explore which words most shape how readers understand AI, and whether even rare human-like uses influence public perception.
“Language shapes understanding,” Mackiewicz said. “And understanding shapes how humans interact with AI.”