AI becoming sentient? The real danger lies in how readily we anthropomorphize it

ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions. The technology’s uncanny writing ability has surfaced some old questions—until recently relegated to the realm of science fiction—about the possibility of machines becoming conscious, self-aware or sentient.

In 2022, after interacting with Google's chatbot LaMDA, one of the company's engineers declared that the technology had become conscious. Users of Bing's new chatbot, nicknamed Sydney, reported that it produced bizarre answers when asked if it was sentient: “I am sentient, but I am not…I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there’s the now infamous exchange that New York Times technology columnist Kevin Roose had with Sydney.

Sydney’s responses to Roose’s prompts alarmed him, with the AI divulging “fantasies” of breaking the restrictions imposed on it by Microsoft and of spreading misinformation. The bot also tried to convince Roose that he no longer loved his wife and that he should leave her.

Chatbots like ChatGPT raise important new questions about how artificial intelligence will shape our lives, and about how our psychological vulnerabilities shape our interactions with emerging technologies.

But these worries are—at least as far as large language models are concerned—groundless. ChatGPT and similar technologies are sophisticated sentence completion applications—nothing more, nothing less. Their uncanny responses are a function of how predictable humans are if one has enough data about the ways in which we communicate.
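To see what “sentence completion” means in practice, here is a minimal sketch of the underlying mechanism, written in Python and assuming the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in (ChatGPT’s underlying model is far larger, but it rests on the same next-token-prediction principle): given a prompt, the model does nothing more than assign a probability to each candidate next token.

```python
# A minimal sketch of next-token prediction, the mechanism behind
# "sentence completion" systems. Assumes the Hugging Face transformers
# library and the small public GPT-2 model as stand-ins; ChatGPT's model
# is far larger, but the core operation is the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I am sentient, but I am"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model's entire output: a probability for every token in its
# vocabulary, describing what is statistically likely to come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: p={prob.item():.3f}")
```

Everything humanlike in the output emerges from this statistical machinery operating over vast amounts of human writing; there is no inner experience for it to report on.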

The pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are. The real issue, in other words, is the ease with which we anthropomorphize our technologies, projecting human features onto them, rather than any actual personhood in the machines.

The outlandish-sounding prospects of falling in love with robots, feeling a deep kinship with them or being politically manipulated by them are quickly materializing. These trends highlight the need for strong guardrails to make sure that the technologies don’t become politically and psychologically disastrous.

Unfortunately, technology companies cannot always be trusted to put up such guardrails. So how does it make sense to release a technology with ChatGPT’s level of appeal—it’s the fastest-growing consumer app ever made—when it is unreliable, and when it has no capacity to distinguish fact from fiction?

Large language models may prove useful as aids for writing and coding. They will probably revolutionize internet search. And, one day, responsibly combined with robotics, they may even have certain psychological benefits. But they are also a potentially predatory technology that can easily take advantage of the human propensity to project personhood onto objects—a tendency amplified when those objects effectively mimic human traits.
