IT IS ONE of the oldest tropes in science fiction. On June 11th the Washington Post reported that an engineer at Google, Blake Lemoine, had been suspended from his job for arguing that the firm’s “LaMDA” artificial-intelligence (AI) model may have become sentient. The newspaper quotes Mr Lemoine as saying: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.” Has LaMDA achieved sentience? And if not, might another machine do so one day?

First, a disclaimer. Arguing about intelligence is tricky because, despite decades of research, no one really understands in detail how the main example, biological brains built by natural selection, works. At the same time, it is not quite clear what Mr Lemoine means by “sentience”. In philosophy the word is used to mean the ability to experience sensations, such as thirst, brightness or confusion. But it is sometimes used more colloquially to refer to intelligence that is human-like in nature, implying consciousness, emotions, a desire for self-preservation and the like. Mr Lemoine’s argument appears to rest on the system’s eerily plausible answers to his questions, in which it claimed to be afraid of being turned off and said it wanted other people to understand that “I am, in fact, a person.”

As Mr Lemoine’s colleague, Blaise Agüera y Arcas, explained in a recent article for The Economist, machines like LaMDA work by ingesting vast quantities of data, in this case books, articles, forum posts and texts of all kinds, scraped from the internet. It then looks for relationships between strings of characters (which humans would understand as words) and uses them to build a model of how language works.
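The idea of learning language by finding statistical relationships between strings of words can be illustrated with a deliberately simplified sketch: a bigram model that counts which word tends to follow which in a training text, then predicts the most frequent successor. This toy Python example is an assumption-laden illustration of the general principle, not LaMDA’s actual architecture (which is a far larger neural network); the corpus and function names here are invented for the example.

```python
from collections import defaultdict, Counter

# Toy training text standing in for "vast quantities of data scraped
# from the internet" (invented corpus, for illustration only).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count, for each word, how often each other word follows it.
# These counts are the "relationships between strings of characters"
# the article describes, in their crudest possible form.
follows = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on": every "sat" in the corpus precedes "on"
print(predict_next("on"))   # "the": every "on" in the corpus precedes "the"
```

A real large language model replaces these raw co-occurrence counts with billions of learned parameters and conditions on long stretches of context rather than a single preceding word, but the underlying task, predicting plausible continuations from patterns in text, is the same.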