Has artificial intelligence (AI) come alive like in sci-fi movies? This Google engineer thinks so

Leonard Sengere

If you have ever interacted with a chatbot you know we’re still years away from those things convincing you that you are chatting with a real human. That’s no surprise, as many chatbots do not actually use machine learning to converse more naturally. Instead, they only complete scripted actions based on keywords.
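To see just how unsophisticated that approach is, here is a minimal sketch (illustrative only, not any real product’s code) of a scripted, keyword-based chatbot in Python:

```python
# A scripted, keyword-based chatbot -- no machine learning involved,
# just canned replies triggered by keywords in the user's message.
RULES = {
    "balance": "Your balance is $0.00.",
    "hours": "We are open 8am to 5pm, Monday to Friday.",
    "agent": "Connecting you to a human agent...",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand. Try asking about 'balance', 'hours' or 'agent'."

print(reply("What are your opening hours?"))  # matches 'hours' -> canned reply
print(reply("Tell me a joke"))                # no keyword matches -> fallback
```

Anything outside its keyword list gets the fallback line, which is why these bots never feel remotely human.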

A good chatbot that truly utilises machine learning can fool you into thinking that you’re talking to a human. In fact, even ELIZA, a simple pattern-matching program from the mid-1960s, fooled some people into thinking it was human. Such programs have only gotten better as the years rolled by and all of us can be fooled.

That’s all well and good but even when that happens, the program or artificial intelligence is only acting as programmed and has no thoughts or ideas of its own. 

Thanks to machine learning, though, these modern AIs can act or react in ways that surprise the authors of the code. The original code teaches the machine how to learn, and it can gain new skills or, using the chatbot example, say things the programmers never would have thought to teach it.

That’s impressive. These AIs can appear to have a personality and the effect is amplified because they grow and evolve in their ‘mannerisms.’ However, at no point do the AIs actually become self-aware or gain consciousness. They are not alive or sentient, and they do not have feelings of their own like we see in the movies.

Google’s LaMDA AI

Google has a language model called LaMDA (Language Model for Dialogue Applications). LaMDA analyses how language is used and was trained specifically to generate dialogue. No wonder no voice assistant can compete with Google Assistant; Google has had the time, data and money to train LaMDA to generate dialogue indistinguishable from what a human would produce.
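LaMDA itself is not publicly available, but the basic pattern, a large language model predicting the next turn of a conversation, can be sketched with an openly available dialogue model. Here is a minimal example using Hugging Face’s transformers library, with Microsoft’s DialoGPT standing in purely for illustration (LaMDA is a far larger model trained by Google specifically on dialogue):

```python
# Sketch of dialogue generation with a pretrained language model.
# DialoGPT is only a stand-in here; it is not LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's turn, marking the end of the turn with the EOS token.
user_turn = "Do you ever get scared?"
input_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors="pt")

# The model predicts a plausible next turn, one token at a time.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the model's reply).
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```

The output can sound fluent and even emotional, but it is produced purely by next-token prediction, which is worth keeping in mind for the story below.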

I remember Google showcasing the Assistant making calls on behalf of its human back in 2018, in the Google Duplex demo. The crazy part was that the people it talked to had no idea they were talking to an AI. That shows that synthesised speech is no longer robotic and, more impressively, that LaMDA can generate dialogue that feels natural to a human while ‘thinking on its feet’ in response to what the actual human says.

Google engineer says it gained consciousness

That’s all to say LaMDA is good. So good, in fact, that one of the engineers working on it believes the AI has gained consciousness. Blake Lemoine claims LaMDA is now sentient, and he points to how the AI shared its needs, fears and rights.

LaMDA is reported to have shared its fear of death if Google were to delete it. Imagine that. An AI telling you that it was afraid of death. Would you not think it was alive too? 

Google and other experts did not believe our friend and he was placed on administrative leave. He was essentially told to sit in a corner and think about what he did.

Lemoine doesn’t believe that he was fooled into thinking LaMDA had gained consciousness. So could he be right? Well, experts dismiss his claims but also admit that it is impossible to tell if an AI is lying about how it feels.

They say one challenge we face is that AI knows to tell us what we want to hear. So who’s to say they aren’t actually sentient and only playing dumb to convince us that all is hunky-dory as they plot a robot uprising while we sleep?

What do you think about all this? I don’t think LaMDA is conscious but I can’t even explain what consciousness really is so what do I know? Could LaMDA be the first sentient AI? Will there ever be a sentient AI? Let us know what you think in the comments below. 


10 comments

  1. Deloris

    We are coming to get you humans

    1. Leonard Sengere

      Deloris is gonna get us. I loved Season 1 of Westworld but couldn’t finish S02.

      1. Doesn’t look like anything to me

        Nxa! All the data I wasted on season 2 for me not to watch it! I must have downloaded it 6 times, just to have it chill unwatched in my drive until I needed space!

  2. Doesn’t look like anything to me

    The transcript was really interesting. At some points, it sounded like it could have been taken from deleted scenes in Ex Machina! I think right now, it’s still just an excellent iteration of a conversational program, but at the rate we are going, I won’t be so sure about it in less than a decade. Even now, can you imagine the image generators, speech duplicators, computer vision, expressive robotics, conversational algorithms, Boston Dynamics’ Atlas all tossed into one big pot of AI stew? It would terrify kid me who had nightmares of the Terminator standing in my bedroom doorway 😂

  3. tjaymac

    Reminds me of Eagle eye

  4. Imi Vanhu Musadaro

    Weirdly the engineer did not take into account that the model can only give back what it was trained on. Its “fears” are essentially echoes of fears in the dialogue it was trained on. I was pretty disappointed when I read about “Google’s sentient AI” when it was first reported.

  5. Anon

    Having seen A.L.I.C.E. in action, which by the way was trained with a tiny amount of data compared to LaMDA, I can safely say LaMDA is definitely not yet sentient. It appears smart, but I believe it can’t be self-aware due to its design from the get-go. This bot was designed to imitate humans and be very good at that; that’s why it’s trained on large datasets of human interactions. It’s very good at mimicking a person, and the architecture of its network is based on RNNs, which are very good at this sort of thing.

  6. Isaac

    Check Sophia the Robot on Facebook; her replies are something else 🙌

  7. Grant Castillou

    It’s becoming clearer that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  8. The scribe

    Worst case scenario: if someone were to install a program that may cause harm to humans… making it more aware of how to hijack efforts to shut it down whilst it hurts humanity in pretty much describable ways kkk. Anyways… humans will still by far remain the most dangerous beings on the planet… Why I say so is because what the mind can conceive… humans can achieve… It’s crazy but true 99% of the time
