“Do androids dream of electric sheep?”

Published in 1968, Philip K. Dick’s novel – which inspired “Blade Runner” – explores the gray area between what it means to be human and what it means to be a robot. Fifty years later, artificial intelligence remains a hot topic, especially with the advent of driverless cars. The concept has taken on notable roles in films and TV shows such as “Altered Carbon,” “Blade Runner 2049” and, most recently, the second season of “Westworld.” Premiering Sunday, “Westworld” returns to HBO with Dolores (Evan Rachel Wood) and other robotic hosts who have gained consciousness and are hell-bent on claiming the Westworld amusement park for their own.

A Westworld park in 2018, however, is extremely unlikely. Creating a computer that possesses consciousness still stumps researchers today. That said, they do have a general outline of what such a machine would require, and they have made advances in that direction.

That research suggests AI needs two integral ingredients, without which it cannot acquire consciousness: generative code and memory. Generative code, in the simplest terms, is code that can write its own code. Not only can it acquire data and learn from it, but it can also write new programs to help it adapt to new situations. For example, in the first season of “Westworld,” Maeve (Thandie Newton), the madame of a town named Sweetwater, learns she is programmed to forget all her previous deaths and to be unintelligent and docile. Thanks to generative code, Maeve figures out how to escape her life-and-death cycle by coercing engineers into increasing her intelligence and eventually threatening one with a knife.
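To make the idea concrete, here is a minimal sketch – written in Python, with invented situation names and thresholds – of what “code that writes its own code” might look like. Real generative code would be vastly more sophisticated, but the principle is the same: the program authors and runs source code it was never given.

```python
# A minimal, hypothetical sketch of "generative code": a program that
# writes and runs new code for situations it has not seen before.
# The rule template and the "gunshot" stimulus are illustrative only.

RULE_TEMPLATE = """
def handle_{name}(value):
    # Auto-generated rule: react when '{name}' crosses its threshold.
    return "flee" if value > {threshold} else "ignore"
"""

def generate_handler(name, threshold):
    """Write new source code for an unseen situation, then execute it
    to produce a callable handler -- code creating code."""
    source = RULE_TEMPLATE.format(name=name, threshold=threshold)
    namespace = {}
    exec(source, namespace)  # compile and run the freshly written code
    return namespace[f"handle_{name}"]

# The program encounters a new stimulus and writes its own response rule.
handle_gunshot = generate_handler("gunshot", threshold=0.7)
print(handle_gunshot(0.9))  # -> "flee"
print(handle_gunshot(0.2))  # -> "ignore"
```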

Currently, a system for generative code does not exist. The closest technologies we have are neural networks and machine learning, as seen in Google DeepMind’s AlphaGo program, which beat master Go player Lee Sedol in March 2016. Ultimately, neural networks and machine learning only involve acquiring data and “learning” from it – they do not write new programs.
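A toy example makes the distinction clear. The perceptron below – a deliberately simplified stand-in for modern machine learning – adjusts a handful of numbers until its predictions fit the data, and that is all it can ever do; it never writes a new program.

```python
# A toy illustration of what today's machine learning actually does:
# nudge numeric weights to fit data. Nothing here writes new code.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs with labels 0 or 1.
    Purely numeric adjustment -- no new programs are created."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learns a simple AND-like rule from examples -- and can do nothing else.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```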

However, even if the problem of generative code were solved, there would still remain the issue of memory. Though computers can store far more information than people, their memory is anything but humanlike – they cannot prioritize information, associate it with other data or choose to forget what they have already learned. To be truly humanlike and gain consciousness, AI would need to exhibit short-term, long-term and episodic memory – qualities not really seen even in the robots on “Westworld,” who, for the most part, lack long-term and episodic memory in season one.
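What would such memory look like in software? Purely as speculation, it might be organized along these lines; the class design, the importance threshold and the seven-item short-term span are invented for illustration, not drawn from any real architecture.

```python
# A speculative sketch of the memory traits the article says AI lacks:
# prioritizing, associating and deliberately forgetting.

from collections import deque

class HumanlikeMemory:
    def __init__(self, short_term_span=7):
        self.short_term = deque(maxlen=short_term_span)  # fades as new items arrive
        self.long_term = {}   # fact -> importance score
        self.episodes = []    # autobiographical events, in order

    def perceive(self, fact, importance):
        """Prioritize: only sufficiently important facts get consolidated."""
        self.short_term.append(fact)
        if importance > 0.5:
            self.long_term[fact] = importance

    def remember_episode(self, event):
        """Episodic memory: experiences in the order they happened."""
        self.episodes.append(event)

    def forget(self, fact):
        """Choose to forget -- something databases never do on their own."""
        self.long_term.pop(fact, None)

mem = HumanlikeMemory()
mem.perceive("guest wore a black hat", importance=0.9)
mem.perceive("weather was sunny", importance=0.1)  # never consolidated
mem.remember_episode("died at the saloon")
mem.forget("guest wore a black hat")
```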

Even if researchers somehow managed to create generative code and humanlike memory in AI, there would still remain the issue of emotions and self-reflection. True AI would need to understand emotions and react appropriately, much as “Westworld’s” robotic hosts run in fear from trigger-happy park attendees. Theoretically, specific human interactions could be tagged with particular emotions, so an AI could then express emotions based on those set associations.
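Taken literally, that tagging scheme could be as simple as a lookup table. The sketch below, with a hypothetical set of interactions and emotions, shows both the idea and its shallowness: the program “expresses” fear without feeling anything at all.

```python
# The paragraph's idea, taken literally: tag interactions with emotions
# and express whichever one is associated. The table is hypothetical.

EMOTION_TAGS = {
    "guest draws a weapon": "fear",
    "guest offers a gift": "gratitude",
    "host is insulted": "anger",
}

def react(interaction):
    """Look up the tagged emotion; default to neutrality for the unknown."""
    emotion = EMOTION_TAGS.get(interaction, "neutrality")
    return f"Expressing {emotion}."

print(react("guest draws a weapon"))  # -> Expressing fear.
print(react("guest tips his hat"))    # -> Expressing neutrality.
```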

When thinking about self-reflection and consciousness, I can’t help but recall Descartes’ “I think, therefore I am.” It makes sense: if I can think and create thoughts from seemingly nothing, I must be conscious. But asking a Westworld host from season one, or Siri, whether they can think is a pointless question. Trust me, I’ve tried. Siri concisely responds, “Why, of course,” but who’s going to believe her?

So passing the Turing test has become a minimal qualification for intelligence or consciousness. Proposed in 1950 by Alan Turing, the test has a person hold text conversations with both real humans and a computer. If the computer is mistaken for another person more than 30 percent of the time over a five-minute conversation, it passes the test and is said to have intelligence.
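As arithmetic, the pass criterion is trivially simple – which is partly why critics consider the test such a low bar. The judge counts in this sketch are invented for illustration.

```python
# The article's pass criterion, as arithmetic: fooled more than
# 30 percent of the time in five-minute chats.

def passes_turing_test(judges_fooled, total_judges, threshold=0.30):
    """True if the machine is mistaken for a human often enough."""
    return judges_fooled / total_judges > threshold

# Hypothetical trial: 10 of 30 judges fooled is about 33 percent.
print(passes_turing_test(judges_fooled=10, total_judges=30))  # -> True
```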

The only AI claimed to have passed the test was Eugene Goostman in 2014. Simulating a 13-year-old Ukrainian boy, the chatbot convinced 33 percent of the judges, but the result was met with criticism. Scientists who opposed it called the experiment poorly conducted and said that an AI that merely acts like a 13-year-old boy with an odd sense of humor cannot possibly be indicative of intelligence.

But in 2017, Hanson Robotics showcased Sophia, a robot that was deemed “alive.” Sophia could hold conversations with people, “express” emotions and was even declared a citizen of Saudi Arabia. However, it, too, was met with opposition. Critics said the robot was simply incapable of human consciousness because it didn’t have emotions and couldn’t feel hurt by harsh language. Sophia stands in stark contrast to Dolores, who becomes furious after learning she has been treated like a tool and vows to burn Westworld to the ground.

Dolores and the other robots of Westworld are the gold standard for AI. By the end of season one, they have consciousness and can feel and express emotions – a milestone scientists are still striving toward. With significant steps taken toward generative code, machine learning and robots that can supposedly pass a Turing test, a future with a Westworld could be closer than it seems – ignoring the ethical repercussions, of course. But for now, Siri’s tasteful sarcasm will do.

Join the Conversation

1 Comment

  1. A system for generative AI code does exist – it is called Google AutoML, unveiled in May of 2017.

    See https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-child-ai-bot-nasnet-automl-machine-learning-artificial-intelligence-a8093201.html, which states: “Google has developed an artificial intelligence (AI) system that has created its own ‘child’. What’s more, the original AI has trained its creation to such a high level that it outperforms every other human-built AI system like it.”
