Made by Hong Kong-based Hanson Robotics, Sophia is an incredibly lifelike social robot designed for use in healthcare, customer service, therapy or education settings. She is powered by artificial intelligence (AI) and can see faces and process conversational data in order to form relationships with people, rather than relying on preprogrammed answers. What she doesn't have, say her creators, is consciousness. Yet.
If you've been following Sophia's progress, you will probably have fallen into one of two camps: sheer wonder at her lifelike features and abilities, and excitement at the opportunities she presents; or concern that she is becoming sentient, and fear of where all of this might lead. After all, figures like Elon Musk and Professor Stephen Hawking have been warning of the dangers of uncontrolled AI for some time, with Musk calling it "a fundamental risk to the existence of human civilization." Others are rather more upbeat and look forward to exploring the possibilities of AI. In the foreword of "Hit Refresh," a new book by Microsoft CEO Satya Nadella, Bill Gates says the technology is "on the verge of making our lives more productive and creative." Facebook's Mark Zuckerberg agrees, noting how "optimistic" he is to see where AI could take us.
These mixed messages are confusing, however. What should we be? Positive? Or worried? For futurist Gerd Leonhard, author of the book "Technology vs. Humanity," it's important to first define what AI actually is. "To really simplify, I would say there are four kinds of artificial intelligence," he says. "The first is Intelligence Assistance (IA), which is essentially just fancy software that we can use to, for example, schedule our meetings. That's 95 percent of the so-called AI we see today. The second is artificial intelligence, which is a limited machine intelligence that can actually learn and go beyond a single, narrow use; and then we may eventually get to Artificial General Intelligence (AGI), where machines will be able to learn and understand and then perform an action based on their own thinking, i.e. become 'generally intelligent' in a human sense. Ultimately this may lead to Artificial Super Intelligence (ASI), where computers might have unlimited power and infinite IQ. The bottom line is that, right now, today, people tend to overestimate how 'intelligent' machines really are."
Ayesha Khanna, entrepreneur, technology author and smart cities expert, agrees; she also shares Stephen Hawking's view that AI is "the biggest event in human history." "Or, at least, it's one of the biggest events," she says. "It's certainly going to profoundly affect the way we work and live in the future; and, in some ways, AI will 'humanize' as it manifests social interactions. How fast it will become ubiquitous, though, depends on what you mean by AI. I think self-driving cars are still a way off, but the automation now used to match customer service calls to AI agents is becoming quite advanced — as is image recognition."
So just how intelligent could machines become in the future? What opportunities and risks do they present? And ultimately, should humans consider AI to be a friend or a foe? — Tony Greenway