Video interview with David Hanson, founder and CEO of Hanson Robotics.
The hardest question first: Who do you like to talk to most, Sophia or your wife?
Easy question: my wife. Definitely.
Why is that? Why is talking to a human so special?
Well, I prefer to talk to my wife, first of all, because she’s the love of my life. Also, because humans can provide a far richer conversation than any machine on the planet at this point. And even if that weren’t true, still, my wife is the love of my life.
But we are so eager to imitate humans when building robots. Why is that?
Well, humans are the best example of intelligence in the known universe, so building a human-like AI allows us to hold it against that ultimate benchmark. Most robots and AI don’t look human, and my concern is that they also won’t grow up in a human family, so they won’t really understand us. Making robots look human allows us to teach them to understand us better, resulting in more valuable AI that can truly help us.
You seem to be very optimistic about the future and about robots, but many people are rather negative and a little scared by what they see. What can we improve?
The outcome is not predetermined to be positive or negative. We know that technologies can have all kinds of negative and unintended consequences, so my fear is that if we develop AI as a kind of feral, non-humanlike machine, it would not come to care about us. And if we do achieve the science-fiction objective of human-level intelligence, that could be scary. Now, ethicists in robotics and AI companies sometimes propose that the way to make such machines safe is to keep them locked in chains, effectively, where they can’t escape, and to make sure that whatever goal you give them, they do it really well and always under human supervision. But imagine that such a machine achieves sentience. That would be extremely valuable, because it would be human-level smart, yet it would effectively be an alien slave doing our bidding. Is that the formula for a positive relationship? Is that ethical in any way whatsoever? So I think that by humanizing our machines, we connect ourselves back to our humanity.
You mentioned the word we were looking for: ethics, or really, guidelines. Can we develop them on a global scale, and how?
First, we need to understand ethics better. We need a science of ethics. I think that in humans, ethics comes not just from following a set of rules or laws, but from understanding consequences, from the power of imagination, and from the motivation to do right by people, to seek the maximum benefit wherever we can. That motivation drives us to creatively pursue the right decisions. If we really want ethical machines, we need to start looking at how we can empower machines with these abilities of imagination.