Are machines taking over?
Video interview with Susan Schneider, Philosopher.
How realistic is it that machines will become self-conscious and have emotions? Or, in other words, become humanlike?
Susan Schneider: Good question. Superficially, I think it's realistic that androids will look human, but the question is really whether they might have interior traits that are a lot like ours. And I guess we could break the answer down into different dimensions.
One question to ask is: will it feel like anything to be them? I think that is a question about the nature of conscious experience and whether something made of a different substrate, something that didn't have a brain, could actually feel.
I take that as an empirical matter, and I've been developing tests for machine consciousness. But you might also wonder whether they'll have a sense of themselves. You mentioned self-consciousness, and that really depends on whether the system understands its boundaries and has a narrative, whether it has a sense of the events it has taken part in. And then you might also ask about intelligence. Will they be intelligent in a way like us? Right now we have machines that can beat humans at Go and at Jeopardy, all sorts of tasks that are specific to certain domains. The holy grail of artificial intelligence, and what I think we'll see moving into the future, is machines that can increasingly go from one thing to the next and have a more flexible, domain-general intelligence. So maybe in 20 years we'll see systems that think like us. We don't know right now.
Do we need to fear such a superintelligent AI?
Susan Schneider: A superintelligent system would be a system that exceeds human intelligence in every domain. Some people worry that once we hit the level where we have systems that think like humans, it would be easy to upgrade those systems. And we already see that AlphaGo, in one domain, the game of Go, is much better than us.
So we can imagine - at least in principle - an artificial intelligence which is general in its capacities and which blows us away intellectually. And then humans would no longer be the most intelligent beings on earth. Do we have something to fear? I think we have to be very careful. Because - I'm not saying this would happen tomorrow - but if we create something smarter than us, what makes us think that we can control it? I mean, we can't even control humans.
We have kids who do crazy things when they're teenagers. We have adults who, unfortunately, are criminals and do seriously crazy things. I think we have to be very, very careful with AI safety.
Especially when it comes to the potential development of super-intelligent artificial intelligence.
How should we build AI to ensure that it becomes human-friendly?
Susan Schneider: There are a lot of projects right now looking at different avenues for building a friendly AI. These issues are important to begin working on now. As artificial intelligence moves beyond being good at specific domains like Go and Jeopardy, we need to think about the development of machines that can be more flexible and general - even if they're not superintelligent. So we have to hit the ground running.
One avenue might be to give the AI a developmental phase, so it is allowed to develop the way a child develops. Another avenue is to look at the field of ethics in philosophy and code in principles for ethical behavior. There are all kinds of different approaches. It's very, very difficult, especially when you start thinking about superintelligence, or an artificial general intelligence that's at the level of a human, because these systems could modify their own behavior and their own architecture and change very quickly in ways that we don't foresee. So AI safety is a big topic, not just at the level of everyday cybersecurity - which gives us a lot to worry about already - but at the level of the creation of super-smart AI moving into the future.
So are you more positive or more critical regarding this topic?
Susan Schneider: I tend to be very cautious, because even if the chance is only two percent that we could create something we lose control over, when we're talking about issues that affect the future of humanity we have to be extremely careful.
Even if the chances are two percent that artificial intelligence could control us rather than us controlling it, we need to take precautionary measures.