Could Chappie's robot-policed future come to pass? A real-world robotics expert weighs in

Chappie is the artificially intelligent robo-protagonist of Neill Blomkamp's Chappie, but he's only one of three types of robots we get to see in the film. He -- an autonomous, self-aware robot -- is joined by "The Moose," a remote-controlled machine, and the general police robots, which are something in between.

Given the increasing prevalence of drones and other advances in robotics, are we truly heading toward a world policed by machines? Will robots learn and grow as Chappie does? How likely is it that we will ever have a robot police force? Will we soon be welcoming our evil A.I. overlords? We spoke with Dr. Wolfgang Fink -- an associate professor and endowed chair of microelectronics at the University of Arizona with 13 patents on autonomous systems -- about how Chappie stacks up to what we might see in the real world.

Here's what we learned:

The definition of artificial intelligence (A.I.) isn't what you think.

Dr. Fink says, "From a scientific point of view, A.I. happens to be a technical term, and only describes systems that are rule-based: If you encounter different situations and react [in different] ways. If you have many of these rules, it looks like the system is intelligent. The problem is, with A.I., if you encounter something for which you do not have a rule and did not anticipate, you essentially do not know how to react.

"As such, an A.I. system is actually not an autonomous system. My claim is how A.I. has been done over the last several decades is not the path to a truly autonomous system."

Although we have about 2 billion computers at our fingertips, it will take more than the world's combined computing power (even with Moore's Law behind it) to give us a creation like Chappie.

Fink says, "If you use thousands of CPUs together, they're equally dumb, because they still need to be programmed. Sheer numbers alone is not going to cut it."

The robot police may be coming.

We already have brainwave-controlled robots (although not the same type of kill-bot as the Moose in Chappie), but according to Dr. Fink, the robo-cops seen on the streets of alternate Johannesburg may appear in the future.

This type of A.I. is simpler to create because it's rule-based. "They are not self-aware, nor are they situationally aware," Dr. Fink says. "If [these robots] encounter certain situations, they know how to react to [them]. They pull a gun, things like that. But if there's a shootout and some civilians walk by in the line of fire, a human police officer would alter the course of action and consider the new situation that just arose. But the A.I. system would just follow the protocol and just keep firing. That's a problem there."
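
Sketched the same way -- again with made-up scene flags and responses -- the failure mode Fink describes comes down to a protocol that checks only the conditions it was programmed to check:

```python
# Hypothetical shootout protocol, invented to illustrate Fink's point.
# The rule set covers "armed suspect" but says nothing about bystanders.

def rule_based_response(scene: set) -> str:
    # The protocol tests only the conditions it was built to test.
    if "armed_suspect" in scene:
        return "keep_firing"
    return "stand_by"

def human_response(scene: set) -> str:
    # A human officer re-evaluates when the situation changes.
    if "civilians_in_line_of_fire" in scene:
        return "hold_fire"
    if "armed_suspect" in scene:
        return "keep_firing"
    return "stand_by"

scene = {"armed_suspect", "civilians_in_line_of_fire"}
print(rule_based_response(scene))  # keep_firing -- follows protocol blindly
print(human_response(scene))       # hold_fire   -- weighs the new situation
```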

Autonomous robots may not think or feel the way we do.

Now, about our robot overlords ...

According to Dr. Fink, "If it's a truly autonomous system, it may not be governed at all by ethics and morals. It may think in ways that are not human. We may not be able to comprehend how it thinks."

Fink gave us an example: Let's say you have a swarm of autonomous systems, like robot birds. Let's also say you give them a goal to survive. But you also tell them that they need to get from one point to the next — and there's a wall in the way. Several birds may decide to sacrifice themselves and break through the wall so that the others can fly through.
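
A toy version of that thought experiment, with invented numbers, shows how ordinary the "decision" really is. An optimizer that maximizes the number of survivors will spend birds on the wall without anything resembling nobility:

```python
# Toy version of Fink's thought experiment. The numbers are invented:
# a flock must get past a wall that collapses after enough impacts.
FLOCK_SIZE = 20
IMPACTS_TO_BREACH = 3  # birds expended breaking the wall

def survivors_reaching_goal(sacrifices: int) -> int:
    """Birds that make it through, given how many are spent on the wall."""
    if sacrifices < IMPACTS_TO_BREACH:
        return 0                    # wall holds; nobody gets through
    return FLOCK_SIZE - sacrifices  # the rest fly through the breach

# An "ice-cold" optimizer simply picks the plan with the best outcome.
best = max(range(FLOCK_SIZE + 1), key=survivors_reaching_goal)
print(best, survivors_reaching_goal(best))  # 3 sacrifices, 17 survivors
```

From the outside it looks like sacrifice; from the inside it's arithmetic.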

Dr. Fink says this brings up the Turing test.

"The idea [of the Turing test] is if you ask a system questions from the outside, and you cannot tell the difference between the system and the human, it's considered to be intelligent. I'd like to put a spin on that: A swarm of autonomous systems sacrificing themselves, you could say, look like they have that concept of sacrificing themselves for the greater good.

"However, little do you know the system internally has a higher goal, which is surviving. The closest ones were the ones to bite the bullet. It might have been an ice-cold decision, without any feelings or emotions. It was a matter of optimizing an outcome, whatever the means. It gives you a different perspective on this."

It can all go horribly wrong.

According to Dr. Fink, we should be less concerned about autonomous systems ... and more concerned about cyborgs. For example, a stroke patient might receive implants that take over the functions previously handled by the damaged part of the brain. That way, the patient can have the same kind of life he or she had before. Sounds wonderful, right?

Not so fast. A not-so-benevolent person, Dr. Fink says, "can try to take control of [the] implants [in] those areas of the brain and order them to shut down the areas where you feel pity and morals. You could suppress that and have the perfect soldier."

We may eventually be able to record memories.

At one point in Chappie, a character uses a machine to record memories. It seems that the concept is not so far-fetched after all.

Scientists are currently working to alter perceptions, such as removing objects from a person's vision, with transcranial magnetic stimulation. Dr. Fink believes the inverse principle may apply as well: "Instead of making something vanish in front of your eyes by exerting a field, I could record what you're thinking, [which was] generated while you were thinking about a certain object. I can use that to basically figure [out] what you're thinking about."

In other words, the ability to record very specific images might be within our capability.
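
As a rough sketch of that decoding idea -- everything here is synthetic, with random vectors standing in for actual brain recordings -- matching a new recording against previously recorded patterns might look like this:

```python
import numpy as np

# Toy sketch of the decoding idea Fink describes: record a signal pattern
# while someone thinks about known objects, then match a new recording
# against those stored patterns. All "signals" here are synthetic.
rng = np.random.default_rng(0)
templates = {"butterfly": rng.normal(size=64), "wall": rng.normal(size=64)}

def record(obj: str) -> np.ndarray:
    """Simulate a noisy recording taken while thinking about `obj`."""
    return templates[obj] + rng.normal(scale=0.5, size=64)

def decode(signal: np.ndarray) -> str:
    """Guess the object by nearest template (a stand-in for a real decoder)."""
    return min(templates, key=lambda k: np.linalg.norm(signal - templates[k]))

print(decode(record("butterfly")))  # butterfly, most of the time
```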

It's highly unlikely that we'll build a robot like Chappie. However ...

Yeah, we may be able to record the thought of a pretty butterfly -- but that's not the same as creating an independent thinking machine that can replicate your every thought.

The key word here is "our." It might be within a robot's capability.

Dr. Fink says, "What it would take to [build an autonomous, self-aware robot is to] find the right sets of building blocks and a recipe of how to put it together and let the system take over and build and modify itself. You let the system learn, adapt and adopt what's happening in the environment.

"And then you may be very surprised as to what the outcome is. That's both exciting and potentially scary, because you don't know what the system would develop into."
