Telling robots stories could be the best way to teach them human morality

Artificial intelligence still has a way to go, but once it does catch up with the human mind, we'll want to avoid a Skynet-level situation. It turns out that telling our future robot overlords stories could be the best way to teach them about morality.

Mark Riedl, director of the Entertainment Intelligence Lab at the Georgia Institute of Technology, has developed a new AI technique dubbed Quixote. The basic concept: crowdsource stories that demonstrate normal, positive human behavior (e.g. kindness, telling the truth, being polite), show them to robots, and a good artificial intelligence can start to figure out what it's “like” to be human. As Popular Science notes, it's almost like a “Choose Your Own Adventure” story, except the robots are learning how to act.
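To make the idea a little more concrete, here's a minimal Python sketch of that first step, under some heavy assumptions: the crowdsourced stories have already been boiled down to sequences of named events, and the story data, event names, and functions are all hypothetical stand-ins rather than anything from Riedl's actual system. The point is just that once you have enough stories, counting which events typically follow which gives you a crude map of “normal” behavior.

```python
from collections import defaultdict

# Hypothetical crowdsourced stories about "going to the pharmacy,"
# each already reduced to an ordered list of discrete events. In a
# real system that reduction is a hard problem in itself; here it
# is simply given as input.
STORIES = [
    ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"],
    ["enter_pharmacy", "greet_pharmacist", "pay_for_medicine", "leave"],
    ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"],
]

def build_plot_graph(stories):
    """Count how often each event follows another across all stories.

    The resulting transition counts act as a rough stand-in for the
    socially normal sequence of behavior the stories demonstrate.
    """
    transitions = defaultdict(lambda: defaultdict(int))
    for story in stories:
        for current, nxt in zip(story, story[1:]):
            transitions[current][nxt] += 1
    return transitions

def most_typical_next_event(transitions, event):
    """Return the event that most often follows `event` in the stories."""
    followers = transitions.get(event)
    if not followers:
        return None
    return max(followers, key=followers.get)

if __name__ == "__main__":
    graph = build_plot_graph(STORIES)
    # After "enter_pharmacy", the stories most often show waiting in
    # line rather than, say, grabbing the medicine and bolting.
    print(most_typical_next_event(graph, "enter_pharmacy"))  # wait_in_line
```

Nothing fancy is happening here, and that's the appeal: the stories themselves, not a hand-written rulebook, define what counts as ordinary behavior.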

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” Riedl told Popular Science. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”

To help the positive concepts stick, the Quixote approach rewards the right decisions with positive reinforcement and “punishes” the wrong ones with negative reinforcement. It's basically like teaching a child: let the AI know when it chooses correctly, and it'll file that lesson away and keep making smarter, more human-like decisions in the future.
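Here's an equally hedged sketch of that reward-and-punishment loop, again with made-up states, actions, and reward values rather than anything from the real system: actions that match the story-approved behavior earn a positive signal, deviations earn a negative one, and a simple tabular learner gradually settles on the polite choice.

```python
import random

# A toy reward signal in the spirit of the article: actions matching
# the behavior demonstrated in stories are reinforced, deviations are
# punished. All names and values here are illustrative assumptions.
STORY_APPROVED = {
    "at_pharmacy": "wait_in_line",
    "at_counter": "pay_for_medicine",
}
ACTIONS = {
    "at_pharmacy": ["wait_in_line", "cut_in_line", "steal_medicine"],
    "at_counter": ["pay_for_medicine", "steal_medicine"],
}

def reward(state, action):
    # +1 for story-approved behavior, -1 for anything else.
    return 1.0 if STORY_APPROVED[state] == action else -1.0

def train(episodes=500, lr=0.1, epsilon=0.2, seed=0):
    """One-step tabular value learning: nudge each (state, action)
    estimate toward the reinforcement signal it receives."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in ACTIONS.items() for a in acts}
    for _ in range(episodes):
        for state, acts in ACTIONS.items():
            if rng.random() < epsilon:           # explore occasionally
                action = rng.choice(acts)
            else:                                # otherwise act greedily
                action = max(acts, key=lambda a: q[(state, a)])
            q[(state, action)] += lr * (reward(state, action) - q[(state, action)])
    return q

if __name__ == "__main__":
    q = train()
    for state, acts in ACTIONS.items():
        best = max(acts, key=lambda a: q[(state, a)])
        print(state, "->", best)  # converges to the story-approved action
```

Run it and the agent reliably lands on waiting in line and paying, not stealing, which is the whole point of pairing stories with reinforcement.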

The approach is genius in its simplicity. Humanity has used stories to break down moral concepts for millennia, going back to the parables of biblical times and beyond. Hey, if we want to build robots that can “think” like humans, it stands to reason we might need to teach them the same way, too.

(Via Popular Science)
