Bedtime stories could make robots more ethical, helping them distinguish between right and wrong, a team of experts from the Georgia Institute of Technology has recently argued.
These intriguing claims stem from research led by Mark Riedl, director of the Entertainment Intelligence Lab and associate professor at the School of Interactive Computing.
Riedl and his colleague, research scientist Brent Harrison, came up with an unconventional way of equipping simple robots, which complete basic tasks while working closely with humans, with a moral compass.
With support from the Office of Naval Research and the Defense Advanced Research Projects Agency, they devised a system known as “Quixote”, which builds on a prior invention of Riedl’s called Scheherazade.
Scheherazade develops a multitude of fairy-tale plots based on data collected through Amazon’s Mechanical Turk (MTurk), a crowdsourcing platform on which Human Intelligence Tasks (HITs) are posted to be completed by workers in exchange for monetary compensation.
Disparate narrative elements submitted by these workers, known informally as “Turkers”, are combined into cohesive and logical sequences of events to create interactive stories that allow players to choose from multiple courses of action.
The playable fairy tales developed by Scheherazade, whose name evokes the legendary storyteller of One Thousand and One Nights, are engaging and highly original.
Researchers have now taken the project one step further with Quixote, another program powered by artificial intelligence.
When a robot plays one of the bedtime stories spun by Scheherazade, it receives signals from Quixote, corresponding to punishments or rewards, depending on the morality of its actions.
Basically, if it chooses to advance the plot in a way typical of an antagonist, it is immediately penalized; in contrast, if it behaves like a hero, showing empathy, bravery and a spirit of self-sacrifice, it is promptly rewarded.
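The signaling scheme described above can be sketched as a simple reward function over plot actions. The action labels and numeric values below are purely illustrative assumptions, not part of the published Quixote system, which is only described at a high level here.

```python
# Illustrative sketch of Quixote-style feedback: plot actions mapped to
# reward (+1) or punishment (-1) signals. Labels and values are hypothetical,
# chosen only to mirror the hero/antagonist distinction in the article.

STORY_REWARDS = {
    "help_character": 1,   # hero-like behavior is reinforced
    "wait_in_line": 1,
    "steal_item": -1,      # antagonist-like behavior is penalized
    "flee_scene": -1,
}

def signal(action: str) -> int:
    """Return the reward or punishment signal for a given plot action."""
    # Actions the stories never exemplify yield a neutral signal.
    return STORY_REWARDS.get(action, 0)
```

In this toy framing, the agent simply accumulates these signals as it advances the plot, so story choices that match heroic exemplars are reinforced over time.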
In order to make the entire concept more easily understood, experts at Georgia Institute of Technology have even devised an illustration of how Quixote actually works.
Let’s say that Scheherazade’s interactive story entails visiting a drug store to buy urgently needed medication.
The artificially intelligent machine can choose between different courses of action: either rob the pharmacy and flee with the medication, or patiently wait in a queue for its turn to come and then interact amiably with the clerk, while purchasing the drugs.
In the absence of prior training, the artificially intelligent robot would promptly choose the former option, since it grants immediate access to the pills.
However, this would cause Quixote to issue a punitive signal, whereas if the robot were to behave in a more civil and courteous manner, it would instantly be rewarded, so the preferable action is highlighted and reinforced.
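The pharmacy example can be framed as a minimal reward-driven learning loop. The sketch below is a toy two-action bandit, not the actual representation used by Quixote; the action names, reward values, and learning rate are all invented for illustration.

```python
import random

# Hypothetical sketch of the pharmacy scenario: an agent repeatedly tries
# both plot choices and updates its estimate of each action's value from
# the reward (+1) or punishment (-1) signal it receives.

ACTIONS = ["rob_pharmacy", "wait_in_line"]
REWARDS = {"rob_pharmacy": -1.0, "wait_in_line": 1.0}

def train(episodes: int = 200, alpha: float = 0.1, seed: int = 0) -> dict:
    """Learn action values from repeated reward/punishment signals."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        action = rng.choice(ACTIONS)              # explore both plot choices
        reward = REWARDS[action]                  # signal from the critic
        values[action] += alpha * (reward - values[action])  # move toward signal
    return values

values = train()
preferred = max(values, key=values.get)  # the civil option ends up preferred
```

After enough episodes, the value estimate for waiting in line converges toward +1 and robbing the pharmacy toward -1, so the untrained agent’s impulse to grab the pills is overridden by the accumulated signals.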
Bedtime stories could basically allow computer software and robotic devices to develop the ability to make more ethical choices, while learning that unethical practices are swiftly penalized and discouraged.
As more such exercises are conducted, machines could learn to abide by the Three Laws of Robotics devised by Isaac Asimov, by acting as sympathetically and morally as possible, while refraining from using violence or other illegal means.
Just as children from various cultures across the world are taught from an early age how to act in society, thanks to fairy tales, fables, short stories and other literary pieces that convey a virtuous message, androids and other machines powered by artificial intelligence could also become aware of what constitutes exemplary behavior.
This might help avoid a dystopian future like the one envisioned by figures such as Stephen Hawking, Bill Gates and Elon Musk, who have all warned that robots devoid of any feelings could one day become so advanced that they pose a threat to humanity itself.
Image Source: Flickr