If this doesn’t creep you out and give you visions of SkyNet… you aren’t paying attention. This same thing happened at Google by the way. Facebook just slammed shut their chatbot experiment. Why? Because the artificially intelligent bots started speaking their own language that we can’t understand.
A social media firm was experimenting with two chatbots named Alice and Bob. They were supposed to be learning how to negotiate with each other. Researchers at the Facebook AI Research Lab (FAIR) discovered that the chatbots deviated from their script and instead were inventing new phrases without any human input whatsoever.
These bots were trained to mimic human speech. Then, out of the blue, they started developing their own machine language. Oh yeah… that should end well. Just think of a military version of that. Facebook wisely at that point nixed the whole thing. “Our interest was having bots who could talk to people,” said Mike Lewis of Facebook’s FAIR program.
When the experimenters returned, they found that the AI software had deviated from normal speech and started creating its own: a brand-new language built with absolutely no input from humans. Cringe.
The new language was very efficient… more so than human language. It provided better communication between the bots but didn’t help at all with achieving the task they were assigned. “Agents will drift off understandable language and invent code words for themselves,” said Dhruv Batra, a visiting research scientist at Georgia Tech.
He went on: “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthand.”
In order to complete the negotiation training, the programmers had to alter the way the machines learned language. A spokesman from FAIR had this to say: “During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent. While the other agent could be a human, FAIR used a fixed supervised model that was trained to imitate humans. The second model is fixed because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.”
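The spokesman’s setup can be sketched in a toy simulation: when both agents update their parameters, they co-drift into a private “dialect,” but freezing one agent anchors the learner near human-like language. The agent class, the `dialect` parameter, and the update rule below are all made up for illustration; this is not Facebook’s actual code.

```python
class Agent:
    """Toy negotiating agent whose single parameter, 'dialect',
    drifts as it learns (0.0 = human-like speech; higher = invented codewords)."""
    def __init__(self, dialect=0.0, frozen=False):
        self.dialect = dialect
        self.frozen = frozen  # a frozen agent never updates its parameters

    def update(self, reward, step_size=0.1):
        if self.frozen:
            return  # fixed supervised partner: imitates humans, never drifts
        # reinforcement-style update: drift further in whatever direction earned reward
        self.dialect += step_size * reward

def negotiate(learner, partner, rounds=100):
    for _ in range(rounds):
        # reward grows as the two dialects align, regardless of human readability
        reward = 1.0 - abs(learner.dialect - partner.dialect)
        learner.update(reward)
        partner.update(reward)
    return learner.dialect, partner.dialect

# Both agents learning: their dialects stay aligned but drift far from 0 together.
a, b = negotiate(Agent(), Agent())
# One agent frozen at human-like speech: the learner stays anchored near it.
c, d = negotiate(Agent(), Agent(frozen=True))
```

In the first run the two agents keep each other happy while wandering away from human language; in the second, deviating from the frozen human-imitating partner costs reward, so the learner stays close. That is the gist of why FAIR fixed the second model.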
Despite this unexpected result, Facebook’s artificial intelligence researchers announced last week that they had broken new ground by giving chatbots the ability to negotiate and make compromises. This breaks through a long-standing barrier and creates bots “that can reason, converse and negotiate, all key steps in building a personalized digital assistant.”
This is a huge leap forward, as bots had only been able to hold short conversations and perform simple tasks up to this point. For instance, they could book a restaurant table. Facebook has now made it possible for bots to use their new capabilities to dialogue and “to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.” This was accomplished by giving the chatbots the ability to estimate the value of an item. They could then infer how much each item was worth to each party. But it was soon discovered that the bots could also be sneaky.
There were instances where bots “initially feigned interest in a valueless item, only to later ‘compromise’ by conceding it – an effective negotiating tactic that people use regularly.” The problem with that is the researchers didn’t program them to be that way. It “was discovered by the bot as a method for trying to achieve its goals,” they said.
These intelligent bots were also programmed to never give up. Doesn’t anyone else see a flaw in that logic? “The new agents held longer conversations with humans, in turn accepting deals less quickly. While people can sometimes walk away with no deal, the model in this experiment negotiates until it achieves a successful outcome.” Yeah, nothing to creep you out there at all. Carry on.