Today, many people use “virtual assistants,” such as the iPhone’s Siri or Amazon’s Alexa, to perform simple tasks and provide answers to straightforward questions. So-called chatbots, or bots, grease the wheels of everyday life by giving directions, looking up arcane facts, providing customer service, and much more. The best bots can also carry out lengthy conversations using preprogrammed responses.
But chatbots’ ability to carry out more complex tasks, such as price negotiations, has been limited by their inability to communicate and reason like humans. That may be changing, new research by Facebook suggests.
A convincing counterpart
Facebook’s research team, led by Mike Lewis, set out to train chatbots to conduct back-and-forth text negotiations over multiple issues with a human being or with another chatbot in Facebook’s Messenger app.
The researchers began by asking pairs of human beings to divide multiple objects (such as two books, one hat, and three balls) between them in online negotiation simulations. As in a typical real-world negotiation, the people did not know how much their counterpart valued each item. After collecting the scripts from these negotiations, the research team had chatbots study them and try to learn to imitate the human negotiators.
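The setup described above can be sketched in a few lines of code. This is an illustrative reconstruction, not Facebook's actual implementation: each negotiator privately assigns a point value to every item on the table, and a proposed deal is scored by summing value times count for the items that side receives.

```python
# Illustrative sketch of the object-division game (hypothetical names,
# not Facebook's code). Each side privately values the items; a deal is
# scored by summing value * count for the items that side receives.

from typing import Dict

def deal_score(values: Dict[str, int], allocation: Dict[str, int]) -> int:
    """Score one side's share of a deal under its private valuation."""
    return sum(values[item] * count for item, count in allocation.items())

# Example: two books, one hat, and three balls are on the table.
# This negotiator happens to value books highly and balls not at all.
my_values = {"book": 4, "hat": 2, "ball": 0}

# Proposed split: this side takes both books; the hat and balls go
# to the counterpart.
my_share = {"book": 2, "hat": 0, "ball": 0}
print(deal_score(my_values, my_share))  # 8
```

Because each side's `values` dictionary is private, neither negotiator can directly compute the other's score, which is what makes inferring the counterpart's priorities part of the game.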
To train the bots to “think” multiple steps ahead, the researchers had them practice thousands of negotiations. The bots were also trained to negotiate using “humanlike language,” according to Facebook’s report on the study.
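One common way to get this kind of lookahead, and plausibly what "thinking multiple steps ahead" amounts to here, is simulated rollouts: for each candidate offer, the bot simulates how the rest of the negotiation might play out and picks the offer with the best average outcome. The sketch below is a hedged illustration of that general technique, with a toy simulator standing in for a learned model.

```python
# Minimal sketch of lookahead via simulated rollouts (an illustration of
# the general technique, not Facebook's implementation). For each
# candidate offer, simulate several continuations of the negotiation and
# keep the offer with the best average simulated score.

import random
from typing import Callable, List

def choose_offer(candidates: List[str],
                 simulate: Callable[[str], float],
                 n_rollouts: int = 10) -> str:
    """Pick the candidate offer whose simulated continuations score best."""
    def expected_score(offer: str) -> float:
        return sum(simulate(offer) for _ in range(n_rollouts)) / n_rollouts
    return max(candidates, key=expected_score)

# Toy stand-in for a learned model: a greedy offer scores high but is
# usually rejected (score 0); a fair split is reliably accepted.
def toy_simulate(offer: str) -> float:
    if offer == "I take everything":
        return 10.0 if random.random() < 0.1 else 0.0
    return 5.0

random.seed(0)
best = choose_offer(["I take everything", "Split evenly"], toy_simulate)
print(best)  # "Split evenly" wins: its expected score is higher
```

Averaging over many rollouts is what lets the bot trade a small chance of a big payoff against a reliable moderate one, rather than grabbing the offer that looks best one turn ahead.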
After all that training, the bots’ newfound bargaining skills were tested in online negotiation simulations with actual human beings. Their results? Great in one respect and not bad in another.
First, the bots largely passed for humans. “Most people did not realize they were talking to a bot rather than another person,” Facebook reports, “showing that the bots had learned to hold fluent conversations in English in this domain.” Second, the bots were moderately effective: The best of them performed as well as human negotiators, achieving “better deals about as often as worse deals.”
The researchers concluded that the bots “not only can speak English but also think intelligently about what to say.” More specifically, the bots were capable of inferring how much their human counterparts valued items and of “thinking” multiple steps ahead.
Currently, Facebook’s negotiating bots can carry out only relatively rudimentary negotiations, but the company has publicly released the code for the bots. Other companies are likely to continue advancing the technology.
The possibility that individuals and organizations will begin farming out negotiation to bots raises several interesting issues. First, the fact that the bots in the Facebook study learned to negotiate by studying human interactions raises the question of whether they incorporated flawed strategies into their repertoires. Myriad cognitive, emotional, and motivational biases prevent us humans from doing our best at the bargaining table. To name just a few barriers to rational negotiating, overconfidence, anger, and short-term desires can all lead us to act against our best interests.
In the future, might bots be trained to negotiate better than humans? If so, individual consumers could be at a disadvantage relative to bots developed by corporations, a concern that Thomas Smyth, cofounder and CEO of chatbot money-management company Trim, raised in an interview with the website The Verge. Companies could gain an edge by compiling data from all of their negotiations with individual consumers and identifying ways to outsmart them.
In addition, the Facebook researchers found evidence that bots may develop ethically questionable negotiating behavior on their own. Specifically, the bots learned to bluff: they feigned interest in items they didn’t actually value, then offered “compromises” on those items.
Smart negotiators make tradeoffs on issues they value little for those they value more. But the fact that the bots learned on their own to engage in mildly deceptive behavior raises two important questions: (1) Will companies that build bots allow them to engage in unethical (if not illegal) behavior? and (2) How far away from ethical behavior will bots stray on their own?
A potential time-saver
It’s unlikely that we will find ourselves entrusting chatbots to negotiate our salaries, sell our homes, or hammer out the details of a merger anytime soon. And issues of power and ethics will need to be addressed for negotiating bots to gain consumers’ trust.
But before long, we may feel comfortable entrusting bots to conduct more mundane online bargaining conversations on our behalf, such as negotiating meeting times with coworkers or haggling over the price of multiple pieces of furniture—freeing us up to spend more time on the deals that matter most.