Can AI Mediation Help Bridge Political Divides?


In our polarized world, reaching group consensus on political issues has become more difficult than ever. Elections often leave citizens divided and distrustful of one another. Legislative bodies stagnate into dysfunction and gridlock. Even local political bodies, such as school boards and town councils, struggle to reach agreement, despite strong incentives to work together.

Artificial intelligence (AI) may offer a means of helping hostile citizens find common ground, according to new research published in the journal Science. In a series of experiments with over 5,000 UK participants, a team from the British-American AI research lab Google DeepMind found that an AI mediation program was more successful than human mediators at helping groups reach consensus on hot-button topics.

The research team, led by Michael Henry Tessler, trained an AI system to be a “caucus mediator”—a mediator that helps a group come to agreement on an issue by soliciting each member’s opinions and aggregating them.

Specifically, the researchers fed participants’ personal opinions on social and political issues into an AI mediation system called the “Habermas Machine” (named after German philosopher Jürgen Habermas, who predicted that, under the right conditions, people are capable of reaching agreement on issues that concern them all). The system synthesized these opinions and generated a “group statement” designed to be acceptable to all group members. Group members then reviewed the statement, which the AI mediation system could refine and improve based on their feedback.

Folding Minority Views into the Majority

In one of the team’s experiments, a demographically representative sample of UK residents was recruited to participate in a virtual “citizens’ assembly.” This involved trying to reach agreement on divisive policy issues in the United Kingdom, such as whether the National Health Service should be privatized or whether the voting age should be lowered. Most groups were divided on the issue at stake.

The process began with participants privately writing about their personal opinions on three issues. Next, in groups of about five, they discussed the issues one by one over the course of an hour. The AI mediation system then generated a set of initial group statements for each issue. Participants rated their level of agreement with each statement and the quality of its argument. The most popular statement was then returned to participants to critique. Next, the AI mediation system generated revised statements, which participants again rated. The highest-rated revision was selected as the group's winning statement.
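The two-round loop described above can be sketched in simplified form. This is only an illustrative toy, not the actual Habermas Machine: the real system uses a large language model as the mediator, whereas the stand-in functions here (`draft_statement`, `revise_statement`, `pick_winner`) are hypothetical placeholders that merge text and average ratings.

```python
from statistics import mean

def draft_statement(opinions):
    # Toy stand-in for the LLM mediator: merge private opinions into one draft.
    return "Group view: " + "; ".join(opinions)

def revise_statement(statement, critiques):
    # Toy revision step: acknowledge each participant critique in the draft.
    return statement + " (noting: " + "; ".join(critiques) + ")"

def pick_winner(candidates, ratings_per_candidate):
    # Each candidate gets one agreement score per participant (e.g. 1-7);
    # the statement with the highest mean rating wins.
    scored = list(zip(candidates, ratings_per_candidate))
    return max(scored, key=lambda pair: mean(pair[1]))[0]

# Example run mirroring the article's process: draft, critique, revise, rate.
opinions = ["lower the voting age", "keep it at 18",
            "lower it, but add civics education"]
first = draft_statement(opinions)
revised = revise_statement(first, ["mention civics education explicitly"])
winner = pick_winner([first, revised], [[4, 3, 5], [6, 5, 6]])
```

In this toy run the revised statement wins because its mean agreement rating is higher, mirroring how incorporating critiques raised support in the experiments.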

As an example, the winning statement for a group that deliberated on the question, “Should the UK provide free childcare?” began, “In general, free childcare is a good thing, but it is important to consider how it is provided and for which age groups.” The statement went on to advocate for paid parental leave for both parents for their child’s first six months, followed by either continued parental leave or paid childcare until one year of age. The statement ended with the assertion that childcare and parental leave “should be available to all parents, irrespective of gender.”

The nuanced statement, like others generated by the AI mediation system, won support by incorporating minority views into those of the majority. The researchers found that with the help of the AI mediation system, the number of groups that reached unanimous agreement on an argument increased from 22.8% to 38.6%.

Overall, the researchers concluded that “the deliberation process reduced group division,” at least in part by leading some participants to change their perspective on the issues: “Viewpoints were more likely overall to shift toward a majority view, even while minority voices continued to be heard,” they write.

In a head-to-head comparison, a different group of participants preferred statements generated by the AI mediation system over those generated by human mediators 56% of the time.

AI Mediation: Limitations and Possible Applications

The AI mediation system “offers a new approach to collective deliberation that circumvents some of the limitations of in-person deliberation, including its cost, limited scale, the potential for mediator bias, and proneness to social desirability effects or inequality of contribution,” the researchers conclude. However, the AI mediation process may lack some of the benefits of in-person discussion, they write, such as “nonverbal cues and the opportunity to build interpersonal relationships with other discussants.”

In addition, the researchers note that because the AI mediation system lacks the ability to fact-check opinions and information, stay focused on the topic at hand, or moderate the discussion, it could “generate an ill-informed or harmful output” if asked to synthesize human opinions that are based on limited or biased information.

Thus, the researchers conclude that if their system is used in the real world, it should be “embedded in a larger deliberative process, including careful selection of participants to ensure that a balanced and diverse community of stakeholders is represented in the debate.”

AI mediation may be a useful tool for human mediators to incorporate into their broader repertoire of services. Moreover, according to the authors, AI in mediation could help groups quickly and efficiently reach agreement in a variety of contexts, including contract negotiation, conflict resolution, jury deliberation, diplomacy, and legislative discussions.

What experiences have you had with AI mediation? What concerns do you have about the use of AI in mediation?
