Will LLMs become the ultimate mediators for better and for worse? DeepMind researchers and Reddit users seem to agree on that
Date:
Tue, 22 Oct 2024 20:14:10 +0000
Description:
Researchers at Google DeepMind have explored the use of AI to mediate political disputes - and the results were surprising.
FULL STORY ======================================================================
AI experts believe large language models (LLMs) could serve as mediators in scenarios where agreements can't be reached between individuals.
A recent study by researchers at Google DeepMind sought to explore the potential for LLMs to be used in this regard, particularly for resolving incendiary disputes amid today's contentious global political climate.
"Finding agreements through a free exchange of views is often difficult," the study authors noted. "Collective deliberation can be slow, difficult to scale, and unequally attentive to different voices."

Winning over the group
As part of the project, the team at DeepMind trained a series of LLMs dubbed Habermas Machines (HM) to act as mediators. These models were trained specifically to identify common, overlapping beliefs between individuals on either end of the political spectrum.
Topics covered by the LLM included divisive issues such as immigration, Brexit, minimum wages, universal childcare, and climate change.
"Using participants' personal opinions and critiques, the AI mediator iteratively generates and refines statements that express common ground among the group on social or political issues," the authors wrote.
The project also saw volunteers engage with the model, which drew upon the opinions and perspectives of each individual on certain political topics.
Summarized documents on volunteer political views were then collated by the model, which provided further context to help bridge divides.
The results were very promising, with the study revealing that volunteers rated statements made by the HM higher than statements made by human mediators on the same issues.
Moreover, after volunteers were split into groups to further discuss these topics, researchers discovered that participants were less divided on these issues after reading statements from the HMs compared to human mediator documents.
"Group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators, and received higher ratings from external judges for quality, clarity, informativeness, and perceived fairness," researchers concluded.
"AI-mediated deliberation also reduced division within groups, with participants' reported stances converging toward a common position on the issue after deliberation; this result did not occur when discussants directly exchanged views, unmediated."
The study noted that support for the majority position on certain topics increased after AI-supported deliberation. However, the HMs demonstrably incorporated minority critiques into revised statements.
What this suggests, researchers said, is that during AI-mediated deliberation, the views of groups of discussants tended to move in a similar direction on controversial issues. "These shifts were not attributable to biases in the AI, suggesting that the deliberation process genuinely aided the emergence of shared perspectives on potentially polarizing social and political issues."

AI mediation in domestic disputes can be a tricky balancing act
There are already real-world examples of LLMs being used to resolve disputes, particularly in relationships, with some users on Reddit reporting that they have turned to ChatGPT, for example.
One user reported that their partner used the chatbot every time they had a disagreement, and that this was causing friction.
"Me (25) and my girlfriend (28) have been dating for the past 8 months. We've had a couple of big arguments and some smaller disagreements recently," the user wrote. "Each time we argue my girlfriend will go away and discuss the argument with ChatGPT, even doing so in the same room sometimes."
Notably, the user found that on these occasions, their partner would come back with a well-constructed argument breaking down everything said or done during the previous argument.
It's this aspect of the situation that's caused significant tension, though.
"I've explained to her that I don't like her doing so as it can feel like I'm being ambushed with thoughts and opinions from a robot," they wrote. "It's nearly impossible for a human being to remember every small detail and break it down bit by bit, but AI has no issue doing so."

"Whenever I've voiced my upset I've been told that 'ChatGPT says you're insecure' or 'ChatGPT says you don't have the emotional bandwidth to understand what I'm saying'."
======================================================================
Link to news story:
https://www.techradar.com/pro/will-llms-become-the-ultimate-mediators-for-better-and-for-worse-deepmind-researchers-and-reddit-users-seem-to-agree-on-that
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)