July 27, 2024
Automated System Enhances Collaboration Between Humans and AI Assistants

Researchers at MIT and the MIT-IBM Watson AI Lab have developed an automated system that teaches users when to collaborate with an AI assistant. The system addresses a central challenge of human-AI teamwork: knowing when to trust an AI model's advice and when to ignore it. Through a customized onboarding process, the system helps users learn to collaborate effectively with the AI assistant by automatically generating rules about the collaboration and describing them in natural language. The researchers found that this onboarding procedure led to roughly a 5% improvement in accuracy when humans and AI collaborated on an image prediction task. The results highlight the importance of proper training when using AI tools.

Unlike many other software tools, which typically come with tutorials, AI tools rarely include training on how to use them effectively. The researchers aimed to close this gap by developing a methodological, behavioral approach to onboarding. The fully automated system learns to construct the onboarding process from data on the human and the AI performing a specific task. It can be adapted to different tasks and used in a variety of scenarios where humans and AI models work together, such as social media content moderation, writing, and programming.
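As a concrete illustration, the data such a system learns from might resemble the following minimal sketch. This is hypothetical: the record structure and field names are assumptions for illustration, not taken from the researchers' paper.

```python
# A minimal, hypothetical sketch of the per-example data such a system
# could learn from; field names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class CollaborationRecord:
    features: list        # embedding of the task input (e.g., an image)
    ai_prediction: str    # what the AI assistant predicted
    human_decision: str   # what the human ultimately decided
    ground_truth: str     # the correct answer
    relied_on_ai: bool    # whether the human accepted the AI's output

def reliance_was_correct(r: CollaborationRecord) -> bool:
    """Reliance is 'correct' when the human trusted the AI and it was
    right, or overrode the AI and it was wrong."""
    ai_was_right = r.ai_prediction == r.ground_truth
    return r.relied_on_ai == ai_was_right
```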

The researchers believe that this onboarding process will be crucial for training medical professionals. Doctors making treatment decisions with the help of AI could benefit from similar training methods. The onboarding process could reshape the way continuing medical education is conducted and how clinical trials are designed.

Existing onboarding methods for human-AI collaboration often rely on training materials produced by human experts for specific use cases, which makes them difficult to scale. Additionally, explanations in which an AI model reports its confidence in each decision have rarely proved helpful to users. The researchers' automated system overcomes these limitations by learning from data and building the onboarding process accordingly.

The system first collects data on the human and the AI performing a specific task. Using an algorithm, it identifies regions where the human collaborates with the AI incorrectly: instances where the human trusted the AI's prediction when it was wrong, or distrusted the prediction when it was right. The system then uses a large language model to describe each region as a rule in natural language. These rules are used to create training exercises in which users practice collaborating with the AI and receive feedback on their performance.
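A rough sketch of that pipeline, continuing the record structure above, might look like the following. Note the assumptions: k-means clustering stands in for the paper's actual region-finding algorithm, and `call_llm` is a hypothetical placeholder for whatever language-model API is available.

```python
# Illustrative pipeline sketch: find regions of mistaken reliance,
# describe each as a natural-language rule, and package the regions as
# practice exercises. K-means is a stand-in for the paper's actual
# region-finding algorithm; call_llm() is a hypothetical placeholder.
import numpy as np
from sklearn.cluster import KMeans

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client or API call here.
    raise NotImplementedError

def find_misreliance_regions(records, n_regions=5):
    """Cluster the examples where the human relied on the AI incorrectly."""
    mistakes = [r for r in records if not reliance_was_correct(r)]
    X = np.array([r.features for r in mistakes])
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(X)
    return [[r for r, lbl in zip(mistakes, labels) if lbl == k]
            for k in range(n_regions)]

def describe_region_as_rule(region) -> str:
    """Ask an LLM to summarize a region as a plain-language rule, e.g.
    'Ignore the AI's prediction when the image was taken at night.'"""
    examples = "\n".join(
        f"- AI said {r.ai_prediction}; truth was {r.ground_truth}"
        for r in region[:10])
    return call_llm(
        "These are cases where a person mis-relied on an AI assistant.\n"
        "State one short rule for when to trust or distrust it:\n"
        + examples)

def build_onboarding_exercises(records):
    """Turn each region into a rule plus practice items with feedback."""
    return [{"rule": describe_region_as_rule(region),
             "practice_examples": region}
            for region in find_misreliance_regions(records)]
```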

The researchers tested their system on two tasks: detecting traffic lights in blurry images and answering multiple-choice questions from various domains. Comparing several onboarding methods, they found that their procedure significantly improved users' accuracy on the traffic light task. Onboarding was less effective for question answering, likely because the AI model already provided explanations with each answer.

The researchers also concluded that giving users recommendations alone, without onboarding, hurt performance: users became confused and took more time to make predictions. Onboarding without recommendations, by contrast, improved users' accuracy without slowing them down.

In the future, the researchers plan to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also aim to leverage unlabeled data for the onboarding process and develop methods to reduce the number of regions without omitting important examples. Ultimately, the goal is to establish effective training procedures that promote successful collaboration between humans and AI assistants.
