July 8, 2024, by Josh Rhoten, Colorado State University

Collected at: https://techxplore.com/news/2024-07-ai-groups-stay-effective-classroom.html

Small teams—no matter the project they are working on—are constantly sharing information with each other about goals, obstacles and next steps. By talking through options, the group builds a shared understanding of the work. It’s a complicated and decidedly human process that can get messy quickly if one member is not paying attention or simply misunderstands a key point and fails to clarify it while the group pushes forward.

Now, researchers at Colorado State University have developed a model that could enable an artificially intelligent agent to monitor and even potentially referee those interactions to encourage better collaboration. The work is part of an ongoing effort in the field to better integrate an understanding of nonverbal behavior, speech and direct action into human-AI and human-robot collaboration scenarios. The findings are detailed in a new paper recently published in the Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation.

The paper explores the concept of “common ground tracking,” which means identifying and monitoring the shared beliefs and open questions a group holds as it talks through a task. This is a difficult challenge for an AI system: it requires carefully tracking both the verbal and nonverbal interactions within the group, which can interweave in subtle ways, as well as building an understanding of the team’s thought process over time and judging how best to intervene if the group strays off task.
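In code, that “common ground” can be thought of as a small, evolving state that the system keeps updated as the conversation unfolds. The sketch below is a minimal illustration; the class and method names are assumptions rather than the paper’s formalism.

```python
from dataclasses import dataclass, field

@dataclass
class CommonGroundState:
    """Illustrative container for a group's evolving common ground
    (hypothetical names, not the paper's formalism)."""
    facts: set = field(default_factory=set)             # propositions the group has accepted
    under_discussion: set = field(default_factory=set)  # propositions raised but not yet agreed on

    def raise_proposition(self, prop):
        # A statement like "I think the red block is 10 grams" puts a
        # proposition on the table without making it shared knowledge yet.
        self.under_discussion.add(prop)

    def accept(self, prop):
        # Once the rest of the group agrees, the proposition becomes a
        # shared belief and is no longer an open question.
        self.under_discussion.discard(prop)
        self.facts.add(prop)

    def doubt(self, prop):
        # A challenge removes a proposition from shared belief and
        # reopens it for discussion.
        self.facts.discard(prop)
        self.under_discussion.add(prop)
```

The key design point is that a proposition does not become shared knowledge merely because someone said it; it only counts as a fact once the group accepts it.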

Assistant Professor Nikhil Krishnaswamy led the work at CSU through the Department of Computer Science. His team used an experiment in which a three-person group tried to collaboratively determine the weights of a set of blocks with a balance scale. Careful transcription of the participants’ utterances during that process, such as, “I think the block goes here,” and gestures like pointing, provided a dataset of interactions that highlight collaboration in action.
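A single record in such a dataset might pair a transcribed utterance with its nonverbal context. The field names below are invented for illustration; the study’s actual annotation scheme is not specified here.

```python
# Hypothetical shape of one annotated interaction record; the field
# names are invented, not the dataset's actual schema.
record = {
    "speaker": "P2",
    "utterance": "I think the block goes here",
    "gesture": "point",          # nonverbal behavior observed on video
    "target": "red_block",       # object the gesture refers to
    "timestamp_s": 143.2,        # seconds from the start of the session
}
```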

Krishnaswamy’s team then used that data to train its deep neural model for common ground tracking. He said the model is the first instance of real-time, dynamic tracking in this kind of multiparty setting, and it could integrate with other AI systems to provide team support in many scenarios.
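Concretely, such a tracker might run as a loop: a trained classifier labels each incoming multimodal event as a dialogue move, and each move updates the group’s common-ground state. The sketch below builds on the CommonGroundState from the earlier example; the classifier interface, the move labels and the event format are illustrative assumptions, not the paper’s actual design.

```python
# Hypothetical real-time loop building on the CommonGroundState sketch
# above. The classifier interface, move labels, and event format are
# assumptions for illustration, not the paper's actual API.

def track_common_ground(events, classify_move, state):
    """Label each multimodal event (utterance plus gesture/action
    features) as a dialogue move, then update the shared state."""
    for event in events:
        move = classify_move(event)   # assumed to return one of the labels below
        if move == "STATEMENT":
            state.raise_proposition(event["prop"])
        elif move == "ACCEPT":
            state.accept(event["prop"])
        elif move == "DOUBT":
            state.doubt(event["prop"])
    return state

# Toy run mirroring the block-weighing task: one participant proposes a
# weight, another accepts it, and it becomes a shared fact.
state = CommonGroundState()
events = [
    {"prop": "weight(red_block) = 10g", "move": "STATEMENT"},
    {"prop": "weight(red_block) = 10g", "move": "ACCEPT"},
]
track_common_ground(events, lambda e: e["move"], state)
print(state.facts)   # {'weight(red_block) = 10g'}
```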

“Our model focuses on the activities in a group that signal good collaboration, such as voice, words or behaviors, which were tracked independently and in real time,” he said. “It also lets us see how propositions and statements are surfaced in the group through those behaviors and how, or if, they are accepted as fact by others. This kind of research builds needed literacy around these kinds of human-robot systems and the ways people may engage with them.”

Developing the foundation for future human-robot interaction

Krishnaswamy said most humans find it easy to ascribe beliefs and intentions to others in order to interpret or predict their behavior, but it is something AI systems still struggle with. He added that this was not solely a computational problem but instead a question of drawing on concepts from psychology and other disciplines to improve the underlying models driving the AI.

Krishnaswamy said the paper and broader topic are part of ongoing joint research through the AI Institute for Student-AI Teaming. That group is studying interrelated questions about the use of AI in the classroom, including how these systems can support educators or help elevate voices from diverse backgrounds or unique perspectives.

“The goal is not to develop an AI that gives students the right answer, but instead supports better collaboration and facilitates discussion, because that oftentimes leads to better answers,” Krishnaswamy said.

Work on the paper was done in partnership with researchers at Brandeis University through the institute. Krishnaswamy said the two universities were also working on a related project titled “Friction for Accountability in Conversational Transactions” for the Defense Advanced Research Projects Agency (DARPA). That work seeks to offer the same sort of AI support in chaotic environments such as war zones.

One example Krishnaswamy gave for that effort was an AI agent re-sharing key information that may have been missed during an exchange in a loud environment, or prompting team leaders to verify key details, such as an enemy position, that could be based on false or incomplete information. Understanding the “common ground” as it relates to group cohesion will be key for AI operating in those kinds of situations, Krishnaswamy said.

Krishnaswamy said his team will continue to improve its model over the next few months. One potential avenue of future research, he said, would be developing an understanding of how certain types of communication can bias group decision-making.

“This project is a nice combination of data-driven work and formal modeling with clear applications. Our next step is to consider contradictions or different environmental factors to see how the model reacts,” he said.

More information: Ibrahim Khalil Khebour et al., “Common Ground Tracking in Multimodal Dialogue,” Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024).
