Abstract
Recent advancements in Artificial Intelligence (AI), driven mainly by Large Language Models (LLMs), have reignited the debate regarding the extent to which machines and humans understand each other. However, establishing and verifying a shared understanding is not a trivial task, especially between agents that operate using uninterpretable or inaccessible internal representations, e.g. humans or artificial agents using deep neural networks such as LLMs. This dissertation describes the process through which two agents can learn to understand each other, provides application examples, and highlights limitations. Special focus is given to hybrid agent populations that include both humans and artificial agents.

In the first part of the dissertation, we propose tangible task-oriented definitions of understanding, where task performance is used to indirectly measure the level of shared understanding between agents. The process through which two agents learn to understand each other is captured in a framework tailored for hybrid agent populations. The framework is used to analyze studies on establishing shared (task-oriented) understanding from various fields, exhibiting its utility and generalizability.

In the second part, we provide methods for establishing shared understanding between agents and demonstrate their application. The first method shows how an artificial agent, i.e. a student, can be intuitively tutored on how to perform a task by being provided with individual task-performing examples. The second method demonstrates how two agents can describe a query to each other, using candidate results to form examples. In the demonstrated experiments, agents that operate using different ontologies establish a shared (task-oriented) understanding and answer queries that require their collaboration.

In the third part, we investigate whether shared understanding is always the reason behind successful coordination.
Our experiments show that agents that learn to communicate and coordinate based on observations with very small overlap may end up coordinating due to persistent misunderstandings, which we call successful misunderstandings. In this case, the agents use and interpret the communication signals differently. We find that strongly interconnected groups of at least three agents demonstrate an emergent property which prevents the emergence of successful misunderstandings, even under conditions of limited shared observations. The presented work advances the development of autonomous agents that can learn to perform new tasks in hybrid open multi-agent systems.
| Original language | English |
|---|---|
| Qualification | PhD |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 4 Feb 2026 |
| DOIs | |
| Publication status | Published - 4 Feb 2026 |
Title: Establishing Task-oriented Understanding Between Agents