Abstract
Debugging is notoriously difficult and time-consuming, yet essential for ensuring the reliability and quality of a software system. To reduce debugging effort and enable automated failure detection, we propose an automated testing framework for detecting failures in cognitive agent programs. Our approach is based on the assumption that modules within such programs are a natural unit for testing. We identify a minimal set of temporal operators that enable the specification of test conditions, and we show that the resulting test language is sufficiently expressive to detect all failures in an existing failure taxonomy. We also introduce an approach for specifying test templates that supports a programmer in writing tests. Finally, an empirical analysis of agent programs allows us to evaluate whether our approach, using test templates, detects all failures.
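To illustrate the idea of temporal test conditions over module executions, here is a minimal sketch (not the paper's actual test language or API; the trace fields and predicates are hypothetical): two basic temporal operators, evaluated over a finite recorded trace of an agent module's states.

```python
# Illustrative sketch only: temporal operators over a finite execution trace.
# A trace is a list of states (dicts); a condition is a predicate on one state.

def always(condition, trace):
    """Holds iff the condition is true in every state of the trace."""
    return all(condition(state) for state in trace)

def eventually(condition, trace):
    """Holds iff the condition is true in at least one state of the trace."""
    return any(condition(state) for state in trace)

# Hypothetical trace of belief states recorded while a module runs.
trace = [
    {"battery": 90, "goal_reached": False},
    {"battery": 70, "goal_reached": False},
    {"battery": 55, "goal_reached": True},
]

# Test conditions in the spirit of the approach: an invariant and a
# reachability condition on the module's execution.
assert always(lambda s: s["battery"] > 20, trace)
assert eventually(lambda s: s["goal_reached"], trace)
```

In this sketch a failed assertion flags a failure in the module under test; the paper's framework instead identifies a minimal set of such operators sufficient to express all failures in an existing failure taxonomy.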
Original language | English |
---|---|
Title of host publication | AAMAS 2016 - Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems |
Publisher | International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) |
Pages | 1237-1246 |
Number of pages | 10 |
ISBN (Electronic) | 9781450342391 |
Publication status | Published - 1 Jan 2016 |
Externally published | Yes |
Event | 15th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2016 - Singapore, Singapore. Duration: 9 May 2016 → 13 May 2016 |
Conference
Conference | 15th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2016 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 9/05/16 → 13/05/16 |
Keywords
- Multi-agent systems
- Testing
- Verification