Abstract
Deductive Mastermind is a deductive reasoning game implemented in the online educational game system Math Garden. A good understanding of the difficulty of Deductive Mastermind game instances is essential for optimizing the learning experience of players. The available empirical difficulty ratings, based on speed and accuracy, provide robust estimates but do not explain why certain game instances are easy or hard. In previous work, a logic-based model was proposed that successfully predicted these difficulty ratings. We add to this work by providing a model based on a different logical principle, namely eliminating hypotheses (dynamic epistemic logic) rather than reasoning by cases (analytical tableaux), that predicts the empirical difficulty ratings equally well. We show that the informational content of the different kinds of feedback given in game instances is a core predictor of the cognitive difficulty ratings, irrespective of the specific logic used to formalize the game.
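To make the hypothesis-elimination principle mentioned above concrete, the following is a minimal sketch, not the authors' actual model: it enumerates candidate secrets for a small Mastermind-style instance and removes those inconsistent with an observed clue. The number of colours, the game size, and the exact-match/misplaced feedback convention are illustrative assumptions, and the drop in the number of surviving hypotheses serves only as a rough stand-in for the informational content of a clue.

```python
from itertools import product

def feedback(secret, guess):
    """Return (exact, misplaced): exact = right colour in the right position,
    misplaced = right colour in the wrong position (standard Mastermind counting)."""
    exact = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in set(guess))
    return exact, common - exact

def eliminate(hypotheses, guess, observed):
    """Keep only the candidate secrets that would have produced the observed feedback."""
    return [h for h in hypotheses if feedback(h, guess) == observed]

# Hypothetical 3-pin game over 3 colours; names and sizes are illustrative only.
colours = ["red", "blue", "green"]
hypotheses = list(product(colours, repeat=3))  # all 27 candidate secrets

# One clue line: a conjectured code together with the feedback it received.
guess = ("red", "blue", "blue")
observed = (1, 0)  # 1 exact match, 0 misplaced

remaining = eliminate(hypotheses, guess, observed)
print(len(hypotheses), "->", len(remaining))  # how many hypotheses the clue rules out
```

A more informative clue leaves fewer hypotheses standing, which is the intuition behind using the informational content of feedback as a difficulty predictor.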
| Original language | English |
| --- | --- |
| Title of host publication | CogSci 2018 - 40th Annual Cognitive Science Society Meeting [Proceedings] |
| Subtitle of host publication | Changing/Minds |
| Editors | C. Kalish, M. Rau, J. Zhu |
| Publisher | Cognitive Science Society |
| Pages | 2789-2794 |
| Number of pages | 6 |
| ISBN (Electronic) | 9780991196784 |
| ISBN (Print) | 9781510872059 |
| Publication status | Published - 2018 |