TY - JOUR
T1 - Machine learning-based classification of viewing behavior using a wide range of statistical oculomotor features
AU - Kootstra, Timo
AU - Teuwen, Jonas
AU - Goudsmit, Jeroen
AU - Nijboer, Tanja
AU - Dodd, Michael
AU - Van der Stigchel, Stefan
PY - 2020/9/2
Y1 - 2020/9/2
N2 - Since the seminal work of Yarbus, multiple studies have demonstrated the influence of task-set on oculomotor behavior and the current cognitive state. In more recent years, this field of research has expanded by evaluating the costs of abruptly switching between such different tasks. At the same time, the field of classifying oculomotor behavior has been moving toward more advanced, data-driven methods of decoding data. For the current study, we used a large dataset compiled over multiple experiments and implemented separate state-of-the-art machine learning methods for decoding both cognitive state and task-switching. We found that, by extracting a wide range of oculomotor features, we were able to implement robust classifier models for decoding both cognitive state and task-switching. Our decoding performance highlights the feasibility of this approach, even invariant of image statistics. Additionally, we present a feature ranking for both models, indicating the relative magnitude of different oculomotor features for both classifiers. These rankings indicate a separate set of important predictors for decoding each task, respectively. Finally, we discuss the implications of the current approach related to interpreting the decoding results.
UR - http://www.scopus.com/inward/record.url?scp=85090272488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090272488&partnerID=8YFLogxK
U2 - 10.1167/jov.20.9.1
DO - 10.1167/jov.20.9.1
M3 - Article
C2 - 32876676
AN - SCOPUS:85090272488
SN - 1534-7362
VL - 20
SP - 1
EP - 15
JO - Journal of Vision
JF - Journal of Vision
IS - 9
ER -