You prime what you code: The fAIM model of priming of pop-out

Wouter Kruijne*, Martijn Meeter

*Corresponding author for this work

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

Our visual brain makes use of recent experience to interact with the visual world and efficiently select relevant information. This is exemplified by speeded search when target and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread rather than restricted to one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including typical stimulus dimensions such as ‘color’ and less obvious dimensions such as the ‘spikiness’ of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can arise for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer’s goals, without any representation of goals in the model. We conclude that priming is best considered a consequence of a general adaptation of the brain to visual input, and not a peculiarity of visual search.
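The abstract describes fAIM's core mechanism only in prose. As a rough illustration, the Python sketch below shows how a simple feature-gain weighting scheme can produce repeat-versus-switch asymmetries: gains on feature channels drift toward recently selected target features and away from distractor features, so a repeated target becomes more salient and a switched target less so. This is a minimal sketch under assumed names and parameters (the channel count, learning rate, and gain asymptotes are all illustrative), not the published fAIM implementation, which embeds the weighting in a full model of bottom-up vision.

```python
import numpy as np

# Toy sketch of feature-gain weighting for intertrial priming.
# All names and parameter values are illustrative assumptions,
# not taken from the published fAIM model.

N_CHANNELS = 8        # hypothetical feature channels (e.g., color hues)
RATE = 0.3            # hypothetical gain-adaptation rate
HIGH, LOW = 2.0, 0.5  # assumed asymptotes for target/distractor gains

gains = np.ones(N_CHANNELS)  # multiplicative gain per channel, start neutral

def salience(channel: int) -> float:
    """Bottom-up activation of a stimulus, scaled by its channel's gain."""
    return gains[channel]

def update_gains(target: int, distractor: int) -> None:
    """After each trial, pull the target channel's gain up and the
    distractor channel's gain down, each toward its asymptote."""
    gains[target] += RATE * (HIGH - gains[target])
    gains[distractor] += RATE * (LOW - gains[distractor])

# Repeat condition: same target (channel 0) and distractor (channel 1)
# on consecutive trials -> target salience grows, mimicking speeded search.
for trial in range(2):
    print(f"repeat trial {trial}: target salience = {salience(0):.2f}")
    update_gains(target=0, distractor=1)

# Switch condition: target and distractor features swap, so the target
# now falls on the down-weighted channel and is less salient than baseline.
print(f"after switch: target salience = {salience(1):.2f}")
```

Run as-is, this prints a rising salience for the repeated target (1.00, then 1.30) and a below-baseline salience (roughly 0.74) once the features swap, which is the qualitative signature of priming of pop-out that the paper models.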

Original language: English
Article number: e0187556
Journal: PLoS ONE
Volume: 12
Issue number: 11
DOI: 10.1371/journal.pone.0187556
Publication status: Published - 1 Nov 2017
