Extrafoveal attentional capture by object semantics

Antje Nuthmann, Floor de Groot, Falk Huettig, Christian N.L. Olivers

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
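
The "(generalized) linear mixed-effects models" mentioned in the abstract refer to analyses in which relatedness condition enters as a fixed effect and observers (and items) contribute random effects. As a purely illustrative sketch of that kind of model, not the authors' analysis code, the following Python/statsmodels snippet fits a linear mixed model of first-fixation latency; the data file and column names are hypothetical:

    # Illustrative sketch only: model first-fixation latency as a function of
    # relatedness condition, with a random intercept per observer.
    # File and column names (fixation_latencies.csv, subject, condition,
    # latency) are hypothetical, not taken from the published study.
    import pandas as pd
    import statsmodels.formula.api as smf

    fixations = pd.read_csv("fixation_latencies.csv")

    # Treatment-code the condition factor with "unrelated" as the reference
    # level, so the semantic and visual effects are estimated relative to
    # unrelated objects.
    fixations["condition"] = pd.Categorical(
        fixations["condition"],
        categories=["unrelated", "semantic", "visual"],
    )

    model = smf.mixedlm(
        "latency ~ condition",        # fixed effect of relatedness condition
        data=fixations,
        groups=fixations["subject"],  # random intercept per observer
    )
    result = model.fit()
    print(result.summary())

A fully crossed random-effects structure over observers and items, as is standard for such designs, is more conveniently specified in dedicated mixed-model packages; this sketch keeps only the by-observer random intercept for brevity.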

Original language: English
Article number: e0217051
Journal: PLoS ONE
Volume: 14
Issue number: 5
DOI: 10.1371/journal.pone.0217051
Publication status: Published - 1 Jan 2019

Cite this

Nuthmann, Antje; de Groot, Floor; Huettig, Falk; Olivers, Christian N.L. / Extrafoveal attentional capture by object semantics. In: PLoS ONE. 2019; Vol. 14, No. 5, e0217051.
@article{7a246e962db94f22b778e7eacd329572,
title = "Extrafoveal attentional capture by object semantics",
author = "Antje Nuthmann and {de Groot}, Floor and Falk Huettig and Olivers, {Christian N.L.}",
year = "2019",
month = "1",
day = "1",
doi = "10.1371/journal.pone.0217051",
language = "English",
volume = "14",
pages = "e0217051",
journal = "PLoS ONE",
issn = "1932-6203",
publisher = "Public Library of Science",
number = "5",

}
