Cross-modal associations between vision, touch, and audition influence visual search through top-down attention, not bottom-up capture

Emily Orchard-Mills, David Alais, Erik Van der Burg

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

Recently, Guzman-Martinez, Ortega, Grabowecky, Mossbridge, and Suzuki (Current Biology: CB, 22(5), 383-388, 2012) reported that observers could systematically match auditory amplitude modulations and tactile amplitude modulations to visual spatial frequencies, proposing that these cross-modal matches produced automatic attentional effects. Using a series of visual search tasks, we investigated whether informative auditory, tactile, or bimodal cues can guide attention toward a visual Gabor of matched spatial frequency (among others with different spatial frequencies). These cues improved visual search for some but not all frequencies. Auditory cues improved search only for the lowest and highest spatial frequencies, whereas tactile cues were more effective and frequency specific, although less effective than visual cues. Importantly, although tactile cues could produce efficient search when informative, they had no effect when uninformative. This suggests that cross-modal frequency matching occurs at a cognitive rather than sensory level and, therefore, influences visual search through voluntary, goal-directed behavior, rather than automatic attentional capture. © 2013 Psychonomic Society, Inc.
Original language: English
Pages (from-to): 1892-1905
Journal: Attention, Perception, & Psychophysics
Volume: 75
Issue number: 8
Publication status: Published - Nov 2013
Externally published: Yes
