How humans combine simultaneous proprioceptive and visual position information

R.J. van Beers, A.C. Sittig, J.J. Denier van der Gon

    Research output: Contribution to journal › Article › Academic › peer-review

    Abstract

    To study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table, concealing it from view. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual, or both proprioceptive and visual information. We analysed the variance of the indicated positions in the various conditions and compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results disagree with the model: the variance in the condition with both visual and proprioceptive information is smaller than expected from the variances in the other conditions. This means that the available information was integrated in a highly effective way. Furthermore, the results suggest that additional information was used. This might have been visual information about body parts other than the fingertip, or visual information about the environment.
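
    The "most effective" integration the model refers to is presumably the standard minimum-variance (maximum-likelihood) combination of two independent, unbiased cues. As a sketch under that assumption, with \sigma_V^2 and \sigma_P^2 denoting the visual and proprioceptive position variances (illustrative notation, not necessarily the paper's):

    \hat{x}_{VP} = w_V \hat{x}_V + w_P \hat{x}_P, \qquad w_V = \frac{\sigma_P^2}{\sigma_V^2 + \sigma_P^2}, \qquad w_P = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_P^2}

    \sigma_{VP}^2 = \frac{\sigma_V^2 \, \sigma_P^2}{\sigma_V^2 + \sigma_P^2} \le \min\left(\sigma_V^2, \sigma_P^2\right)

    Under these assumptions \sigma_{VP}^2 is a lower bound on the combined variance achievable from the two cues alone; observing a variance below even this bound is what leads the authors to infer that additional information was used.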
    Original language: English
    Pages (from-to): 253-261
    Number of pages: 9
    Journal: Experimental Brain Research
    Volume: 111
    Issue number: 2
    DOIs
    Publication status: Published - 1996
