Abstract
To study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table, concealing it from view. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual, or both proprioceptive and visual information. We analysed the variance of the indicated positions in the various conditions and compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results are in disagreement with the model: the variance in the condition with both visual and proprioceptive information is smaller than expected from the variances in the other conditions. This means that the available information was integrated in a highly effective way. Furthermore, the results suggest that additional information was used. This information might have been visual information about body parts other than the fingertip, or visual information about the environment.
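As a point of reference for the prediction the abstract refers to, the sketch below shows the standard minimum-variance (inverse-variance-weighted) cue-combination rule that is commonly taken to define "the most effective way" of integrating two independent position estimates. The formula and the numerical values are assumptions for illustration, not the paper's data; `predicted_combined_variance` is a hypothetical helper name.

```python
# Minimal sketch, assuming the "most effective" integration model is the
# standard minimum-variance combination of two independent estimates:
#   var_VP = var_V * var_P / (var_V + var_P)
# This predicted variance is never larger than the smaller single-cue variance.

def predicted_combined_variance(var_visual: float, var_proprioceptive: float) -> float:
    """Variance predicted by optimal (inverse-variance-weighted) integration."""
    return (var_visual * var_proprioceptive) / (var_visual + var_proprioceptive)

if __name__ == "__main__":
    # Hypothetical example values (not taken from the paper): a visual-only
    # variance of 4 and a proprioceptive-only variance of 6 give a predicted
    # combined variance of 2.4. An observed combined variance below this
    # prediction would be the kind of disagreement the abstract reports.
    var_v, var_p = 4.0, 6.0
    print(predicted_combined_variance(var_v, var_p))  # -> 2.4
```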
| Original language | English |
|---|---|
| Pages (from-to) | 253-261 |
| Number of pages | 9 |
| Journal | Experimental Brain Research |
| Volume | 111 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1996 |