Spatial Perception

As we move through the world, new visual, auditory, vestibular, and somatosensory inputs are continuously presented to the brain. Given such constantly changing input, it is remarkable how easily we are able to keep track of where things are. We can reach for an object, or look at it, or even kick it without making a conscious effort to assess its location in space. Spatial perception is the faculty that allows us to do so. Sensory and motor information are used together to construct an internal representation of the space we perceive. The nature of this representation and the neural mechanisms underlying it have become a topic of great interest in cognitive neuroscience (Stein 1992; Milner and Goodale 1995). Both neuropsychological studies in humans and neurophysiological studies in nonhuman primates have yielded important insights into how the brain builds spatial representations (Colby 1998).

The dramatic impairments of spatial perception that result from damage to the parietal lobe indicate that this part of cortex plays a critical role in spatial functioning. The most striking of these deficits is the tendency to ignore or neglect objects in particular regions of space. A patient with hemineglect as a result of a right parietal lobe lesion may fail to notice objects on the left, including food on the left side of a plate or words on the left side of a page. Two aspects of neglect are particularly interesting with respect to spatial perception and representation.

First, neglect occurs in all sensory modalities (Bisiach and Vallar 1988). The multimodal nature of neglect indicates that what has been damaged is not simply a set of sensory maps but a high-level, supramodal representation of space. Second, neglect occurs with respect to a variety of spatial reference frames. A patient with right parietal damage is typically unaware of objects on the left side of space, but "left" may be defined with respect to the body, with respect to the line of sight, or with respect to the object to which the patient is attending. Moreover, these spatial impairments are dynamic, changing from moment to moment in accord with changes in body posture (Moscovitch and Behrmann 1994) and task demands (Behrmann and Tipper 1994). (See VISUAL NEGLECT and MODELING NEUROPSYCHOLOGICAL DEFICITS.)

The variety of deficits observed following parietal lobe damage in humans suggests that parietal cortex contains more than one kind of spatial representation. To understand more precisely how parietal cortex contributes to spatial perception and action, several groups of investigators have recorded from single neurons in alert monkeys trained to perform spatial tasks. Physiologists have sought to specify the sensory and motor conditions under which parietal neurons are activated, using tasks that typically require a hand or eye movement toward a visual target. This work in monkeys has provided direct evidence that parietal cortex contains multiple representations of space (Colby and Duhamel 1991, 1996). Parietal areas differ in their inputs, in the modalities and stimulus features represented, and in their outputs, projecting to separate regions of frontal cortex and subcortical structures, such as the superior colliculus. Of particular interest are parietal areas that contain bimodal somatosensory and visual neurons. These are obvious candidates for integrating information from different modalities into unified representations of space. The specific response properties in these areas and their projections to premotor cortex suggest that one function of parietal cortex is to perform the sensory-to-motor-coordinate transformations required for generating action. Parietal projections to posterior cingulate and parahippocampal cortex may also provide the sensory information essential to the construction of allocentric spatial maps in the HIPPOCAMPUS (O'Keefe and Nadel 1978). The emerging consensus is that neurons in separate areas within parietal cortex encode object location relative to a variety of reference frames (Arbib 1991; Colby and Duhamel 1991, 1996; Jeannerod et al. 1995; Rizzolatti, Fogassi, and Gallese 1997; Gross and Graziano 1995; Olson and Gettner 1995). (See MULTISENSORY INTEGRATION and VISUAL PROCESSING STREAMS.)

Contrasting types of spatial representations exist in two adjacent areas in monkey parietal cortex, the ventral intraparietal area (VIP) and the lateral intraparietal area (LIP). Area VIP, in the fundus of the intraparietal sulcus, is distinguished from neighboring parietal areas by a preponderance of direction-selective visual neurons (Colby, Duhamel, and Goldberg 1993). In this respect, VIP neurons resemble those in other dorsal stream visual areas that process stimulus motion, especially areas MT and MST (see MOTION, PERCEPTION OF). An unexpected finding in VIP is that the majority of these visual neurons also respond to somatosensory stimuli, such as light touch (Colby and Duhamel 1991; Duhamel, Colby, and Goldberg 1998). These neurons are truly bimodal, in the sense that they can be driven equally well by either a visual or a somatosensory stimulus. Most VIP neurons have somatosensory receptive fields restricted to the head and face. The somatosensory and visual receptive fields of individual neurons match in location, in size, and even in their preferred direction of motion.

The observation of matching visual and somatosensory receptive fields raises the question of what happens to the relative locations of these fields in a single cell when the eyes move. If the visual receptive field were simply retinotopic, it would move when the eyes do. And if the somatosensory receptive field were purely somatotopic, it would be unchanged by eye position. There could not be a consistent correspondence in location if both receptive fields were defined solely with respect to the receptor surfaces. The answer is that some VIP visual receptive fields shift their location on the retina when the eyes move (Colby et al. 1993; Duhamel et al. 1997). A neuron that responds best to a visual stimulus approaching the mouth, and that has a somatosensory receptive field around the mouth, responds best to a stimulus moving on a trajectory toward the mouth regardless of where the monkey is fixating. This indicates that both the visual and somatosensory receptive fields are defined with respect to the skin surface. The receptive fields of these VIP neurons are head-centered: they respond to a certain portion of the skin surface and to the visual stimulus aligned with it, no matter what part of the retina is activated. In sum, these neurons encode bimodal sensory information in a head-centered representation of space. Similar neurons are found in the specific region of premotor cortex to which area VIP projects (Fogassi et al. 1992).
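The coordinate logic of a head-centered receptive field can be made concrete with a small sketch. This is an illustration of the geometry only, not a model of the neural computation, and all numbers are hypothetical: the point is that the retinal location of a head-centered receptive field must shift by exactly the negative of any change in eye position.

```python
# Illustrative sketch of a head-centered receptive field (hypothetical values).
# A VIP-like neuron responds to stimuli near a fixed location on the head
# (e.g., the mouth), so the retinal position of its receptive field must
# change whenever the eyes move.

def retinal_rf_center(head_centered_rf, eye_position):
    """Retinal coordinates of a head-centered receptive field.

    head_centered_rf: (x, y) of the receptive field relative to the head, in degrees.
    eye_position: (x, y) direction of gaze relative to the head, in degrees.
    """
    return (head_centered_rf[0] - eye_position[0],
            head_centered_rf[1] - eye_position[1])

# Receptive field anchored near the mouth, 10 degrees below the head's midline.
mouth_rf = (0.0, -10.0)

# Looking straight ahead, the receptive field falls 10 degrees below the fovea.
print(retinal_rf_center(mouth_rf, (0.0, 0.0)))    # (0.0, -10.0)

# After shifting gaze 15 degrees left, the same head-centered field now
# occupies a patch of retina 15 degrees to the right of where it was.
print(retinal_rf_center(mouth_rf, (-15.0, 0.0)))  # (15.0, -10.0)
```

A purely retinotopic receptive field would correspond to holding the retinal coordinates fixed and letting the head-centered location drift with every eye movement; the VIP neurons described above behave in the opposite way.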

In contrast to area VIP, area LIP contains an eye-centered (oculocentric) representation of space. Neurons in area LIP have retinotopic receptive fields and are activated by sensory, motor, and cognitive events related to the receptive field (Andersen et al. 1990; Goldberg, Colby, and Duhamel 1990; Robinson, Goldberg, and Stanton 1978). Activity in LIP cannot be characterized as reflecting a simple sensory or motor signal. Rather, LIP neurons encode salient spatial locations; activity reflects the degree to which spatial attention has been allocated to the location of the receptive field (Colby, Duhamel, and Goldberg 1995; Colby and Duhamel 1996).

Neural representations of space are maintained over time, and the brain must solve the problem of updating them when a receptor surface is moved. Every time we move our eyes, each object in our surroundings activates a new set of retinal neurons. Despite this constant change, we experience the world as stable. This perceptual stability has long been understood to reflect the fact that what we perceive is not a direct impression of the external world but a construction, or internal representation, of it. It is this internal representation that is updated in conjunction with eye movements. Neurons in area LIP contribute to updating the internal image (Duhamel, Colby, and Goldberg 1992a). The experiment illustrated in figure 1 shows that an LIP neuron is activated when the monkey makes an eye movement that brings the receptive field to a screen location that previously contained a stimulus. The neuron's response is to the memory trace of the earlier stimulus: no stimulus was ever physically present in the receptive field, either before or after the saccade. The explanation for this surprising response is that the memory trace of the stimulus was updated, or remapped, at the time of the saccade. Nearly all LIP neurons show evidence of remapping the memory trace of a stimulus from the coordinates of the initial eye position to the coordinates of the final eye position.

The significance of this result is in what it tells us about spatial representation in area LIP. It shows that the internal image is dynamic and is always centered on the current position of the fovea. Instead of creating a spatial representation in purely retinotopic coordinates, tied solely to the specific neurons initially activated by the stimulus, area LIP constructs a representation in eye-centered coordinates. The distinction is a subtle one but very important for generating accurate spatial behavior. Maintaining visual information in eye-centered coordinates tells the monkey not just where the stimulus was on the retina when it first appeared but where it would be now, in relation to the current position of the fovea, if it were still visible. The result is that the monkey always has accurate spatial information with which it could program an eye movement toward a real or remembered target. Compared to a head-centered or world-centered representation, an eye-centered representation has the advantage that it is already in the coordinates of an effector system that could be used to acquire the target visually. Studies of patients indicate that the process of remapping and the construction of an eye-centered representation are selectively impaired by parietal lobe damage (Duhamel et al. 1992b; Heide et al. 1995). (See OCULOMOTOR CONTROL and AFFORDANCES.)
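The updating just described amounts, in the simplest terms, to a vector subtraction in eye-centered coordinates. The following sketch uses hypothetical numbers and is a simplification of whatever the population of LIP neurons actually computes; it shows only the coordinate bookkeeping.

```python
# Illustrative sketch of eye-centered remapping (hypothetical values).
# An LIP-like representation stores a target's location relative to the
# current fixation point. At each saccade, the stored vector is updated
# by subtracting the saccade vector, so the representation always answers
# "where would the target be now, relative to the fovea, if still visible?"

def remap(eye_centered_target, saccade_vector):
    """Update an eye-centered target location across one saccade (degrees)."""
    return (eye_centered_target[0] - saccade_vector[0],
            eye_centered_target[1] - saccade_vector[1])

# A stimulus flashes 10 degrees to the right of fixation, then disappears.
target = (10.0, 0.0)

# The monkey makes a 10-degree rightward saccade. The remembered target is
# remapped to the fovea: a saccade programmed from this representation would
# still acquire the (now vanished) target accurately.
target = remap(target, (10.0, 0.0))
print(target)  # (0.0, 0.0)
```

Because the stored vector is always expressed relative to the current gaze direction, it can be handed directly to the oculomotor system without any further coordinate transformation, which is the advantage over head- or world-centered storage noted above.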

Figure 1 Remapping of memory trace activity in area LIP. Responses of one LIP neuron in three conditions. Left panel: during fixation, the neuron responds to the onset of a stimulus in the receptive field. Center: response following a saccade that moves the receptive field onto a stimulus. Right: response following a saccade that moves the receptive field onto a previously stimulated location. The stimulus is presented for only 50 msec and is extinguished before the saccade begins so that no stimulus is ever physically present in the receptive field. The response is to a memory trace that has been remapped from the coordinates of the initial eye position to those of the final eye position. Modified from Duhamel et al. 1992a.

Our current understanding of the neural basis of spatial perception can be summarized as follows. First, parietal cortex contains multiple representations of space. These are instantiated in several discrete areas that have been defined on the basis of anatomical connections and neuronal response properties. Second, parietal neurons in each of these areas have complex response profiles, with sensitivity to multiple stimulus dimensions and, often, multiple stimulus modalities. Single neurons exhibit motor and cognitive activity in addition to sensory responses. Third, the spatial representation in each area can best be understood in terms of the effector system to which it is related: area VIP is most strongly connected to premotor regions controlling head movements, while area LIP projects to oculomotor structures. Finally, spatial representations in parietal cortex are dynamically updated in conjunction with self-generated movements. The primary insight gained from neuropsychological and neurophysiological studies of parietal cortex is that our unitary experience of space emerges from a diversity of spatial representations.

-- Carol L. Colby

References


Andersen, R. A., R. M. Bracewell, S. Barash, J. W. Gnadt, and L. Fogassi. (1990). Eye position effects on visual, memory and saccade-related activity in areas LIP and 7a of macaque. J. Neurosci. 10:1176-1196.

Arbib, M. (1991). Interaction of multiple representations of space in the brain. In J. Paillard, Ed., Brain and Space. Oxford: Oxford University Press, pp. 379-403.

Behrmann, M., and S. P. Tipper. (1994). Object-based attentional mechanisms: Evidence from patients with unilateral neglect. In C. Umilta and M. Moscovitch, Eds., Attention and Performance, vol. 15. Cambridge, MA: MIT Press, pp. 351-375.

Bisiach, E., and G. Vallar. (1988). Hemineglect in humans. In F. Boller and J. Grafman, Eds., Handbook of Neuropsychology, vol. 1. Amsterdam: Elsevier, pp. 195-222.

Colby, C. L. (1998). Action-oriented spatial reference frames in cortex. Neuron 20:1-10.

Colby, C. L., and J. -R. Duhamel. (1991). Heterogeneity of extrastriate visual areas and multiple parietal areas in the macaque monkey. Neuropsychologia 29:517-537.

Colby, C. L., and J. -R. Duhamel. (1996). Spatial representations for action in parietal cortex. Cogn. Brain Res. 5:105-115.

Colby, C. L., J. -R. Duhamel, and M. E. Goldberg. (1993). Ventral intraparietal area of the macaque: Anatomic location and visual response properties. J. Neurophysiol. 69:902-914.

Colby, C. L., J. -R. Duhamel, and M. E. Goldberg. (1995). Oculocentric spatial representation in parietal cortex. Cereb. Cortex 5:470-481.

Colby, C. L., J. -R. Duhamel, and M. E. Goldberg. (1996). Visual, presaccadic and cognitive activation of single neurons in monkey lateral intraparietal area. J. Neurophysiol. 76:2841-2852.

Duhamel, J. -R., F. Bremmer, S. BenHamed, and W. Graf. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature 389:845-848.

Duhamel, J. -R., C. L. Colby, and M. E. Goldberg. (1992a). The updating of the representation of visual space in parietal cortex by intended eye movements. Science 255:90-92.

Duhamel, J. -R., C. L. Colby, and M. E. Goldberg. (1998). Ventral intraparietal area of the macaque: Convergent visual and somatic response properties. J. Neurophysiol. 79:126-136.

Duhamel, J. -R., M. E. Goldberg, E. J. FitzGibbon, A. Sirigu, and J. Grafman. (1992b). Saccadic dysmetria in a patient with a right frontoparietal lesion: The importance of corollary discharge for accurate spatial behavior. Brain 115:1387-1402.

Fogassi, L., V. Gallese, G. di Pellegrino, L. Fadiga, M. Gentilucci, G. Luppino, M. Matelli, A. Pedotti, and G. Rizzolatti. (1992). Space coding by premotor cortex. Exp. Brain Res. 89:686-690.

Goldberg, M. E., C. L. Colby, and J. -R. Duhamel. (1990). The representation of visuomotor space in the parietal lobe of the monkey. Cold Spring Harbor Symp. Quant. Biol. 55:729-739.

Gross, C. G., and M. S. A. Graziano. (1995). Multiple representations of space in the brain. Neuroscientist 1:43-50.

Heide, W., M. Blankenburg, E. Zimmermann, and D. Kömpf. (1995). Cortical control of double-step saccades: Implications for spatial orientation. Ann. Neurol. 38:739-748.

Jeannerod, M., M. A. Arbib, G. Rizzolatti, and H. Sakata. (1995). Grasping objects: The cortical mechanisms of visuomotor transformation. Trends Neurosci. 18:314-320.

Milner, A. D., and M. A. Goodale. (1995). The Visual Brain in Action. Oxford: Oxford University Press.

Moscovitch, M., and M. Behrmann. (1994). Coding of spatial information in the somatosensory system: Evidence from patients with neglect following parietal lobe damage. J. Cogn. Neurosci. 6:151-155.

O'Keefe, J., and L. Nadel. (1978). The Hippocampus as a Cognitive Map. Oxford: Clarendon Press.

Olson, C. R., and S. N. Gettner. (1995). Object-centered direction selectivity in the macaque supplementary eye field. Science 269:985-988.

Rizzolatti, G., L. Fogassi, and V. Gallese. (1997). Parietal cortex: From sight to action. Curr. Opin. Neurobiol. 7:562-567.

Robinson, D. L., M. E. Goldberg, and G. B. Stanton. (1978). Parietal association cortex in the primate: Sensory mechanisms and behavioral modulation. J. Neurophysiol. 41:910-932.

Stein, J. F. (1992). The representation of egocentric space in the posterior parietal cortex. Behav. Brain Sci. 15:691-700.

Further Readings

Andersen, R. A., L. H. Snyder, D. C. Bradley, and J. Xing. (1997). Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu. Rev. Neurosci. 20:303-330.

Bremmer, F., J.-R. Duhamel, S. Ben Hamed, and W. Graf. (1997). The representation of movement in near extra-personal space in the macaque ventral intraparietal area (VIP). In P. Thier and O. Karnath, Eds., Contribution of the Parietal Lobe to Orientation in Three-Dimensional Space. Berlin: Springer, pp. 619-631.

Caminiti, R., S. Ferraina, and P. B. Johnson. (1996). The sources of visual information to the primate frontal lobe: A novel role for the superior parietal lobule. Cerebral Cortex 6:319-328.

Farah, M. J., J. L. Brunn, A. B. Wong, M. A. Wallace, and P. A. Carpenter. (1990). Frames of reference for allocating attention to space. Neuropsychologia 28:335-347.

Karnath, H. O., P. Schenkel, and B. Fischer. (1991). Trunk orientation as the determining factor of the 'contralateral' deficit in the neglect syndrome and as the physical anchor of the internal representation of body orientation in space. Brain 114:1997-2014.

O'Keefe, J. (1993). Hippocampus, theta and spatial memory. Curr. Opin. Neurobiol. 3:917-924.

Olson, C. R., and S. N. Gettner. (1996). Brain representation of object-centered space. Curr. Opin. Neurobiol. 6:165-170.

Rizzolatti, G., L. Riggio, and B. M. Sheliga. (1994). Space and selective attention. In C. Umilta and M. Moscovitch, Eds., Attention and Performance, vol. 15. Cambridge, MA: MIT Press, pp. 231-265.

Sakata, H., M. Taira, A. Murata, and S. Mine. (1995). Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cerebral Cortex 5:429-438.

Tipper, S. P., C. Lortie, and G. C. Baylis. (1992). Selective reaching: Evidence for action-centered attention. J. Exp. Psychol. Hum. Percept. Perform. 18:891-905.