Waves collection

Project for a nonexistent piece of art
 Giovanni Casu



Art as pure experience.
Art as a pure flux of information through multiple sensory channels, combined and processed.

In this piece, all physical objects are instruments/tools that deliver input to different sensory channels of the human body. The first experience: all the stimuli are generated at the same moment, and the intensity of each signal/input rises from very low to a quite intense peak. The peak is held for 3 seconds in every 20-second cycle; during the remaining 17 seconds the inputs increase or decrease slowly. The experience produces an artificial synesthetic feeling and, once developed, allows research into all the potential expressive, “linguistic”, and artistic combinations of the different stimuli. After several simultaneous, rhythmic “experiences”, the subject can also “analyze” his own conscious feelings: the role of attention and awareness, how different stimuli dynamically interact with each other, time effects, memory, and so on.
The experience is not about communicating “something” but about building a new kind of communication system based on the integration of different sensory channels.
The system has multiple possible developments: all the stimuli can be set to a particular concept, such as “yellow” or “rhythm”, or to more complex combinations, such as a visual stimulus paired with a physical object touching the “perceiver”. These combinations can create a new language with an infinity of applications.
Aesthetic experiences can be prepared with special combinations (artists welcome!) and then proposed to the public.
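The timing scheme described above (a 3-second peak in every 20-second cycle, with slow ramps in between) can be sketched as a single intensity envelope that every channel samples, so that all stimuli peak together. This is a minimal sketch; the constants and function names are hypothetical, not part of the piece's specification.

```python
CYCLE = 20.0           # seconds per cycle
PEAK = 3.0             # seconds the peak is held
LOW, HIGH = 0.05, 1.0  # intensity range (arbitrary units)

def intensity(t: float) -> float:
    """Shared intensity envelope sampled by every sensory channel.

    Each 20 s cycle ramps slowly up, holds a 3 s peak, then ramps down,
    so all stimuli (video, audio, taste, smell, touch...) peak together.
    """
    phase = t % CYCLE
    ramp = (CYCLE - PEAK) / 2  # 8.5 s rising, 8.5 s falling
    if phase < ramp:                                    # slow rise
        return LOW + (HIGH - LOW) * phase / ramp
    if phase < ramp + PEAK:                             # 3 s peak
        return HIGH
    return LOW + (HIGH - LOW) * (CYCLE - phase) / ramp  # slow fall
```

Because every output device reads the same envelope, the simultaneity of the stimuli is guaranteed by construction rather than by synchronizing separate timers.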

List of stimuli:

1) sight (ophthalmoception): video glasses deliver a basic or more complex visual stimulus (a 3-second movie, or just pure yellow light, for example)

2) hearing (audioception): headphones deliver a basic or more complex audio stimulus (3 seconds of opera, or just a pure tone, for example)

3) taste (gustaoception): a drop of flavored liquid is delivered simultaneously with the other stimuli

4) smell (olfacoception or olfacception): a sprayed olfactory signal is delivered simultaneously with the other stimuli

5) touch (tactioception)

6) temperature (thermoception)

7) pain (nociception)

8) balance (equilibrioception)

9) electric signal

10) kinesthetic sense (proprioception)

Sight or vision is the capability of the eye(s) to focus and detect images of visible light on photoreceptors in the retina of each eye, generating electrical nerve impulses for varying colors, hues, and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light. There is some disagreement as to whether this constitutes one, two or three senses. Neuroanatomists generally regard it as two senses, given that different receptors are responsible for the perception of color and brightness. Perception of depth using both eyes also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain, where patterns and objects in images are recognized and interpreted based on previously learned information; this is called visual memory. The inability to see is called blindness.
Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications.
Hearing or audition is the sense of sound perception. Hearing is all about vibration. Mechanoreceptors located in the inner ear turn motion into electrical nerve pulses. Since sound is vibrations propagating through a medium such as air, the detection of these vibrations, that is, the sense of hearing, is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[3] with substantial variation between individuals. Hearing at high frequencies declines with an increase in age. Inability to hear is called deafness. Sound can also be detected as vibrations conducted through the body by tactition; frequencies lower than can be heard are detected this way.
Taste (or, the more formal term, gustation; adjectival form: "gustatory") is one of the traditional five senses. It refers to the capability to detect the taste of substances such as food, certain minerals, and poisons. The sense of taste is often confused with the "sense" of flavour, which is a combination of taste and smell perception. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. The sensation of taste can be categorized into five basic tastes: sweetness, bitterness, sourness, saltiness and umami. Other tastes such as calcium [4] and free fatty acids [5] may be other basic tastes but have yet to receive widespread acceptance. The recognition and awareness of umami is a relatively recent development in Western cuisine. MSG produces a strong umami taste, so much so that it is said to taste soapy by itself.
Smell or olfaction is the other "chemical" sense. Unlike taste, there are hundreds of olfactory receptors (388 according to one source[8]), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what we perceive as the molecule's smell. In the brain, olfaction is processed by the olfactory system. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[9]
Touch or somatosensation, also called tactition or mechanoreception, is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[10] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
Balance and acceleration
Vestibular system
Balance, equilibrioception, or vestibular sense is the sense that allows an organism to sense body movement, direction, and acceleration, and to attain and maintain postural equilibrium and balance. The organ of equilibrioception is the vestibular labyrinthine system found in both of the inner ears. In technical terms, this organ is responsible for two senses, rotational (angular) acceleration and linear acceleration (which also senses gravity), but they are known together as equilibrioception.
The vestibular nerve conducts information from sensory receptors in three ampullae that sense motion of fluid in three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (small crystals of calcium carbonate) that provide the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
Thermoception is the sense of heat and the absence of heat (cold) by the skin, including internal skin passages, or, rather, the heat flux (the rate of heat flow) in these areas. There are specialized receptors for cold (declining temperature) and for heat. The cold receptors play an important part in the dog's sense of smell, indicating wind direction. The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
Kinesthetic sense
Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[11]
Nociception (physiological pain) signals nerve-damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain. The main function of pain is to warn us about dangers. For example, humans avoid touching a sharp needle or hot object or extending an arm beyond a safe limit because it hurts, and thus is dangerous. Without pain, people could do many dangerous things without realizing it.
General introduction
Multi-modal perception is a scientific term that describes how humans form coherent, valid, and robust perception by processing sensory stimuli from various modalities. Surrounded by multiple objects and receiving multiple sensory stimulations, the brain is faced with the decision of how to categorize the stimuli resulting from different objects or events in the physical world. The nervous system is thus responsible for whether to integrate or segregate certain groups of temporally coincident sensory signals based on the degree of spatial and structural congruence of those stimulations. Multi-modal perception has been widely studied in cognitive science, behavioral science, and neuroscience.
Stimuli and Sensory modalities
A stimulus has four attributes: modality, intensity, location, and duration. Different cortical areas handle different modalities: the visual, auditory, and sensorimotor cortices transform low-level sensory signals into high-level features through their mapped sensory systems (the visual, auditory, somatosensory, gustatory, and olfactory systems).
Binding Problem
The relationship between the binding problem and multimodal perception can be framed as that of question and solution. The binding problem is a series of questions about how humans generate a unified, coherent percept of the physical world. It was investigated initially in the visual domain (color, motion, depth, and form), then in the auditory domain, and more recently across multiple cortical areas.
In the visual domain, if color, motion, depth, and form are processed independently, where does the unified, coherent conscious experience of the visual world come from? This is known as the binding problem and is usually studied entirely within visual processing; however, it is clear that the binding problem is central to multisensory perception.
However, considerations of how unified conscious representations are formed are not the full focus of multimodal integration research. It is obviously important for the senses to interact in order to maximize how efficiently people interact with the environment. For perceptual experience and behavior to benefit from the simultaneous stimulation of multiple sensory modalities, integration of the information from these modalities is necessary. Some of the mechanisms mediating this phenomenon and its subsequent effects on cognitive and behavioural processes will be examined hereafter. Perception is often defined as one's conscious experience, and thereby combines inputs from all relevant senses and prior knowledge. Perception is also defined and studied in terms of feature extraction, which is several hundred milliseconds away from conscious experience. Notwithstanding the existence of Gestalt psychology schools that advocate a holistic approach to the operation of the brain, the physiological processes underlying the formation of percepts and conscious experience have been vastly understudied. Nevertheless, burgeoning neuroscience research continues to enrich our understanding of the many details of the brain, including neural structures implicated in multisensory integration such as the superior colliculus (SC) and various cortical structures such as the superior temporal gyrus (STG) and visual and auditory association areas. Although the structure and function of the SC are well known, the cortex and the relationship between its constituent parts are presently the subject of much investigation. Concurrently, the recent impetus on integration has enabled investigation into perceptual phenomena such as the ventriloquism effect, rapid localization of stimuli and the McGurk effect, culminating in a more thorough understanding of the human brain and its functions.
Studies of sensory processing in humans and other animals have traditionally been performed one sense at a time, and to the present day numerous academic societies and journals largely restrict themselves to considering sensory modalities separately ('Vision Research', 'Hearing Research', etc.). However, while there is also a long and parallel history of multisensory research (e.g., as early as Stratton's (1897) experiments on the somatosensory effects of wearing vision-distorting prism glasses), this research field has recently gained enormously in interest and popularity.
Examples of spatial and structural congruence
When we hear a car honk, we determine which car triggered it by identifying the car we see that is spatially closest to the sound; this is an example of spatial congruence combining visual and auditory stimuli. The sound and pictures of a TV program, on the other hand, are integrated on the basis of structural congruence between visual and auditory stimuli. If the sound and the pictures do not fit meaningfully, however, we segregate the two stimuli. Whether stimuli are spatially or structurally congruent therefore depends not only on their combination but also on understanding.

Theories and approaches
Visual dominance
The literature on spatial crossmodal biases suggests that the visual modality often biases information from the other senses. A study by Landam, S. and Ulrik, B. concludes that when the degree of spatial congruency is varied, vision dominates what we hear. This behavior is also known as the ventriloquism effect.
Modality appropriateness
Welch and Warren (1980) asserted that multisensory processing follows a modality appropriateness hypothesis: owing to visual dominance in spatial tasks (also known as visual capture), one will always depend on vision over audition or tactition to solve spatial problems, and auditory stimuli therefore cannot influence one's perception of the location of a visual stimulus. Audition, in turn, was considered dominant in temporal tasks.
However, more recent studies have produced results that contradict this hypothesis. Alais and Burr (2004) found that, as the quality of a visual stimulus was progressively degraded, participants' perception of spatial location was determined progressively more by a simultaneous auditory cue. A related study reached a similar finding but also progressively varied the temporal uncertainty of the auditory cue, concluding that it is the uncertainty of the individual modalities that determines to what extent information from each modality is weighted when forming a percept. This conclusion is similar in some respects to the 'inverse effectiveness rule': the extent to which multisensory integration occurs may vary with the ambiguity of the relevant stimuli.
Optimal integration
Bayesian integration
The theory of Bayesian integration is based on the fact that the brain must deal with a number of inputs, which vary in reliability.[2] In dealing with these inputs, it must construct a coherent representation of the world that corresponds to reality. The Bayesian integration view is that the brain uses a form of Bayesian inference.[3] This view has been backed up by computational modeling of such a Bayesian inference from signals to coherent representation, which shows similar characteristics to integration in the brain.[3]
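The idea can be illustrated with the standard Gaussian case, in which Bayesian integration reduces to inverse-variance weighting of the cues. This is a generic textbook sketch, not the specific computational model of the cited studies; the numbers are arbitrary.

```python
def fuse(mu_v, var_v, mu_a, var_a):
    """Fuse two Gaussian cues by inverse-variance (reliability) weighting."""
    w_v, w_a = 1.0 / var_v, 1.0 / var_a           # reliabilities
    mu = (w_v * mu_v + w_a * mu_a) / (w_v + w_a)  # reliability-weighted mean
    var = 1.0 / (w_v + w_a)                       # fused variance
    return mu, var

# A sharp visual cue at 0 deg (variance 1.0) and a blurry auditory cue at
# 10 deg (variance 4.0): the fused estimate sits near the reliable visual
# cue, and its variance is below that of either cue alone.
print(fuse(0.0, 1.0, 10.0, 4.0))  # → (2.0, 0.8)
```

Note that the fused variance is always smaller than either input variance, which is the sense in which the combined representation is more reliable than any single modality.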
Cue combination vs. causal inference models
Under the assumption of independence between the sources, the traditional cue-combination model successfully describes modality integration. However, depending on the discrepancy between modalities, stimulus fusion can take different forms: full integration, partial integration, and segregation. To account for the latter two, a causal inference model is needed that drops the independence assumption of the cue-combination model. This freedom yields a general combination of any number of signals and modalities, using Bayes' rule to infer the causal structure behind the signals.
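A toy version of such a causal inference model can be written directly from Bayes' rule over two hypotheses: one shared cause versus two independent causes. The priors, variances, and the numerical integration grid below are illustrative assumptions, not values from any cited study.

```python
import math

def normpdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(x_v, x_a, var_v, var_a, var_prior=100.0, prior_c=0.5):
    """Posterior probability that visual and auditory cues share one cause."""
    grid = [s / 10.0 for s in range(-600, 601)]  # hidden source locations
    ds = 0.1
    # C = 1: both cues are noisy readings of the same hidden source s.
    like_c = sum(normpdf(x_v, s, var_v) * normpdf(x_a, s, var_a)
                 * normpdf(s, 0.0, var_prior) for s in grid) * ds
    # C = 2: each cue has its own independent hidden source.
    like_v = sum(normpdf(x_v, s, var_v) * normpdf(s, 0.0, var_prior)
                 for s in grid) * ds
    like_a = sum(normpdf(x_a, s, var_a) * normpdf(s, 0.0, var_prior)
                 for s in grid) * ds
    like_i = like_v * like_a
    # Bayes' rule over the two causal hypotheses.
    return prior_c * like_c / (prior_c * like_c + (1 - prior_c) * like_i)

# Nearby cues favor a common cause (integration); far-apart cues favor
# independent causes (segregation).
print(p_common(0.0, 1.0, 1.0, 1.0) > p_common(0.0, 15.0, 1.0, 1.0))
```

The posterior over the causal structure is what lets the model interpolate between full integration, partial integration, and segregation as the discrepancy between the cues grows.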
Hierarchical vs. non-hierarchical models
The difference between the two models is that the hierarchical model can explicitly infer the cause of a given stimulus, while the non-hierarchical model can only predict the joint probability of stimuli. However, the hierarchical model is actually a special case of the non-hierarchical model in which the joint prior is set to a weighted average of the priors over common and independent causes, each weighted by its prior probability. Given this correspondence, the hierarchical model can be described as a mixture model of the non-hierarchical kind.
The independence of likelihoods and priors
In a Bayesian model, the prior and the likelihood generally represent the statistics of the environment and the sensory representations, respectively. The independence of priors and likelihoods is not guaranteed, since the prior may covary with the likelihood through the representations. However, this independence has been demonstrated by Shams in a series of parameter-controlled multisensory perception experiments.
Principles of multisensory integration
The contributions of Barry Stein, Alex Meredith, and their colleagues (e.g., Stein & Meredith 1993) are widely considered to be the groundbreaking work in the modern field of multisensory integration. Through detailed long-term study of the neurophysiology of the superior colliculus, they distilled three general principles by which multisensory integration may best be described.
•    The spatial rule[4] states that multisensory integration is more likely or stronger when the constituent unisensory stimuli arise from approximately the same location.
•    The temporal rule[5] states that multisensory integration is more likely or stronger when the constituent unisensory stimuli arise at approximately the same time.
•    The principle of inverse effectiveness[6] states that multisensory integration is more likely or stronger when the constituent unisensory stimuli evoke relatively weak responses when presented in isolation.
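The three principles can be caricatured with a toy firing-rate model. The saturating gain term below is an arbitrary assumption chosen only to reproduce the qualitative pattern (superadditive enhancement for weak inputs, little or none for strong or non-coincident inputs); it is not a model of real SC neurons.

```python
def multisensory_response(r_v, r_a, spatially_coincident=True,
                          temporally_coincident=True):
    """Toy firing-rate model illustrating the three integration principles."""
    # Spatial and temporal rules: no enhancement without coincidence.
    if not (spatially_coincident and temporally_coincident):
        return max(r_v, r_a)
    combined = r_v + r_a
    # Inverse effectiveness: a saturating gain makes weak inputs combine
    # superadditively, while strong inputs approach simple summation.
    gain = 1.0 + 1.0 / (1.0 + combined / 10.0)
    return combined * gain

weak_gain = multisensory_response(2, 2) / (2 + 2)        # large enhancement
strong_gain = multisensory_response(40, 40) / (40 + 40)  # near-additive
```

Here the enhancement ratio for the weak pair is markedly larger than for the strong pair, which is the qualitative signature of inverse effectiveness.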
Perceptual and behavioral consequences
A unimodal approach dominated scientific literature until the beginning of this century. Although this enabled rapid progression of neural mapping, and an improved understanding of neural structures, the investigation of perception remained relatively stagnant. The recent revitalized enthusiasm into perceptual research is indicative of a substantial shift away from reductionism and toward gestalt methodologies. Gestalt theory, dominant in the late 19th and early 20th centuries espoused two general principles: the ‘principle of totality’ in which conscious experience must be considered globally, and the ‘principle of psychophysical isomorphism’ which states that perceptual phenomena are correlated with cerebral activity. These ideas are particularly relevant in the current climate and have driven researchers to investigate the behavioural benefits of multisensory integration.
Decreasing sensory uncertainty
It has been widely acknowledged that uncertainty in sensory domains results in an increased dependence on multisensory integration (Alais & Burr, 2004). Hence, it follows that cues from multiple modalities that are both temporally and spatially synchronous are viewed neurally and perceptually as emanating from the same source. The degree of synchrony required for this 'binding' to occur is currently being investigated with a variety of approaches. It should be noted that integration occurs only up to the point at which the subject can differentiate the cues as two distinct stimuli. Concurrently, a significant intermediate conclusion can be drawn from the research thus far: multisensory stimuli that are bound into a single percept are also bound on the same receptive fields of multisensory neurons in the SC and cortex (Alais & Burr, 2004).
Increasing stimulus detection
Decreasing reaction time
Responses to multiple simultaneous sensory stimuli can be faster than responses to the same stimuli presented in isolation. Hershenson (1962) presented a light and tone simultaneously and separately, and asked human participants to respond as rapidly as possible to them. As the asynchrony between the onsets of both stimuli was varied, it was observed that for certain degrees of asynchrony, reaction times were decreased. These levels of asynchrony were quite small, perhaps reflecting the temporal window that exists in multimodal neurons of the SC. Further studies have analysed the reaction times of saccadic eye movements (Hughes et al., 1994); and more recently correlated these findings to neural phenomena (Wallace, 2004).
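Part of such a speed-up can arise from purely statistical facilitation, often formalized as a race model: the response is triggered by whichever modality finishes first. The simulation below is a sketch with hypothetical reaction-time distributions, not data from the cited experiments.

```python
import random

def simulate_mean_rt(trials=20000, seed=1):
    """Race-model sketch: on bimodal trials the response is triggered by
    whichever modality finishes first, so redundant (light + tone) trials
    are faster on average than either modality alone."""
    random.seed(seed)
    light, tone, both = [], [], []
    for _ in range(trials):
        rt_light = random.gauss(220, 30)  # ms, hypothetical unimodal RT
        rt_tone = random.gauss(200, 30)   # ms, hypothetical unimodal RT
        light.append(rt_light)
        tone.append(rt_tone)
        both.append(min(rt_light, rt_tone))  # first finisher wins the race
    mean = lambda xs: sum(xs) / len(xs)
    return mean(light), mean(tone), mean(both)

rt_light, rt_tone, rt_both = simulate_mean_rt()
# The bimodal mean falls below both unimodal means even with no neural
# interaction at all; violations of this race-model bound are what argue
# for genuine multisensory integration.
```

This is why empirical studies compare observed bimodal reaction times against the race-model prediction rather than against the unimodal means alone.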
Multisensory illusions
McGurk effect
It has been found that two converging bimodal stimuli can produce a perception that is not only different in magnitude than the sum of its parts, but also quite different in quality. In a classic study labeled the McGurk effect,[7] a person’s phoneme production was dubbed with a video of that person speaking a different phoneme. The end result was the perception of a third, different phoneme. McGurk and MacDonald (1976) explained that phonemes such as ba, da, ka, ta, ga and pa can be divided into four groups, those that can be visually confused, i.e. (da, ga, ka, ta) and (ba and pa), and those that can be audibly confused. Hence, when ba – voice and ga lips are processed together, the visual modality sees ga or da, and the auditory modality hears ba or da, combining to form the percept da.
Ventriloquism effect
Ventriloquism has been used as evidence for the modality appropriateness hypothesis. Ventriloquism describes the situation in which auditory location perception is shifted toward a visual cue. The original study describing this phenomenon was conducted by Howard and Templeton (1966), after which several studies have replicated and built upon their conclusions.[8] In conditions in which the visual cue is unambiguous, visual capture reliably occurs. Thus, to test the influence of sound on perceived location, the visual stimulus must be progressively degraded.[9] Furthermore, given that auditory stimuli are more attuned to temporal changes, recent studies have tested the ability of temporal characteristics to influence the spatial location of visual stimuli. Some types of EVP (electronic voice phenomenon), mainly those using sound bubbles, are considered a kind of modern ventriloquism technique and are produced with sophisticated software, computers, and sound equipment.
Double-flash illusion
The double-flash illusion was the first reported illusion to show that visual stimuli can be qualitatively altered by audio stimuli.[10] In the standard paradigm, participants are presented combinations of one to four flashes accompanied by zero to four beeps. They are then asked to say how many flashes they perceived. Participants perceived illusory flashes when there were more beeps than flashes. fMRI studies have shown that there is crossmodal activation in early, low-level visual areas, which was qualitatively similar to the perception of a real flash. This suggests that the illusion reflects subjective perception of the extra flash.[11] Further, studies suggest that the timing of multimodal activation in unisensory cortices is too fast to be mediated by higher-order integration, suggesting feed-forward or lateral connections.[12] One study has revealed the same effect but from vision to audition, as well as fission rather than fusion effects, although the level of the auditory stimulus was reduced to make it less salient for those illusions affecting audition.[13]
Rubber hand illusion
In the rubber hand illusion (RHI) (Botvinick & Cohen, 1998), human participants view a dummy hand being stroked with a paintbrush, while they feel a series of identical brushstrokes applied to their own hand, which is hidden from view. If this visual and tactile information is applied synchronously, and if the visual appearance and position of the dummy hand is similar to one's own hand, then people may feel that the touches on their own hand are coming from the dummy hand, and even that the dummy hand is, in some way, their own hand. This is an early form of body transfer illusion. The RHI is an illusion of vision, touch, and posture (proprioception), but a similar illusion can also be induced with touch and proprioception (Ehrsson, Holmes, & Passingham, 2005). The very first report of this kind of illusion may have been as early as 1937 (Tastevin, 1937).
Body transfer illusion
Body transfer illusion typically involves the use of virtual reality devices to induce the illusion in the subject.
Neural mechanisms
Superior colliculus
The superior colliculus (SC) or optic tectum (OT) is part of the tectum, located in the midbrain, superior to the brainstem and inferior to the thalamus. It contains seven layers of alternating white and grey matter, of which the superficial layers contain topographic maps of the visual field, and the deeper layers contain overlapping spatial maps of the visual, auditory and somatosensory modalities (Affifi & Bergman, 2005). The structure receives afferents directly from the retina, as well as from various regions of the cortex (primarily the occipital lobe), the spinal cord and the inferior colliculus. It sends efferents to the spinal cord, cerebellum, thalamus and occipital lobe via the lateral geniculate nucleus (LGN). The structure contains a high proportion of multisensory neurons and plays a role in the motor control of orientation behaviours of the eyes, ears and head.[14]
Receptive fields from somatosensory, visual and auditory modalities converge in the deeper layers to form a two-dimensional multisensory map of the external world. Here, objects straight ahead are represented caudally and objects on the periphery are represented rostrally. Similarly, locations in superior sensory space are represented medially, and inferior locations are represented laterally (Stein & Meredith 1993).
However, in contrast to simple convergence, the SC integrates information to create an output that differs from the sum of its inputs. Following a phenomenon labelled the ‘spatial rule’, neurons are excited if stimuli from multiple modalities fall on the same or adjacent receptive fields, but are inhibited if the stimuli fall on disparate fields (Giard & Peronnet, 1999). Excited neurons may then proceed to innervate various muscles and neural structures to orient an individual’s behaviour and attention toward the stimulus. Neurons in the SC also adhere to the ‘temporal rule’, in which stimulation must occur within close temporal proximity to excite neurons. However, due to the varying processing time between modalities and the relatively slower speed of sound to light, it has been found the neurons may be optimally excited when stimulated some time apart (Miller & D’Esposito, 2005).
Single neurons in the macaque putamen have been shown to have visual and somatosensory responses (Graziano & Gross, 1994/1995) closely related to those in the polysensory zone of the premotor cortex and area 7b in the parietal lobe.
Cortical areas
Multisensory neurons exist in a large number of locations, often integrated with unimodal neurons. They have recently been discovered in areas previously thought to be modality specific, such as the somatosensory cortex; as well as in clusters at the borders between the major cerebral lobes, such as the occipito-parietal space and the occipito-temporal space (Wallace, Ramachandran & Stein, 2004; Wallace, 2004).
However, in order to undergo such physiological changes, there must exist continuous connectivity between these multisensory structures. It is generally agreed that information flow within the cortex follows a hierarchical configuration (Clavagnier, Falchier & Kennedy, 2004). Hubel and Wiesel (as cited in Clavagnier et al., 2004) showed that receptive fields and thus the function of cortical structures, as one proceeds out from V1 along the visual pathways, become increasingly complex and specialized. From this it was postulated that information flowed outwards in a feed forward fashion; the complex end products eventually binding to form a percept. However, via fMRI and intracranial recording technologies, it has been observed that the activation time of successive levels of the hierarchy does not correlate with a feed forward structure. That is, late activation has been observed in the striate cortex, markedly after activation of the prefrontal cortex in response to the same stimulus (Foxe & Simpson, 2002).
Complementing this, afferent nerve fibres have been found that project to early visual areas such as the lingual gyrus from late in the dorsal (action) and ventral (perception) visual streams, as well as from the auditory association cortex (Macaluso, Frith & Driver, 2000). Feedback projections have also been observed in the opossum directly from the auditory association cortex to V1.[15] This last observation highlights a point of controversy within the neuroscientific community. Sadato et al. (2004) concluded, in line with Bernstein et al. (2002), that the primary auditory cortex (A1) was functionally distinct from the auditory association cortex, in that it was devoid of any interaction with the visual modality. They hence concluded that A1 would not be affected by cross-modal plasticity. This concurs with Jones and Powell’s (1970) contention that primary sensory areas are connected only to other areas of the same modality.
In contrast, the dorsal auditory pathway, projecting from the temporal lobe, is largely concerned with processing spatial information, and contains receptive fields that are topographically organized. Fibers from this region project directly to neurons governing corresponding receptive fields in V1.[15] The perceptual consequences of this have not yet been empirically established. However, it can be hypothesized that these projections may be the precursors of increased acuity and emphasis of visual stimuli in relevant areas of perceptual space. Consequently, this finding rejects Jones and Powell’s (1970) hypothesis and thus conflicts with Sadato et al.’s (2004) findings. One resolution to this discrepancy is the possibility that primary sensory areas cannot be classified as a single group, and thus may differ from one another far more than previously thought. Regardless, further research is necessary for a definitive resolution.
Frontal lobe
Area F4 in macaques
Area F5 in macaques
Polysensory zone of premotor cortex (PZ) in macaques
Occipital lobe
Primary visual cortex (V1)
Lingual gyrus in humans
Lateral occipital complex (LOC), including lateral occipital tactile visual area (LOtv)
Parietal lobe
Ventral intraparietal sulcus (VIP) in macaques
Lateral intraparietal sulcus (LIP) in macaques
Area 7b in macaques
Second somatosensory cortex (SII)
Temporal lobe
Primary auditory cortex (A1)
Superior temporal cortex (STG/STS/PT)
Audio-visual cross-modal interactions are known to occur in the auditory association cortex, which lies directly inferior to the Sylvian fissure in the temporal lobe (Sadato et al., 2004). Plasticity was observed in the superior temporal gyrus (STG) by Petitto et al. (2000), who found that the STG was more active during stimulation in native deaf signers than in hearing non-signers. Further research has revealed differences between the hearing and the deaf in the activation of the planum temporale (PT) in response to non-linguistic lip movements, as well as progressively increasing activation of the auditory association cortex as previously deaf participants gain hearing experience via a cochlear implant (Sadato et al., 2004).
Anterior ectosylvian sulcus (AES) in cats
Rostral lateral suprasylvian sulcus (rLS) in cats
Cortical-subcortical interactions
The most significant interaction between these two systems (corticotectal interactions) is the connection between the anterior ectosylvian sulcus (AES), which lies at the junction of the parietal, temporal and frontal lobes, and the SC. The AES is divided into three unimodal regions with multimodal neurons at the junctions between these sections (Jiang & Stein, 2003). Neurons from the unimodal regions project to the deep layers of the SC and influence the multiplicative integration effect. That is, although it can receive inputs from all modalities as normal, the SC cannot enhance or depress the effect of multimodal stimulation without input from the AES (Jiang & Stein, 2003).
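The gating role of the AES can be sketched as a minimal toy model, extending the logic of simple convergence versus enhancement. The gain values are hypothetical placeholders, not measured quantities:

```python
def sc_multimodal_output(visual, auditory, aes_intact=True):
    """Toy sketch of corticotectal gating (illustrative values only).

    The SC receives unimodal inputs regardless, but the multiplicative
    enhancement of combined stimulation requires input from the AES.
    """
    convergence = visual + auditory   # simple convergence, always present
    if not aes_intact:
        return convergence            # AES deactivated: no enhancement
    return convergence * 1.5          # AES intact: superadditive response

# Deactivating the AES leaves the summed unimodal response intact
# but abolishes the multiplicative integration effect.
print(sc_multimodal_output(1.0, 1.0))                   # enhanced
print(sc_multimodal_output(1.0, 1.0, aes_intact=False)) # convergence only
```

The sketch captures only the dissociation reported by Jiang and Stein (2003): unimodal convergence survives AES deactivation, while multiplicative enhancement does not.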
Concurrently, the multisensory neurons of the AES, although also integrally connected to unimodal AES neurons, are not directly connected to the SC. This pattern of division is reflected in other areas of the cortex, resulting in the observation that cortical and tectal multisensory systems are somewhat dissociated (Wallace, Meredith & Stein, 1993). Stein, London, Wilkinson and Price (1996) analysed the perceived luminance of an LED in the context of spatially disparate auditory distracters of various types. A significant finding was that a sound increased the perceived brightness of the light, regardless of their relative spatial locations, provided the light’s image was projected onto the fovea. Here, the apparent absence of the spatial rule further differentiates cortical and tectal multisensory neurons. Little empirical evidence exists to justify this dichotomy. Nevertheless, a division between cortical neurons governing perception and a separate subcortical system governing action (orientation behavior) is consistent with the perception-action hypothesis of the visual streams (Goodale & Milner, 1995). Further investigation into this field is necessary before any substantial claims can be made.
Dual "what" and "where" multisensory routes
Research suggests the existence of two multisensory routes, for "what" and "where". The "what" route, which identifies the identity of things, involves Brodmann area 9 in the right inferior frontal gyrus and right middle frontal gyrus, Brodmann areas 13 and 45 in the right insula-inferior frontal gyrus area, and Brodmann area 13 bilaterally in the insula. The "where" route, which detects spatial attributes, involves Brodmann area 40 in the right and left inferior parietal lobules, Brodmann area 7 in the right precuneus-superior parietal lobule, and Brodmann area 7 in the left superior parietal lobule.[16]
Development of multimodal operations
All species equipped with multiple sensory systems utilize them in an integrative manner to achieve action and perception (Stein & Meredith 1993). However, in most species, especially higher mammals, the ability to integrate develops in parallel with physical and cognitive maturity. Classically, two opposing views, which are principally modern manifestations of the nativist/empiricist dichotomy, have been put forth. The integration (empiricist) view states that at birth, sensory modalities are not at all connected. Hence, it is only through active exploration that plastic changes can occur in the nervous system to initiate holistic perceptions and actions. Conversely, the differentiation (nativist) perspective asserts that the young nervous system is highly interconnected, and that during development, modalities are gradually differentiated as relevant connections are rehearsed and the irrelevant are discarded (Lewkowicz & Kraebel, 2004).
Using the SC as a model, the nature of this dichotomy can be analysed. In the newborn cat, deep layers of the SC contain only neurons responding to the somatosensory modality. Within a week, auditory neurons begin to occur, but it is not until two weeks after birth that the first multimodal neurons appear. Further changes continue, with the arrival of visual neurons after three weeks, until the SC has achieved its fully mature structure after three to four months. In monkeys, by contrast, newborns are endowed with a significant complement of multisensory cells; however, as in cats, no integration effect is apparent until much later (Wallace, 2004). This delay is thought to be the result of the relatively slower development of cortical structures including the AES, which, as stated above, is essential for the existence of the integration effect (Jiang & Stein, 2003).
Furthermore, it was found by Wallace (2004) that cats raised in a light-deprived environment had severely underdeveloped visual receptive fields in deep layers of the SC. Although receptive field size has been shown to decrease with maturity, the above finding suggests that integration in the SC is a function of experience. Nevertheless, the existence of visual multimodal neurons, despite a complete lack of visual experience, highlights the apparent relevance of nativist viewpoints. Multimodal development in the cortex has been studied to a lesser extent; however, a similar study to that presented above was performed on cats whose optic nerves had been severed. These cats displayed a marked improvement in their ability to localize stimuli through audition, and consequently also showed increased neural connectivity between V1 and the auditory cortex (Clavagnier et al., 2004). Such plasticity in early childhood allows for greater adaptability, and thus more normal development in other areas for those with a sensory deficit.
In contrast, following the initial formative period, the SC does not appear to display any neural plasticity. Despite this, habituation and sensitisation over the long term are known to exist in orientation behaviors. This apparent plasticity in function has been attributed to the adaptability of the AES. That is, although neurons in the SC have a fixed magnitude of output per unit input, and essentially operate in an all-or-nothing fashion, the level of neural firing can be more finely tuned by variations in input from the AES.
Although there is evidence for each side of the integration/differentiation dichotomy, a significant body of evidence also exists for a combination of factors from both views. Thus, analogous to the broader nativist/empiricist argument, it is apparent that rather than a dichotomy, there exists a continuum, with the integration and differentiation hypotheses as extremes at either end.