

Humans and other animals can attend to one of multiple sounds and follow it selectively over time. The neural underpinnings of this perceptual feat remain mysterious.

Some studies have concluded that sounds are heard as separate streams when they activate well-separated populations of central auditory neurons, and that this process is largely pre-attentive. Here, we propose instead that stream formation depends primarily on temporal coherence between responses that encode various features of a sound source. Furthermore, we postulate that only when attention is directed toward a particular feature (e.g., pitch) do all other temporally coherent features of that source (e.g., timbre and location) become bound together as a stream that is segregated from the incoherent features of other sources. Experimental neurophysiological evidence in support of this hypothesis will be presented. The focus, however, will be on a computational realization of this idea and a discussion of the insights gained from simulations that disentangle complex sound sources such as speech and music.

The model consists of a representational stage of early and cortical auditory processing that creates a multidimensional depiction of various sound attributes such as pitch, location, and spectral resolution. The following stage computes a coherence matrix that summarizes the pair-wise correlations between all channels making up the cortical representation.
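To make the coherence computation concrete, the sketch below estimates such a matrix as windowed pair-wise correlation coefficients between channel responses. It is a minimal illustration, not the model's implementation: the function name coherence_matrix, the (channels × samples) input layout, the sampling rate and window length, the plain rectangular windows (standing in for the slow temporal integration of cortical processing), and the toy sinusoidal envelopes in the example are all assumptions introduced here.

```python
import numpy as np

def coherence_matrix(responses, fs=100.0, win_s=0.5):
    """Pair-wise coherence between channels of a cortical representation.

    responses : (n_channels, n_samples) array of channel outputs from
                the representational stage (assumed layout).
    fs        : sampling rate of the channel responses in Hz (assumed).
    win_s     : integration window in seconds (assumed).

    Returns the (n_channels, n_channels) matrix of correlation
    coefficients computed within each window and averaged over windows,
    so coherent channel pairs score near 1 and incoherent pairs near 0.
    """
    n_ch, n_t = responses.shape
    win = int(win_s * fs)
    n_win = n_t // win
    C = np.zeros((n_ch, n_ch))
    for k in range(n_win):
        seg = responses[:, k * win:(k + 1) * win]
        seg = seg - seg.mean(axis=1, keepdims=True)   # remove per-channel mean
        norm = np.linalg.norm(seg, axis=1, keepdims=True)
        norm[norm == 0] = 1.0                         # guard silent channels
        seg = seg / norm                              # unit-norm rows
        C += seg @ seg.T                              # correlations in this window
    return C / max(n_win, 1)

# Toy usage: two channels share a 4 Hz amplitude envelope (temporally
# coherent); a third is modulated independently at 7 Hz (incoherent).
t = np.arange(0, 2.0, 1.0 / 100.0)
env_a = 0.5 * (1.0 + np.sin(2 * np.pi * 4 * t))
env_b = 0.5 * (1.0 + np.sin(2 * np.pi * 7 * t))
resp = np.vstack([env_a, env_a, env_b]) + 0.05 * np.random.randn(3, t.size)
print(coherence_matrix(resp).round(2))  # high (0,1) entry; low entries for channel 2
```

On such a matrix, channel pairs whose entries stay high would be candidates for binding into a single stream, while low entries mark features belonging to competing sources.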
