There exist two different approaches to self-organizing maps (SOMs).
One approach, rooted in theoretical neuroscience, uses SOMs as computational models of biological cortex.
The other approach, taken in computer science and engineering, views SOMs as tools suitable to perform, for example, data visualization and pattern classification tasks.
While the former approach emphasizes fidelity to neurobiological data, the latter stresses computational efficiency and effectiveness.
In the research reported here, I developed and studied a class of SOMs that incorporates the multiple, simultaneous winner nodes implicit in many biologically-oriented SOMs, but determines the winners using the same efficient one-shot algorithm employed by computationally-oriented, single-winner SOMs.
This was achieved by generalizing single-winner SOMs, using localized competitions.
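The core idea can be illustrated with a minimal sketch. The map size, region layout, and parameter values below are hypothetical choices made for illustration, not the configuration studied in this work: the map is partitioned into local competition regions, each region selects its own winner with the same one-shot nearest-node search used by single-winner SOMs, and every winner triggers a standard neighborhood-weighted update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 1-D map of 12 nodes split into two localized
# competition regions, so each input yields two simultaneous winners.
n_nodes, dim = 12, 3
weights = rng.random((n_nodes, dim))
coords = np.arange(n_nodes, dtype=float)        # node positions on the map
regions = [np.arange(0, 6), np.arange(6, 12)]   # localized competitions

def step(weights, x, lr=0.2, sigma=1.5):
    """One one-shot training step: a winner per region, then a
    Gaussian-neighborhood move of the weights toward the input."""
    dists = np.linalg.norm(weights - x, axis=1)
    for region in regions:
        winner = region[np.argmin(dists[region])]   # one-shot winner search
        h = np.exp(-((coords - coords[winner]) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

for _ in range(200):
    weights = step(weights, rng.random(dim))
```

Because each region's winner pulls its own neighborhood toward the input, adjacent regions can develop separate topographic orderings of the same input space, which is the setting in which mirror-symmetric map pairs can arise.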
The resulting one-shot multi-winner SOM was found to support the formation of multiple adjacent, mirror-symmetric topographic maps.
It constitutes the first computational model of mirror-image map formation, and raises questions about the role of Hebbian-type synaptic changes in the formation of mirror-symmetric maps that are often observed in the sensory neocortex of many species, including humans.
The model unexpectedly predicted the occasional occurrence of adjacent, rotationally symmetric maps.
It is natural to speculate that such atypically oriented maps might contribute to abnormal cortical information processing in some neurodevelopmental disorders.
Traditional SOMs are not directly applicable to problems in which the inputs are not single patterns but temporal sequences of patterns.
Several SOM extensions have been proposed as a remedy, but there is no standard for processing temporal sequences with SOMs.
I focused on the task of learning unique spatial representations for non-trivial sets of temporal sequences.
The one-shot multi-winner SOM extended by temporally-asymmetric Hebbian synapses proved effective when applied to this task.
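The principle behind temporally-asymmetric Hebbian synapses can be sketched as follows. All sizes, parameter values, and the function name `respond` are illustrative assumptions, not the trained network described here: lateral connections are strengthened only in the direction from the previous winner to the current one, so the node that wins at each step depends on both the current input and the recent past, giving different sequences different winner trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: feedforward weights plus temporally-asymmetric
# lateral weights that bias the current competition toward nodes that
# previously followed the last winner.
n_nodes, dim = 10, 4
W = rng.random((n_nodes, dim))        # feedforward (input) weights
L = np.zeros((n_nodes, n_nodes))      # lateral weights, L[i, j]: i -> j

def respond(seq, lr_lat=0.05, alpha=0.5):
    """Return the winner trajectory for a sequence of input vectors,
    strengthening asymmetric lateral links along the way."""
    prev, path = None, []
    for x in seq:
        score = -np.linalg.norm(W - x, axis=1)
        if prev is not None:
            score = score + alpha * L[prev]   # context from the last winner
        win = int(np.argmax(score))
        if prev is not None:
            L[prev, win] += lr_lat            # pre-before-post strengthening
        prev = win
        path.append(win)
    return path
```

The asymmetry is essential: a symmetric Hebbian rule would make the lateral influence order-blind, whereas here two sequences containing the same patterns in different orders can drive different trajectories and thus different spatial representations.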
The learned representations retained information about sequence similarity.
The feature maps that formed show that temporal sequence processing and map formation are not mutually exclusive.
Since the sequence processing one-shot multi-winner SOM was trained with phonetic transcriptions of spoken words, the results can be related to the internalization of spoken words during language acquisition.
A final redesign of the network, followed by multi-objective optimization of its parameters with a genetic algorithm, produced a more effective system.
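A minimal sketch of such multi-objective parameter search is given below. The two objective functions are toy stand-ins for the real performance measures, and the parameter ranges, population size, and mutation scale are all illustrative assumptions: candidate parameter settings are compared by Pareto dominance, and dominating candidates are preferentially selected and mutated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical genome: two network parameters (learning rate in [0, 1],
# neighborhood width in [0.1, 5]). The objectives below are toy
# stand-ins for the real, conflicting performance measures.
def objectives(g):
    lr, sigma = g
    f1 = (lr - 0.3) ** 2        # toy objective 1, to be minimized
    f2 = (sigma - 2.0) ** 2     # toy objective 2, to be minimized
    return np.array([f1, f2])

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and better somewhere."""
    return bool(np.all(a <= b) and np.any(a < b))

pop = rng.uniform([0.0, 0.1], [1.0, 5.0], size=(20, 2))
for _ in range(50):
    scores = [objectives(g) for g in pop]
    children = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)
        parent = pop[i] if dominates(scores[i], scores[j]) else pop[j]
        child = parent + rng.normal(0.0, 0.05, size=2)   # Gaussian mutation
        children.append(np.clip(child, [0.0, 0.1], [1.0, 5.0]))
    pop = np.array(children)
```

Dominance-based selection avoids collapsing the objectives into a single weighted score, which matters when the performance measures of the network genuinely trade off against one another.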