BIOLOGICALLY-INFORMED COMPUTATIONAL MODELS OF HARMONIC SOUND DETECTION AND IDENTIFICATION
Harmonic sounds, or harmonic components of sounds, are often fused into a single percept by the auditory system. Although the exact neural mechanisms of harmonic sensitivity remain unclear, it presumably arises in the auditory cortex, because subcortical neurons typically prefer only a single frequency. Pitch-sensitive units and harmonic template units found in awake marmoset auditory cortex are sensitive to temporal and spectral periodicity, respectively. This thesis studies possible computational mechanisms underlying cortical harmonic selectivity.
To examine whether harmonic selectivity reflects statistical regularities of natural sounds, simulated auditory nerve responses to natural sounds were analyzed with principal component analysis and, for comparison, independent component analysis. These analyses yielded harmonic-sensitive model units whose population distribution, measured by harmonic-selectivity metrics, resembles that of real cortical neurons. This result suggests that the variability of cortical harmonic selectivity may provide an efficient population representation of natural sounds.
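The PCA/ICA comparison can be sketched in a toy form. The snippet below stands in for the simulated auditory nerve responses with Gaussian-bump harmonic spectra and decomposes them with scikit-learn; every dimension, frequency range, and parameter here is an illustrative assumption, not the thesis pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)

# Toy stand-in for simulated auditory nerve responses: each row is a
# spectral profile of a harmonic complex with a random fundamental.
n_sounds, n_channels = 500, 64
freqs = np.linspace(100.0, 6400.0, n_channels)
X = np.zeros((n_sounds, n_channels))
for i in range(n_sounds):
    f0 = rng.uniform(100.0, 800.0)
    for k in range(1, 9):  # first 8 harmonics, 1/k amplitude roll-off
        X[i] += np.exp(-((freqs - k * f0) ** 2) / (2 * 50.0 ** 2)) / k
X += 0.01 * rng.standard_normal(X.shape)  # measurement noise

# The two linear decompositions compared in the thesis.
pca = PCA(n_components=10).fit(X)
ica = FastICA(n_components=10, random_state=0, max_iter=1000).fit(X)

# Each row of components_ is a model "receptive field" over frequency;
# harmonic structure in the data induces periodic weight patterns.
print(pca.components_.shape)  # (10, 64)
print(ica.components_.shape)  # (10, 64)
```

With real cochlear-model responses in place of this toy matrix, the same two calls produce the model units whose harmonic-selectivity distribution the thesis compares against cortical data.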
Several network models of spectral selectivity mechanisms are investigated. As a side study, adding synaptic depletion to an integrate-and-fire model could explain the observed modulation-sensitive units, which are related to pitch-sensitive units, but this model cannot account for precise temporal regularity. When a feed-forward network is trained to detect harmonics, the result is always a sieve, excited by integer multiples of the fundamental frequency and inhibited by half-integer multiples. The sieve persists across a wide variety of conditions, including changed evaluation criteria, incorporation of Dale’s principle, and the addition of a hidden layer. A recurrent network trained by Hebbian learning produces harmonic-selective units through a novel dynamical mechanism that can be explained by a Lyapunov function favoring inputs that match the learned frequency correlations. These model neurons show sieve-like weights, like the harmonic template units, when probed with random harmonic stimuli, despite there being no sieve pattern anywhere in the network’s weights.
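The sieve result can be illustrated with a minimal stand-in: a single logistic-regression "unit" trained to detect one fundamental against half-integer-shifted distractors tends to develop weights that are larger near integer multiples of f0 than near half-integer multiples. All stimulus parameters below are assumptions, and this toy is far simpler than the networks studied in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
freqs = np.linspace(50.0, 2000.0, 128)  # assumed frequency axis

def spectrum(peaks):
    # Sum of Gaussian bumps at the given peak frequencies, with jitter.
    s = np.zeros_like(freqs)
    for p in peaks:
        pj = p * rng.normal(1.0, 0.01)
        s += np.exp(-((freqs - pj) ** 2) / (2 * 20.0 ** 2))
    return s

f0 = 200.0
# Positives: harmonics of f0. Negatives: half-integer multiples of f0.
pos = [spectrum([k * f0 for k in range(1, 8)]) for _ in range(200)]
neg = [spectrum([(k + 0.5) * f0 for k in range(1, 8)]) for _ in range(200)]
X = np.vstack(pos + neg) + 0.05 * rng.standard_normal((400, 128))
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=2000).fit(X, y)
w = clf.coef_.ravel()
# Sieve-like pattern: the weight near 400 Hz (an integer multiple of f0)
# exceeds the weight near 300 Hz (a half-integer multiple).
print(w[np.argmin(abs(freqs - 400))] > w[np.argmin(abs(freqs - 300))])
```

The thesis's deeper point is that the recurrent Hebbian network reproduces this sieve-like response profile dynamically, without any such pattern appearing in its weight matrix.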
Online stimulus design has the potential to facilitate future experiments on nonlinear sensory neurons. We accelerated the sound-from-texture algorithm to enable online adaptive experimental design that maximizes the activity of sparsely responding cortical units. We calculated the optimal stimuli for harmonic-selective units and investigated a model-based information-theoretic method for stimulus optimization.
Fundamentals
Volume 1 establishes the foundations of this new field. It walks through all the steps, from data collection, summarization, and clustering to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are examined with respect to their resource requirements and how to enhance scalability on diverse computing architectures, ranging from embedded systems to large computing clusters.
Using Interior Point Methods for Large-scale Support Vector Machine training
Support Vector Machines (SVMs) are powerful machine learning techniques for classification
and regression, but the training stage involves a convex quadratic optimization program
that is most often computationally expensive. Traditionally, active-set methods have been
used rather than interior point methods, due to the Hessian in the standard dual formulation
being completely dense. But as active-set methods are essentially sequential, they may not
be adequate for machine learning challenges of the future. Additionally, training time may be
limited, or data may grow so large that cluster-computing approaches need to be considered.
Interior point methods have the potential to answer these concerns directly. They scale
efficiently, they can provide good early approximations, and they are suitable for parallel
and multi-core environments. To apply them to SVM training, it is necessary to address
directly the most computationally expensive aspect of the algorithm. We therefore present an
exact reformulation of the standard linear SVM training optimization problem that exploits
separability of terms in the objective. By so doing, per-iteration computational complexity
is reduced from O(n³) to O(n). We show how this reformulation can be applied to many
machine learning problems in the SVM family.
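The per-iteration saving comes from never forming the dense dual Hessian. A generic version of this idea is the Sherman–Morrison–Woodbury identity: when the interior-point linear system has the form (D + ZZᵀ)x = r with a positive diagonal D and a thin n × d factor Z (as in the linear-SVM dual, where the Hessian is low-rank), it can be solved through a d × d system instead. The sketch below is a hedged illustration of that structure, not the thesis's exact reformulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10                      # many samples, few features

Z = rng.standard_normal((n, d))      # thin factor, e.g. label-scaled data
D = rng.uniform(1.0, 2.0, n)         # positive diagonal (IPM scaling term)
r = rng.standard_normal(n)

# Naive approach: form the dense n x n matrix, O(n^3) solve per iteration.
x_dense = np.linalg.solve(np.diag(D) + Z @ Z.T, r)

# Woodbury identity: (D + Z Z^T)^-1 r in O(n d^2), touching only an
# n x d factor and one d x d solve -- the separability-based saving.
Dinv_r = r / D
Dinv_Z = Z / D[:, None]
small = np.eye(d) + Z.T @ Dinv_Z     # d x d system only
x_fast = Dinv_r - Dinv_Z @ np.linalg.solve(small, Z.T @ Dinv_r)

print(np.allclose(x_dense, x_fast))  # True
```

For fixed feature dimension d, the fast path scales linearly in the number of samples n, which is the character of the complexity reduction claimed above.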
Implementation issues relating to specializing the algorithm are explored through extensive
numerical experiments. They show that the performance of our algorithm for large dense
or noisy data sets is consistent and highly competitive, and in some cases can outperform all
other approaches by a large margin. Unlike active-set methods, performance is largely unaffected
by noisy data. We also show how, by exploiting the block structure of the augmented
system matrix, a hybrid MPI/OpenMP implementation of the algorithm enables data and
linear algebra computations to be efficiently partitioned amongst parallel processing nodes
in a clustered computing environment.
The applicability of our technique is extended to nonlinear SVMs by low-rank approximation
of the kernel matrix. We develop a heuristic designed to represent clusters using a
small number of features. Additionally, an early approximation scheme reduces the number of samples that need to be considered. Both elements improve the computational efficiency
of the training phase.
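As a sketch of the low-rank idea, the standard Nyström construction approximates the kernel matrix from a small set of landmark columns, so downstream linear algebra touches an n × m factor instead of the full n × n matrix. The thesis's cluster-based heuristic selects its representation differently; the kernel, sizes, and random landmarks below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 60                       # samples, landmark points
X = rng.standard_normal((n, 3))

def rbf(A, B, gamma=0.2):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystrom approximation: K ~= C W^+ C^T from m sampled landmark columns.
idx = rng.choice(n, m, replace=False)
C = rbf(X, X[idx])                   # n x m cross-kernel
W = C[idx]                           # m x m landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T

# Compare against the exact kernel (formed here only to measure error).
K = rbf(X, X)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(rel_err)                       # small when the kernel decays fast
```

The approximation is exact on the landmark block and degrades gracefully elsewhere, which is why a modest number of well-chosen landmarks (or cluster representatives) can stand in for the full kernel during training.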
Taken as a whole, this thesis shows that with suitable problem formulation and efficient
implementation techniques, interior point methods are a viable optimization technology to
apply to large-scale SVM training, and are able to provide state-of-the-art performance.