Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show recognition accuracy above 96%,
which compares favorably with popular statistical feature extraction methods.
Our results provide a novel demonstration of unsupervised feature acquisition
in an SNN.
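The leaky integrate-and-fire dynamics underlying the convolutional layer can be sketched in a few lines. This is a generic single-neuron LIF simulation, not the paper's implementation; the time constant, threshold, and reset values below are illustrative placeholders.

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron (Euler integration).

    Returns a binary spike train the same length as the input.
    All parameter values are illustrative, not taken from the paper.
    """
    v = v_rest
    spikes = np.zeros_like(input_current, dtype=float)
    for t, i_t in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is driven by the input current.
        v += dt * (-(v - v_rest) / tau + i_t)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes[t] = 1.0
            v = v_reset            # reset the membrane after firing
    return spikes

# A constant suprathreshold current produces regular spiking;
# zero input produces none.
train = lif_spikes(np.full(100, 80.0))
```

A stronger input current drives the membrane to threshold sooner, so spike rate encodes input intensity, which is what lets a layer of such neurons represent acoustic feature strength sparsely.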
Integrating user-centred design in the development of a silent speech interface based on permanent magnetic articulography
Abstract: A new wearable silent speech interface (SSI) based on Permanent Magnetic Articulography (PMA) was developed with the involvement of end users in the design process. Hence, desirable features such as appearance, portability, ease of use and light weight were integrated into the prototype. The aim of this paper is to address the challenges faced and the design considerations made during the development. Evaluations of both hardware and speech recognition performance are presented here. The new prototype shows performance comparable with its predecessor in terms of speech recognition accuracy (i.e. ~95% word accuracy and ~75% sequence accuracy), but significantly improved appearance, portability and hardware features in terms of miniaturization and cost.
Improving neural networks by preventing co-adaptation of feature detectors
When a large feedforward neural network is trained on a small training set,
it typically performs poorly on held-out test data. This "overfitting" is
greatly reduced by randomly omitting half of the feature detectors on each
training case. This prevents complex co-adaptations in which a feature detector
is only helpful in the context of several other specific feature detectors.
Instead, each neuron learns to detect a feature that is generally helpful for
producing the correct answer given the combinatorially large variety of
internal contexts in which it must operate. Random "dropout" gives big
improvements on many benchmark tasks and sets new records for speech and object
recognition.
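The dropout idea described above reduces to a small masking operation. A minimal sketch of inverted dropout (the now-standard variant, which rescales surviving units at training time so no change is needed at test time) follows; names and the fixed seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, train=True):
    """Inverted dropout: randomly zero a fraction p of units during
    training and rescale the survivors by 1/(1-p) so the expected
    activation is unchanged at test time."""
    if not train:
        return activations
    mask = rng.random(activations.shape) >= p   # keep with prob 1 - p
    return activations * mask / (1.0 - p)

h = np.ones(1000)
h_drop = dropout(h)            # roughly half the units are zeroed
h_test = dropout(h, train=False)   # identity at test time
```

Because each unit must remain useful under many random subsets of its peers, it cannot rely on specific co-adapted partners, which is exactly the effect the abstract describes.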
Detection of Mines in Acoustic Images using Higher Order Spectral Features
A new pattern-recognition algorithm detects approximately 90% of the mines hidden in the Coastal Systems Station Sonar 0, 1, and 3 databases of cluttered acoustic images, with about 10% false alarms. Similar to other approaches, the algorithm presented here includes processing the images with an adaptive Wiener filter (the degree of smoothing depends on the signal strength in a local neighborhood) to remove noise without destroying the structural information in the mine shapes, followed by a two-dimensional FIR filter designed to suppress noise and clutter while enhancing the target signature. A double peak pattern is produced as the FIR filter passes over mine highlight and shadow regions. Although the location, size, and orientation of this pattern within a region of the image can vary, features derived from higher order spectra (HOS) are invariant to translation, rotation, and scaling, while capturing the spatial correlations of mine-like objects. Classification accuracy is improved by combining features based on geometrical properties of the filter output with features based on HOS. The highest accuracy is obtained by fusing classification based on bispectral features with classification based on trispectral features.
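The adaptive Wiener filtering step can be illustrated with SciPy's standard implementation, which adapts the amount of smoothing to the local variance exactly as described (flat noisy regions are smoothed heavily, strong structures are preserved). The synthetic "acoustic image" below, with a bright rectangular highlight standing in for a mine signature, is entirely made up for illustration.

```python
import numpy as np
from scipy.signal import wiener

# Synthetic acoustic image: a bright mine-like highlight in Gaussian
# noise. Sizes and noise level are illustrative, not from the paper.
rng = np.random.default_rng(1)
image = rng.normal(0.0, 0.5, size=(64, 64))
image[20:30, 20:40] += 3.0          # highlight region

# Adaptive Wiener filter: smoothing strength varies with the local
# signal variance, so background noise is suppressed while the
# strong highlight is largely preserved.
denoised = wiener(image, mysize=5)
```

In the paper's pipeline this denoised image would then be passed through the 2-D FIR filter before HOS feature extraction.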
Far-field subwavelength acoustic imaging by deep learning
Seeing and recognizing an object whose size is much smaller than the
illumination wavelength is a challenging task for an observer placed in the far
field, due to the diffraction limit. Recent advances in near and far field
microscopy have offered several ways to overcome this limitation; however, they
often use invasive markers and require intricate equipment with complicated
image post-processing. On the other hand, a simple marker-free solution for
high-resolution imaging may be found by exploiting resonant metamaterial lenses
that can convert the subwavelength image information contained in the
near-field of the object to propagating field components that can reach the far
field. Unfortunately, resonant metalenses are inevitably sensitive to
absorption losses, which has so far largely hindered their practical
applications. Here, we solve this vexing problem and show that this limitation
can be turned into an advantage when metalenses are combined with deep learning
techniques. We demonstrate that combining deep learning with lossy metalenses
allows recognizing and imaging largely subwavelength features directly from the
far field. Our acoustic learning experiment shows that, despite being thirty
times smaller than the wavelength of sound, the fine details of images can be
successfully reconstructed and recognized in the far field, which is crucially
enabled by the presence of absorption. We envision applications in acoustic
image analysis, feature detection, object classification, or as a novel
noninvasive acoustic sensing tool in biomedical applications.
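The core computational idea, learning an inverse map from far-field measurements back to subwavelength source images, can be caricatured with a linear toy model. The paper trains a deep network on experimental metalens data; the sketch below uses a random forward operator, made-up dimensions, and plain least squares purely to illustrate the inverse-mapping concept.

```python
import numpy as np

# Toy stand-in for learned subwavelength imaging: learn a map from
# simulated far-field measurements back to source images. All names
# and dimensions below are invented for illustration.
rng = np.random.default_rng(2)
n_pixels, n_sensors, n_train = 25, 40, 500

# Unknown forward operator (metalens + propagation to the far field)
A = rng.normal(size=(n_sensors, n_pixels))
images = rng.random((n_train, n_pixels))
measurements = images @ A.T + 0.01 * rng.normal(size=(n_train, n_sensors))

# "Training": least-squares estimate of the inverse map, standing in
# for the paper's deep network.
W, *_ = np.linalg.lstsq(measurements, images, rcond=None)

# Reconstruct a held-out source image from its far-field measurement.
x_true = rng.random(n_pixels)
x_hat = (x_true @ A.T) @ W
```

The interesting point in the paper is that absorption losses, normally a defect of resonant metalenses, enrich the forward operator in a way that makes this kind of inversion better conditioned.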