2,247 research outputs found
Analyzing the Performance of Multilayer Neural Networks for Object Recognition
In the last two years, convolutional neural networks (CNNs) have achieved an
impressive suite of results on standard recognition datasets and tasks.
CNN-based features seem poised to quickly replace engineered representations,
such as SIFT and HOG. However, compared to SIFT and HOG, we understand much
less about the nature of the features learned by large CNNs. In this paper, we
experimentally probe several aspects of CNN feature learning in an attempt to
help practitioners gain useful, evidence-backed intuitions about how to apply
CNNs to computer vision problems.
Comment: Published in European Conference on Computer Vision 2014 (ECCV-2014)
End to End Deep Neural Network Frequency Demodulation of Speech Signals
Frequency modulation (FM) is a form of radio broadcasting that has been in
widespread use for almost a century. We suggest a
software-defined-radio (SDR) receiver for FM demodulation that adopts an
end-to-end learning based approach and utilizes the prior information of
transmitted speech message in the demodulation process. The receiver detects
and enhances speech from the in-phase and quadrature components of its base
band version. The new system yields high detection performance under both
acoustic disturbances and communication channel noise, and is foreseen to
outperform the established methods in low signal-to-noise ratio (SNR)
conditions in both mean square error and perceptual evaluation of speech
quality (PESQ) score.
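For reference, the classical (non-learned) discriminator that such a receiver would be compared against can be sketched in a few lines: it recovers the message by differentiating the unwrapped phase of the complex baseband (I/Q) signal. The sample rate, tone frequency, and deviation below are illustrative, not taken from the paper.

```python
import numpy as np

def fm_demodulate(iq, fs):
    """Classical FM discriminator: differentiate the unwrapped phase
    of the complex baseband signal to recover the message."""
    phase = np.unwrap(np.angle(iq))
    # Instantaneous frequency in Hz (scaled derivative of the phase).
    return np.diff(phase) * fs / (2 * np.pi)

# Synthesize an FM baseband signal carrying a 5 Hz tone, then demodulate it.
fs = 1000.0                       # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
message = np.cos(2 * np.pi * 5 * t)
kf = 50.0                         # frequency deviation, Hz (illustrative)
phase = 2 * np.pi * kf * np.cumsum(message) / fs
iq = np.exp(1j * phase)           # complex baseband (I + jQ)

recovered = fm_demodulate(iq, fs)  # ≈ kf * message, one sample shorter
```

The recovered trace is the instantaneous frequency, i.e. the message scaled by the deviation; a learned receiver aims to beat this discriminator at low SNR.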
Can a connectionist model explain the processing of regularly and irregularly inflected words in German as L1 and L2?
The connectionist model is a prevailing account of how the cognitive system processes morphology. According to this model, regularly and irregularly inflected words (e.g., verb participles and noun plurals) are processed in the same cognitive network. A validation of the connectionist model of morphological processing in German as L2 has yet to be achieved. To investigate L2-specific aspects, we compared a group of L1 speakers of German with speakers of German as L2. L2 and L1 speakers of German were assigned to their respective groups by their reaction times in picture naming prior to the central task. The reaction times in the lexical decision task for verb participles and noun plurals were largely consistent with the assumption of the connectionist model. Interestingly, speakers of German as L2 showed a specific advantage for irregular compared with regular verb participles.
Supervised Learning in Multilayer Spiking Neural Networks
The current article introduces a supervised learning algorithm for multilayer
spiking neural networks. The algorithm presented here overcomes some
limitations of existing learning algorithms as it can be applied to neurons
firing multiple spikes and it can in principle be applied to any linearisable
neuron model. The algorithm is applied successfully to various benchmarks, such
as the XOR problem and the Iris data set, as well as complex classification
problems. The simulations also show the flexibility of this supervised learning
algorithm which permits different encodings of the spike timing patterns,
including precise spike train encoding.
Comment: 38 pages, 4 figures
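The abstract does not give the learning rule itself, but the kind of neuron it targets can be illustrated with a minimal leaky integrate-and-fire (LIF) simulation, a standard linearisable spiking neuron model; all constants below are illustrative.

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire neuron.
    Returns spike times (seconds) for a given input current trace."""
    v = v_rest
    spikes = []
    for i, drive in enumerate(current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt / tau * (-(v - v_rest) + drive)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset          # fire and reset
    return spikes

# A constant supra-threshold drive produces a regular spike train.
steps = int(0.1 / 1e-4)          # 100 ms of simulation
spike_times = simulate_lif(np.full(steps, 2.0))
```

With these constants the interspike interval is roughly tau * ln 2 ≈ 14 ms, so about seven spikes occur in 100 ms; supervised rules like the one in the paper adjust weights so that such spike times match a target pattern.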
Microstructure identification via detrended fluctuation analysis of ultrasound signals
We describe an algorithm for simulating ultrasound propagation in random
one-dimensional media, mimicking different microstructures by choosing physical
properties such as domain sizes and mass densities from probability
distributions. By combining a detrended fluctuation analysis (DFA) of the
simulated ultrasound signals with tools from the pattern-recognition
literature, we build a Gaussian classifier which is able to associate each
ultrasound signal with its corresponding microstructure with a very high
success rate. Furthermore, we also show that DFA data can be used to train a
multilayer perceptron which estimates numerical values of physical properties
associated with distinct microstructures.
Comment: Submitted to Phys. Rev.
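A minimal sketch of the DFA step described above, assuming the standard formulation (cumulative profile, piecewise polynomial detrending, RMS fluctuation per window size); the window sizes and test signal are illustrative, not the paper's.

```python
import numpy as np

def dfa(signal, window_sizes, order=1):
    """Detrended fluctuation analysis: detrend the cumulative profile
    window-by-window and return the RMS fluctuation per window size."""
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        sq_fluct = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            x = np.arange(n)
            # Remove a local polynomial trend (order 1 = linear, DFA-1).
            trend = np.polyval(np.polyfit(x, seg, order), x)
            sq_fluct.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq_fluct)))
    return np.array(fluctuations)

# The scaling exponent is the slope of log F(n) vs log n;
# uncorrelated white noise should give an exponent near 0.5.
rng = np.random.default_rng(0)
sizes = np.array([16, 32, 64, 128, 256])
f = dfa(rng.standard_normal(8192), sizes)
alpha = np.polyfit(np.log(sizes), np.log(f), 1)[0]
```

Feature vectors built from such fluctuation curves (or exponents) are what a Gaussian classifier or MLP, as in the paper, would take as input.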
3D freeform surfaces from planar sketches using neural networks
A novel intelligent approach to 3D freeform surface reconstruction from planar sketches is proposed. A multilayer perceptron (MLP) neural network is employed to induce 3D freeform surfaces from planar freehand curves. Planar curves were used to represent the boundaries of a freeform surface patch. The curves were varied iteratively and sampled to produce training data to train and test the neural network. The obtained results demonstrate that the network successfully learned the inverse-projection map and correctly inferred the respective surfaces from fresh curves.
A Comparison of the Use of Binary Decision Trees and Neural Networks in Top Quark Detection
The use of neural networks for signal vs. background discrimination in
high-energy physics experiments has been investigated and has compared favorably
with the efficiency of traditional kinematic cuts. Recent work in top quark
identification produced a neural network that, for a given top quark mass,
yielded a higher signal to background ratio in Monte Carlo simulation than a
corresponding set of conventional cuts. In this article we discuss another
pattern-recognition algorithm, the binary decision tree. We have applied a
binary decision tree to top quark identification at the Tevatron and found it
to be comparable in performance to the neural network. Furthermore,
reservations about the "black box" nature of neural network discriminators do
not apply to binary decision trees; a binary decision tree may be reduced to a
set of kinematic cuts subject to conventional error analysis.Comment: 14pp. Plain TeX + mtexsis.tex (latter available through 'get
mtexsis.tex'.) Two postscript files avail. by emai
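The closing point, that a binary decision tree is equivalent to a set of kinematic cuts, can be shown with a toy example; the variable names and thresholds below are invented for illustration, not taken from the analysis.

```python
# A tiny binary decision tree over two kinematic variables, and the
# equivalent explicit cuts. HT and aplanarity are common top-quark
# discriminants, but these thresholds are purely illustrative.
def tree_classify(ht, aplanarity):
    """Toy depth-2 tree: signal if both quantities are large."""
    if ht > 200.0:
        return "signal" if aplanarity > 0.05 else "background"
    return "background"

def cuts_classify(ht, aplanarity):
    """The same tree rewritten as a single conjunction of cuts,
    each of which can be given a conventional error analysis."""
    return "signal" if (ht > 200.0 and aplanarity > 0.05) else "background"

events = [(250.0, 0.08), (250.0, 0.01), (150.0, 0.08)]
assert all(tree_classify(*e) == cuts_classify(*e) for e in events)
```

Each leaf of the tree corresponds to one conjunction of cuts, which is exactly why the "black box" objection to neural discriminators does not apply here.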
Neural NILM: Deep Neural Networks Applied to Energy Disaggregation
Energy disaggregation estimates appliance-by-appliance electricity
consumption from a single meter that measures the whole home's electricity
demand. Recently, deep neural networks have driven remarkable improvements in
classification performance in neighbouring machine learning fields such as
image classification and automatic speech recognition. In this paper, we adapt
three deep neural network architectures to energy disaggregation: 1) a form of
recurrent neural network called `long short-term memory' (LSTM); 2) denoising
autoencoders; and 3) a network which regresses the start time, end time and
average power demand of each appliance activation. We use seven metrics to test
the performance of these algorithms on real aggregate power data from five
appliances. Tests are performed against a house not seen during training and
against houses seen during training. We find that all three neural nets achieve
better F1 scores (averaged over all five appliances) than either combinatorial
optimisation or factorial hidden Markov models and that our neural net
algorithms generalise well to an unseen house.
Comment: To appear in ACM BuildSys'15, November 4--5, 2015, Seoul
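The combinatorial optimisation baseline mentioned above can be sketched directly: at each time step, pick the on/off combination of assumed appliance power ratings whose sum best matches the aggregate meter reading. The appliance names and wattages are illustrative, not the paper's data.

```python
from itertools import product

def disaggregate(aggregate, appliance_powers):
    """Combinatorial-optimisation energy disaggregation baseline:
    exhaustively search on/off states to best explain each reading."""
    states = list(product([0, 1], repeat=len(appliance_powers)))
    assignment = []
    for reading in aggregate:
        best = min(
            states,
            key=lambda s: abs(
                reading - sum(p * on for p, on in zip(appliance_powers, s))
            ),
        )
        assignment.append(best)
    return assignment

# Kettle (2000 W), fridge (100 W), lamp (60 W): assumed nominal powers.
powers = [2000.0, 100.0, 60.0]
print(disaggregate([2100.0, 160.0, 0.0], powers))
# → [(1, 1, 0), (0, 1, 1), (0, 0, 0)]
```

The search is exponential in the number of appliances and ignores temporal structure, which is one reason the paper's neural approaches can outperform it.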
Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition
Good old on-line back-propagation for plain multi-layer perceptrons yields a
very low 0.35% error rate on the famous MNIST handwritten digits benchmark. All
we need to achieve this best result so far are many hidden layers, many neurons
per layer, numerous deformed training images, and graphics cards to greatly
speed up learning.
Comment: 14 pages, 2 figures, 4 listings
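A minimal sketch of the ingredient named in the abstract, a plain MLP trained with on-line (per-example) back-propagation, here on XOR rather than MNIST; the architecture, learning rate, and epoch count are illustrative and far smaller than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])          # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, 8);      b2 = 0.0           # output layer
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    for i in rng.permutation(4):             # on-line: update per example
        h = sigmoid(X[i] @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Back-propagate the squared-error gradient through both layers.
        d_out = (out - y[i]) * out * (1 - out)
        d_h = d_out * W2 * h * (1 - h)
        W2 -= lr * d_out * h; b2 -= lr * d_out
        W1 -= lr * np.outer(X[i], d_h); b1 -= lr * d_h

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The paper's recipe scales this same loop up: more layers, more units, elastic deformations of the training images, and GPU acceleration.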