Machine learning in spectral domain
Deep neural networks are usually trained in the space of the nodes, by
adjusting the weights of existing links via suitable optimization protocols.
Here we propose a radically different approach that anchors the learning
process to reciprocal space. Specifically, the training acts on the spectral
domain and seeks to modify the eigenvectors and eigenvalues of transfer
operators in direct space. The proposed method is flexible and can be tailored
to return either linear or nonlinear classifiers. Its performance is
competitive with that of standard schemes, while allowing for a significant
reduction of the learning parameter space. Spectral learning restricted to
eigenvalues could also be employed for pre-training a deep neural network, in
conjunction with conventional machine-learning schemes. Further, it is
surmised that the nested indentation of eigenvectors, which defines the core
idea of spectral learning, could help explain why deep networks work as well
as they do.
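The eigenvalue-only variant of spectral learning mentioned in the abstract can be sketched in a few lines. This is a minimal illustration under assumed conventions (a fixed eigenvector matrix `Phi`, trainable eigenvalues `lam`, and a toy least-squares objective), not the paper's exact protocol:

```python
import numpy as np

# Sketch of eigenvalue-only spectral learning (illustrative conventions):
# a linear map W = Phi diag(lam) Phi^{-1} whose eigenvector matrix Phi
# stays fixed while only the eigenvalues `lam` are trained.
rng = np.random.default_rng(0)
n = 8
Phi = np.eye(n) + np.tril(rng.normal(scale=0.1, size=(n, n)), k=-1)
Phi_inv = np.linalg.inv(Phi)
lam = rng.normal(size=n)  # the only trainable parameters: n instead of n*n

def layer(x, lam):
    """Apply the spectrally parametrized map  Phi diag(lam) Phi^{-1} x."""
    return Phi @ (lam * (Phi_inv @ x))

# Gradient descent on a toy least-squares target, updating lam only.
x = rng.normal(size=n)
y_target = rng.normal(size=n)
z = Phi_inv @ x
loss0 = np.sum((layer(x, lam) - y_target) ** 2)
for _ in range(500):
    resid = layer(x, lam) - y_target
    lam -= 0.05 * 2.0 * (Phi.T @ resid) * z  # dL/dlam_k = 2 (Phi^T r)_k z_k
loss1 = np.sum((layer(x, lam) - y_target) ** 2)
```

The point of the parametrization is the reduction of the trainable set from the n*n link weights to the n eigenvalues, at the cost of fixing the eigenvector basis.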
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
Online handwritten Chinese text recognition (OHCTR) is a challenging problem
as it involves a large-scale character set, ambiguous segmentation, and
variable-length input sequences. In this paper, we exploit the outstanding
capability of path signature to translate online pen-tip trajectories into
informative signature feature maps using a sliding window-based method,
successfully capturing the analytic and geometric properties of pen strokes
with strong local invariance and robustness. A multi-spatial-context fully
convolutional recurrent network (MCFCRN) is proposed to exploit the multiple
spatial contexts from the signature feature maps and generate a prediction
sequence while completely avoiding the difficult segmentation problem.
Furthermore, an implicit language model is developed to make predictions based
on semantic context within a predicting feature sequence, providing a new
perspective for incorporating lexicon constraints and prior knowledge about a
certain language in the recognition procedure. Experiments on two standard
benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with
correct rates of 97.10% and 97.15%, respectively, significantly better than
the best results reported thus far in the literature.
Comment: 14 pages, 9 figures
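As a rough illustration of the path-signature features described above, the sketch below computes the signature of a 2-D pen trajectory truncated at level two over sliding windows. The truncation level, window length, and stride are assumptions chosen for illustration, not the paper's configuration:

```python
import numpy as np

def signature_level2(path):
    """Truncated signature of a piecewise-linear 2-D path.

    Level 1 is the total displacement; level 2 collects the iterated
    integrals S[i, j] = int_{u<v} dx_i(u) dx_j(v), whose antisymmetric
    part is the signed (Levy) area swept by the stroke.
    """
    dx = np.diff(path, axis=0)       # segment increments, shape (T-1, 2)
    s1 = dx.sum(axis=0)              # level-1 term
    # Position relative to the window start at the beginning of each segment.
    rel = np.cumsum(np.vstack([np.zeros(2), dx]), axis=0)[:-1]
    s2 = rel.T @ dx + 0.5 * dx.T @ dx  # exact for piecewise-linear paths
    return s1, s2

def windowed_signature_features(path, win=8, stride=4):
    """Slide a window along the trajectory and stack signature features."""
    feats = []
    for start in range(0, len(path) - win + 1, stride):
        s1, s2 = signature_level2(path[start:start + win])
        feats.append(np.concatenate([s1, s2.ravel()]))
    return np.array(feats)
```

For a straight stroke the level-2 matrix is symmetric (zero signed area), so the antisymmetric part responds only to curvature, which hints at why these features capture geometric properties of pen strokes with local invariance.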
Supervised learning of an opto-magnetic neural network with ultrashort laser pulses
The explosive growth of data and its related energy consumption is pushing
the need to develop energy-efficient brain-inspired schemes and materials for
data processing and storage. Here, we demonstrate experimentally that Co/Pt
films can be used as artificial synapses by manipulating their magnetization
state using circularly-polarized ultrashort optical pulses at room temperature.
We also show an efficient implementation of supervised perceptron learning on
an opto-magnetic neural network, built from such magnetic synapses.
Importantly, we demonstrate that the optimization of synaptic weights can be
achieved using a global feedback mechanism, such that the learning does not
rely on external storage or additional optimization schemes. These results
suggest there is high potential for realizing artificial neural networks using
optically controlled magnetization in technologically relevant materials that
can learn not only quickly but also energy-efficiently.
Comment: 9 pages, 4 figures
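The global-feedback perceptron training described above can be caricatured in software. The bounded weights standing in for a saturable magnetization state, the learning rate, and the toy data are all assumptions of this sketch, not the experimental protocol:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1, bound=1.0):
    """Classic perceptron rule driven by one global error signal per sample.

    X: (N, d) inputs; y: (N,) labels in {-1, +1}. The weights are clipped
    to [-bound, bound] to mimic synapses with a bounded (saturable) state.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = np.sign(w @ xi + b) or 1.0  # treat the 0 boundary as +1
            err = yi - pred                    # single global feedback value
            w = np.clip(w + lr * err * xi, -bound, bound)
            b += lr * err
    return w, b

# Toy separable problem: the label is the sign of the first coordinate.
X = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
y = np.array([1., 1., -1., -1.])
w, b = train_perceptron(X, y)
```

Each update needs only the scalar error broadcast to all synapses, which mirrors the abstract's point that learning can proceed from a global feedback mechanism without external storage or a separate optimizer.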