Word Recognition with Deep Conditional Random Fields
Recognition of handwritten words continues to be an important problem in
document analysis and recognition. Existing approaches extract hand-engineered
features from word images, which can generalize poorly to new data sets.
Recently, deep learning has attracted great attention because of its ability to
learn features from raw data; such models have also yielded state-of-the-art
results in classification tasks, including character recognition and scene
recognition. Word recognition, however, is a sequential problem in which the
correlations between characters must be modeled. In this paper, we propose
using deep Conditional Random Fields (deep CRFs) for word recognition.
We combine CRFs with deep learning so that deep features are
learned and sequences are labeled in a unified framework. We pre-train the deep
structure with stacked restricted Boltzmann machines (RBMs) for feature
learning and optimize the entire network with an online learning algorithm. The
proposed model was evaluated on two datasets and found to perform significantly
better than competitive baseline models. The source code is available at
https://github.com/ganggit/deepCRFs.
Comment: 5 pages, published in ICIP 2016. arXiv admin note: substantial text
overlap with arXiv:1412.339
Particle Gibbs for Bayesian Additive Regression Trees
Additive regression trees are flexible non-parametric models and popular
off-the-shelf tools for real-world non-linear regression. In application
domains, such as bioinformatics, where there is also demand for probabilistic
predictions with measures of uncertainty, the Bayesian additive regression
trees (BART) model, introduced by Chipman et al. (2010), is increasingly
popular. As data sets have grown in size, however, the standard
Metropolis-Hastings algorithms used to perform inference in BART are proving
inadequate. In particular, these Markov chains make local changes to the trees
and suffer from slow mixing when the data are high-dimensional or the best
fitting trees are more than a few layers deep. We present a novel sampler for
BART based on the Particle Gibbs (PG) algorithm (Andrieu et al., 2010) and a
top-down particle filtering algorithm for Bayesian decision trees
(Lakshminarayanan et al., 2013). Rather than making local changes to individual
trees, the PG sampler proposes a complete tree to fit the residual. Experiments
show that the PG sampler outperforms existing samplers in many settings.
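The sum-of-trees decomposition that BART is built on, where each tree accounts for the residual left by the others, can be sketched in a few lines. This toy replaces the Bayesian PG sampler with greedy least-squares stumps, so it only illustrates the additive residual-fitting structure; the function names and the stump-based trees are assumptions, not the BART algorithm.

```python
import numpy as np

def fit_stump(x, r):
    """Greedy depth-1 regression tree (stump) fit to residual r."""
    best = (np.inf, 0.0, 0.0, 0.0)
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    for i in range(1, len(xs)):
        left, right = rs[:i], rs[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (xs[i - 1] + xs[i]) / 2, left.mean(), right.mean())
    _, thr, lo, hi = best
    return lambda q: np.where(q <= thr, lo, hi)

def fit_additive_trees(x, y, m=50):
    """Sum-of-trees fit: each tree is proposed as a whole to explain the
    current residual, the same additive decomposition BART samples over."""
    trees, pred = [], np.zeros_like(y)
    for _ in range(m):
        t = fit_stump(x, y - pred)   # propose a complete tree for the residual
        trees.append(t)
        pred += t(x)
    return lambda q: sum(t(q) for t in trees)
```

The PG sampler's advantage described above is precisely that it regrows a complete tree against the residual in one move, instead of the local grow/prune/change steps of Metropolis-Hastings.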
Exploring efficient neural architectures for linguistic-acoustic mapping in text-to-speech
Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2 on CPU and 3.3 on GPU.
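The quasi-recurrent mechanism described above can be sketched in NumPy: the expensive affine maps run once over all timesteps as feed-forward window products, and the loop along the time axis reduces to cheap elementwise gating ("f-pooling"). The function names, shapes, and window construction here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def qrnn_layer(x, Wz, Wf, width=2):
    """Quasi-recurrent layer sketch: candidate activations z and forget
    gates f come from feed-forward (convolution-like) maps over a window
    of inputs, so the only work along the time axis is the elementwise
    recurrence h_t = f_t * h_{t-1} + (1 - f_t) * z_t.
    x: (T, d_in); Wz, Wf: (width * d_in, d_out)."""
    T, d_in = x.shape
    pad = np.vstack([np.zeros((width - 1, d_in)), x])
    # stack each window of `width` timesteps into one feed-forward input
    windows = np.stack([pad[t:t + width].ravel() for t in range(T)])
    z = np.tanh(windows @ Wz)    # candidates for all timesteps at once
    f = sigmoid(windows @ Wf)    # forget gates for all timesteps at once
    h = np.zeros_like(z[0])
    hs = []
    for t in range(T):           # no matrix multiply inside this loop
        h = f[t] * h + (1.0 - f[t]) * z[t]
        hs.append(h)
    return np.stack(hs)
```

Because the matrix products are hoisted out of the sequential loop, they parallelize across timesteps on CPU or GPU, which is the source of the inference speedups reported above.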