Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
Online handwritten Chinese text recognition (OHCTR) is a challenging problem
as it involves a large-scale character set, ambiguous segmentation, and
variable-length input sequences. In this paper, we exploit the outstanding
capability of path signature to translate online pen-tip trajectories into
informative signature feature maps using a sliding window-based method,
successfully capturing the analytic and geometric properties of pen strokes
with strong local invariance and robustness. A multi-spatial-context fully
convolutional recurrent network (MCFCRN) is proposed to exploit the multiple
spatial contexts from the signature feature maps and generate a prediction
sequence while completely avoiding the difficult segmentation problem.
Furthermore, an implicit language model is developed to make predictions based
on semantic context within a predicting feature sequence, providing a new
perspective for incorporating lexicon constraints and prior knowledge about a
certain language in the recognition procedure. Experiments on two standard
benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with
correct rates of 97.10% and 97.15%, respectively, which are significantly
better than the best results reported thus far in the literature.
Comment: 14 pages, 9 figures
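The abstract above describes converting pen-tip trajectories into signature feature maps with a sliding window. As a rough illustration of the idea (not the paper's implementation), the sketch below computes the truncated path signature up to level 2 for each window of a 2D trajectory, using the standard piecewise-linear formula; window size, stride, and truncation level are arbitrary choices here.

```python
import numpy as np

def sig_level2(path):
    """Truncated (level <= 2) path signature of a piecewise-linear 2D path.

    Level 1 is the total increment; level 2 accumulates the iterated
    integrals via Chen's identity over the linear segments.
    """
    deltas = np.diff(path, axis=0)          # per-segment increments
    s1 = np.zeros(2)                        # level-1 terms
    s2 = np.zeros((2, 2))                   # level-2 terms
    for d in deltas:
        s2 += np.outer(s1, d) + np.outer(d, d) / 2.0
        s1 += d
    # level-0 term is always 1; flatten into one feature vector
    return np.concatenate([[1.0], s1, s2.ravel()])

def sliding_signature_features(path, window=8, stride=4):
    """Signature feature vectors over sliding windows of the trajectory."""
    feats = [sig_level2(path[start:start + window])
             for start in range(0, len(path) - window + 1, stride)]
    return np.array(feats)
```

In the paper these per-window features are stacked into signature feature maps that feed the convolutional network; here they are simply returned as a matrix with one row per window.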
Learning Representations from Persian Handwriting for Offline Signature Verification, a Deep Transfer Learning Approach
Offline Signature Verification (OSV) is a challenging pattern recognition
task, especially when it is expected to generalize well on the skilled
forgeries that are not available during the training. Its challenges also
include small training sample and large intra-class variations. Considering the
limitations, we suggest a novel transfer learning approach from Persian
handwriting domain to multi-language OSV domain. We train two Residual CNNs on
the source domain separately based on two different tasks of word
classification and writer identification. Since identifying a person's
signature resembles identifying their handwriting, handwriting is a natural
choice for the feature-learning phase. The learned representation on the
more varied and plentiful handwriting dataset can compensate for the lack of
training data in the original task, i.e. OSV, without sacrificing the
generalizability. Our proposed OSV system includes two steps: learning
representation and verification of the input signature. For the first step, the
signature images are fed into the trained Residual CNNs. The output
representations are then used to train SVMs for the verification. We test our
OSV system on three different signature datasets, including MCYT (a Spanish
signature dataset), UTSig (a Persian one) and GPDS-Synthetic (an artificial
dataset). On UTSig, we achieved an Equal Error Rate (EER) of 9.80%, a
substantial improvement over the best previously reported EER of 17.45%. Our
proposed method also surpassed the state of the art by 6% on GPDS-Synthetic,
achieving an EER of 6.81%. On MCYT, an EER of 3.98% was obtained, which is
comparable to the best previously reported results.
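The two-step pipeline described above (frozen CNN features, then an SVM verifier) can be sketched as follows. The 128-d random vectors below are placeholders standing in for the Residual-CNN embeddings; in the paper those come from networks pre-trained on Persian handwriting, and the class separation here is synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder embeddings standing in for Residual-CNN signature features;
# genuine and forged samples are separated artificially for the demo.
genuine = rng.normal(loc=0.0, scale=1.0, size=(20, 128))
forged = rng.normal(loc=1.5, scale=1.0, size=(20, 128))

X = np.vstack([genuine, forged])
y = np.array([1] * 20 + [0] * 20)  # 1 = genuine, 0 = forged

# Writer-dependent verifier: an RBF-kernel SVM over the fixed embeddings
verifier = SVC(kernel="rbf", gamma="scale").fit(X, y)

query = rng.normal(loc=0.0, scale=1.0, size=(1, 128))  # genuine-like query
label = verifier.predict(query)
```

The design point is that only the lightweight SVM is trained per writer, which is what lets the scarce signature data suffice once the representation is learned elsewhere.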
Offline Signature Verification by Combining Graph Edit Distance and Triplet Networks
Biometric authentication by means of handwritten signatures is a challenging
pattern recognition task, which aims to infer a writer model from only a
handful of genuine signatures. In order to make it more difficult for a forger
to attack the verification system, a promising strategy is to combine different
writer models. In this work, we propose to complement a recent structural
approach to offline signature verification based on graph edit distance with a
statistical approach based on metric learning with deep neural networks. On the
MCYT and GPDS benchmark datasets, we demonstrate that combining the structural
and statistical models leads to significant improvements in performance,
profiting from their complementary properties.
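Combining a structural model (graph edit distance) with a statistical one (triplet-network embedding distance) requires fusing dissimilarity scores that live on different scales. A minimal sketch of one common fusion scheme, min-max normalization followed by a weighted average, is shown below; the paper does not prescribe this exact normalization, and the weight `w` is an assumption.

```python
import numpy as np

def combine_scores(ged_scores, triplet_scores, w=0.5):
    """Fuse two dissimilarity score lists into one, lower = more genuine.

    Each score set is min-max normalised to [0, 1] before the weighted
    average, since GED and embedding distances use different scales.
    """
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    return w * norm(ged_scores) + (1 - w) * norm(triplet_scores)
```

A verification threshold on the fused score then trades off false acceptance against false rejection, exactly as with either score alone.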