Context-based Multi-stage Offline Handwritten Mathematical Symbol Recognition using Deep Learning
We propose a multi-stage machine learning (ML) architecture to improve the accuracy of offline handwritten mathematical symbol recognition. In the first stage, we train and ensemble multiple deep convolutional neural networks to classify isolated mathematical symbols. However, certain ambiguous symbols are hard to classify without contextual information from the mathematical expressions to which they belong. In the second stage, we therefore train a deep convolutional neural network that further classifies the ambiguous symbols based on their context. To further improve classification accuracy, in the third stage we apply a set of rules that resolve remaining ambiguities which would otherwise violate the syntax of the mathematical expressions. We evaluate the proposed method on the Competition on Recognition of Online Handwritten Mathematical Expressions (CROHME) dataset. It achieves state-of-the-art accuracy of 94.04%, a 1.62% improvement over the previous single-stage approach.
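The staged design above can be illustrated with a toy sketch (not the paper's implementation): stage 1 averages the softmax outputs of several CNNs, and stage 2 defers to a context-aware model whenever the winning class belongs to a known ambiguous set. The class list, ambiguous set, and score vectors below are all hypothetical.

```python
import numpy as np

# Hypothetical class list; visually ambiguous pairs such as ("0", "O")
# and ("1", "l") motivate the context stage.
CLASSES = ["0", "O", "1", "l", "+", "x"]
AMBIGUOUS = {"0", "O", "1", "l"}

def stage1_ensemble(softmax_outputs):
    """Stage 1: average the softmax distributions of several CNNs."""
    return np.mean(np.stack(softmax_outputs), axis=0)

def classify(softmax_outputs, context_scores=None):
    probs = stage1_ensemble(softmax_outputs)
    label = CLASSES[int(np.argmax(probs))]
    # Stage 2: if the isolated classifier picks an ambiguous symbol,
    # defer to a (hypothetical) context-aware model's scores instead.
    if label in AMBIGUOUS and context_scores is not None:
        label = CLASSES[int(np.argmax(context_scores))]
    return label

# Two "CNNs" split between "0" and "O"; expression context favours "O".
net_a = np.array([0.45, 0.40, 0.05, 0.04, 0.03, 0.03])
net_b = np.array([0.40, 0.45, 0.05, 0.04, 0.03, 0.03])
ctx   = np.array([0.10, 0.80, 0.02, 0.02, 0.03, 0.03])
print(classify([net_a, net_b], ctx))  # prints "O"
```

Stage 3 would then overwrite any prediction that still violates expression syntax, e.g. a letter appearing where only a digit is grammatical.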
Symbol detection in online handwritten graphics using Faster R-CNN
Symbol detection techniques in online handwritten graphics (e.g. diagrams and
mathematical expressions) consist of methods specifically designed for a single
graphic type. In this work, we evaluate the Faster R-CNN object detection
algorithm as a general method for detection of symbols in handwritten graphics.
We evaluate different configurations of the Faster R-CNN method, and point out
issues related to the handwritten nature of the data. Considering the online
recognition context, we evaluate efficiency and accuracy trade-offs of using
Deep Neural Networks of different complexities as feature extractors. We
evaluate the method on publicly available flowchart and mathematical expression
(CROHME-2016) datasets. Results show that Faster R-CNN can be effectively used
on both datasets, enabling the possibility of developing general methods for
symbol detection, and furthermore, general graphic understanding methods that
could be built on top of the algorithm.
Comment: Submitted to DAS-201
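An image-based detector like Faster R-CNN consumes raster images, so online (pen-trajectory) data must first be rendered offline. The toy sketch below, an illustration rather than the paper's pipeline, rasterizes strokes onto a small canvas and derives a symbol's ground-truth detection box from its stroke extents.

```python
import numpy as np

def rasterize_strokes(strokes, size=32):
    """Plot the sample points of online pen strokes (lists of (x, y)
    points) onto a small binary canvas, as one would before feeding an
    image-based detector such as Faster R-CNN. For brevity only the
    sampled points are drawn, not the segments between them."""
    pts = np.concatenate([np.asarray(s, float) for s in strokes])
    lo = pts.min(axis=0)
    scale = (size - 1) / max(float((pts.max(axis=0) - lo).max()), 1e-9)
    img = np.zeros((size, size), dtype=np.uint8)
    for s in strokes:
        for x, y in np.asarray(s, float):
            c, r = int((x - lo[0]) * scale), int((y - lo[1]) * scale)
            img[r, c] = 1
    return img

def bounding_box(stroke_group):
    """Ground-truth detection box for a symbol = extent of its strokes."""
    pts = np.concatenate([np.asarray(s, float) for s in stroke_group])
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    return (float(x0), float(y0), float(x1), float(y1))

strokes = [[(0, 0), (10, 10)], [(10, 0), (0, 10)]]  # an "x"-shaped symbol
img = rasterize_strokes(strokes)
print(img.shape, bounding_box(strokes))  # (32, 32) (0.0, 0.0, 10.0, 10.0)
```

The rendering choices (canvas size, line thickness, whether segments are interpolated) are exactly the kind of handwriting-specific issues the abstract alludes to.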
Multi-Scale Attention with Dense Encoder for Handwritten Mathematical Expression Recognition
Handwritten mathematical expression recognition is a challenging problem due
to the complicated two-dimensional structures, ambiguous handwriting input, and
varying scales of handwritten math symbols. To address this problem, we use
an attention-based encoder-decoder model that recognizes mathematical
expression images from two-dimensional layouts to one-dimensional LaTeX
strings. We improve the encoder by employing densely connected convolutional
networks as they can strengthen feature extraction and facilitate gradient
propagation especially on a small training set. We also present a novel
multi-scale attention model that handles the recognition of
math symbols at different scales and preserves the fine-grained details that would
otherwise be dropped by pooling operations. Validated on the CROHME competition task, the
proposed method significantly outperforms the state-of-the-art methods with an
expression recognition accuracy of 52.8% on CROHME 2014 and 50.1% on CROHME
2016, using only the official training dataset.
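The core mechanism of such encoder-decoders can be sketched numerically: at each decoding step, additive attention scores every cell of the encoder's feature grid against the decoder state and returns a weighted context vector. This is a generic Bahdanau-style single-scale sketch with random stand-in weights, not the paper's trained multi-scale model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(features, query, Wf, Wq, v):
    """Additive attention over a flattened H*W grid of encoder features,
    as used by encoder-decoders that map a 2-D formula image to a 1-D
    LaTeX string. Weights Wf, Wq, v are random stand-ins here."""
    scores = np.tanh(features @ Wf + query @ Wq) @ v   # one score per cell
    alpha = softmax(scores)                            # attention map
    return alpha @ features, alpha                     # context vector

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))   # a 4x4 feature grid, 8-dim features
q = rng.normal(size=4)             # current decoder state
ctx, alpha = attention_context(feats, q,
                               rng.normal(size=(8, 8)),
                               rng.normal(size=(4, 8)),
                               rng.normal(size=8))
print(ctx.shape, round(float(alpha.sum()), 6))  # (8,) 1.0
```

A multi-scale variant would run the same computation over feature maps of two resolutions and combine the resulting context vectors, letting small symbols (e.g. superscripts) survive the pooling that a single coarse grid applies.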
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
Online handwritten Chinese text recognition (OHCTR) is a challenging problem
as it involves a large-scale character set, ambiguous segmentation, and
variable-length input sequences. In this paper, we exploit the outstanding
capability of path signature to translate online pen-tip trajectories into
informative signature feature maps using a sliding window-based method,
successfully capturing the analytic and geometric properties of pen strokes
with strong local invariance and robustness. A multi-spatial-context fully
convolutional recurrent network (MCFCRN) is proposed to exploit the multiple
spatial contexts from the signature feature maps and generate a prediction
sequence while completely avoiding the difficult segmentation problem.
Furthermore, an implicit language model is developed to make predictions based
on semantic context within a predicting feature sequence, providing a new
perspective for incorporating lexicon constraints and prior knowledge about a
certain language in the recognition procedure. Experiments on two standard
benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with
correct rates of 97.10% and 97.15%, respectively, which are significantly
better than the best results reported thus far in the literature.
Comment: 14 pages, 9 figures
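The path signature mentioned above has a concrete form: for a piecewise-linear pen trajectory, the truncated signature collects the total displacement (level 1) and iterated integrals (level 2), and can be accumulated segment by segment with Chen's identity. The sketch below is a minimal illustration of that computation, not the paper's sliding-window feature extractor.

```python
import numpy as np

def signature_level2(path):
    """Level-2 truncated path signature of a piecewise-linear pen
    trajectory, accumulated with Chen's identity. Returns the level-1
    increment vector and the level-2 iterated-integral matrix; per-window
    values of this kind populate 'signature feature maps'."""
    pts = np.asarray(path, float)
    d = pts.shape[1]
    s1 = np.zeros(d)          # level 1: total displacement so far
    s2 = np.zeros((d, d))     # level 2: iterated integrals so far
    for a, b in zip(pts[:-1], pts[1:]):
        inc = b - a
        # Chen's identity for appending one linear segment:
        s2 += np.outer(s1, inc) + 0.5 * np.outer(inc, inc)
        s1 += inc
    return s1, s2

# Straight stroke: level 1 is the total displacement, and the
# antisymmetric part of level 2 (twice the signed area) vanishes.
s1, s2 = signature_level2([(0, 0), (1, 1), (2, 2)])
print(s1, round(float(s2[0, 1] - s2[1, 0]), 6))  # [2. 2.] 0.0
```

The invariance properties the abstract cites follow directly: the signature is unchanged by reparametrisation of the stroke, which is what makes it robust to writing speed.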
Query-Driven Global Graph Attention Model for Visual Parsing: Recognizing Handwritten and Typeset Math Formulas
We present a new visual parsing method based on standard Convolutional Neural Networks (CNNs) for handwritten and typeset mathematical formulas. The Query-Driven Global Graph Attention (QD-GGA) parser employs multi-task learning, using a single feature representation for locating, classifying, and relating symbols. QD-GGA parses formulas by first constructing a Line-Of-Sight (LOS) graph over the input primitives (e.g., handwritten strokes or connected components in images). Second, class distributions for LOS nodes and edges are obtained using query-specific feature filters (i.e., attention) in a single feed-forward pass. This allows end-to-end structure learning using a joint loss over primitive node and edge class distributions. Finally, a Maximum Spanning Tree (MST) is extracted from the weighted graph using Edmonds' Arborescence Algorithm. The model may be run recurrently over the input graph, updating attention to focus on symbols detected in the previous iteration. QD-GGA does not require additional grammar rules; the language model is learned from the sets of symbols/relationships and the statistics over them in the training set.
We benchmark our system against both handwritten and typeset state-of-the-art math recognition systems. Our preliminary results show that this is a promising new approach for visual parsing of math formulas. Using recurrent execution, symbol detection is near perfect for both handwritten and typeset formulas: we obtain a symbol f-measure of over 99.4% for both the CROHME (handwritten) and INFTYMCCDB-2 (typeset formula image) datasets. Our method is also much faster in both training and execution than state-of-the-art
RNN-based formula parsers. The unlabeled structure detection of QD-GGA is competitive with encoder-decoder models, but QD-GGA symbol and relationship classification is weaker. We believe this may be addressed through increased use of spatial features and global context.
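The final extraction step can be sketched in a few lines: score each candidate relationship edge, then keep the highest-scoring acyclic subset. QD-GGA works on a directed graph with Edmonds' arborescence algorithm; the undirected Kruskal variant below is a simplification for illustration, with hypothetical edge scores.

```python
def max_spanning_tree(n_nodes, scored_edges):
    """Kruskal-style maximum spanning tree over scored relationship
    edges. scored_edges: list of (score, u, v); returns kept (u, v)."""
    parent = list(range(n_nodes))  # union-find over primitives
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for score, u, v in sorted(scored_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Hypothetical relationship scores among 4 primitives (e.g. strokes).
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.3, 0, 2), (0.7, 2, 3)]
print(max_spanning_tree(4, edges))  # [(0, 1), (1, 2), (2, 3)]
```

The low-scoring edge (0, 2) is discarded because it would close a cycle, mirroring how the MST step turns the dense LOS graph into a formula tree.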