A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System
We present the dialogue component of the speech-to-speech translation system
VERBMOBIL. In contrast to conventional dialogue systems, it mediates the
dialogue while processing at most 50% of the dialogue in depth. Special
requirements such as robustness and efficiency lead to a three-layered hybrid
architecture for the dialogue module, using statistics, an automaton, and a
planner. A dialogue memory is constructed incrementally.
Comment: 15 pages, to appear in Proceedings of EACL-95, Dublin
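The three-layer division lends itself to a compact illustration. Below is a minimal Python sketch of such a hybrid dispatch, assuming toy `StatisticalLayer`, `DialogueAutomaton`, and `Planner` interfaces invented for illustration; the actual VERBMOBIL module is substantially richer.

```python
# A minimal sketch of a three-layered hybrid dialogue component; all
# class and method names are illustrative, not from the paper.
from collections import Counter

class StatisticalLayer:
    """Layer 1: guess the most likely dialogue act from shallow cues."""
    def __init__(self):
        self.act_counts = Counter({"SUGGEST": 3, "ACCEPT": 2, "REJECT": 1})

    def predict_act(self, utterance: str) -> str:
        if "no" in utterance.lower():
            return "REJECT"
        if "ok" in utterance.lower() or "fine" in utterance.lower():
            return "ACCEPT"
        return self.act_counts.most_common(1)[0][0]

class DialogueAutomaton:
    """Layer 2: a finite-state check of admissible act sequences."""
    TRANSITIONS = {
        ("START", "SUGGEST"): "OPEN",
        ("OPEN", "ACCEPT"): "CLOSED",
        ("OPEN", "REJECT"): "OPEN",
        ("OPEN", "SUGGEST"): "OPEN",
    }
    def __init__(self):
        self.state = "START"
    def step(self, act: str) -> bool:
        nxt = self.TRANSITIONS.get((self.state, act))
        if nxt is None:
            return False          # act inadmissible here; escalate
        self.state = nxt
        return True

class Planner:
    """Layer 3: expensive in-depth processing, used only as a fallback."""
    def repair(self, utterance: str) -> str:
        return "SUGGEST"          # placeholder for plan-based act recognition

def process(utterance, stats, automaton, planner, memory):
    act = stats.predict_act(utterance)
    if not automaton.step(act):   # robustness: fall back to the planner
        act = planner.repair(utterance)
        automaton.step(act)
    memory.append((utterance, act))  # dialogue memory grows incrementally
    return act

memory = []
layers = (StatisticalLayer(), DialogueAutomaton(), Planner())
for turn in ["shall we meet on Monday?", "fine, Monday works"]:
    print(process(turn, *layers, memory))
```

The cheap statistical guess handles most turns; the automaton and planner only engage when a turn falls outside the expected sequence, which is the efficiency argument of the layered design.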
Cascaded Segmentation-Detection Networks for Word-Level Text Spotting
We introduce an algorithm for word-level text spotting that is able to
accurately and reliably determine the bounding regions of individual words of
text "in the wild". Our system is formed by the cascade of two convolutional
neural networks. The first network is fully convolutional and is in charge of
detecting areas containing text. This results in a very reliable but possibly
inaccurate segmentation of the input image. The second network (inspired by the
popular YOLO architecture) analyzes each segment produced in the first stage,
and predicts oriented rectangular regions containing individual words. No
post-processing (e.g. text line grouping) is necessary. With an execution time
of 450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the
highest score to date among published algorithms on the ICDAR 2015 Incidental
Scene Text dataset benchmark.
Comment: 7 pages, 8 figures
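The cascade itself is easy to outline. The following Python sketch shows only the control flow, under stated assumptions: `coarse_segmenter` and `word_detector` are trivial stand-ins for the two convolutional networks, and connected components stand in for the first stage's segmentation output.

```python
# A schematic sketch of the two-stage cascade with placeholder models;
# the real system uses an FCN and a YOLO-inspired detector.
import numpy as np
from scipy import ndimage

def coarse_segmenter(image: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: a fully convolutional net would output a
    per-pixel text probability map; here we threshold intensity."""
    return (image > 0.5).astype(np.uint8)

def word_detector(crop: np.ndarray):
    """Stage 2 stand-in: a YOLO-like net would regress oriented word
    boxes inside the crop; here we return one axis-aligned box."""
    ys, xs = np.nonzero(crop)
    return [(xs.min(), ys.min(), xs.max(), ys.max(), 0.0)]  # (x0,y0,x1,y1,angle)

def spot_words(image: np.ndarray):
    mask = coarse_segmenter(image)                  # reliable, coarse regions
    labels, n = ndimage.label(mask)                 # split into segments
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
        crop = mask[y0:y1 + 1, x0:x1 + 1]
        for bx0, by0, bx1, by1, ang in word_detector(crop):
            boxes.append((x0 + bx0, y0 + by0, x0 + bx1, y0 + by1, ang))
    return boxes                                    # no post-processing step

demo = np.zeros((8, 8)); demo[2:4, 1:6] = 1.0
print(spot_words(demo))
```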
Phonetic Searching
An improved method and apparatus is disclosed which uses probabilistic techniques to map an input search string to a prestored audio file and to recognize certain portions of the search string phonetically. An improved interface is disclosed which permits users to input search strings linguistically, phonetically, or as a combination of the two, and also allows logic functions to be specified by indicating how far apart specific phonemes are in time.
Georgia Tech Research Corporation
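A toy illustration of the idea: the sketch below matches a query phoneme sequence against a prestored phoneme index with per-phoneme posteriors and a user-specified time-gap constraint. All names and data (`audio_index`, `score_query`, `max_gap`) are hypothetical, not from the patent.

```python
# A toy sketch of probabilistic phonetic matching; real systems align
# against lattices produced by a speech recognizer.
import math

# Prestored audio index: (phoneme, start time in seconds, posterior).
audio_index = [("HH", 0.10, 0.9), ("EH", 0.18, 0.8), ("L", 0.25, 0.7),
               ("OW", 0.31, 0.9), ("W", 0.80, 0.6), ("ER", 0.88, 0.8)]

def score_query(query_phonemes, index, max_gap=0.2):
    """Slide the query over the index, multiply phoneme posteriors, and
    reject alignments whose inter-phoneme gaps exceed max_gap seconds
    (mirroring an interface that constrains phoneme separation in time)."""
    best = (0.0, None)
    for start in range(len(index) - len(query_phonemes) + 1):
        window = index[start:start + len(query_phonemes)]
        if [w[0] for w in window] != list(query_phonemes):
            continue                      # phoneme labels must match
        if any(b[1] - a[1] > max_gap for a, b in zip(window, window[1:])):
            continue                      # phonemes too far apart in time
        prob = math.prod(p for _, _, p in window)
        if prob > best[0]:
            best = (prob, window[0][1])
    return best                           # (match probability, start time)

print(score_query(["HH", "EH", "L", "OW"], audio_index))  # -> (0.4536, 0.1)
```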
Robust audio indexing for Dutch spoken-word collections
Whereas the growth of storage capacity is in accordance with widely acknowledged predictions, the possibilities to index and access the archives created are lagging behind. This is especially the case in the oral history domain, and much of the rich content in these collections runs the risk of remaining inaccessible for lack of robust search technologies. This paper addresses the history and development of robust audio indexing technology for searching Dutch spoken-word collections and compares Dutch audio indexing in the well-studied broadcast news domain with an oral-history case study. It is concluded that despite significant advances in Dutch audio indexing technology and demonstrated applicability in several domains, further research is indispensable for the successful automatic disclosure of spoken-word collections.
PGNet: Real-time Arbitrarily-Shaped Text Spotting with Point Gathering Network
The reading of arbitrarily-shaped text has received increasing research
attention. However, existing text spotters are mostly built on two-stage
frameworks or character-based methods, which suffer from either Non-Maximum
Suppression (NMS), Region-of-Interest (RoI) operations, or character-level
annotations. In this paper, to address the above problems, we propose a novel
fully convolutional Point Gathering Network (PGNet) for reading
arbitrarily-shaped text in real-time. The PGNet is a single-shot text spotter,
where the pixel-level character classification map is learned with proposed
PG-CTC loss avoiding the usage of character-level annotations. With a PG-CTC
decoder, we gather high-level character classification vectors from
two-dimensional space and decode them into text symbols without NMS and RoI
operations involved, which guarantees high efficiency. Additionally, reasoning
the relations between each character and its neighbors, a graph refinement
module (GRM) is proposed to optimize the coarse recognition and improve the
end-to-end performance. Experiments prove that the proposed method achieves
competitive accuracy, meanwhile significantly improving the running speed. In
particular, on Total-Text it runs at 46.7 FPS, surpassing previous
spotters by a large margin.
Comment: 10 pages, 8 figures, AAAI 2021
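The decoding step described above admits a compact illustration. The sketch below, with toy values, gathers classification vectors along a text center line and applies greedy CTC collapsing; the shapes, alphabet, and path are assumptions made for illustration, not PGNet's actual implementation.

```python
# A minimal sketch of PG-CTC-style decoding, assuming a character map of
# shape (H, W, C) and a sampled center-line path; toy values throughout.
import numpy as np

ALPHABET = "-ab"            # index 0 is the CTC blank

def pg_ctc_decode(char_map: np.ndarray, center_path):
    """Gather per-pixel classification vectors along the text center line,
    take the argmax at each point, then apply CTC collapsing: merge
    repeated symbols and drop blanks. No NMS or RoI ops are involved."""
    gathered = np.stack([char_map[y, x] for y, x in center_path])  # (T, C)
    ids = gathered.argmax(axis=1)
    out, prev = [], -1
    for i in ids:
        if i != prev and i != 0:          # collapse repeats, skip blank
            out.append(ALPHABET[i])
        prev = i
    return "".join(out)

# Toy 1x6 character map whose argmax sequence along the path is a,a,-,b,b,-
char_map = np.zeros((1, 6, 3))
char_map[0, 0, 1] = char_map[0, 1, 1] = 1.0   # 'a', 'a'
char_map[0, 2, 0] = 1.0                        # blank
char_map[0, 3, 2] = char_map[0, 4, 2] = 1.0   # 'b', 'b'
char_map[0, 5, 0] = 1.0                        # blank
path = [(0, x) for x in range(6)]
print(pg_ctc_decode(char_map, path))           # -> "ab"
```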
Mask TextSpotter: An End-to-End Trainable Neural Network for Spotting Text with Arbitrary Shapes
Recently, models based on deep neural networks have dominated the fields of
scene text detection and recognition. In this paper, we investigate the problem
of scene text spotting, which aims at simultaneous text detection and
recognition in natural images. An end-to-end trainable neural network model for
scene text spotting is proposed. The proposed model, named Mask TextSpotter,
is inspired by the recently published Mask R-CNN. Unlike previous
methods that also accomplish text spotting with end-to-end trainable deep
neural networks, Mask TextSpotter takes advantage of a simple and smooth
end-to-end learning procedure, in which precise text detection and recognition
are achieved via semantic segmentation. Moreover, it is superior to previous
methods in handling text instances of irregular shapes, for example, curved
text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the
proposed method achieves state-of-the-art results in both scene text detection
and end-to-end text recognition tasks.
Comment: To appear in ECCV 2018
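The recognition-via-segmentation idea can be illustrated in a few lines. The following sketch reads one word from a toy instance mask and per-character segmentation maps; `read_word`, `CHARSET`, and the array shapes are illustrative assumptions, not the paper's code.

```python
# A rough sketch of recognition by segmentation in the spirit of Mask
# TextSpotter; the real model predicts these maps with a Mask R-CNN head.
import numpy as np
from scipy import ndimage

CHARSET = "ab"

def read_word(word_mask: np.ndarray, char_maps: np.ndarray) -> str:
    """word_mask: (H, W) instance mask for one detected word.
    char_maps: (C, H, W) per-character segmentation maps.
    Each character pixel votes for its class; character blobs are read
    left to right, so detection and recognition are both segmentation."""
    cls = char_maps.argmax(axis=0)                 # per-pixel character id
    support = char_maps.max(axis=0) * word_mask    # confidence inside the word
    blobs, n = ndimage.label(support > 0.5)
    chars = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(blobs == i)
        votes = np.bincount(cls[ys, xs], minlength=len(CHARSET))
        chars.append((xs.mean(), CHARSET[votes.argmax()]))
    return "".join(c for _, c in sorted(chars))    # order blobs by x

mask = np.ones((4, 8))
maps = np.zeros((2, 4, 8))
maps[0, 1:3, 1:3] = 0.9                            # an 'a' blob on the left
maps[1, 1:3, 5:7] = 0.9                            # a 'b' blob on the right
print(read_word(mask, maps))                       # -> "ab"
```

Because every prediction is a segmentation map, irregular and curved text falls out of the same formulation as horizontal text, which is the source of the method's shape flexibility.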
TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision
End-to-end text spotting is a vital computer vision task that aims to
integrate scene text detection and recognition into a unified framework.
Typical methods heavily rely on Region-of-Interest (RoI) operations to extract
local features and complex post-processing steps to produce final predictions.
To address these limitations, we propose TextFormer, a query-based end-to-end
text spotter with a Transformer architecture. Specifically, using a query
embedding per text instance, TextFormer builds upon an image encoder and a text decoder
to learn a joint semantic understanding for multi-task modeling. It allows for
mutual training and optimization of classification, segmentation, and
recognition branches, resulting in deeper feature sharing without sacrificing
flexibility or simplicity. Additionally, we design an Adaptive Global
aGgregation (AGG) module to transfer global features into sequential features
for reading arbitrarily-shaped texts, which overcomes the sub-optimization
problem of RoI operations. Furthermore, potential corpus information is
exploited through mixed supervision, ranging from weak annotations to full
labels, further improving text detection and end-to-end text spotting results.
Extensive experiments on various bilingual (i.e., English and Chinese)
benchmarks demonstrate the superiority of our method. Especially on TDA-ReCTS
dataset, TextFormer surpasses the state-of-the-art method in terms of 1-NED by
13.2%.
Comment: MIR 2023, 15 pages
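A condensed sketch of the query-based multi-task design is given below, using PyTorch; the dimensions, head shapes, and the class name `QuerySpotter` are illustrative assumptions rather than the authors' implementation, and the AGG module and mixed supervision are omitted.

```python
# A sketch of a query-based spotter: one learned query per text instance,
# decoded against encoder features and shared by three task heads.
import torch
import torch.nn as nn

class QuerySpotter(nn.Module):
    def __init__(self, d=256, n_queries=100, n_chars=97, max_len=25):
        super().__init__()
        self.queries = nn.Embedding(n_queries, d)       # one per text instance
        layer = nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.cls_head = nn.Linear(d, 2)                 # text / no-text
        self.mask_head = nn.Linear(d, 32 * 32)          # coarse instance mask
        self.rec_head = nn.Linear(d, max_len * n_chars) # character logits
        self.max_len, self.n_chars = max_len, n_chars

    def forward(self, memory):
        """memory: (B, HW, d) flattened image-encoder features. The three
        heads share one decoded query, so classification, segmentation,
        and recognition are trained and optimized jointly."""
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        h = self.decoder(q, memory)                     # (B, n_queries, d)
        return (self.cls_head(h),
                self.mask_head(h).view(*h.shape[:2], 32, 32),
                self.rec_head(h).view(*h.shape[:2], self.max_len, self.n_chars))

feats = torch.randn(1, 196, 256)   # e.g. a 14x14 encoder grid, flattened
cls, masks, rec = QuerySpotter()(feats)
print(cls.shape, masks.shape, rec.shape)
```

Routing all three tasks through a shared decoded query is what gives the deeper feature sharing claimed above, since no RoI cropping separates detection features from recognition features.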