Unconstrained Scene Text and Video Text Recognition for Arabic Script
Building robust recognizers for Arabic has always been challenging. We
demonstrate the effectiveness of an end-to-end trainable CNN-RNN hybrid
architecture in recognizing Arabic text in videos and natural scenes. We
outperform the previous state of the art on two publicly available video text
datasets, ALIF and ACTIV. For the scene text recognition task, we introduce a
new Arabic scene text dataset and establish baseline results. For scripts like
Arabic, a major challenge in developing robust recognizers is the lack of
large quantities of annotated data. We overcome this by synthesising millions of Arabic
text images from a large vocabulary of Arabic words and phrases. Our
implementation is built on top of the model introduced in [37], which has
proven quite effective for English scene text recognition. The model follows a
segmentation-free, sequence to sequence transcription approach. The network
transcribes a sequence of convolutional features from the input image to a
sequence of target labels. This does away with the need for segmenting input
image into constituent characters/glyphs, which is often difficult for Arabic
script. Further, the ability of RNNs to model contextual dependencies yields
superior recognition results.
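Segmentation-free sequence transcription of this kind is commonly realized with a CTC-style output layer: the network emits one label distribution per convolutional feature frame, and a greedy decoder collapses repeated labels and blanks into the final label sequence. A minimal sketch of that decoding step, assuming integer labels with 0 as the blank symbol (the function name and labels are illustrative, not taken from the paper):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse per-frame argmax labels into the output sequence.

    Repeated labels are merged first, then blanks are dropped, so the
    recognizer never has to segment the image into characters/glyphs.
    """
    decoded, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# e.g. per-frame labels 0,3,3,0,3,2,2,0 decode to [3, 3, 2]:
# the blank between the two runs of 3 keeps them as separate characters
```

The blank symbol is what lets the model emit the same character twice in a row, which matters for scripts with repeated glyphs.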
Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation
A crucial limitation of current high-resolution 3D photoacoustic tomography
(PAT) devices that employ sequential scanning is their long acquisition time.
In previous work, we demonstrated how to use compressed sensing techniques to
improve upon this: images with good spatial resolution and contrast can be
obtained from suitably sub-sampled PAT data acquired by novel acoustic scanning
systems if sparsity-constrained image reconstruction techniques such as total
variation regularization are used. Now, we show how a further increase of image
quality can be achieved for imaging dynamic processes in living tissue (4D
PAT). The key idea is to exploit the additional temporal redundancy of the data
by coupling the previously used spatial image reconstruction models with
sparsity-constrained motion estimation models. While simulated data from a
two-dimensional numerical phantom will be used to illustrate the main
properties of this recently developed
joint-image-reconstruction-and-motion-estimation framework, measured data from
a dynamic experimental phantom will also be used to demonstrate its potential
for challenging, large-scale, real-world, three-dimensional scenarios. The
latter only becomes feasible if a carefully designed combination of tailored
optimization schemes is employed, which we describe and examine in more detail
- …
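To make the sparsity-constrained reconstruction idea concrete, here is a toy 1D analogue, not the authors' 4D PAT solver: gradient descent on a least-squares data term plus a smoothed total variation penalty, which suppresses noise while preserving edges. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-2):
    """Gradient of the smoothed total variation sum_i sqrt(dx_i^2 + eps)."""
    dx = np.diff(x)
    w = dx / np.sqrt(dx ** 2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w  # each difference dx_i = x[i+1] - x[i] pulls x[i] down ...
    g[1:] += w   # ... and x[i+1] up, toward a locally constant signal
    return g

def tv_denoise(y, lam=0.5, step=0.05, iters=300):
    """Minimize 0.5*||x - y||^2 + lam * TV_eps(x) by gradient descent."""
    x = y.copy()
    for _ in range(iters):
        x -= step * ((x - y) + lam * tv_grad(x))
    return x

# A noisy piecewise-constant signal is recovered with visibly lower error
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])
noisy = clean + 0.2 * rng.standard_normal(40)
recon = tv_denoise(noisy)
```

In the paper's setting the data term would involve the sub-sampled PAT forward operator and the unknowns would also include a motion field, but the interplay between a data-fidelity term and a sparsity-promoting regularizer is the same.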