Rotation-invariant features for multi-oriented text detection in natural images.
Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol that is more suitable for benchmarking algorithms for detecting texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.
FPGA-Based Low-Power Speech Recognition with Recurrent Neural Networks
In this paper, a neural network based real-time speech recognition (SR)
system is developed using an FPGA for very low-power operation. The implemented
system employs two recurrent neural networks (RNNs); one is a
speech-to-character RNN for acoustic modeling (AM) and the other is for
character-level language modeling (LM). The system also employs a statistical
word-level LM to improve the recognition accuracy. The results of the AM, the
character-level LM, and the word-level LM are combined using a fairly simple
N-best search algorithm instead of the hidden Markov model (HMM) based network.
The RNNs are implemented using massively parallel processing elements (PEs) for
low latency and high throughput. The weights are quantized to 6 bits to store
all of them in the on-chip memory of an FPGA. The proposed algorithm is
implemented on a Xilinx XC7Z045, and the system can operate much faster than
real-time.
Comment: Accepted to SiPS 201
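The 6-bit weight quantization that lets all weights fit in the FPGA's on-chip memory can be illustrated as uniform symmetric quantization. This is a minimal NumPy sketch assuming a single per-tensor scale factor; the paper's exact fixed-point scheme may differ:

```python
import numpy as np

def quantize_weights(w, bits=6):
    """Uniformly quantize weights onto a signed fixed-point grid.

    A sketch of the 6-bit quantization described in the abstract, under
    the assumption of one scale factor per weight tensor.
    """
    levels = 2 ** (bits - 1) - 1          # symmetric signed range: +/-31 for 6 bits
    scale = np.max(np.abs(w)) / levels    # per-tensor scale factor
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight matrix and measure the worst-case error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_weights(w)
err = np.max(np.abs(w - dequantize(q, s)))
```

With 6-bit codes the storage cost per weight drops from 32 bits to 6, at the price of a rounding error bounded by half the quantization step.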
Framework for Electroencephalography-based Evaluation of User Experience
Measuring brain activity with electroencephalography (EEG) is mature enough
to assess mental states. Combined with existing methods, such a tool can be used
to strengthen the understanding of user experience. We contribute a set of
methods to continuously estimate the user's mental workload, attention, and
recognition of interaction errors during different interaction tasks. We
validate these measures in a controlled virtual environment and show how they
can be used to compare different interaction techniques or devices, by
comparing here a keyboard and a touch-based interface. Thanks to such a
framework, EEG becomes a promising method to improve the overall usability of
complex computer systems.
Comment: In ACM CHI '16 - SIGCHI Conference on Human Factors in Computing
Systems, May 2016, San Jose, United States
Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals
An electroencephalography (EEG) based Brain Computer Interface (BCI) enables
people to communicate with the outside world by interpreting the EEG signals of
their brains to interact with devices such as wheelchairs and intelligent
robots. More specifically, motor imagery EEG (MI-EEG), which reflects a
subject's active intent, is attracting increasing attention for a variety of BCI
applications. Accurate classification of MI-EEG signals, while essential for
the effective operation of BCI systems, is challenging due to the significant noise
inherent in the signals and the lack of informative correlation between the
signals and brain activities. In this paper, we propose a novel deep neural
network based learning framework that affords perceptive insights into the
relationship between the MI-EEG data and brain activities. We design a joint
convolutional recurrent neural network that simultaneously learns robust
high-level feature representations through low-dimensional dense embeddings from
raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various
artifacts such as background activities. The proposed approach has been
evaluated extensively on a large-scale public MI-EEG dataset and a limited but
easy-to-deploy dataset collected in our lab. The results show that our approach
outperforms a series of baselines and the competitive state-of-the-art
methods, yielding a classification accuracy of 95.53%. The applicability of our
proposed approach is further demonstrated with a practical BCI system for
typing.
Comment: 10 pages
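The joint convolutional-recurrent pipeline described above can be sketched at toy scale: a 1-D convolution extracts features from raw multi-channel EEG, a recurrent pass compresses them into a dense embedding, and a softmax classifies the imagined movement. All shapes and random weights below are hypothetical stand-ins, not the paper's architecture:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution over the time axis, followed by ReLU.
    x: (channels, time); kernels: (n_filters, channels, width)."""
    n_f, _, w = kernels.shape
    t_out = x.shape[1] - w + 1
    out = np.zeros((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w])
    return np.maximum(out, 0.0)

def rnn_last_state(seq, w_in, w_rec):
    """Simple tanh RNN over the filtered sequence; returns the final
    hidden state, serving as the low-dimensional dense embedding."""
    h = np.zeros(w_rec.shape[0])
    for t in range(seq.shape[1]):
        h = np.tanh(w_in @ seq[:, t] + w_rec @ h)
    return h

# Hypothetical dimensions: 8 EEG channels, 64 time steps, 4 MI classes.
rng = np.random.default_rng(1)
x = rng.standard_normal((8, 64))            # one raw MI-EEG trial
kernels = rng.standard_normal((16, 8, 5)) * 0.1
w_in = rng.standard_normal((32, 16)) * 0.1
w_rec = rng.standard_normal((32, 32)) * 0.1
w_out = rng.standard_normal((4, 32)) * 0.1

feats = conv1d(x, kernels)                       # convolutional features
embedding = rnn_last_state(feats, w_in, w_rec)   # dense embedding
logits = w_out @ embedding
probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax over 4 classes
```

The paper additionally uses an autoencoder layer for artifact removal, which this sketch omits.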
A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector
Automatic License Plate Recognition (ALPR) has been a frequent topic of
research due to many practical applications. However, many of the current
solutions are still not robust in real-world situations, commonly depending on
many constraints. This paper presents a robust and efficient ALPR system based
on the state-of-the-art YOLO object detector. The Convolutional Neural Networks
(CNNs) are trained and fine-tuned for each ALPR stage so that they are robust
under different conditions (e.g., variations in camera, lighting, and
background). Especially for character segmentation and recognition, we design a
two-stage approach employing simple data augmentation tricks such as inverted
License Plates (LPs) and flipped characters. The resulting ALPR approach
achieved impressive results in two datasets. First, in the SSIG dataset,
composed of 2,000 frames from 101 vehicle videos, our system achieved a
recognition rate of 93.53% at 47 Frames Per Second (FPS), performing better
than both Sighthound and OpenALPR commercial systems (89.80% and 93.03%,
respectively) and considerably outperforming previous results (81.80%). Second,
targeting a more realistic scenario, we introduce a larger public dataset,
called the UFPR-ALPR dataset, designed for ALPR. This dataset contains 150 videos
and 4,500 frames captured while both the camera and the vehicles are moving, and
contains different types of vehicles (cars, motorcycles, buses, and trucks). In
our proposed dataset, the trial versions of commercial systems achieved
recognition rates below 70%. On the other hand, our system performed better,
with a recognition rate of 78.33% at 35 FPS.
Comment: Accepted for presentation at the International Joint Conference on
Neural Networks (IJCNN) 201
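The augmentation tricks named above, inverted LPs and flipped characters, can be sketched as below. The FLIP_SAFE character set is a hypothetical example, since the abstract does not list which characters remain valid when mirrored:

```python
import numpy as np

# Characters whose mirror image is still a valid glyph; a hypothetical
# subset -- the paper's exact choice is not given in the abstract.
FLIP_SAFE = set("0018HIMOTUVWXY")

def augment_character(img, label):
    """Return augmented (image, label) pairs from one character sample.

    Sketches the two tricks named in the abstract: color-inverted license
    plates and horizontally flipped characters (only applied when the
    mirrored glyph still reads as the same character).
    """
    out = [(img, label)]
    out.append((255 - img, label))            # inverted (negative) plate colors
    if label in FLIP_SAFE:
        out.append((np.fliplr(img), label))   # mirror-safe character flip
    return out

# Example: an "H" yields three training samples, an asymmetric char only two.
sample = np.zeros((32, 32), dtype=np.uint8)
augmented = augment_character(sample, "H")
```

Cheap, label-preserving transforms like these enlarge the character training set without new annotation effort.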
An Open Source Testing Tool for Evaluating Handwriting Input Methods
This paper presents an open source tool for testing the recognition accuracy
of Chinese handwriting input methods. The tool consists of two modules, namely
the PC and Android mobile client. The PC client reads handwritten samples in
the computer, and transfers them individually to the Android client in
accordance with the socket communication protocol. After the Android client
receives the data, it simulates the handwriting on the screen of the client device, and
triggers the corresponding handwriting recognition method. The recognition
accuracy is recorded by the Android client. We present the design principles
and describe the implementation of the test platform. We construct several test
datasets for evaluating different handwriting recognition systems, and conduct
an objective and comprehensive test using six Chinese handwriting input methods
with five datasets. The test results for the recognition accuracy are then
compared and analyzed.
Comment: 5 pages, 3 figures, 11 tables. Accepted to appear at ICDAR 201
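The PC-to-Android sample transfer could look like the sketch below, using an assumed length-prefixed JSON framing; the tool's actual socket protocol is not specified in the abstract. A local socket pair stands in for the PC-phone connection:

```python
import json
import socket
import struct

def send_sample(sock, sample):
    """Send one handwritten sample as length-prefixed JSON (assumed framing)."""
    payload = json.dumps(sample).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_sample(sock):
    """Read one length-prefixed sample on the Android-client side."""
    (size,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, size).decode("utf-8"))

def _recv_exact(sock, n):
    """Block until exactly n bytes have been received."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

# Loopback demonstration: one handwritten-character sample (hypothetical
# stroke format) sent from the "PC" side and received on the "phone" side.
a, b = socket.socketpair()
send_sample(a, {"char": "中", "strokes": [[[0, 0], [10, 12]]]})
received = recv_sample(b)
a.close(); b.close()
```

The explicit length prefix keeps sample boundaries unambiguous over TCP's byte stream, which is why some framing of this kind is needed when sending samples individually.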