Indonesian Plate Number Identification Using YOLACT and Mobilenetv2 in the Parking Management System
A vehicle registration plate serves as a vehicle's identity. In recent years, technology to identify plate numbers automatically, known as Automatic License Plate Recognition (ALPR), has grown steadily. A Convolutional Neural Network and YOLACT are used to recognize plate numbers from a video. The recognition process consists of three stages. The first stage determines the coordinates of the number-plate area in a video frame using YOLACT. The second stage separates each character inside the plate using morphological operations, horizontal projection, and topological structural analysis. The third stage recognizes each character candidate using a MobileNetV2 CNN. To reduce computation time, frame sampling is performed so that only some frames of the video are processed. This experimental study uses the frame-sampling interval, the YOLACT epoch count, the MobileNetV2 epoch count, and the validation-data ratio as parameters. The best results are obtained with 250 ms frame sampling, which reduces computation time by up to 78%, while accuracy is driven by the MobileNetV2 model trained for 100 epochs with a validation split ratio of 0.1, yielding an average accuracy of 83.33%. Frame sampling reduces computation time, but a higher sampling interval causes the system to fail to find the plate region.
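The frame-sampling step described above can be sketched in a few lines. The function below is a minimal illustration (not the authors' code), assuming a fixed-frame-rate video and a sampling interval given in milliseconds:

```python
def sample_frames(total_frames: int, fps: float, interval_ms: float) -> list:
    """Pick one frame roughly every `interval_ms` milliseconds of video time."""
    step = max(1, round(fps * interval_ms / 1000.0))  # frames between samples
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled every 250 ms:
kept = sample_frames(total_frames=300, fps=30.0, interval_ms=250.0)
# Only these frames would be passed to the detector; the rest are skipped,
# which is where the reported reduction in computation time comes from.
```

If the interval grows too large, a plate may enter and leave the frame entirely between two samples, which matches the failure mode the abstract reports for higher sampling values.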
License Plate Recognition using Convolutional Neural Networks Trained on Synthetic Images
In this thesis, we propose a license plate recognition system and study the feasibility
of using synthetic training samples to train convolutional neural networks for a
practical application.
First we develop a modular framework for synthetic license plate generation; to
generate different license plate types (or other objects) only the first module needs
to be adapted. The other modules apply variations to the training samples, such as
background, occlusions, camera perspective projection, object noise, and camera
acquisition noise, with the aim of achieving enough variation of the object that the
trained networks will also recognize real objects of the same class.
Then we design two convolutional neural networks of low-complexity for license
plate detection and character recognition. Both are designed for simultaneous
classification and localization by branching the networks into a classification and a
regression branch and are trained end-to-end simultaneously over both branches, on
only our synthetic training samples.
To recognize real license plates, we design a pipeline for scale-invariant license
plate detection, using a scale pyramid and a fully convolutional application of the
license plate detection network in order to detect any number of license plates,
at any scale, in an image. Before character classification is applied, potential
plate regions are un-skewed based on the detected plate location, to achieve a
representation of the characters that is as close to optimal as possible. Character
classification is also performed with a fully convolutional sweep, finding all
characters simultaneously.
Both the plate and the character stages apply a refinement classification in which
initial classifications are first centered and rescaled. We show that this simple
yet effective trick greatly improves the accuracy of our classifications, at only
a small increase in complexity. To our knowledge, this trick has not been exploited before.
To show the effectiveness of our system, we first apply it to a dataset of photos
of Italian license plates, evaluating the different stages of the system and the
effect the classification thresholds have on accuracy. We also find robust training
parameters and thresholds that are reliable for classification without any need for
calibration on a validation set of real annotated samples (which may not always be
available), and achieve balanced precision and recall on the set of Italian license
plates, both in excess of 98%.
Finally, to show that our system generalizes to new plate types, we compare our
system to two reference systems on a dataset of Taiwanese license plates. For this, we
only modify the first module of the synthetic plate generation algorithm to produce
Taiwanese license plates and adjust parameters regarding plate dimensions; then we
train our networks and apply the classification pipeline, using the robust parameters,
on the Taiwanese reference dataset. We achieve state-of-the-art performance on plate
detection (99.86% precision and 99.1% recall), single character detection (99.6%),
and full license plate reading (98.7%).
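As one concrete reading of the scale-pyramid step above, the sketch below enumerates progressively downscaled image sizes until the shorter side falls below the detector's minimum input size. The scale factor of 0.75 and the minimum size of 100 px are illustrative assumptions, not values taken from the thesis:

```python
def scale_pyramid(width: int, height: int, min_size: int, factor: float = 0.75) -> list:
    """List (w, h) pairs for each pyramid level, finest resolution first,
    stopping once the shorter side would drop below `min_size`."""
    levels = []
    w, h = width, height
    while min(w, h) >= min_size:
        levels.append((w, h))
        w, h = int(w * factor), int(h * factor)
    return levels

# Running a fully convolutional detector on every level lets plates of
# any size map onto the network's fixed receptive field at some scale.
pyramid = scale_pyramid(1280, 720, min_size=100)
```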
A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector
Automatic License Plate Recognition (ALPR) has been a frequent topic of
research due to many practical applications. However, many of the current
solutions are still not robust in real-world situations, commonly depending on
many constraints. This paper presents a robust and efficient ALPR system based
on the state-of-the-art YOLO object detector. The Convolutional Neural Networks
(CNNs) are trained and fine-tuned for each ALPR stage so that they are robust
under different conditions (e.g., variations in camera, lighting, and
background). Especially for character segmentation and recognition, we design a
two-stage approach employing simple data augmentation tricks such as inverted
License Plates (LPs) and flipped characters. The resulting ALPR approach
achieved impressive results in two datasets. First, in the SSIG dataset,
composed of 2,000 frames from 101 vehicle videos, our system achieved a
recognition rate of 93.53% and 47 Frames Per Second (FPS), performing better
than both Sighthound and OpenALPR commercial systems (89.80% and 93.03%,
respectively) and considerably outperforming previous results (81.80%). Second,
targeting a more realistic scenario, we introduce a larger public dataset,
called the UFPR-ALPR dataset, designed for ALPR. This dataset contains 150 videos
and 4,500 frames captured while both the camera and the vehicles are moving, and it
includes different types of vehicles (cars, motorcycles, buses, and trucks). On
our proposed dataset, the trial versions of the commercial systems achieved
recognition rates below 70%, whereas our system performed better, with a
recognition rate of 78.33% at 35 FPS.
Comment: Accepted for presentation at the International Joint Conference on
Neural Networks (IJCNN) 201
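The augmentation tricks mentioned above can be illustrated with a minimal sketch, representing grayscale patches as nested lists. The set of flip-safe characters here is an assumption made for the example, not a set taken from the paper:

```python
def invert_plate(img):
    """Color-invert a grayscale plate patch (0-255 values), so that
    dark-on-light plates also yield light-on-dark training samples."""
    return [[255 - px for px in row] for row in img]

# Characters whose glyphs are (roughly) horizontally symmetric -- an assumed
# set; flipping any other character would change or invalidate its label.
FLIP_SAFE = {"0", "1", "8", "H", "I", "O", "U", "V", "W", "X"}

def hflip_char(img, label):
    """Horizontally flip a character patch, but only when the glyph is
    symmetric so the label is preserved; return None otherwise."""
    if label not in FLIP_SAFE:
        return None
    return [list(reversed(row)) for row in img]
```

Both transforms are label-preserving, so they enlarge the training set without any extra annotation effort.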
Vehicle-Rear: A New Dataset to Explore Feature Fusion for Vehicle Identification Using Convolutional Neural Networks
This work addresses the problem of vehicle identification through
non-overlapping cameras. As our main contribution, we introduce a novel dataset
for vehicle identification, called Vehicle-Rear, that contains more than three
hours of high-resolution videos, with accurate information about the make,
model, color and year of nearly 3,000 vehicles, in addition to the position and
identification of their license plates. To explore our dataset we design a
two-stream CNN that simultaneously uses two of the most distinctive and
persistent features available: the vehicle's appearance and its license plate.
This is an attempt to tackle a major problem: false alarms caused by vehicles
with similar designs or by very close license plate identifiers. In the first
network stream, shape similarities are identified by a Siamese CNN that uses a
pair of low-resolution vehicle patches recorded by two different cameras. In
the second stream, we use a CNN for OCR to extract textual information,
confidence scores, and string similarities from a pair of high-resolution
license plate patches. Then, features from both streams are merged by a
sequence of fully connected layers for the final decision. In our experiments, we
compared the two-stream network against several well-known CNN architectures
using single or multiple vehicle features. The architectures, trained models,
and dataset are publicly available at https://github.com/icarofua/vehicle-rear.
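The late-fusion step described above (merging the two streams with fully connected layers) can be illustrated with a single toy linear unit plus a sigmoid. The feature dimensions and weights below are made up for the example; the actual model stacks several trained FC layers:

```python
import math

def fuse_streams(shape_feats, ocr_feats, weights, bias):
    """Concatenate the appearance and OCR feature vectors, then apply one
    fully connected unit with a sigmoid to get a same-vehicle probability."""
    x = list(shape_feats) + list(ocr_feats)
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With all-zero features, the score reduces to the sigmoid of the bias alone:
score = fuse_streams([0.0, 0.0], [0.0, 0.0], [0.5, -0.3, 0.2, 0.1], 0.0)
```

The point of fusing both streams is exactly the failure case named in the abstract: similar-looking vehicles are disambiguated by the plate text, and near-identical plate strings are disambiguated by appearance.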