Licence Plate Detection Using Machine Learning
License Plate Recognition (LPR) is one of the tough tasks in computer vision. Although it has been studied for a long time, challenges remain under harsh environmental conditions such as snow, rain, wind, and low light, as well as poor plate conditions such as bent, rotated, or broken plates. Recognition and detection frameworks suffer a significant performance drop under these conditions. In this paper, we introduce a model to improve recognition accuracy on the Chinese City Parking Dataset (CCPD) using two separate convolutional neural networks (CNNs). The first CNN detects the license plate, using Non-Maximum Suppression (NMS) to select the most probable bounding box; the second takes these bounding boxes and applies a spatial attention network and a character recognition model to recognize the license plate. We first train one CNN to detect license plates, then use the second CNN to recognize the characters. The overall recognition accuracy was 89% on the CCPD dataset.
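The NMS step mentioned above, which keeps the most probable bounding box and discards heavily overlapping candidates, can be sketched as follows. This is a generic greedy NMS in NumPy, not the paper's implementation; the boxes, scores, and IoU threshold are illustrative.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy Non-Maximum Suppression: repeatedly keep the highest-scoring
    box and drop any remaining box whose IoU with it exceeds iou_thresh.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) array of detection confidences
    Returns the indices of the kept boxes.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # candidate indices, descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with every remaining candidate
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only candidates that overlap the chosen box weakly enough
        order = order[1:][iou <= iou_thresh]
    return keep
```

For example, two near-identical plate candidates and one distinct box reduce to two detections: `nms` keeps the higher-scoring of the overlapping pair and the isolated box.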
Towards End-to-end Car License Plate Location and Recognition in Unconstrained Scenarios
Benefiting from the rapid development of convolutional neural networks, the
performance of car license plate detection and recognition has been largely
improved. Nonetheless, challenges still exist especially for real-world
applications. In this paper, we present an efficient and accurate framework to
solve the license plate detection and recognition tasks simultaneously. It is a
lightweight and unified deep neural network that can be optimized end-to-end
and work in real-time. Specifically, for unconstrained scenarios, an
anchor-free method is adopted to efficiently detect the bounding box and four
corners of a license plate, which are used to extract and rectify the target
region features. Then, a novel convolutional neural network branch is designed
to further extract features of characters without segmentation. Finally, the
recognition task is treated as a sequence labelling problem, which is solved
directly by Connectionist Temporal Classification (CTC). Several public datasets
including images collected from different scenarios under various conditions
are chosen for evaluation. Extensive experiments indicate that the proposed
method significantly outperforms previous state-of-the-art methods in both
speed and precision.
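At inference time, the CTC formulation above is typically resolved with greedy (best-path) decoding: take the most likely label at each time step, collapse consecutive repeats, then drop the blank symbol. A minimal sketch, assuming a hypothetical plate alphabet with index 0 reserved for the CTC blank:

```python
import numpy as np

# Hypothetical alphabet for illustration; index 0 is the CTC blank ("-").
ALPHABET = ["-", "A", "B", "8", "9"]

def ctc_greedy_decode(logits):
    """Greedy (best-path) CTC decoding.

    logits: (T, C) array of per-time-step scores over the alphabet.
    Collapses consecutive repeated labels, then removes blanks.
    """
    best_path = logits.argmax(axis=1)  # (T,) best label index per step
    decoded = []
    prev = -1
    for idx in best_path:
        if idx != prev and idx != 0:   # skip repeats and the blank
            decoded.append(ALPHABET[idx])
        prev = idx
    return "".join(decoded)
```

For instance, a best path of `A A - B B - 8` collapses to the string `AB8`; the blank lets CTC emit genuinely repeated characters, since `A - A` decodes to `AA` while `A A` decodes to `A`.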