88 research outputs found

    A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector

    Full text link
    Automatic License Plate Recognition (ALPR) has been a frequent topic of research due to its many practical applications. However, many current solutions are still not robust in real-world situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector. Convolutional Neural Networks (CNNs) are trained and fine-tuned for each ALPR stage so that they are robust under different conditions (e.g., variations in camera, lighting, and background). Especially for character segmentation and recognition, we design a two-stage approach employing simple data augmentation tricks such as inverted License Plates (LPs) and flipped characters. The resulting ALPR approach achieved impressive results on two datasets. First, on the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% at 47 Frames Per Second (FPS), performing better than both the Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%). Second, targeting a more realistic scenario, we introduce a larger public dataset, called the UFPR-ALPR dataset, designed for ALPR. This dataset contains 150 videos and 4,500 frames captured while both the camera and the vehicles are moving, and also contains different types of vehicles (cars, motorcycles, buses and trucks). On our proposed dataset, the trial versions of the commercial systems achieved recognition rates below 70%. Our system performed better, with a recognition rate of 78.33% at 35 FPS.
    Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
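The augmentation tricks mentioned (inverted LPs and flipped characters) can be sketched as follows. This is a minimal illustration assuming grayscale numpy crops; the set of flip-safe characters is an assumption for illustration, not taken from the paper:

```python
import numpy as np

# Characters whose glyphs still read correctly after a horizontal mirror
# (illustrative assumption; the paper's exact set may differ).
FLIP_SAFE = set("0180HIMOTUVWXY")

def augment_plate(img: np.ndarray) -> np.ndarray:
    """Color-invert a grayscale LP crop, simulating light-on-dark plates."""
    return 255 - img

def augment_char(img: np.ndarray, label: str) -> list:
    """Return the original character crop, plus a horizontally flipped
    copy when the character remains valid after mirroring."""
    out = [img]
    if label in FLIP_SAFE:
        out.append(img[:, ::-1])
    return out
```

Inversion doubles the plate crops and flipping roughly doubles the symmetric-character crops, at no annotation cost.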

    Do We Train on Test Data? The Impact of Near-Duplicates on License Plate Recognition

    Full text link
    This work draws attention to the large fraction of near-duplicates in the training and test sets of datasets widely adopted in License Plate Recognition (LPR) research. These duplicates are images that, although different, show the same license plate. Our experiments, conducted on the two most popular datasets in the field, show a substantial decrease in recognition rate when six well-known models are trained and tested under fair splits, that is, in the absence of duplicates across the training and test sets. Moreover, on one of the datasets, the ranking of the models changed considerably when they were trained and tested under duplicate-free splits. These findings suggest that such duplicates have significantly biased the evaluation and development of deep learning-based models for LPR. The list of near-duplicates we have found and our proposals for fair splits are publicly available for further research at https://raysonlaroca.github.io/supp/lpr-train-on-test/
    Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 202
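One common way to surface such near-duplicates is perceptual hashing. The sketch below (average hash compared by Hamming distance) is a standard technique offered as an illustration, not necessarily the procedure used in this work:

```python
import numpy as np

def average_hash(img: np.ndarray, size: int = 8) -> np.ndarray:
    """Downscale a grayscale image to size x size by block averaging,
    then threshold each cell against the global mean to get a bit vector."""
    h, w = img.shape
    # Crop so the image divides evenly into size x size blocks.
    img = img[: h - h % size, : w - w % size].astype(float)
    blocks = img.reshape(size, img.shape[0] // size, size, img.shape[1] // size)
    small = blocks.mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def near_duplicates(a: np.ndarray, b: np.ndarray, max_dist: int = 5) -> bool:
    """Flag two images as near-duplicates when their hashes differ
    in at most max_dist bits (Hamming distance)."""
    return int((average_hash(a) != average_hash(b)).sum()) <= max_dist
```

Hashing every image once and comparing hash pairs makes an all-pairs scan over a whole dataset cheap, since each comparison is a 64-bit XOR rather than a pixelwise match.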

    A Benchmark for Iris Location and a Deep Learning Detector Evaluation

    Full text link
    The iris is considered the biometric trait with the highest degree of uniqueness. Iris location is an important task for biometric systems, directly affecting the results obtained in specific applications such as iris recognition, spoofing detection, and contact lens detection, among others. This work defines the iris location problem as the delimitation of the smallest square window that encompasses the iris region. In order to build a benchmark for iris location, we annotate (with iris square bounding boxes) four databases from different biometric applications and make them publicly available to the community. Besides these four annotated databases, we include two others from the literature. We perform experiments on these six databases, five obtained with near-infrared sensors and one with a visible-light sensor. We compare the classical and outstanding Daugman iris location approach with two window-based detectors: 1) a sliding-window detector based on Histogram of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM) classifier; 2) a deep learning-based detector fine-tuned from the YOLO object detector. Experimental results show that the deep learning-based detector outperforms the others in terms of accuracy and runtime (GPU version) and should be chosen whenever possible.
    Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
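The "smallest square window" formulation maps to a small geometric helper. A minimal sketch assuming a binary iris mask (clipping the square to the image bounds is omitted for brevity):

```python
import numpy as np

def smallest_square_box(mask: np.ndarray) -> tuple:
    """Given a binary mask of the iris region, return (row, col, side) of
    the smallest square window that encompasses every foreground pixel."""
    rows, cols = np.nonzero(mask)
    top, left = rows.min(), cols.min()
    height = rows.max() - top + 1
    width = cols.max() - left + 1
    side = max(height, width)
    # Center the square on the tight (possibly non-square) bounding box.
    row = top - (side - height) // 2
    col = left - (side - width) // 2
    return int(row), int(col), int(side)
```

The side of the square is the larger dimension of the tight bounding box, so the window stays as small as possible while still covering the whole region.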

    Robust Iris Segmentation Based on Fully Convolutional Networks and Generative Adversarial Networks

    Full text link
    The iris can be considered one of the most important biometric traits due to its high degree of uniqueness. Iris-based biometric applications depend mainly on iris segmentation, which is often not robust across different environments such as near-infrared (NIR) and visible (VIS) ones. In this paper, two approaches for robust iris segmentation, based on Fully Convolutional Networks (FCNs) and Generative Adversarial Networks (GANs), are described. Similar to a common convolutional network, but without the fully connected (i.e., classification) layers, an FCN combines, at its end, outputs from pooling layers at different depths. Based on game theory, a GAN is designed as two networks competing with each other to generate the best segmentation. The proposed segmentation networks achieved promising results on all evaluated datasets of NIR images (BioSec, CasiaI3, CasiaT4, IITD-1) and VIS images (NICE.I, CrEye-Iris and MICHE-I), in both non-cooperative and cooperative domains, outperforming the baseline techniques, which are the best found so far in the literature, i.e., establishing a new state of the art for these datasets. Furthermore, we manually labeled 2,431 images from the CasiaT4, CrEye-Iris and MICHE-I datasets, making the masks available for research purposes.
    Comment: Accepted for presentation at the Conference on Graphics, Patterns and Images (SIBGRAPI) 201
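The adversarial setup described above is commonly formalized as a minimax game between a segmenter S and a discriminator D; the formulation below is a standard one offered for illustration, and the paper's exact losses may differ:

```latex
\min_{S}\,\max_{D}\;
\mathbb{E}_{(x,y)}\bigl[\log D(x, y)\bigr]
+ \mathbb{E}_{x}\bigl[\log\bigl(1 - D(x, S(x))\bigr)\bigr]
+ \lambda\,\mathbb{E}_{(x,y)}\bigl[\ell_{\mathrm{seg}}(S(x), y)\bigr]
```

Here x is the input image, y its ground-truth mask, ℓ_seg a per-pixel segmentation loss (e.g., cross-entropy), and λ balances the adversarial and per-pixel terms.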

    UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios

    Full text link
    Recently, ocular biometrics in unconstrained environments using images obtained at visible wavelengths has gained researchers' attention, especially with images captured by mobile devices. Periocular recognition has been demonstrated to be an alternative when the iris trait is not available due to occlusion or low image resolution. However, the periocular trait does not present the high uniqueness of the iris trait. Thus, the use of datasets containing many subjects is essential to assess a biometric system's capacity to extract discriminating information from the periocular region. Also, to address the within-class variability caused by lighting and by attributes in the periocular region, it is of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all of these factors, in this work we present a new periocular dataset containing samples from 1,122 subjects, acquired across 3 sessions with 196 different mobile devices. The images were captured in unconstrained environments with just a single instruction to the participants: to place their eyes in a region of interest. We also performed an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models that have been employed in state-of-the-art approaches based on Multi-class Classification, Multitask Learning, Pairwise Filters Networks, and Siamese Networks. The results achieved in the closed- and open-world protocols, considering both the identification and verification tasks, show that this area still needs research and development.
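In the verification task mentioned above, the decision typically reduces to thresholding the similarity between two embeddings. A minimal sketch assuming generic CNN feature vectors; the threshold value is an illustrative assumption, not an operating point from the paper:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.5) -> bool:
    """Accept the claimed identity when the similarity between the probe
    and the enrolled (gallery) embedding meets the operating threshold."""
    return cosine_similarity(probe, gallery) >= threshold
```

In practice the threshold is chosen from a DET/ROC curve to hit a target false-accept rate, which is what the closed- and open-world protocols evaluate.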

    Melissocoenotics (Apoidea, Anthophila) in the Parque Florestal dos Pioneiros, Maringá, PR (southern Brazil): Part II. Utilization of floral resources

    Get PDF
    Melissocoenotic studies (Hymenoptera, Anthophila) in the Parque Florestal dos Pioneiros, Maringá, PR (southern Brazil): II. Utilization of the floral resources. S. Laroca, J. F. Barbosa & J. Rodrigues.

    Melissocoenotics (Hymenoptera, Anthophila) in the Parque Florestal dos Pioneiros, Maringá, PR (southern Brazil) — I. Relative abundance and diversity

    Get PDF
    Melissocoenotics (Hymenoptera, Anthophila) in the Parque Florestal dos Pioneiros, Maringá, PR (southern Brazil) — I. Relative abundance and diversity

    Tissue Doppler Imaging can be useful to distinguish pathological from physiological left ventricular hypertrophy: a study in master athletes and mild hypertensive subjects

    Get PDF
    Background: Transthoracic echocardiography often shows increased left ventricular wall thickness in master athletes, resulting from intense physical training. Left Ventricular Hypertrophy (LVH) can also be due to constant pressure overload. Conventional Pulsed Wave (PW) Doppler analysis of diastolic function sometimes fails to distinguish physiological from pathological LVH. The aim of this study is to evaluate the role of Pulsed Wave Tissue Doppler Imaging (TDI) in differentiating pathological from physiological LVH in a middle-aged population.
    Methods: We selected a group of 80 master athletes and a group of 80 sedentary subjects with essential hypertension and apparently normal diastolic function at standard PW Doppler analysis. The two groups were comparable in increased left ventricular wall thickness and mass index (134.4 ± 19.7 vs 134.5 ± 22.1 g/m²; p > .05). Diastolic function indexes obtained with the PW technique were in the normal range for both.
    Results: The Pulsed Wave TDI study of diastolic function immediately distinguished the two groups. While in master athletes the diastolic TDI-derived parameters remained within the normal range (E' 9.4 ± 3.1 cm/sec; E/E' 7.8 ± 2.1), in the hypertensive group these parameters were consistently altered, with mean values and variation ranges always outside validated normal limits (E' 7.2 ± 2.4 cm/sec; E/E' 10.6 ± 3.2); E' and E/E' were statistically different between the two groups (p < .001).
    Conclusion: Our study showed that the TDI technique can be an easy and validated method to assess diastolic function, differentiating normal from pseudonormal diastolic patterns and distinguishing physiological from pathological LVH, which is relevant to the sports-eligibility certification required by law in countries such as Italy.
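The E/E' index reported above is a simple ratio of the PW Doppler early mitral inflow velocity (E) to the TDI early diastolic annular velocity (E'). A minimal sketch of the arithmetic; the cutoff is an illustrative assumption, not a value validated by this study:

```python
def e_over_e_prime(mitral_e_cm_s: float, annular_e_prime_cm_s: float) -> float:
    """E/E' ratio: early mitral inflow velocity (E, PW Doppler) divided by
    early diastolic mitral annular velocity (E', TDI), both in cm/sec."""
    return mitral_e_cm_s / annular_e_prime_cm_s

def flags_pathological_pattern(ratio: float, cutoff: float = 10.0) -> bool:
    """Illustrative screen only: real clinical cutoffs depend on guidelines
    and on where the annular velocity is sampled."""
    return ratio > cutoff
```

With the group means reported above, the athletes' mean ratio (7.8) falls below such a cutoff while the hypertensive group's mean (10.6) falls above it, which is the separation the study describes.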