28 research outputs found

    Vehicle-Rear: A New Dataset to Explore Feature Fusion for Vehicle Identification Using Convolutional Neural Networks

    This work addresses the problem of vehicle identification through non-overlapping cameras. As our main contribution, we introduce a novel dataset for vehicle identification, called Vehicle-Rear, that contains more than three hours of high-resolution videos, with accurate information about the make, model, color, and year of nearly 3,000 vehicles, in addition to the position and identification of their license plates. To explore our dataset, we design a two-stream CNN that simultaneously uses two of the most distinctive and persistent features available: the vehicle's appearance and its license plate. This is an attempt to tackle a major problem: false alarms caused by vehicles with similar designs or by very close license plate identifiers. In the first network stream, shape similarities are identified by a Siamese CNN that uses a pair of low-resolution vehicle patches recorded by two different cameras. In the second stream, we use a CNN for OCR to extract textual information, confidence scores, and string similarities from a pair of high-resolution license plate patches. Then, features from both streams are merged by a sequence of fully connected layers for the final decision. In our experiments, we compared the two-stream network against several well-known CNN architectures using single or multiple vehicle features. The architectures, trained models, and dataset are publicly available at https://github.com/icarofua/vehicle-rear
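The decision-level idea behind the two-stream network can be sketched in plain Python. Note that the paper learns the fusion with fully connected layers over CNN features; the fixed weights, threshold, and `difflib`-based string similarity below are illustrative stand-ins, not the authors' method:

```python
from difflib import SequenceMatcher

def plate_similarity(plate_a: str, plate_b: str) -> float:
    """String similarity of two OCR'd plate texts, in [0, 1]."""
    return SequenceMatcher(None, plate_a, plate_b).ratio()

def fuse_scores(shape_sim: float, plate_sim: float,
                w_shape: float = 0.4, w_plate: float = 0.6,
                threshold: float = 0.7) -> bool:
    """Fuse the two stream scores into a same-vehicle decision.
    Weights and threshold are illustrative, not the paper's learned layers."""
    return w_shape * shape_sim + w_plate * plate_sim >= threshold

# Two cameras see similar-looking vehicles (shape_sim = 0.9):
same_plate = fuse_scores(0.9, plate_similarity("ABC1234", "ABC1234"))  # True
diff_plate = fuse_scores(0.9, plate_similarity("ABC1234", "XYZ9876"))  # False
```

The plate stream dominates the weighting here, mirroring the intuition that appearance alone cannot separate vehicles with similar designs.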

    Text recognition and 2D/3D object tracking

    Advisors: Jorge Stolfi, Neucimar Jerônimo Leite. PhD thesis - Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: In this thesis we address three computer vision problems: (1) the detection and recognition of flat text objects in images of real scenes; (2) the tracking of such text objects in a digital video; and (3) the tracking of an arbitrary three-dimensional rigid object with known markings in a digital video. For each problem we developed innovative algorithms, which are at least as accurate and robust as other state-of-the-art algorithms. Specifically, for text recognition we developed (and extensively evaluated) a new HOG-based descriptor specialized for Roman script, which we call T-HOG, and showed its value as a post-filter for an existing text detector (SNOOPERTEXT). We also improved the SNOOPERTEXT algorithm by using a multi-scale technique to handle widely different letter sizes while limiting the algorithm's sensitivity to various artifacts. For text tracking, we describe four basic ways of combining a text detector and a text tracker, and we developed a specific particle-filter-based tracker that exploits the T-HOG recognizer. For rigid object tracking, we developed a new accurate and robust algorithm (AFFTRACK) that combines the KLT feature tracker with an improved camera calibration procedure. We extensively tested our algorithms on several benchmarks well known in the literature. We also created publicly available benchmarks for the evaluation of text detection, text tracking, and rigid object tracking algorithms. Doctorate in Computer Science.
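The particle-filter tracker mentioned in the thesis can be illustrated with a minimal one-dimensional bootstrap filter. In the actual tracker the likelihood of each particle comes from the T-HOG text-confidence of the image patch under that particle's pose; the Gaussian likelihood around a scalar observation below is a toy stand-in for that score:

```python
import math
import random

def particle_filter_track(observations, n_particles=500,
                          motion_std=1.0, obs_std=1.0):
    """Minimal bootstrap particle filter for a 1-D object position.
    Returns the weighted-mean position estimate at each time step."""
    particles = [observations[0] + random.gauss(0, obs_std)
                 for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # 1) Predict: diffuse particles with a random-walk motion model.
        particles = [p + random.gauss(0, motion_std) for p in particles]
        # 2) Weight: score each particle against the observation
        #    (in the thesis, this would be the T-HOG confidence).
        weights = [math.exp(-0.5 * ((p - z) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3) Estimate: weighted mean of the particle positions.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # 4) Resample: draw new particles proportional to their weights.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates
```

For a target drifting steadily to the right, the estimates track the observations closely after the first few steps.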

    Leveraging Model Fusion for Improved License Plate Recognition

    License Plate Recognition (LPR) plays a critical role in various applications, such as toll collection, parking management, and traffic law enforcement. Although LPR has witnessed significant advancements through the development of deep learning, there has been a noticeable lack of studies exploring the potential improvements in results by fusing the outputs from multiple recognition models. This research aims to fill this gap by investigating the combination of up to 12 different models using straightforward approaches, such as selecting the most confident prediction or employing majority-vote-based strategies. Our experiments encompass a wide range of datasets, revealing substantial benefits of fusion approaches in both intra- and cross-dataset setups. Essentially, fusing multiple models considerably reduces the likelihood of obtaining subpar performance on a particular dataset/scenario. We also found that combining models based on their speed is an appealing approach. Specifically, for applications where the recognition task can tolerate some additional time, though not excessively, an effective strategy is to combine 4-6 models. These models may not be the most accurate individually, but their fusion strikes an optimal balance between accuracy and speed.
    Comment: Accepted for presentation at the Iberoamerican Congress on Pattern Recognition (CIARP) 202
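The two fusion strategies named above, picking the most confident prediction and majority voting, can be sketched as follows. The character-wise vote shown here assumes all models emit strings of equal length and is a simplification for illustration, not necessarily the exact scheme used in the paper:

```python
from collections import Counter

def fuse_most_confident(predictions):
    """predictions: list of (plate_string, confidence) pairs, one per model.
    Returns the plate string of the single most confident model."""
    return max(predictions, key=lambda pc: pc[1])[0]

def fuse_majority_vote(predictions):
    """Character-wise majority vote across models (ties broken by the
    first model seen). Assumes all predicted strings have equal length."""
    strings = [plate for plate, _ in predictions]
    return "".join(Counter(chars).most_common(1)[0][0]
                   for chars in zip(*strings))

preds = [("ABC1234", 0.91), ("ABC1284", 0.97), ("ABC1234", 0.88)]
fuse_most_confident(preds)  # "ABC1284" — the highest-confidence model wins
fuse_majority_vote(preds)   # "ABC1234" — the per-character vote outvotes it
```

The example shows why voting can beat confidence alone: a single overconfident model misreads one character, but the other two models outvote it.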

    Do We Train on Test Data? The Impact of Near-Duplicates on License Plate Recognition

    This work draws attention to the large fraction of near-duplicates in the training and test sets of datasets widely adopted in License Plate Recognition (LPR) research. These duplicates refer to images that, although different, show the same license plate. Our experiments, conducted on the two most popular datasets in the field, show a substantial decrease in recognition rate when six well-known models are trained and tested under fair splits, that is, in the absence of duplicates in the training and test sets. Moreover, in one of the datasets, the ranking of models changed considerably when they were trained and tested under duplicate-free splits. These findings suggest that such duplicates have significantly biased the evaluation and development of deep learning-based models for LPR. The list of near-duplicates we have found and proposals for fair splits are publicly available for further research at https://raysonlaroca.github.io/supp/lpr-train-on-test/
    Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 202
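A fair split in the sense above can be produced by grouping images by plate identity and assigning each whole group to exactly one side, so no plate ever appears in both training and test sets. This is a hypothetical sketch of the grouping idea; the authors' actual proposed splits are published on the project page:

```python
import random
from collections import defaultdict

def fair_split(samples, test_fraction=0.2, seed=42):
    """Split (image_id, plate_text) samples so that no plate appears in
    both the training and test sets. Groups images by plate identity,
    then assigns whole groups to one side or the other."""
    groups = defaultdict(list)
    for image_id, plate in samples:
        groups[plate].append(image_id)
    plates = sorted(groups)
    random.Random(seed).shuffle(plates)
    n_test = max(1, int(len(plates) * test_fraction))
    test_plates = set(plates[:n_test])
    train = [(i, p) for p, ids in groups.items()
             if p not in test_plates for i in ids]
    test = [(i, p) for p, ids in groups.items()
            if p in test_plates for i in ids]
    return train, test
```

A plain random split over images would scatter near-duplicates of the same plate across both sides, which is exactly the leakage the paper measures.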