181,282 research outputs found

    Label embedding for text recognition

    The standard approach to recognizing text in images consists of first classifying local image regions into candidate characters and then combining them with high-level word models such as conditional random fields (CRFs). This paper explores a new paradigm that departs from this bottom-up view. We propose to embed word labels and word images into a common Euclidean space. Given a word image to be recognized, the text recognition problem is cast as one of retrieval: find the closest word label in this space. This common space is learned using the Structured SVM (SSVM) framework by enforcing matching label-image pairs to be closer than non-matching pairs. This method has the following advantages: it does not require costly pre- or post-processing operations, it allows for the recognition of never-seen-before words, and the recognition process is efficient. Experiments are performed on two challenging datasets (one of license plates and one of scene text) and show that the proposed method is competitive with standard bottom-up approaches to text recognition.
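
    To make the retrieval view above concrete, the following is a minimal NumPy sketch (not the authors' code): it uses a toy bag-of-characters label embedding, random vectors in place of real word-image descriptors, and a simple ranking hinge loss trained by SGD where the paper optimizes a full Structured SVM objective.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def embed_label(word, alphabet=ALPHABET):
    """Toy label embedding: a normalized bag-of-characters histogram.
    (The paper uses a richer, spatially aware label embedding.)"""
    v = np.zeros(len(alphabet))
    for ch in word.lower():
        if ch in alphabet:
            v[alphabet.index(ch)] += 1
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def train_bilinear_embedding(X, words, epochs=10, lr=0.1, margin=0.1, seed=0):
    """Learn W so that x^T W e(y) is larger for the correct label than for a
    random non-matching label (SGD on a ranking hinge loss; a stand-in for
    the structured-SVM objective described in the abstract)."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((X.shape[1], len(ALPHABET)))
    E = np.stack([embed_label(w) for w in words])          # label embeddings
    for _ in range(epochs):
        for i in rng.permutation(len(words)):
            j = rng.integers(len(words))
            if words[j] == words[i]:
                continue
            x, e_pos, e_neg = X[i], E[i], E[j]
            # hinge: want x^T W e_pos >= x^T W e_neg + margin
            if margin - x @ W @ e_pos + x @ W @ e_neg > 0:
                W += lr * np.outer(x, e_pos - e_neg)
    return W

def recognize(x, W, lexicon):
    """Cast recognition as retrieval: return the lexicon word whose label
    embedding is most compatible with the image feature x."""
    scores = [x @ W @ embed_label(w) for w in lexicon]
    return lexicon[int(np.argmax(scores))]

# Toy usage with random "image features" standing in for real descriptors.
rng = np.random.default_rng(1)
lexicon = ["taxi", "stop", "exit", "park"]
X = rng.standard_normal((len(lexicon), 64))
W = train_bilinear_embedding(X, lexicon)
print(recognize(X[0], W, lexicon))
```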

    Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark

    In this paper, we introduce a large Multi-Attribute and Language Search dataset for text-based person retrieval, called MALS, and explore the feasibility of performing pre-training on both attribute recognition and image-text matching tasks simultaneously. In particular, MALS contains 1,510,330 image-text pairs, about 37.5 times more than the prevailing CUHK-PEDES dataset, and all images are annotated with 27 attributes. Considering privacy concerns and annotation costs, we leverage off-the-shelf diffusion models to generate the dataset. To verify the feasibility of learning from the generated data, we develop a new joint Attribute Prompt Learning and Text Matching Learning (APTM) framework that exploits the shared knowledge between attributes and text. As the name implies, APTM contains an attribute prompt learning stream and a text matching learning stream. (1) The attribute prompt learning stream leverages attribute prompts for image-attribute alignment, which enhances the text matching learning. (2) The text matching learning stream facilitates representation learning of fine-grained details and, in turn, boosts the attribute prompt learning. Extensive experiments validate the effectiveness of pre-training on MALS: APTM achieves state-of-the-art retrieval performance on three challenging real-world benchmarks. In particular, APTM improves Recall@1 accuracy by +6.96%, +7.68%, and +16.95% on the CUHK-PEDES, ICFG-PEDES, and RSTPReid datasets, respectively.
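
    As a rough illustration of how the two streams could be combined, here is a hedged PyTorch sketch; the encoders are placeholder MLPs, the attribute prompts are learnable vectors rather than real textual prompts, and the losses (symmetric InfoNCE plus per-attribute BCE) are common stand-ins, not necessarily the exact objectives used by APTM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamAPTMSketch(nn.Module):
    """Toy two-stream model: one stream matches images with free-form captions,
    the other aligns images with attribute prompts. Encoders are placeholder
    MLPs; the real framework builds on a pre-trained vision-language model."""
    def __init__(self, img_dim=512, txt_dim=300, emb_dim=128, n_attrs=27):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))
        self.txt_enc = nn.Sequential(nn.Linear(txt_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))
        # one learnable prompt embedding per attribute (hypothetical stand-in
        # for the textual attribute prompts used in the paper)
        self.attr_prompts = nn.Parameter(torch.randn(n_attrs, emb_dim))

    def forward(self, img_feat, txt_feat):
        z_img = F.normalize(self.img_enc(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_enc(txt_feat), dim=-1)
        return z_img, z_txt

def aptm_style_loss(z_img, z_txt, attr_prompts, attr_labels, tau=0.07):
    """Sum of (1) a symmetric InfoNCE image-text matching loss and
    (2) a per-attribute BCE alignment loss between images and attribute prompts."""
    logits = z_img @ z_txt.t() / tau                       # image-text similarities
    targets = torch.arange(z_img.size(0))
    itm = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
    attr_logits = z_img @ F.normalize(attr_prompts, dim=-1).t()   # image-attribute similarities
    apl = F.binary_cross_entropy_with_logits(attr_logits, attr_labels)
    return itm + apl

# Toy batch: random tensors stand in for person images, captions, attribute labels.
model = TwoStreamAPTMSketch()
img, txt = torch.randn(8, 512), torch.randn(8, 300)
attrs = torch.randint(0, 2, (8, 27)).float()
z_img, z_txt = model(img, txt)
loss = aptm_style_loss(z_img, z_txt, model.attr_prompts, attrs)
loss.backward()
```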

    Automatic Vehicle Number Plate Recognition for Vehicle Parking Management System

    A license plate recognition (LPR) system is one type of intelligent transportation system (ITS). It is a technology in which software enables a computer system to automatically read the license number plate of a vehicle from digital pictures. Automatically reading the number plate means converting the pixel information of a digital image into the ASCII text of the number plate. This paper discusses a method for vehicle number plate recognition from an image using mathematical morphological operations. The main objective is to apply different morphological operations in such a way that the number plate of the vehicle can be identified accurately. The method is based on operations such as image enhancement, morphological transformation, edge detection and extraction of the number plate from the vehicle image. Segmentation is then applied, and the characters on the number plate are recognized using template matching. This algorithm can recognize the number plate quickly and accurately from the vehicle image. Keywords: ANPR, ITS, Image Enhancement, Edge Detection, Morphological Operation, Number Plate Extraction, Template Matching
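
    The pipeline described above maps naturally onto standard OpenCV operations. The sketch below is illustrative rather than the paper's implementation: the thresholds, kernel size, and aspect-ratio test are guesses, and `templates` is assumed to be a dictionary mapping each character to a 20x30 grayscale template image.

```python
import cv2
import numpy as np

def locate_plate(image_bgr):
    """Rough plate localization via enhancement, edge detection, and morphology,
    following the steps listed in the abstract (all parameter values are guesses)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                            # image enhancement
    edges = cv2.Canny(gray, 100, 200)                        # edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # merge edge fragments
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:                      # plate-like aspect ratio
            return gray[y:y + h, x:x + w]
    return None

def read_plate(plate_gray, templates):
    """Segment characters by connected components and match each against a
    dictionary of character templates (label -> 20x30 grayscale template)."""
    _, binary = cv2.threshold(plate_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = sorted([tuple(stats[i][:4]) for i in range(1, n) if stats[i][4] > 50],
                   key=lambda b: b[0])                       # left-to-right character order
    text = ""
    for x, y, w, h in boxes:
        glyph = cv2.resize(plate_gray[y:y + h, x:x + w], (20, 30))
        scores = {ch: cv2.matchTemplate(glyph, tpl, cv2.TM_CCOEFF_NORMED).max()
                  for ch, tpl in templates.items()}
        text += max(scores, key=scores.get)                  # best-matching template
    return text
```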

    Chinese Text Recognition with A Pre-Trained CLIP-Like Model Through Image-IDS Aligning

    Scene text recognition has been studied for decades due to its broad applications. However, although Chinese characters differ from Latin characters in characteristics such as complex inner structures and a large number of categories, few methods have been proposed for Chinese Text Recognition (CTR). In particular, the large number of categories poses challenges for zero-shot and few-shot Chinese characters. In this paper, inspired by the way humans recognize Chinese texts, we propose a two-stage framework for CTR. First, we pre-train a CLIP-like model by aligning printed character images with Ideographic Description Sequences (IDS). This pre-training stage simulates how humans recognize Chinese characters and yields a canonical representation of each character. Subsequently, the learned representations are employed to supervise the CTR model, so that traditional single-character recognition can be extended to text-line recognition through image-IDS matching. To evaluate the effectiveness of the proposed method, we conduct extensive experiments on both Chinese character recognition (CCR) and CTR. The experimental results demonstrate that the proposed method performs best in CCR and outperforms previous methods in most scenarios of the CTR benchmark. Notably, the proposed method can recognize zero-shot Chinese characters in text images without fine-tuning, whereas previous methods require fine-tuning when new classes appear. The code is available at https://github.com/FudanVI/FudanOCR/tree/main/image-ids-CTR
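
    The first stage, aligning printed character images with IDS sequences, follows the general CLIP recipe. Below is a hedged PyTorch sketch with placeholder encoders (a tiny CNN for images, an embedding plus GRU for IDS tokens) and a symmetric InfoNCE loss; the paper's actual architectures and training details differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageIDSAlignerSketch(nn.Module):
    """Toy CLIP-style aligner: one encoder for printed character images, one for
    Ideographic Description Sequences (IDS), trained so matching pairs have the
    highest similarity in a shared space."""
    def __init__(self, ids_vocab_size, emb_dim=128):
        super().__init__()
        self.img_enc = nn.Sequential(                        # placeholder image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim))
        self.ids_emb = nn.Embedding(ids_vocab_size, emb_dim)
        self.ids_enc = nn.GRU(emb_dim, emb_dim, batch_first=True)  # IDS sequence encoder

    def forward(self, images, ids_tokens):
        z_img = F.normalize(self.img_enc(images), dim=-1)
        _, h = self.ids_enc(self.ids_emb(ids_tokens))
        z_ids = F.normalize(h[-1], dim=-1)
        return z_img, z_ids

def clip_style_loss(z_img, z_ids, tau=0.07):
    """Symmetric InfoNCE: each character image should match its own IDS."""
    logits = z_img @ z_ids.t() / tau
    targets = torch.arange(z_img.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy batch: 4 printed-character images (32x32) with IDS token sequences of length 6.
model = ImageIDSAlignerSketch(ids_vocab_size=100)
imgs = torch.randn(4, 1, 32, 32)
ids = torch.randint(0, 100, (4, 6))
loss = clip_style_loss(*model(imgs, ids))
loss.backward()
```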

    Using Photorealistic Face Synthesis and Domain Adaptation to Improve Facial Expression Analysis

    Synthesizing realistic faces across domains to learn deep models has attracted increasing attention for facial expression analysis, as it helps improve expression recognition accuracy when only a small number of real training images is available. However, learning from synthetic face images can be problematic due to the distribution discrepancy between low-quality synthetic images and real face images, and the learned model may not achieve the desired performance when applied to real-world scenarios. To this end, we propose a new attribute-guided face image synthesis method that performs translation between multiple image domains using a single model. In addition, we use the proposed model to learn from synthetic faces by matching the feature distributions between different domains while preserving each domain's characteristics. We evaluate the effectiveness of the proposed approach on several face datasets for generating realistic face images, and demonstrate that expression recognition performance can be enhanced by our face synthesis model. Moreover, we conduct experiments on a near-infrared dataset containing facial expression videos of drivers to assess performance on in-the-wild data for driver emotion recognition.
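
    One simple way to realize the feature-distribution matching mentioned above is a Maximum Mean Discrepancy (MMD) penalty between synthetic and real feature batches; the sketch below uses that as an illustrative stand-in and is not a claim about the paper's exact adaptation objective.

```python
import torch

def rbf_mmd2(feat_syn, feat_real, sigma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel between synthetic and
    real feature batches. Minimizing it pulls the two feature distributions
    together while the task loss preserves each domain's discriminative content."""
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    k_ss, k_rr, k_sr = rbf(feat_syn, feat_syn), rbf(feat_real, feat_real), rbf(feat_syn, feat_real)
    return k_ss.mean() + k_rr.mean() - 2 * k_sr.mean()

# Toy usage: features from a shared backbone applied to synthetic and real faces.
syn = torch.randn(32, 256, requires_grad=True)
real = torch.randn(32, 256)
loss = rbf_mmd2(syn, real)        # would be added to the expression-recognition loss
loss.backward()
```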

    Trainable Regularization in Dense Image Matching Problems

    This study examines the development of specialized models designed to solve image-matching problems. The purpose of this research is to develop a technique based on energy tensor aggregation for dense image matching. This task is relevant for computer vision systems, since image matching makes it possible to solve current problems such as reconstructing a three-dimensional model of an object, creating a panoramic scene, and recognizing objects. The paper examines in detail the key features of the image-matching process based on binocular stereo reconstruction and the computation of matching energies during this process, and presents the main parts of the proposed method as diagrams and formulas. The research develops a machine learning model that solves image-matching problems on real data using parallel programming tools. A detailed description of the architecture of the convolutional recurrent neural network that underlies the method is given. Computational experiments were conducted to compare the obtained results with methods proposed in the scientific literature. The method discussed in this article achieves better efficiency, both in execution speed and in the number of errors. DOI: 10.28991/HIJ-2023-04-03-011
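
    As a rough sketch of the general idea, the PyTorch snippet below builds a raw stereo matching cost volume and passes it through a small trainable smoothing network before a soft-argmin disparity readout; it is only a stand-in for the energy tensor aggregation and convolutional recurrent architecture described in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cost_volume(left, right, max_disp):
    """Per-pixel matching costs for a rectified stereo pair (B,1,H,W): absolute
    difference between the left image and the right image shifted by each
    candidate disparity."""
    costs = []
    for d in range(max_disp):
        shifted = F.pad(right, (d, 0))[..., :right.shape[-1]]   # shift right image by d pixels
        costs.append((left - shifted).abs().mean(dim=1))
    return torch.stack(costs, dim=1)                            # (B, max_disp, H, W)

class TrainableRegularizerSketch(nn.Module):
    """Toy trainable regularization: a small convolutional net that smooths the
    raw cost volume before a soft-argmin readout."""
    def __init__(self, max_disp):
        super().__init__()
        self.smooth = nn.Sequential(
            nn.Conv2d(max_disp, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, max_disp, 3, padding=1))

    def forward(self, volume):
        refined = self.smooth(volume)
        prob = F.softmax(-refined, dim=1)                       # low cost -> high probability
        disp = torch.arange(volume.shape[1], dtype=volume.dtype, device=volume.device)
        return (prob * disp.view(1, -1, 1, 1)).sum(dim=1)       # soft-argmin disparity map

# Toy usage on random images standing in for a rectified stereo pair.
left, right = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
model = TrainableRegularizerSketch(max_disp=16)
disparity = model(cost_volume(left, right, max_disp=16))        # (1, 64, 64)
```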