
    DCTNet : A Simple Learning-free Approach for Face Recognition

    PCANet was proposed as a lightweight deep learning network that mainly leverages Principal Component Analysis (PCA) to learn multistage filter banks, followed by binarization and block-wise histogramming. PCANet was shown to work surprisingly well in various image classification tasks. However, PCANet is data-dependent and hence inflexible. In this paper, we propose a data-independent network for face recognition, dubbed DCTNet, in which we adopt the Discrete Cosine Transform (DCT) as the filter bank in place of PCA. This is motivated by the fact that the 2D DCT basis is a good approximation of the high-ranked eigenvectors of PCA. Both the 2D DCT and PCA bases resemble modulated sine-wave patterns, which can be perceived as a bandpass filter bank. DCTNet is free from learning, as the 2D DCT bases can be computed in advance. Besides that, we also propose an effective method to regulate the block-wise histogram feature vector of DCTNet for robustness. It is shown to provide a surprising performance boost when the probe image differs considerably in appearance from the gallery image. We evaluate the performance of DCTNet extensively on a number of benchmark face databases and show that it achieves accuracy on par with, or often better than, PCANet. Comment: APSIPA ASC 201
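
    As a rough illustration of the idea, the following sketch builds a fixed 2D DCT filter bank and applies one DCTNet/PCANet-style stage (filtering, binarization, and block-wise histogramming) to an image. It is a minimal sketch under assumed settings, not the authors' code; the filter size, number of filters, and block size are illustrative.

    import numpy as np
    from scipy.fftpack import dct
    from scipy.signal import convolve2d

    def dct2_filter_bank(k=7, n_filters=8):
        """Return the n_filters lowest-frequency k x k 2D DCT bases (DC excluded)."""
        D = dct(np.eye(k), norm='ortho', axis=0)  # row u is the 1D DCT basis of frequency u
        pairs = sorted(((u, v) for u in range(k) for v in range(k)), key=sum)
        pairs = [p for p in pairs if p != (0, 0)][:n_filters]  # drop the constant DC basis
        return [np.outer(D[u], D[v]) for u, v in pairs]

    def dctnet_stage(image, filters, block=8):
        """Filter, binarize, combine the binary maps into one integer code per
        pixel, and take block-wise histograms of the codes."""
        bits = [(convolve2d(image, f, mode='same') > 0).astype(int) for f in filters]
        code = sum(b << i for i, b in enumerate(bits))
        n_bins = 2 ** len(filters)
        feats = []
        for y in range(0, image.shape[0] - block + 1, block):
            for x in range(0, image.shape[1] - block + 1, block):
                patch = code[y:y + block, x:x + block]
                feats.append(np.bincount(patch.ravel(), minlength=n_bins))
        return np.concatenate(feats)

    # toy usage on a random 32x32 "face" image
    feature = dctnet_stage(np.random.rand(32, 32), dct2_filter_bank())
    print(feature.shape)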

    A new approach for centerline extraction in handwritten strokes: an application to the constitution of a code book

    We present in this paper a new method for the analysis and decomposition of handwritten documents into glyphs (graphemes) and their associated code book. The techniques involved are inspired by image processing methods in a broad sense and by mathematical models involving graph coloring. Our approach provides, firstly, a rapid and detailed characterization of handwritten shapes based on dynamic tracking of the handwriting (curvature, thickness, direction, etc.) and, secondly, a very efficient analysis method for the categorization of basic shapes (graphemes). The tools that we have produced enable paleographers to study a large volume of manuscripts quickly and more accurately, and to extract a large number of characteristics that are specific to an individual or an era.
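
    A hypothetical sketch of the kind of centerline extraction such an analysis builds on is given below: the stroke skeleton is obtained by morphological skeletonization, and the local stroke thickness along the centerline is recovered from a distance transform. This is an illustrative stand-in, not the authors' algorithm.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def stroke_centerline(binary_ink):
        """binary_ink: 2D boolean array, True where ink is present.
        Returns the skeleton mask and the local stroke thickness (in pixels)
        at every centerline point (zero elsewhere)."""
        skeleton = skeletonize(binary_ink)
        # distance to the nearest background pixel ~ half the stroke width
        radius = distance_transform_edt(binary_ink)
        thickness = 2.0 * radius * skeleton
        return skeleton, thickness

    # toy example: a thick diagonal stroke
    img = np.zeros((64, 64), dtype=bool)
    for i in range(10, 54):
        img[i - 3:i + 3, i - 3:i + 3] = True
    skel, thick = stroke_centerline(img)
    print(int(skel.sum()), float(thick[skel].mean()))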

    Object Recognition and Clustering based on Latent Semantic Analysis (LSA)

    Object recognition and clustering are prime techniques in computer vision, pattern recognition, artificial intelligence, and robotics. Conventionally, these techniques are implemented with visual-feature-based methods. However, such methods have a drawback: they do not deal efficiently with differences in the shapes and colours of objects. Another approach uses semantic similarity, i.e. the cosine similarity method, but it suffers from the problems of synonymy and polysemy. In this paper we propose a method in which objects with different shapes and colours that function similarly can be recognized and clustered. The text printed on an object is used to extract that object's semantic features, and objects are clustered according to these features. Since the proposed method is based on semantic information, we conduct an experiment with a dataset of images containing the packaging of commercial products (e.g. mobile phones, laptops, etc.). Semantic information in the dataset is retrieved using a text extraction module, and the results of text extraction are then passed through an Internet search module. Finally, objects are described and clustered using a latent semantic analysis (LSA) module. The clustering results are more accurate than those of the visual-feature-based and cosine-similarity-based methods. DOI: 10.17762/ijritcc2321-8169.150512
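
    The LSA clustering step can be sketched as follows, assuming the per-object text has already been produced by the text extraction and Internet search modules; the example documents, component count, and cluster count are illustrative assumptions rather than the paper's settings.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.preprocessing import Normalizer
    from sklearn.pipeline import make_pipeline
    from sklearn.cluster import KMeans

    # hypothetical text harvested for four packaging images
    object_texts = [
        "android smartphone touchscreen camera battery",
        "mobile phone dual sim handset charger",
        "laptop notebook intel processor keyboard",
        "ultrabook laptop ssd display battery",
    ]

    # term-document matrix -> low-rank semantic space (LSA) -> clustering
    lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2), Normalizer())
    semantic_vectors = lsa.fit_transform(object_texts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(semantic_vectors)
    print(labels)  # objects with similar semantics share a cluster label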

    A study of holistic strategies for the recognition of characters in natural scene images

    Recognition and understanding of text in scene images is an important and challenging task. Its importance can be seen in the context of tasks such as assisted navigation for the blind and providing directions to driverless cars, e.g. the Google car. Other applications include automated document archival services, mining text from images, and so on. The challenge comes from a variety of factors, such as variable typefaces, uncontrolled imaging conditions, and various sources of noise corrupting the captured images. In this work, we study and address the fundamental problem of recognizing characters extracted from natural scene images, and contribute three holistic strategies to deal with this challenging task. Scene text recognition (STR) has been a known problem in the computer vision and pattern recognition community for over two decades, and is still an active area of research because recognition performance still has considerable room for improvement. Recognition of characters lies at the heart of STR and is a crucial component of a reliable STR system. Most current methods rely heavily on the discriminative power of local features, such as histograms of oriented gradients (HoG), the scale-invariant feature transform (SIFT), shape contexts (SC), and geometric blur (GB). One problem with such methods is that the local features are rasterized in an ad hoc manner to obtain a single vector for subsequent use in recognition. This rearrangement of features perturbs the spatial correlations that may carry crucial information vis-à-vis recognition. Moreover, such approaches generally do not account for rotational invariance, which often leads to failed recognition when characters in scene images do not occur in an upright position. To eliminate this dependency on local features and its associated problems, we propose the following three holistic solutions.

    The first approach is based on modelling the character images of a class as a 3-mode tensor and factoring it into a set of rank-1 matrices and the associated mixing coefficients. Each set of rank-1 matrices spans the solution subspace of a specific image class and enables us to capture the required holistic signature of each character class, along with the mixing coefficients associated with each character image. During recognition, we project each test image onto the candidate subspaces to derive its mixing coefficients, which are eventually used for the final classification.

    The second approach lets us form a novel holistic feature for character recognition based on the active contour model, also known as snakes. Our feature vector is based on two variables, the direction and the distance cumulatively traversed by each point as the initial circular contour evolves under the force field induced by the character image. The initial contour design, in conjunction with a cross-correlation-based similarity metric, enables us to account for rotational variance in the character image.

    Our third approach is based on modelling a 3-mode tensor via rotation of a single image. It differs from the tensor-based approach described above in that we form the tensor from a single image instead of collecting a specific number of samples of a particular class. In this case, to generate a 3D image cube, we rotate an image through a predefined range of angles. This enables us to capture rotational variance explicitly and leads to better performance than various local approaches.

    Finally, as an application, we use our holistic model to recognize word images extracted from natural scenes. Here we first use our novel word segmentation method, based on image seam analysis, to split a scene word into individual character images. We then apply our holistic model to recognize the individual letters and use a spell-checker module to obtain the final word prediction. Throughout our work, we employ popular scene text datasets, such as Chars74K-Font, Chars74K-Image, SVT, and ICDAR03, which include synthetic and natural image sets, to test the performance of our strategies. We compare the results of our recognition models with several baseline methods and show comparable or better performance than several local feature-based methods, thus justifying the importance of holistic strategies.
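
    A simplified, hypothetical analogue of the first strategy is sketched below: each character class is represented by a low-dimensional subspace fitted to its training images, and a test image is assigned to the class whose subspace reconstructs it best. An ordinary SVD of vectorized images stands in for the rank-1 tensor factorization described above, and the image size, rank, and data are illustrative.

    import numpy as np

    def fit_class_subspaces(images_by_class, rank=5):
        """images_by_class: dict label -> array of shape (n_samples, H, W).
        Returns dict label -> orthonormal basis of shape (H*W, rank)."""
        bases = {}
        for label, imgs in images_by_class.items():
            X = imgs.reshape(len(imgs), -1).T  # columns are vectorized images
            U, _, _ = np.linalg.svd(X, full_matrices=False)
            bases[label] = U[:, :rank]
        return bases

    def classify(image, bases):
        """Project the test image onto each class subspace and return the class
        with the smallest reconstruction error."""
        x = image.ravel()
        errors = {label: np.linalg.norm(x - U @ (U.T @ x)) for label, U in bases.items()}
        return min(errors, key=errors.get)

    # toy usage with random 16x16 "characters" from two classes
    rng = np.random.default_rng(0)
    train = {"A": rng.random((20, 16, 16)), "B": rng.random((20, 16, 16))}
    bases = fit_class_subspaces(train, rank=5)
    print(classify(rng.random((16, 16)), bases))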

    Document image enhancement

    Ph.D. (Doctor of Philosophy)

    Degradation-Invariant Music Indexing

    For music indexing that is robust to sound degradations and scalable to large music catalogs, this scientific report presents an approach based on audio descriptors that are relevant to the music content and invariant to sound transformations (e.g. noise addition, distortion, lossy coding, pitch/time transformations, or filtering). To achieve this, one of the key points of the proposed method is the definition of high-dimensional audio prints, which are intrinsically (by design) robust to some sound degradations. This high-dimensional first representation is then used to learn a linear projection onto a significantly smaller subspace, which further reduces the sensitivity to sound degradations through a series of discriminant analyses. Finally, by anchoring the analysis times on local maxima of a selected onset function, an approximate hashing is performed to provide better tolerance to bit corruption and, at the same time, to make the method easier to scale.
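
    The pipeline can be roughly sketched as follows, with synthetic data standing in for real audio: prints are extracted at onset-anchored times, a discriminant analysis learns the projection onto a small subspace, and a median-based quantization yields an approximate hash. The dimensions, labels, and onset function used here are illustrative assumptions, not the report's values.

    import numpy as np
    from scipy.signal import argrelmax
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # 1) anchor analysis times on local maxima of some onset function
    onset_function = rng.random(1000)
    anchors = argrelmax(onset_function, order=10)[0]

    # 2) high-dimensional "audio prints" extracted at those anchors (toy data),
    #    labelled by the reference track each print comes from
    prints = rng.random((len(anchors), 512))
    track_ids = np.arange(len(anchors)) % 8

    # 3) a discriminant analysis learns a linear projection onto a small
    #    subspace that keeps prints of the same track close together
    lda = LinearDiscriminantAnalysis(n_components=7).fit(prints, track_ids)
    compact = lda.transform(prints)

    # 4) approximate hashing: quantize each projected dimension to one bit,
    #    so a few corrupted bits still leave most of the hash intact
    hashes = (compact > np.median(compact, axis=0)).astype(np.uint8)
    print(hashes[:3])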

    Simultaneous-Fault Diagnosis of Automotive Engine Ignition Systems Using Prior Domain Knowledge and Relevance Vector Machine

    Engine ignition patterns can be analyzed to identify engine faults according to both specific prior domain knowledge and the shape features of the patterns. One of the challenges in ignition system diagnosis is that more than one fault may appear at a time; this kind of problem is referred to as simultaneous-fault diagnosis. Another challenge is the acquisition of a large number of costly simultaneous-fault ignition patterns for constructing the diagnostic system, because the number of training patterns depends on the combination of different single faults. These problems can be resolved by the proposed framework, which combines feature extraction, probabilistic classification, and decision threshold optimization. With the proposed framework, the features of the single faults in a simultaneous-fault pattern are extracted and then detected using a new probabilistic classifier, namely the pairwise-coupling relevance vector machine, which is trained with single-fault patterns only. Therefore, a training dataset of simultaneous-fault patterns is not necessary. Experimental results show that the proposed framework performs well for both single-fault and simultaneous-fault diagnoses and is superior to the existing approach.
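
    The decision logic can be illustrated as below: a probabilistic classifier trained only on single-fault patterns produces posterior probabilities, and every fault whose posterior exceeds a decision threshold is reported, so two faults can be flagged at once. In this sketch an SVC with one-vs-one probability coupling stands in for the paper's pairwise-coupling relevance vector machine, and the feature data and threshold are synthetic placeholders.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # synthetic single-fault training patterns: 3 fault classes, 30 samples each
    centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
    X_train = np.vstack([c + rng.normal(scale=0.5, size=(30, 2)) for c in centers])
    y_train = np.repeat([0, 1, 2], 30)

    # one-vs-one SVC with probability calibration (the binary classifiers are
    # coupled pairwise into multiclass posteriors)
    clf = SVC(probability=True, random_state=0).fit(X_train, y_train)

    def diagnose(pattern, threshold=0.3):
        """Return every fault label whose posterior exceeds the threshold;
        more than one label indicates a simultaneous fault."""
        probs = clf.predict_proba([pattern])[0]
        return [int(label) for label, p in zip(clf.classes_, probs) if p >= threshold]

    # a pattern lying between fault 0 and fault 1 may trigger both labels
    print(diagnose(np.array([2.0, 0.0])))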

    OSPCV: Off-line Signature Verification using Principal Component Variances

    Signature verification is among the most sought-after biometric verification systems. Because the signature is a behavioural biometric feature that can be imitated, researchers face the challenge of designing a system that counters both intrapersonal and interpersonal variations. This paper presents a comprehensive approach to off-line signature verification based on two features, namely pixel density and centre-of-gravity distance. The data processing consists of two parallel processes, namely signature training and test signature analysis. Signature training involves extracting features from the samples in the database, while test signature analysis involves extracting features from the test signature and comparing them with the trained values from the database. The features are analyzed using Principal Component Analysis (PCA). The proposed work provides feasible results and a notable improvement over existing systems.
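
    A minimal sketch of the two named features is given below: per-cell pixel density and the distance of the cell's centre of gravity from the cell centre, computed over a grid and then analyzed with PCA. The grid size, image size, and toy data are illustrative assumptions, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA

    def grid_features(binary_sig, grid=(8, 8)):
        """binary_sig: 2D array with 1 where signature ink is present.
        For each grid cell, return the pixel density and the distance of the
        cell's centre of gravity from the cell centre."""
        h, w = binary_sig.shape
        gh, gw = grid
        feats = []
        for i in range(gh):
            for j in range(gw):
                cell = binary_sig[i * h // gh:(i + 1) * h // gh,
                                  j * w // gw:(j + 1) * w // gw]
                density = cell.mean()
                ys, xs = np.nonzero(cell)
                if len(ys) == 0:
                    cog_dist = 0.0
                else:
                    centre = ((cell.shape[0] - 1) / 2, (cell.shape[1] - 1) / 2)
                    cog_dist = float(np.hypot(ys.mean() - centre[0], xs.mean() - centre[1]))
                feats.extend([density, cog_dist])
        return np.array(feats)

    # toy training: feature vectors from several genuine samples, reduced by PCA
    rng = np.random.default_rng(0)
    samples = [(rng.random((64, 128)) > 0.7).astype(int) for _ in range(10)]
    X = np.vstack([grid_features(s) for s in samples])
    pca = PCA(n_components=5).fit(X)
    test_vec = pca.transform([grid_features(samples[0])])
    print(test_vec.shape)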