44 research outputs found

    Handwritten Devanagari numeral recognition

    Optical character recognition (OCR) plays a vital role in today's modern world. OCR can be used to solve many complex problems and thus make people's work easier. In OCR, a scanned digital image or handwritten text is given as input to the system. OCR can be used in postal departments for sorting mail and in other offices. Much work has been done on English alphabets, but Indian scripts are nowadays an active area of interest for researchers. Devanagari is one such Indian script. Research is ongoing on the recognition of alphabets, but much less attention has been given to numerals. Here an attempt was made to recognize Devanagari numerals. The main part of any OCR system is feature extraction, because the more features that are extracted, the higher the accuracy. Two methods were used for feature extraction. The first was a moment-based method; among the many moment-based methods, the Tchebichef moment was preferred because of its better image-representation capability. The second method was based on contour curvature; the contour is an important boundary feature used for finding similarity between shapes. After feature extraction, the extracted features were classified, and an Artificial Neural Network (ANN) was used for this purpose. Among the many available classifiers, the ANN was preferred because it is easy to handle, less error-prone, and more accurate than the other classifiers considered. Classification was performed individually with each of the two extracted feature sets, and finally the features were cascaded to increase the accuracy.
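
    The pipeline described above lends itself to a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes numeral images arrive as 2-D NumPy arrays and that contour-curvature features have been computed elsewhere, builds the discrete Tchebichef basis by QR-orthonormalizing monomials (which equals the normalized discrete Tchebichef polynomials up to sign), and uses scikit-learn's MLPClassifier as a stand-in for the ANN. All names and shapes are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPClassifier  # stand-in for the ANN

        def tchebichef_basis(N, order):
            """Orthonormal discrete polynomial basis on N uniform points.

            QR-orthonormalizing the monomials 1, x, ..., x^order with respect to
            the uniform discrete weight gives the normalized discrete Tchebichef
            polynomials up to sign, which is all the moment computation needs.
            """
            x = np.linspace(0.0, 1.0, N)                   # scaled index, for conditioning
            Q, _ = np.linalg.qr(np.vander(x, order + 1, increasing=True))
            return Q.T                                     # shape (order + 1, N)

        def tchebichef_moments(img, order=7):
            """2-D moments T_pq = sum_x sum_y t_p(x) t_q(y) f(x, y), flattened."""
            Tr = tchebichef_basis(img.shape[0], order)
            Tc = tchebichef_basis(img.shape[1], order)
            return (Tr @ img @ Tc.T).ravel()

        def cascade(moment_feats, curvature_feats):
            """Cascade (concatenate) the two feature vectors for one sample."""
            return np.hstack([moment_feats, curvature_feats])

        # Hypothetical usage: imgs is a list of numeral images, curv an array of
        # precomputed contour-curvature features, labels the digit classes.
        # X = np.array([cascade(tchebichef_moments(im), cv) for im, cv in zip(imgs, curv)])
        # ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)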

    Fast Computation of Sliding Discrete Tchebichef Moments and Its Application in Duplicated Regions Detection

    Computational load remains a major concern when processing signals by means of sliding transforms. In this paper, we present an efficient algorithm for the fast computation of one-dimensional and two-dimensional sliding discrete Tchebichef moments. To do so, we first establish the relationships that exist between the Tchebichef moments of two neighboring windows by taking advantage of the properties of Tchebichef polynomials. We then propose an original way to compute the moments of one window quickly by using the moment values of its previous window. We further establish the theoretical complexity of our fast algorithm and illustrate its interest within the framework of digital forensics, more precisely the detection of duplicated regions in an audio signal or an image. Our algorithm is used to extract local features for detecting such signal tampering. Experimental results show that its complexity is independent of the window size, validating the theory. They also show that our algorithm is suitable for digital forensics and, beyond that, for any application based on sliding Tchebichef moments.
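
    As a point of reference for the speed-up claimed above, here is a minimal sketch (not the paper's algorithm) of the baseline it improves on: 1-D sliding Tchebichef moments recomputed from scratch at every window position. The inter-window recurrence that removes the dependence on the window size is the paper's contribution and is not reproduced; the basis construction and the names below are illustrative.

        import numpy as np

        def tchebichef_basis(win, order):
            """Orthonormal discrete polynomial basis on the window's sample points
            (equal to the normalized discrete Tchebichef polynomials up to sign)."""
            x = np.linspace(0.0, 1.0, win)
            Q, _ = np.linalg.qr(np.vander(x, order + 1, increasing=True))
            return Q.T                                     # (order + 1, win)

        def sliding_tchebichef_moments_naive(signal, win, order=4):
            """Baseline: recompute the first order+1 moments for every window.

            Its cost grows with the window size; the fast algorithm described
            above removes that dependence by updating each window's moments
            from those of the previous window.
            """
            T = tchebichef_basis(win, order)
            n_win = len(signal) - win + 1
            out = np.empty((n_win, order + 1))
            for i in range(n_win):
                out[i] = T @ signal[i:i + win]
            return out

        # Hypothetical usage on a synthetic signal:
        # s = np.sin(np.linspace(0.0, 20.0, 1000))
        # feats = sliding_tchebichef_moments_naive(s, win=64, order=4)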

    Construction of a complete set of orthogonal Fourier-Mellin moment invariants for pattern recognition applications

    The completeness property of a set of invariant descriptors is of fundamental importance from both the theoretical and the practical points of view. In this paper, we propose a general approach to constructing a complete set of orthogonal Fourier-Mellin moment (OFMM) invariants. By establishing a relationship between the OFMMs of the original image and those of an image with the same shape but a different orientation and scale, a complete set of scale and rotation invariants is derived. The efficiency and robustness to noise of the method for recognition tasks are shown by comparing it with several existing methods on a number of data sets.
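
    The rotation part of the invariance argument can be illustrated with the classical (non-orthogonal) Fourier-Mellin moments, for which a rotation of the image only multiplies each moment by a unit-modulus phase factor, so the magnitudes are rotation invariant. The sketch below uses that simpler definition; the orthogonal radial polynomials and the complete invariant set constructed in the paper are not reproduced, and the sampling choices are assumptions.

        import numpy as np

        def fourier_mellin_moments(img, n_max=3, m_max=3):
            """Classical Fourier-Mellin moments Phi_nm = sum f(r, theta) r^n e^{-i m theta} r.

            A rotation by alpha maps Phi_nm to e^{-i m alpha} Phi_nm, so |Phi_nm|
            is unchanged; this is the basic fact behind rotation invariants.
            """
            H, W = img.shape
            y, x = np.mgrid[0:H, 0:W].astype(float)
            cx, cy = (W - 1) / 2.0, (H - 1) / 2.0
            r = np.hypot(x - cx, y - cy) / max(cx, cy)     # map the image into the unit disc
            theta = np.arctan2(y - cy, x - cx)
            inside = r <= 1.0
            phi = np.zeros((n_max + 1, m_max + 1), dtype=complex)
            for n in range(n_max + 1):
                for m in range(m_max + 1):
                    kernel = (r ** n) * np.exp(-1j * m * theta) * r   # trailing r: polar area element
                    phi[n, m] = np.sum(img[inside] * kernel[inside])
            return phi

        def rotation_invariants(img, **kw):
            """Rotation-invariant features: moment magnitudes."""
            return np.abs(fourier_mellin_moments(img, **kw)).ravel()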

    Feature Extraction Methods for Character Recognition


    Multi-Technique Fusion for Shape-Based Image Retrieval

    Content-based image retrieval (CBIR) is still in its early stages, although several attempts have been made to solve or minimize the challenges associated with it. CBIR techniques use visual contents such as color, texture, and shape to represent and index images. Of these, shape contains richer information than color or texture. However, retrieval based on shape content remains more difficult than retrieval based on color or texture because of the diversity of shapes and the natural occurrence of shape transformations such as deformation, scaling, and changes in orientation. This thesis presents an approach for fusing several shape-based image retrieval techniques for the purpose of achieving reliable and accurate retrieval performance. An extensive investigation of notable existing shape descriptors is reported. Two new shape descriptors are proposed as means to overcome the limitations of current shape descriptors. The first descriptor is based on a novel shape signature that includes corner information in order to enhance the performance of shape retrieval techniques that use Fourier descriptors. The second descriptor is based on the curvature of the shape contour. This invariant descriptor takes an unconventional view of the curvature-scale-space map of a contour by treating it as a 2-D binary image. The descriptor is then derived from the 2-D Fourier transform of that binary image. This approach allows the descriptor to capture the detailed dynamics of the curvature of the shape and enhances the efficiency of the shape-matching process. Several experiments have been conducted to compare the proposed descriptors with a number of notable existing descriptors. The new descriptors not only speed up the online matching process but also lead to improved retrieval accuracy. The complexity and variety of the content of real images make it impossible for any single descriptor to be effective for all types of images. Therefore, a data-fusion formulation based on a team-consensus approach is proposed as a means of achieving high retrieval accuracy. In this approach, a select set of retrieval techniques form a team. Members of the team exchange information so as to complement each other's assessment of a candidate database image as a match to the query image. Several experiments have been conducted on the MPEG-7 contour-shape databases; the results demonstrate that the performance of the proposed fusion scheme is superior to that achieved by any of the techniques individually.
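
    The final step of the second descriptor can be sketched as follows, assuming the curvature-scale-space map has already been computed and stored as a 2-D binary array (that computation, the first descriptor, and the team-consensus fusion are not shown, and the normalization is an assumption made for illustration).

        import numpy as np

        def css_fft_descriptor(css_map, k=8):
            """Descriptor from the 2-D Fourier transform of a binary CSS map.

            css_map: precomputed curvature-scale-space map as a 2-D {0, 1} array
            (arclength along one axis, scale along the other).
            """
            F = np.fft.fft2(css_map.astype(float))
            low = np.abs(F[:k, :k])                        # low-frequency magnitudes; taking the
                                                           # magnitude discards the shift caused by
                                                           # the arbitrary contour starting point
            return (low / (np.abs(F[0, 0]) + 1e-12)).ravel()   # normalize by the DC term

        # Matching then reduces to comparing descriptor vectors, e.g. by the
        # Euclidean distance between a query shape and each database shape.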

    Tchebichef Moment Based Hilbert Scan for Image Compression

    Image compression is now essential for applications such as transmission and database storage, so a vast amount of information must be compressed while the compression ratio and the quality of the compressed image are improved. For this reason, this paper develops a new algorithm that uses the discrete orthogonal Tchebichef moment together with a Hilbert curve for image compression. The analyzed image is divided into 8×8 sub-blocks, the Tchebichef moment transform is applied to each one, and the transformed coefficients of each 8×8 sub-block are then reordered along a Hilbert scan into a linear array, after which Huffman coding is applied. Experimental results show that this algorithm improves coding efficiency on the one hand, while on the other hand the quality of the reconstructed image is not significantly decreased.
    Keywords: Huffman coding, Tchebichef moment transform, orthogonal moment functions, Hilbert scan, zigzag scan
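
    The per-block pipeline described above (8×8 Tchebichef transform, then a Hilbert-curve reordering of the coefficients into a linear array) can be sketched as follows; the quantization and Huffman stages are omitted, the Tchebichef basis is again built by QR orthonormalization, and the Hilbert indexing uses the standard index-to-(x, y) conversion. Names are illustrative, not taken from the paper.

        import numpy as np

        def tchebichef_basis(N=8):
            """Orthonormal N x N discrete polynomial basis via QR (Tchebichef up to sign)."""
            x = np.linspace(0.0, 1.0, N)
            Q, _ = np.linalg.qr(np.vander(x, N, increasing=True))
            return Q.T

        def hilbert_d2xy(n, d):
            """Standard conversion of Hilbert-curve index d into (x, y) on an n x n grid."""
            x = y = 0
            s, t = 1, d
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                                # rotate the quadrant when needed
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        def encode_block(block):
            """Transform one 8x8 block and reorder its coefficients along a Hilbert scan."""
            T = tchebichef_basis(8)
            coeffs = T @ block @ T.T                       # 8x8 Tchebichef moments
            scan = [hilbert_d2xy(8, d) for d in range(64)]
            return np.array([coeffs[xy] for xy in scan])   # linear array for the entropy coder

        # Each resulting 64-coefficient array would then be quantized and Huffman-coded.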

    An integrated formulation of Zernike invariants for mining insect images

    This paper presents a mathematical integration of Zernike moments and the United Moment Invariant for extracting features from printed insect images. These features are further mined for granular information by investigating inter-class and intra-class variance. The results reveal that the proposed integrated formulation yields a better analysis than conventional Zernike moments and the United Moment Invariant used individually.
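
    A small sketch of the kind of inter-class versus intra-class variance comparison mentioned above, applicable to any feature matrix; the Zernike and United Moment Invariant extraction itself is not reproduced, and the Fisher-style ratio below is an illustrative choice, not necessarily the paper's measure.

        import numpy as np

        def class_variance_ratio(X, y):
            """Inter-class to intra-class variance ratio of a feature set.

            X: (n_samples, n_features) matrix of moment features, y: class labels.
            A higher ratio means the classes are better separated, which is the
            kind of comparison such an analysis relies on.
            """
            X, y = np.asarray(X, dtype=float), np.asarray(y)
            grand_mean = X.mean(axis=0)
            inter = intra = 0.0
            for c in np.unique(y):
                Xc = X[y == c]
                inter += len(Xc) * np.sum((Xc.mean(axis=0) - grand_mean) ** 2)
                intra += np.sum((Xc - Xc.mean(axis=0)) ** 2)
            return inter / (intra + 1e-12)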