
    Color Image Analysis by Quaternion-Type Moments

    In this paper, by using the quaternion algebra, the conventional complex-type moments (CTMs) for gray-scale images are generalized to color images as quaternion-type moments (QTMs) in a holistic manner. We first provide a general formula of QTMs, from which we derive a set of quaternion-valued QTM invariants (QTMIs) to image rotation, scale and translation transformations by eliminating the influence of the transformation parameters. An efficient computation algorithm is also proposed to reduce the computational complexity. The performance of the proposed QTMs and QTMIs is evaluated in several application frameworks ranging from color image reconstruction and face recognition to image registration, and we show that they achieve better performance than CTMs and CTM invariants (CTMIs). We also discuss the influence of the choice of the unit pure quaternion with the help of experiments, which indicate an optimal choice.
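    As a minimal illustrative sketch (not the paper's implementation), the conventional complex-type moments that the QTMs generalize can be computed as below; the toy image and function names are placeholders. A rotation of the image about its centroid by an angle theta multiplies c_pq by exp(-j(p-q)theta), so its magnitude is rotation invariant; the quaternion-type moments follow the same pattern with the imaginary unit replaced by a unit pure quaternion acting on the RGB channels.

        import numpy as np

        def complex_moment(f, p, q):
            # c_pq = sum over pixels of (x + jy)^p (x - jy)^q f(x, y),
            # with coordinates centred on the intensity centroid so that
            # translation is removed before computing the moment.
            h, w = f.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            m00 = f.sum()
            xs -= (xs * f).sum() / m00
            ys -= (ys * f).sum() / m00
            z = xs + 1j * ys
            return np.sum((z ** p) * (np.conj(z) ** q) * f)

        img = np.random.default_rng(0).random((64, 64))   # toy grey-scale image
        print(abs(complex_moment(img, 2, 1)))             # rotation-invariant magnitude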

    Multi-Technique Fusion for Shape-Based Image Retrieval

    Content-based image retrieval (CBIR) is still in its early stages, although several attempts have been made to solve or minimize the challenges associated with it. CBIR techniques use visual contents such as color, texture, and shape to represent and index images. Of these, shape contains richer information than color or texture. However, retrieval based on shape content remains more difficult than retrieval based on color or texture, owing to the diversity of shapes and the natural occurrence of shape transformations such as deformation, scaling and changes in orientation. This thesis presents an approach for fusing several shape-based image retrieval techniques for the purpose of achieving reliable and accurate retrieval performance. An extensive investigation of notable existing shape descriptors is reported, and two new shape descriptors are proposed as a means of overcoming the limitations of current descriptors. The first is based on a novel shape signature that includes corner information in order to enhance the performance of shape retrieval techniques that use Fourier descriptors. The second is based on the curvature of the shape contour: this invariant descriptor takes an unconventional view of the curvature-scale-space map of a contour by treating it as a 2-D binary image, and the descriptor is then derived from the 2-D Fourier transform of that binary image. This technique allows the descriptor to capture the detailed dynamics of the curvature of the shape and enhances the efficiency of the shape-matching process. Several experiments have been conducted to compare the proposed descriptors with several notable descriptors; the new descriptors not only speed up the online matching process, but also lead to improved retrieval accuracy. The complexity and variety of the content of real images make it impossible for a particular choice of descriptor to be effective for all types of images. Therefore, a data-fusion formulation based on a team-consensus approach is proposed as a means of achieving high retrieval accuracy. In this approach, a select set of retrieval techniques forms a team whose members exchange information so as to complement each other's assessment of a database image candidate as a match to the query image. Several experiments have been conducted on the MPEG-7 contour-shape databases; the results demonstrate that the performance of the proposed fusion scheme is superior to that achieved by any of the techniques individually.
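    As a hedged illustration of the classical Fourier-descriptor baseline that the corner-augmented signature builds on (not the thesis code; the function names and coefficient count are arbitrary), a contour descriptor and a simple retrieval ranking might look like this:

        import numpy as np

        def fourier_descriptor(contour, n_coeffs=16):
            # contour: (N, 2) array of boundary points ordered along the shape.
            z = contour[:, 0] + 1j * contour[:, 1]     # complex boundary signature
            Z = np.fft.fft(z)
            Z[0] = 0.0                                 # drop the DC term: translation invariance
            mag = np.abs(Z)                            # keep magnitudes: rotation / start-point invariance
            if mag[1] > 0:
                mag = mag / mag[1]                     # normalise by the first harmonic: scale invariance
            return mag[1:n_coeffs + 1]

        def rank_matches(query_descriptor, database_descriptors):
            # Rank database shapes by Euclidean distance between descriptors.
            d = [np.linalg.norm(query_descriptor - fd) for fd in database_descriptors]
            return np.argsort(d)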

    Construction of a complete set of orthogonal Fourier-Mellin moment invariants for pattern recognition applications

    The completeness property of a set of invariant descriptors is of fundamental importance from both the theoretical and the practical points of view. In this paper, we propose a general approach to constructing a complete set of orthogonal Fourier-Mellin moment (OFMM) invariants. By establishing a relationship between the OFMMs of the original image and those of an image having the same shape but a distinct orientation and scale, a complete set of scale and rotation invariants is derived. The efficiency and the robustness to noise of the method for recognition tasks are shown by comparing it with existing methods on several data sets.
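    For reference, the rotation behaviour that such invariants exploit can be written as follows (normalisation constant omitted, Q_n denoting the orthogonal radial polynomials); this is a textbook property, not the paper's complete construction, which also removes scale:

        \Phi_{nm} \propto \int_{0}^{2\pi}\!\!\int_{0}^{1} f(r,\theta)\, Q_n(r)\, e^{-jm\theta}\, r\, \mathrm{d}r\, \mathrm{d}\theta ,
        \qquad
        f'(r,\theta) = f(r,\theta-\beta) \;\Longrightarrow\; \Phi'_{nm} = e^{-jm\beta}\,\Phi_{nm} ,

    so the magnitude |\Phi_{nm}| is unchanged by a rotation of the image by \beta.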

    Shape-based invariant features extraction for object recognition

    The emergence of new technologies makes it possible to generate large quantities of digital information, including images, and the number of digital images produced keeps growing. Automatic systems for image retrieval therefore become a necessity. These systems consist of techniques for query specification and for the retrieval of images from an image collection. The most frequent and most common means of image retrieval is indexing with textual keywords, but for some application domains, and given the huge quantity of images, keywords are no longer sufficient or practical. Moreover, images are rich in content; to overcome these difficulties, approaches have been proposed that are based on visual features derived directly from the content of the image: these are the content-based image retrieval (CBIR) approaches. They allow users to search for a desired image by specifying image queries: a query can be an example, a sketch or visual features (e.g., colour, texture and shape). Once the features have been defined and extracted, retrieval becomes a task of measuring the similarity between image features. An important property of these features is to be invariant under the various deformations that the observed image could undergo. In this chapter, we present a number of existing methods for CBIR applications and describe some measures that are usually used for similarity measurement. Finally, as an application example, we present a specific approach that we are developing to illustrate the topic, and provide experimental results.
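    As a small illustrative sketch of the similarity-measurement step described above (not tied to any particular method in the chapter; the feature vectors are assumed to be already extracted and of equal length), two commonly used measures and a ranking step are:

        import numpy as np

        def euclidean_distance(f1, f2):
            # Smaller values mean more similar feature vectors.
            return float(np.linalg.norm(np.asarray(f1, float) - np.asarray(f2, float)))

        def cosine_similarity(f1, f2):
            # Values close to 1 mean more similar feature vectors.
            f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
            return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

        def retrieve(query, database, k=10):
            # Return the indices of the k database images closest to the query.
            d = [euclidean_distance(query, feat) for feat in database]
            return np.argsort(d)[:k]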

    Method of Synthesized Phase Objects in the Optical Pattern Recognition Problem

    To solve the pattern recognition problem, a method of synthesized phase objects (SPO-method) is suggested. The essence of the method is that synthesized phase objects are used instead of real amplitude objects; the former are object-dependent phase distributions calculated using an iterative Fourier transform algorithm. The method is studied experimentally with optical-digital VanderLugt and joint Fourier transform 4F correlators. The development of the SPO-method for rotation-invariant pattern recognition is considered as well. We present a comparative analysis of recognition results obtained with the conventional and the proposed methods, estimate the sensitivity of the latter to distortions of the structure of objects, and determine its applicability limits. It is demonstrated that the SPO-method allows one: (a) to simplify the procedure of choosing recognition criteria; (b) to obtain one-type δ-like recognition signals irrespective of the type of objects; and (c) to improve the signal-to-noise ratio of the correlation signals by 20–30 dB on average. To introduce the recognition objects into the correlators, we use SLM LC-R 2500 and SLM HEO 1080 Pluto devices.
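    A minimal sketch of the kind of iterative Fourier transform algorithm mentioned above (a generic Gerchberg-Saxton-style loop, not the authors' exact procedure; parameter names are placeholders):

        import numpy as np

        def synthesize_phase_object(target_amplitude, n_iter=50, seed=0):
            # Find a phase-only object-plane distribution whose far-field
            # (Fourier-plane) amplitude approximates the target amplitude.
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
            for _ in range(n_iter):
                field = np.exp(1j * phase)                           # phase-only constraint in the object plane
                far = np.fft.fft2(field)
                far = target_amplitude * np.exp(1j * np.angle(far))  # impose the target amplitude in the Fourier plane
                phase = np.angle(np.fft.ifft2(far))                  # keep only the phase of the back-propagated field
            return phase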

    Revisiting Complex Moments For 2D Shape Representation and Image Normalization

    When comparing 2D shapes, a key issue is their normalization. Translation and scale are easily taken care of by removing the mean and normalizing the energy. However, defining and computing the orientation of a 2D shape is not so simple. In fact, although for elongated shapes the principal axis can be used to define one of two possible orientations, there is no such tool for general shapes. As we show in the paper, previous approaches fail to compute the orientation of even noiseless observations of simple shapes. We address this problem. In the paper, we show how to uniquely define the orientation of an arbitrary 2D shape in terms of what we call its Principal Moments. We show that a small subset of these moments suffices to represent the underlying 2D shape and propose a new method to efficiently compute the shape orientation: Principal Moment Analysis. Finally, we discuss how this method can further be applied to normalize grey-level images. Besides the theoretical proof of correctness, we describe experiments demonstrating robustness to noise and illustrating the method with real images.
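    For context, the classical principal-axis estimate that the paper argues is insufficient for general shapes can be sketched as follows (an illustration of the baseline only, not the proposed Principal Moment Analysis):

        import numpy as np

        def principal_axis_angle(f):
            # Orientation from second-order central moments:
            # theta = 0.5 * atan2(2*mu11, mu20 - mu02)
            h, w = f.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            m00 = f.sum()
            xc, yc = (xs * f).sum() / m00, (ys * f).sum() / m00
            mu20 = ((xs - xc) ** 2 * f).sum()
            mu02 = ((ys - yc) ** 2 * f).sum()
            mu11 = ((xs - xc) * (ys - yc) * f).sum()
            return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)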