1,907 research outputs found

    Shape-based invariant features extraction for object recognition

    The emergence of new technologies enables the generation of large quantities of digital information, including images; the number of digital images produced is therefore constantly increasing. This creates a need for automatic image retrieval systems, which provide techniques for query specification and retrieval of images from an image collection. The most common means of image retrieval is indexing with textual keywords, but for some application domains, and given the huge quantity of images, keywords are insufficient or impractical. Moreover, images are rich in content. To overcome these difficulties, approaches have been proposed that rely on visual features derived directly from the content of the image: the content-based image retrieval (CBIR) approaches. They allow users to search for a desired image by specifying an image query: a query can be an example image, a sketch, or visual features (e.g., colour, texture and shape). Once the features have been defined and extracted, retrieval becomes a task of measuring similarity between image features. An important property of these features is invariance under the various deformations that the observed image could undergo. In this chapter, we present a number of existing methods for CBIR applications. We also describe some measures that are commonly used for similarity measurement. Finally, as an application example, we present a specific approach that we are developing, illustrated with experimental results.
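    A minimal sketch of the retrieval loop described above: extract a visual feature from each image and rank database images by their distance to the query feature. The feature (a grey-level histogram) and the distance (Euclidean) are assumptions chosen for illustration only; colour, texture or shape descriptors and other similarity measures would slot into the same two steps.

```python
# Illustrative CBIR loop: feature extraction followed by similarity ranking.
# The histogram feature and Euclidean distance are stand-ins for illustration.
import numpy as np

def grey_histogram(image, bins=32):
    """Normalised grey-level histogram used as a stand-in visual feature."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def retrieve(query_image, database_images, k=5):
    """Return indices of the k database images most similar to the query."""
    q = grey_histogram(query_image)
    dists = [np.linalg.norm(q - grey_histogram(img)) for img in database_images]
    return np.argsort(dists)[:k]

# Usage with random stand-in images: the query itself should rank first.
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (64, 64)) for _ in range(20)]
print(retrieve(db[3], db, k=3))
```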

    Computationally efficient wavelet affine invariant functions for 2D object recognition

    In this paper, an affine invariant function is presented for object recognition, computed from the wavelet coefficients of the object boundary. In previous work, the undecimated wavelet transform was used to build such affine invariant functions. Here, an algorithm based on the decimated wavelet transform is developed to compute the affine invariant function. As a result, computational complexity is significantly reduced without degrading recognition performance. Experimental results are presented.
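    A minimal sketch of the general construction such functions follow, assuming the PyWavelets package: take the decimated wavelet transform of the boundary coordinate signals x(t) and y(t) and combine detail coefficients from two scales in a determinant-style expression, which changes only by the determinant of the affine matrix and is then normalised away. The wavelet family, scale pair and normalisation here are illustrative assumptions, not the paper's exact invariant function.

```python
# Determinant-style affine-invariant signature from the decimated DWT of a
# closed boundary (x, y).  Wavelet choice and scale pair are assumptions.
import numpy as np
import pywt

def wavelet_affine_invariant(x, y, wavelet="db2", levels=4):
    """x, y: boundary coordinate sequences of equal length (closed contour)."""
    cx = pywt.wavedec(x, wavelet, level=levels, mode="periodization")
    cy = pywt.wavedec(y, wavelet, level=levels, mode="periodization")
    # Detail coefficients at two adjacent scales; the coarser scale is
    # upsampled by repetition so the sequences can be combined point-wise.
    dx1, dy1 = cx[-1], cy[-1]                  # finest detail coefficients
    dx2 = np.repeat(cx[-2], 2)[: len(dx1)]     # next-coarser detail, stretched
    dy2 = np.repeat(cy[-2], 2)[: len(dy1)]
    # Wavelet coefficients are linear in (x, y), so an affine map multiplies
    # this determinant by det(A); normalising removes that factor.
    inv = dx1 * dy2 - dy1 * dx2
    return inv / (np.linalg.norm(inv) + 1e-12)
```

    Under an exact affine map of a contour sampled with the same start point, the signatures of the original and transformed boundaries agree up to sign; the decimated transform's lack of shift invariance is the reason earlier work favoured the undecimated transform.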

    A wavelet based method for affine invariant 2D object recognition

    Recognizing objects that have undergone certain viewing transformations is an important problem in the field of computer vision. Most current research has focused almost exclusively on single aspects of the problem, concentrating on a few geometric transformations and distortions. Probably the most important of these is the affine transformation, which may be considered an approximation to the perspective transformation. Many algorithms have been developed for this purpose; the most popular are Fourier descriptors and moment-based methods. Another powerful tool for recognizing affine-transformed objects is the invariants of implicit polynomials. These three approaches are usually called traditional methods. Wavelet-based affine invariant functions are a more recent contribution to the solution of the problem; they achieve better recognition and are more robust to noise than the other methods. These functions mostly rely on the object contour and the undecimated wavelet transform. In this thesis, a technique is developed to recognize objects undergoing a general affine transformation. Affine invariant functions are used, based on image projections and high-pass filtered images of the objects at the projection angles. The decimated wavelet transform is used instead of the undecimated wavelet transform. We compare our method with another wavelet-based affine invariant function, that of Khalil and Bayoumi, as well as with the traditional methods.
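    A small sketch of the projection step that this kind of method starts from: 1-D projections of the object image taken at a set of angles (Radon-style column sums of the rotated image), on top of which the invariant functions are then built. The angle set and the synthetic image are assumptions; the thesis' exact invariant construction is not reproduced here.

```python
# Radon-style 1-D projections of an object image at several angles.
import numpy as np
from scipy.ndimage import rotate

def projections(image, angles_deg=(0, 45, 90, 135)):
    """Return one 1-D projection (column sums of the rotated image) per angle."""
    return {a: rotate(image, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg}

# Usage with a synthetic binary object:
img = np.zeros((64, 64))
img[20:44, 16:48] = 1.0                     # rectangular "object"
for angle, proj in projections(img).items():
    print(angle, proj.max())
```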

    Image Matching based on Curvilinear Regions


    Similarity Measurement of Breast Cancer Mammographic Images Using Combination of Mesh Distance Fourier Transform and Global Features

    Similarity measurement in breast cancer is an important aspect of determining the vulnerability of detected masses based on previous cases. It is used to retrieve the most similar image for a given mammographic query image from a collection of previously archived images. By analyzing these results, doctors and radiologists can more accurately diagnose early-stage breast cancer and determine the best treatment; the direct result is better prognoses for breast cancer patients. Similarity measurement between images has always been a challenging task in the field of pattern recognition. A widely adopted strategy in Content-Based Image Retrieval (CBIR) is comparison of local shape-based features of images. Contours summarize the orientations and sizes of images, allowing for a heuristic approach to measuring similarity between images. Similarly, global features have the ability to characterize the entire object with a single vector, which is also an important aspect of CBIR. The main objective of this paper is to enhance the similarity measurement between query images and database images so that the best match is chosen from the database for a particular query image, thus decreasing the chance of false positives. In this paper, a method is proposed that compares both local and global features of images to determine their similarity. Three image filters are applied to make this comparison. First, we filter using the mesh distance Fourier descriptor (MDFD), which is based on the calculation of local features of the mammographic image. After this filter is applied, we retrieve the five most similar images from the database. Two additional filters are then applied to the resulting image set to determine the best match. Experiments show that the proposed method overcomes shortcomings of existing methods, increasing the accuracy of matches from 68% to 88%.
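    A minimal sketch of the coarse-to-fine cascade described above: a contour-based Fourier-descriptor filter shortlists the five most similar database images, and a global feature vector then re-ranks the shortlist to pick the best match. The descriptors below are generic stand-ins, assumed for illustration; they are not the paper's MDFD or its specific global feature set.

```python
# Two-stage retrieval cascade: shortlist by contour signature, re-rank by a
# global feature vector.  Both descriptors are illustrative stand-ins.
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """contour: (N, 2) boundary points; returns a translation/scale/rotation/
    start-point tolerant signature built from FFT magnitudes."""
    z = contour[:, 0] + 1j * contour[:, 1]      # complex boundary signal
    spectrum = np.fft.fft(z - z.mean())         # translation removed
    mags = np.abs(spectrum[1:n_coeffs + 1])     # magnitudes drop rotation/start
    return mags / (mags[0] + 1e-12)             # scale normalised

def retrieve_best(query_contour, query_global, db_contours, db_globals):
    """Stage 1: shortlist 5 by contour signature.  Stage 2: re-rank the
    shortlist by a global feature vector and return the best index."""
    q = fourier_descriptor(query_contour)
    d1 = [np.linalg.norm(q - fourier_descriptor(c)) for c in db_contours]
    shortlist = np.argsort(d1)[:5]
    d2 = [np.linalg.norm(query_global - db_globals[i]) for i in shortlist]
    return shortlist[int(np.argmin(d2))]
```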