449 research outputs found

    Shape-based invariant features extraction for object recognition

    The emergence of new technologies enables the generation of large quantities of digital information, including images; the number of digital images produced is therefore constantly growing, and automatic systems for image retrieval become a necessity. These systems consist of techniques for query specification and retrieval of images from an image collection. The most frequent and most common means of image retrieval is indexing with textual keywords, but for some application domains, and faced with the huge quantity of images, keywords are no longer sufficient or practical. Moreover, images are rich in content; to overcome these difficulties, some approaches are based on visual features derived directly from the content of the image: these are the content-based image retrieval (CBIR) approaches. They allow users to search for a desired image by specifying image queries: a query can be an example, a sketch, or visual features (e.g., colour, texture and shape). Once the features have been defined and extracted, retrieval becomes a task of measuring similarity between image features. An important property of these features is to be invariant under the various deformations that the observed image could undergo. In this chapter, we present a number of existing methods for CBIR applications. We also describe some measures that are usually used for similarity measurement. Finally, as an application example, we present a specific approach that we are developing, illustrating the topic with experimental results
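    The abstract above describes invariant shape features and similarity measurement in general terms. As a minimal sketch of the idea (not the chapter's own method), Hu's first moment invariant of a binary shape is unchanged under translation and, up to pixel discretization, under uniform scaling; retrieval then reduces to comparing such feature values between a query shape and database shapes.

```python
# Illustrative sketch: Hu's first moment invariant (eta20 + eta02) for a set
# of foreground pixel coordinates. It is invariant to translation and, up to
# discretization error, to uniform scaling of the shape.

def hu1(points):
    """First Hu invariant of a binary shape given as (x, y) pixel tuples."""
    n = len(points)
    cx = sum(x for x, y in points) / n
    cy = sum(y for x, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, y in points)
    mu02 = sum((y - cy) ** 2 for x, y in points)
    mu00 = float(n)
    # Normalized central moments: eta_pq = mu_pq / mu00^(1 + (p + q) / 2)
    eta20 = mu20 / mu00 ** 2
    eta02 = mu02 / mu00 ** 2
    return eta20 + eta02

# A 3x2 rectangle, the same rectangle translated, and a 2x-scaled copy.
shape = [(x, y) for x in range(3) for y in range(2)]
moved = [(x + 10, y + 7) for x, y in shape]
scaled = [(x, y) for x in range(6) for y in range(4)]

print(abs(hu1(shape) - hu1(moved)))   # 0.0: exactly translation-invariant
print(abs(hu1(shape) - hu1(scaled)))  # small: scaling, discretization only
```

    In a CBIR setting, each database image would contribute a vector of such invariants, and retrieval ranks images by a distance between feature vectors.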

    Automated Pollen Image Classification

    This Master of Science thesis reviews previous research, proposes a method and demonstrates proof-of-concept software for the automated matching of pollen grain images to satisfy degree requirements at the University of Tennessee. An ideal image segmentation algorithm and shape representation data structure are selected, along with a multi-phase shape matching system. The system is shown to be invariant to synthetic image translation and rotation, and to a lesser extent to global contrast and intensity changes. The proof-of-concept software is used to demonstrate how pollen grains can be matched to images of other pollen grains, stored in a database, that share similar features, with up to a 75% accuracy rate
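    One ingredient of the contrast/intensity invariance reported above can be sketched simply (this is an illustration, not the thesis code): normalizing pixel intensities to zero mean and unit variance makes a template comparison insensitive to global brightness and contrast changes.

```python
# Hedged illustration: zero-mean, unit-variance intensity normalization.
# Two images differing only by a global affine intensity change (contrast
# gain and brightness offset) normalize to the same values.

def normalize(pixels):
    """Map a flat list of intensities to zero mean and unit variance."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0  # guard against a perfectly flat image
    return [(p - mean) / std for p in pixels]

image = [10, 20, 30, 40]
brighter = [2 * p + 5 for p in image]  # contrast x2, brightness +5

diff = max(abs(a - b) for a, b in zip(normalize(image), normalize(brighter)))
print(diff < 1e-9)  # True: the affine intensity change is removed
```

    After this step, a matcher comparing normalized intensities sees the same data regardless of global lighting, which is one way a system can be made partially invariant to contrast and intensity changes.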

    Optical to near-infrared transmission spectrum of the warm sub-Saturn HAT-P-12b

    We present the transmission spectrum of HAT-P-12b through a joint analysis of data obtained from the Hubble Space Telescope's Space Telescope Imaging Spectrograph (STIS) and Wide Field Camera 3 (WFC3) and from Spitzer, covering the wavelength range 0.3–5.0 μm. We detect a muted water vapor absorption feature at 1.4 μm attenuated by clouds, as well as a Rayleigh scattering slope in the optical indicative of small particles. We interpret the transmission spectrum using both the state-of-the-art atmospheric retrieval code SCARLET and the aerosol microphysics model CARMA. These models indicate that the atmosphere of HAT-P-12b is consistent with a broad range of metallicities between several tens and a few hundred times solar, a roughly solar C/O ratio, and moderately efficient vertical mixing. Cloud models that include condensate clouds do not readily generate the sub-micron particles necessary to reproduce the observed Rayleigh scattering slope, while models that incorporate photochemical hazes composed of soot or tholins are able to match the full transmission spectrum. From a complementary analysis of secondary eclipses observed by Spitzer, we obtain measured depths of 0.042% ± 0.013% and 0.045% ± 0.018% at 3.6 and 4.5 μm, respectively, which are consistent with a blackbody temperature of 890 (+60/−70) K and indicate efficient day–night heat recirculation. HAT-P-12b joins the growing number of well-characterized warm planets that underscore the importance of clouds and hazes in our understanding of exoplanet atmospheres. Comment: 25 pages, 19 figures, accepted for publication in AJ, updated with proof correction
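    The link between the quoted eclipse depths and a blackbody temperature can be sanity-checked with the standard relation depth ≈ (Rp/Rs)² · B(λ, Tp) / B(λ, Ts), where B is the Planck function. The sketch below uses illustrative parameters that are not from the paper (an assumed radius ratio Rp/Rs ≈ 0.14 and stellar temperature Ts ≈ 4650 K for HAT-P-12), together with the 890 K dayside temperature quoted in the abstract, and recovers depths of the same order as the measurements.

```python
import math

# Order-of-magnitude check of a secondary-eclipse depth for a blackbody
# planet: depth = (Rp/Rs)^2 * B(lam, Tp) / B(lam, Ts).
# Rp/Rs = 0.14 and Ts = 4650 K are assumed illustrative values, NOT taken
# from the paper; Tp = 890 K is the temperature quoted in the abstract.

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (m), temperature temp (K)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * temp))

def eclipse_depth(lam, rp_over_rs, tp, ts):
    """Eclipse depth of a blackbody planet relative to its star."""
    return rp_over_rs**2 * planck(lam, tp) / planck(lam, ts)

for lam_um in (3.6, 4.5):
    d = eclipse_depth(lam_um * 1e-6, 0.14, 890.0, 4650.0)
    print(f"{lam_um} um: depth ~ {100 * d:.3f}%")
```

    With these assumed parameters the model lands within roughly a factor of two of the measured 0.042% and 0.045%, which is the sense in which the measured depths are "consistent with" an 890 K blackbody.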

    Automated Target Acquisition, Recognition and Tracking (ATTRACT)

    The primary objective of phase 1 of this research project is to conduct multidisciplinary research that will contribute fundamental scientific knowledge in several of the USAF critical technology areas. Specifically, neural networks, signal processing techniques, and electro-optic capabilities are utilized to solve problems associated with automated target acquisition, recognition, and tracking. To accomplish the stated objective, several tasks were identified and executed

    Digital Processing and Management Tools for 2D and 3D Shape Repositories


    Scene Segmentation and Object Classification for Place Recognition

    This dissertation tries to solve the place recognition and loop closing problem in a way similar to the human visual system. First, a novel image segmentation algorithm is developed. The image segmentation algorithm is based on a Perceptual Organization model, which allows the algorithm to ‘perceive’ the special structural relations among the constituent parts of an unknown object and hence to group them together without object-specific knowledge. Then a new object recognition method is developed. Based on the fairly accurate segmentations generated by the image segmentation algorithm, an informative object description is built that includes not only the appearance (colors and textures) but also the parts layout and shape information. Then a novel feature selection algorithm is developed. The feature selection method can select a subset of features that best describes the characteristics of an object class, and classifiers trained with the selected features can classify objects with high accuracy. In the next step, a subset of the salient objects in a scene is selected as landmark objects to label the place. The landmark objects are highly distinctive and widely visible. Each landmark object is represented by a list of SIFT descriptors extracted from the object surface. This object representation allows us to reliably recognize an object under certain viewpoint changes. To achieve efficient scene matching, an indexing structure is developed. Both the texture and color features of objects are used as indexing features; because they are viewpoint-invariant, they can be used to effectively find the candidate objects with surface characteristics similar to a query object. Experimental results show that the object-based place recognition and loop detection method can efficiently recognize a place in a large complex outdoor environment
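    The indexing idea in the abstract above can be sketched as follows (names and data are mine, not the dissertation's): objects are filed under a coarse, viewpoint-insensitive key, here a quantized mean color, so a query only compares its descriptor list against candidates with similar surface statistics instead of the whole database.

```python
# Hedged sketch of coarse indexing for object-based place recognition.
# The index key is a quantized mean color; matching then scores only the
# candidates in the query's bucket by descriptor-set overlap.

from collections import defaultdict

def color_key(mean_rgb, step=64):
    """Quantize a mean (R, G, B) color into a coarse bucket key."""
    return tuple(c // step for c in mean_rgb)

index = defaultdict(list)

def add_object(name, mean_rgb, descriptors):
    """File an object under its coarse color key."""
    index[color_key(mean_rgb)].append((name, descriptors))

def match(query_rgb, query_desc):
    """Rank same-bucket candidates by descriptor-set similarity."""
    candidates = index[color_key(query_rgb)]
    def score(entry):
        _, desc = entry
        shared = len(set(desc) & set(query_desc))
        return shared / max(len(desc), len(query_desc))
    return sorted(((score(e), e[0]) for e in candidates), reverse=True)

# Toy landmarks; the string descriptors stand in for SIFT descriptors.
add_object("mailbox", (200, 30, 30), ["d1", "d2", "d3"])
add_object("hydrant", (210, 40, 35), ["d4", "d5"])
add_object("tree",    (30, 160, 40), ["d6"])

print(match((205, 35, 32), ["d1", "d2", "d7"]))  # mailbox ranks first
```

    A real system would replace the string descriptors with SIFT vectors and the overlap score with nearest-neighbor descriptor matching, but the two-stage structure, coarse viewpoint-invariant key then fine descriptor comparison, is the same.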