
    Plant image retrieval using color, shape and texture features

    We present a content-based image retrieval system for plant image retrieval, intended especially for the house plant identification problem. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We studied the suitability of various well-known color, shape and texture features for this problem, and introduced some new texture matching techniques and shape features. Feature extraction is applied after segmenting the plant region from the background using the max-flow min-cut technique. Results on a database of 380 plant images belonging to 78 different types of plants show the promise of the proposed new techniques and the overall system: in 55% of the queries, the correct plant image is retrieved among the top-15 results. Furthermore, the accuracy goes up to 73% when a 132-image subset of well-segmented plant images is considered.
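    As an illustration of the retrieval step described above (a toy sketch, not the authors' implementation: the feature here is a coarse color histogram, and all function names are invented for the example):

    ```python
    from collections import Counter

    def color_histogram(pixels, bins=4):
        """Quantize RGB pixels into bins**3 buckets; return a normalized histogram."""
        hist = Counter()
        for r, g, b in pixels:
            hist[(r * bins // 256, g * bins // 256, b * bins // 256)] += 1
        total = sum(hist.values())
        return {k: v / total for k, v in hist.items()}

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1.0 means identical color distributions."""
        return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

    def retrieve(query_pixels, database, top_k=15):
        """Rank database images (name -> pixel list) by similarity to the query."""
        qh = color_histogram(query_pixels)
        scored = [(histogram_intersection(qh, color_histogram(px)), name)
                  for name, px in database.items()]
        return [name for _, name in sorted(scored, reverse=True)[:top_k]]
    ```

    A real system would combine this with the shape and texture features the abstract mentions, after graph-cut segmentation of the plant region.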

    Shape-based invariant features extraction for object recognition

    The emergence of new technologies has enabled the generation of large quantities of digital information, including an ever-increasing number of digital images. This creates a need for automatic image retrieval systems, which provide techniques for query specification and retrieval of images from an image collection. The most common means of image retrieval is indexing with textual keywords, but for some application domains, and given the huge quantity of images, keywords are insufficient or impractical. Moreover, images are rich in content; to overcome these difficulties, some approaches derive visual features directly from the content of the image: these are the content-based image retrieval (CBIR) approaches. They allow users to search for a desired image by specifying an image query, which can be an example, a sketch or visual features (e.g., colour, texture and shape). Once the features have been defined and extracted, retrieval becomes a task of measuring similarity between image features. An important property of these features is invariance under the various deformations that the observed image could undergo. In this chapter, we present a number of existing methods for CBIR applications. We also describe some measures that are commonly used for similarity measurement. Finally, as an application example, we present a specific approach that we are developing, illustrating the topic with experimental results.
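    The similarity-measurement step mentioned above can be sketched with one common choice of measure, cosine similarity between feature vectors (an illustrative example, not one of the chapter's specific methods):

    ```python
    import math

    def cosine_similarity(u, v):
        """Cosine of the angle between two feature vectors; 1.0 means same direction."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def rank_by_similarity(query_vec, features):
        """Return image ids sorted from most to least similar to the query vector."""
        return sorted(features,
                      key=lambda k: cosine_similarity(query_vec, features[k]),
                      reverse=True)
    ```

    Cosine similarity is scale-invariant in feature magnitude, which is one reason it is a popular default when comparing colour or texture descriptors.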

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Partial shape matching using CCP map and weighted graph transformation matching

    Matching and detecting similarity or dissimilarity between images is a fundamental problem in image processing. Different matching algorithms have been proposed in the literature to solve it. Despite their novelty, these algorithms are mostly inefficient and cannot perform properly in noisy situations. In this thesis, we solve most of the problems of previous methods by using a reliable algorithm for segmenting the image contour map, called the CCP Map, and a new matching method. In our algorithm, we use a local shape descriptor that is very fast to compute, invariant to affine transformations, and robust to non-rigid objects and occlusion. After finding the best match for each contour, we need to verify that the contours are correctly matched. For this, we use the Weighted Graph Transformation Matching (WGTM) approach, which is capable of removing outliers based on their adjacency and geometrical relationships. WGTM works properly for both rigid and non-rigid objects and is robust to high-order distortions. To evaluate our method, the ETHZ dataset, comprising five diverse classes of objects (bottles, swans, mugs, giraffes, apple logos), is used. Finally, our method is compared to several well-known methods proposed by other researchers in the literature. While our method shows results comparable to the benchmarks in terms of recall and the precision of boundary localization, it significantly improves the average precision for all categories of the ETHZ dataset.
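    WGTM itself compares weighted graphs built over the two matched point sets; as a much simpler stand-in for the outlier-removal idea, one can reject matches whose displacement disagrees with the consensus of the others (a toy sketch, not the thesis' algorithm):

    ```python
    from statistics import median

    def filter_matches(matches, tol=10.0):
        """Keep point matches whose displacement agrees with the median displacement.

        matches: list of ((x1, y1), (x2, y2)) corresponding point pairs.
        This consistency check only handles (roughly) translational scenes;
        WGTM instead compares full weighted adjacency graphs, which also
        survives rotation and non-rigid deformation.
        """
        dxs = [x2 - x1 for (x1, _), (x2, _) in matches]
        dys = [y2 - y1 for (_, y1), (_, y2) in matches]
        mdx, mdy = median(dxs), median(dys)
        kept = []
        for (p1, p2), dx, dy in zip(matches, dxs, dys):
            if abs(dx - mdx) <= tol and abs(dy - mdy) <= tol:
                kept.append((p1, p2))
        return kept
    ```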

    2-D shapes description by using features based on the differential turning angle scalogram

    A 2-D shape description using the turning angle is presented. This descriptor is based on a scalogram obtained from a progressive filtering of a planar closed contour. At a given scale, the differential turning angle function is calculated, from which three essential kinds of points are derived: the minimum differential turning angle (α-points), the maximum differential turning angle (β-points) and the zero-crossings of the turning angle (γ-points). For a continuum of scale values in the filtering process, a map (called the d-TASS map) is generated. As shown experimentally in a previous study, this map is invariant under rotation, translation and scale change; moreover, it is resistant to shearing and noise. The contribution of the present study is, firstly, to prove theoretically that d-TASS is invariant under rotation and scale change and, secondly, to propose a new descriptor extracted from blocks within the scalogram. When applied to shape retrieval on commonly used image databases such as the MPEG-7 Core Experiments Shape-1 dataset, the Multiview Curve Dataset and the marine animals of the SQUID dataset, experimental results demonstrate very encouraging efficiency and effectiveness of the new analysis approach and the proposed descriptor.
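    The turning angle underlying the descriptor can be computed directly for a polygonal contour; the sketch below (an illustration, not the paper's filtered multi-scale version) returns the exterior angle at each vertex:

    ```python
    import math

    def turning_angles(polygon):
        """Exterior (turning) angle at each vertex of a closed polygon.

        polygon: list of (x, y) vertices traversed in order.
        For a simple counter-clockwise polygon the angles sum to 2*pi.
        """
        n = len(polygon)
        angles = []
        for i in range(n):
            x0, y0 = polygon[i - 1]            # previous vertex (wraps around)
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]      # next vertex (wraps around)
            a_in = math.atan2(y1 - y0, x1 - x0)
            a_out = math.atan2(y2 - y1, x2 - x1)
            turn = a_out - a_in
            # Wrap the turn into (-pi, pi].
            while turn <= -math.pi:
                turn += 2 * math.pi
            while turn > math.pi:
                turn -= 2 * math.pi
            angles.append(turn)
        return angles
    ```

    The d-TASS construction then tracks the extrema and zero-crossings of (a smoothed, differentiated version of) this function across scales.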

    Optimization in Differentiable Manifolds in Order to Determine the Method of Construction of Prehistoric Wall-Paintings

    In this paper, a general methodology is introduced for determining potential prototype curves used for the drawing of prehistoric wall-paintings. The approach includes: a) preprocessing of the wall-painting contours to properly partition them according to their curvature; b) choice of prototype curve families; c) analysis and optimization in a 4-manifold for a first estimation of the form of these prototypes; d) clustering of the contour parts and the prototypes to determine a minimal number of potential guides; e) further optimization in the 4-manifold, applied to each cluster separately, to determine the exact functional form of the potential guides, together with the corresponding drawn contour parts. The introduced methodology simultaneously deals with two problems: a) the arbitrariness in data-point orientation and b) the determination of one proper form for a prototype curve that optimally fits the corresponding contour data. Arbitrariness in orientation is dealt with by a novel curvature-based error, while the proper forms of curve prototypes are exhaustively determined by embedding curvature deformations of the prototypes into 4-manifolds. Application of this methodology to celebrated wall-paintings excavated at Tiryns, Greece and on the Greek island of Thera indicates that it is highly probable these wall-paintings were drawn by means of geometric guides corresponding to linear spirals and hyperbolae. These geometric forms fit the drawings' lines with an exceptionally low average error, less than 0.39 mm. Hence, the approach suggests the existence of accurate realizations of complicated geometric entities more than 1000 years before their axiomatic formulation in the Classical Ages.
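    For the linear-spiral prototypes mentioned above, a first estimate of the curve's parameters can be obtained by ordinary least squares in polar coordinates (a minimal sketch assuming the model r = a + b·θ; the paper's actual optimization runs in a 4-manifold of curvature-deformed prototypes):

    ```python
    def fit_linear_spiral(samples):
        """Least-squares fit of r = a + b*theta to (theta, r) samples.

        samples: list of (theta, r) pairs in polar coordinates.
        Returns the fitted (a, b); this is ordinary linear regression
        of r on theta, written out explicitly.
        """
        n = len(samples)
        st = sum(t for t, _ in samples)          # sum of theta
        sr = sum(r for _, r in samples)          # sum of r
        stt = sum(t * t for t, _ in samples)     # sum of theta^2
        srt = sum(t * r for t, r in samples)     # sum of theta*r
        b = (n * srt - st * sr) / (n * stt - st * st)
        a = (sr - b * st) / n
        return a, b
    ```

    In practice the contour points would first have to be expressed in a polar frame whose origin and orientation are themselves unknown, which is exactly the arbitrariness the paper's curvature-based error is designed to remove.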

    Shape Recognition: A Landmark-Based Approach

    Shape recognition has applications in computer vision tasks such as industrial automated inspection and automatic target recognition. When objects are occluded, many recognition methods that use global information fail. To recognize partially occluded objects, we represent each object by a set of landmarks. The landmarks of an object are points of interest that carry important shape attributes and are usually obtained from the object boundary. In this study, we use high-curvature points along an object boundary as the landmarks of the object. Given a scene consisting of partially occluded objects, the hypothesis that a model object is in the scene is verified by matching the landmarks of the object with those in the scene. A measure of similarity between two landmarks, one from a model and the other from a scene, is needed to perform this matching. One such local shape measure is the sphericity of the triangular transformation mapping a model landmark and its two neighboring landmarks to a scene landmark and its two neighboring landmarks. Sphericity is, in general, defined for a diffeomorphism; its invariance under a group of transformations, namely translation, rotation, and scaling, is derived. The sphericity of a triangular transformation is shown to be a robust local shape measure in the sense that minor distortion of the landmarks does not significantly alter its value. To match landmarks between a model and a scene, a compatibility table is constructed, where each entry is the sphericity value derived from the mapping of a model landmark to a scene landmark. A hopping dynamic programming procedure, which switches between forward and backward dynamic programming, guides the landmark matching through the compatibility table. The location of the model in the scene is then estimated with a least-squares fit among the matched landmarks, and a heuristic measure is computed to decide whether the model is present in the scene.
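    One plausible formulation of the sphericity of a triangular (affine) transformation uses the singular values of its linear part; the closed form below is an illustrative sketch, not necessarily the paper's exact definition:

    ```python
    def sphericity(tri1, tri2):
        """Sphericity of the affine map sending tri1 onto tri2 (each: 3 (x, y) points).

        Computed as 2*|det A| / trace(A^T A), where A is the linear part of the
        affine map. This equals 2*s1*s2 / (s1**2 + s2**2) for singular values
        s1, s2, so it is exactly 1.0 when the map is a similarity (translation,
        rotation, uniform scaling) and drops toward 0 as the map grows anisotropic,
        matching the invariances described in the abstract.
        """
        (x0, y0), (x1, y1), (x2, y2) = tri1
        (u0, v0), (u1, v1), (u2, v2) = tri2
        # Edge matrices P, Q with the two edge vectors as columns.
        p = [[x1 - x0, x2 - x0], [y1 - y0, y2 - y0]]
        q = [[u1 - u0, u2 - u0], [v1 - v0, v2 - v0]]
        det_p = p[0][0] * p[1][1] - p[0][1] * p[1][0]
        inv_p = [[p[1][1] / det_p, -p[0][1] / det_p],
                 [-p[1][0] / det_p, p[0][0] / det_p]]
        # A = Q @ P^{-1}: the linear part of the triangular transformation.
        a = [[q[0][0] * inv_p[0][0] + q[0][1] * inv_p[1][0],
              q[0][0] * inv_p[0][1] + q[0][1] * inv_p[1][1]],
             [q[1][0] * inv_p[0][0] + q[1][1] * inv_p[1][0],
              q[1][0] * inv_p[0][1] + q[1][1] * inv_p[1][1]]]
        det_a = a[0][0] * a[1][1] - a[0][1] * a[1][0]
        trace_ata = sum(a[i][j] ** 2 for i in range(2) for j in range(2))
        return 2.0 * abs(det_a) / trace_ata
    ```

    Values like this, tabulated for every model/scene landmark pairing, would populate the compatibility table that the hopping dynamic programming procedure then traverses.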