
    Object recognition using multi-view imaging

    Most previous research in computer vision and image understanding has relied on single-view imaging data, for which many techniques have been developed. With the rapid development and falling cost of multiple cameras, it has recently become possible to exploit many more views for image processing tasks. This thesis considers how to use such multiple images for target object recognition. In this context, we present two algorithms for object recognition based on scale-invariant feature points. The first is a single-view object recognition method (SOR), which operates on single images and uses a chirality constraint to reduce the recognition errors that arise when only a small number of feature points are matched. The procedure is extended in the second, multi-view object recognition algorithm (MOR), which operates on a multi-view image sequence and, by tracking feature points with a dynamic programming method in the plenoptic domain subject to the epipolar constraint, fuses feature point matches from all the available images, resulting in more robust recognition. We evaluated these algorithms on several data sets of real images of both indoor and outdoor scenes. We demonstrate that MOR outperforms SOR, particularly for noisy and low-resolution images, and that, combined with segmentation techniques, it can also recognize partially occluded objects.
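
    The thesis's full MOR pipeline (chirality constraint, dynamic-programming tracking in the plenoptic domain) is not reproduced here, but the minimal Python/OpenCV sketch below illustrates the ingredients it builds on: matching scale-invariant (SIFT) feature points between two views and discarding matches that violate the epipolar constraint. The function name and thresholds are illustrative assumptions, not the thesis's code.

    ```python
    import cv2
    import numpy as np

    def epipolar_filtered_matches(img1, img2, ratio=0.75):
        """Match SIFT keypoints between two views and keep only matches
        consistent with an epipolar geometry estimated by RANSAC."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test on 2-nearest-neighbour matches.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        if len(good) < 8:  # need at least 8 points to estimate F
            return []

        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

        # RANSAC fundamental matrix: inliers satisfy the epipolar constraint
        # x2^T F x1 = 0 up to a 1-pixel threshold.
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        if F is None:
            return []
        return [m for m, keep in zip(good, mask.ravel()) if keep]
    ```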

    Vision technology/algorithms for space robotics applications

    Automation and robotics for space applications have been proposed to increase productivity, reliability, flexibility, and safety, to automate time-consuming tasks, to improve the performance of crew-accomplished tasks, and to perform tasks beyond the capability of the crew. This paper reviews efforts currently in progress in the area of robotic vision; both systems and algorithms are discussed. Future vision/sensing is projected to evolve toward the fusion of multiple sensors, ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

    2D partially occluded object recognition using curve moment invariants

    Master's thesis (Master of Engineering)

    Partial shape matching using CCP map and weighted graph transformation matching

    Matching images and detecting their similarity or dissimilarity is a fundamental problem in image processing, and various matching algorithms have been proposed in the literature to solve it. Despite their novelty, these algorithms are mostly inefficient and do not perform well on noisy images. In this thesis, we address most of the shortcomings of previous methods by using a reliable algorithm for segmenting the image contour map, called the CCP Map, together with a new matching method. Our algorithm uses a local shape descriptor that is fast to compute, invariant to affine transformations, and robust to non-rigid objects and occlusion. After finding the best match for each contour, we must verify that the contours are correctly matched. For this, we use the Weighted Graph Transformation Matching (WGTM) approach, which removes outlier correspondences based on their adjacency and geometrical relationships. WGTM works for both rigid and non-rigid objects and is robust to high-order distortions. To evaluate our method, we use the ETHZ dataset, which contains five diverse classes of objects (bottles, swans, mugs, giraffes, and Apple logos). Finally, our method is compared with several well-known methods from the literature. While it is comparable to these benchmarks in terms of recall and the precision of boundary localization, it significantly improves the average precision for all categories of the ETHZ dataset.
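
    The verification step relies on WGTM, whose full weighted-graph formulation is not reproduced here. The sketch below only illustrates the underlying idea of rejecting outlier correspondences whose local neighbourhood structure disagrees between the two point sets; it is a simplified stand-in, not the WGTM algorithm itself, and the function name and thresholds are assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def neighbourhood_consistent_matches(pts1, pts2, k=5, min_overlap=0.5):
        """pts1[i] is putatively matched to pts2[i] (Nx2 arrays).
        Returns indices of matches whose k-NN neighbourhoods agree."""
        pts1 = np.asarray(pts1, dtype=float)
        pts2 = np.asarray(pts2, dtype=float)
        n = len(pts1)
        if n < 3:
            return list(range(n))
        k = min(k, n - 1)

        # k-nearest-neighbour graphs over each point set (column 0 is the
        # query point itself, so it is dropped).
        nn1 = cKDTree(pts1).query(pts1, k=k + 1)[1][:, 1:]
        nn2 = cKDTree(pts2).query(pts2, k=k + 1)[1][:, 1:]

        # Keep a match when enough of its neighbours are neighbours in both sets.
        return [i for i in range(n)
                if len(set(nn1[i]) & set(nn2[i])) / k >= min_overlap]
    ```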

    Shape localization, quantification and correspondence using Region Matching Algorithm

    We propose a method for local, region-based matching of planar shapes, especially shapes that change over time. This problem is fundamental to medical imaging, in particular the comparison of mammograms acquired at different times. The method is based on the non-emergence and non-enhancement of maxima, as well as the causality principle of integral invariant scale space. The core idea of our Region Matching Algorithm (RMA) is to divide a shape into a number of “salient” regions and then to compare all such regions for local similarity in order to quantitatively identify new growths or partial/complete occlusions. The algorithm has several advantages over commonly used methods for shape comparison of segmented regions. First, it provides improved key-point alignment for optimal shape correspondence. Second, it identifies localized changes, such as new growths and complete/partial occlusions, in corresponding regions by dividing the segmented region into sub-regions based on the extrema that persist over a sufficient range of scales. Third, the algorithm does not depend on the spatial locations of mammographic features and eliminates the need for registration to identify salient changes over time. Finally, the algorithm is fast to compute and requires no human intervention. We apply the method to temporal pairs of mammograms in order to detect potentially important differences between them.
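
    As a rough illustration of the kind of signature this scale-space construction starts from, the sketch below computes a disc-area integral invariant at a single scale: for each boundary point, the fraction of a disc of radius r centred there that falls inside the shape. Extrema of this signature that persist across several radii are the kind of “salient” features the abstract refers to. This is a standard formulation of the integral invariant, not necessarily the exact one used by the RMA, and the helper function is hypothetical.

    ```python
    import cv2
    import numpy as np

    def integral_invariant(mask, contour, r):
        """mask: binary image of the filled shape (uint8); contour: Nx2 (x, y)
        boundary points; r: disc radius in pixels. Returns, for every boundary
        point, the fraction of the disc centred there that lies inside the shape."""
        disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
        # Correlating the filled shape with the disc counts, at each pixel, how
        # many inside-shape pixels the disc covers when centred at that pixel.
        inside = cv2.filter2D((mask > 0).astype(np.float32), -1,
                              disc.astype(np.float32),
                              borderType=cv2.BORDER_CONSTANT)
        xs = contour[:, 0].astype(int)
        ys = contour[:, 1].astype(int)
        return inside[ys, xs] / float(disc.sum())
    ```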

    Object detection and activity recognition in digital image and video libraries

    This thesis is a comprehensive study of object-based image and video retrieval, specifically for car and human detection and activity recognition. It focuses on the problem of connecting low-level features to high-level semantics by developing relational object and activity representations. With the rapid growth of multimedia information in the form of digital image and video libraries, there is an increasing need for intelligent database management tools. Traditional text-based query systems built on manual annotation are impractical for today's large libraries, which require an efficient information retrieval system. For this purpose, a hierarchical information retrieval system is proposed in which the shape, color, and motion characteristics of objects of interest are captured in the compressed and uncompressed domains. The proposed retrieval method provides object detection and activity recognition at different resolution levels, trading off low complexity against low false rates. The thesis first examines the extraction of low-level features from images and videos using the intensity, color, and motion of pixels and blocks. Local consistency based on these features and the geometrical characteristics of the regions is used to group object parts. The problem of managing the segmentation process is solved by a new approach that uses object-based knowledge to group the regions according to a global consistency. A new model-based segmentation algorithm is introduced that uses feedback from the relational representation of the object. The selected unary and binary attributes are further extended for application-specific algorithms. Object detection is achieved by matching the relational graphs of objects with the reference model. The major advantages of the algorithm are that it improves object extraction by reducing the dependence on the low-level segmentation process and that it combines boundary and region properties. The thesis then addresses object detection and activity recognition in the compressed domain in order to reduce computational complexity. New algorithms for object detection and activity recognition in JPEG images and MPEG videos are developed. It is shown that significant information can be obtained from the compressed domain to connect to high-level semantics. Since our aim is to retrieve information from images and videos compressed with standard algorithms such as JPEG and MPEG, our approach differs from previous compressed-domain object detection techniques, in which the compression algorithms are governed by the characteristics of the objects of interest to be retrieved. An algorithm based on principal component analysis of MPEG motion vectors is developed to detect human activities, namely walking, running, and kicking. Object detection in JPEG compressed still images and MPEG I-frames is achieved by using the DC-DCT coefficients of the luminance and chrominance values in the graph-based object detection algorithm. The thesis finally addresses object detection in lower-resolution and monochrome images. Specifically, it is demonstrated that the structural information of human silhouettes can be captured from the AC-DCT coefficients.
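
    The abstract states that activities are detected using principal component analysis of MPEG motion vectors; the thesis's actual classifier and feature layout are not given here, so the sketch below pairs PCA with a simple nearest-neighbour rule purely for illustration. The data shape, function name, and parameters are assumptions, and motion-vector extraction from the bitstream is not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def train_activity_classifier(motion_fields, labels, n_components=20):
        """motion_fields: array of shape (n_clips, h, w, 2) holding one MPEG
        motion-vector field per clip; labels: activity name per clip,
        e.g. 'walking', 'running', 'kicking'."""
        X = np.asarray(motion_fields, dtype=float).reshape(len(motion_fields), -1)
        # PCA cannot keep more components than samples or features allow.
        n_components = min(n_components, X.shape[0] - 1, X.shape[1])
        clf = make_pipeline(PCA(n_components=n_components),
                            KNeighborsClassifier(n_neighbors=1))
        clf.fit(X, labels)
        return clf

    # Usage sketch:
    #   clf = train_activity_classifier(fields, labels)
    #   clf.predict(new_field.reshape(1, -1))
    ```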

    Shape description and matching using integral invariants on eccentricity transformed images

    Matching occluded and noisy shapes is a problem frequently encountered in medical image analysis and, more generally, in computer vision. To keep track of changes inside the breast, for example, it is important for a computer-aided detection system to establish correspondences between regions of interest. Shape transformations, computed both with integral invariants (II) and with geodesic distance, yield signatures that are invariant to isometric deformations such as bending and articulation. Integral invariants describe the boundaries of planar shapes; however, they provide no information about where a particular feature lies on the boundary with regard to the overall shape structure. Conversely, eccentricity transforms (Ecc) can match shapes using signatures of geodesic distance histograms based on information from inside the shape, but they ignore boundary information. We describe a method that combines the boundary signature of a shape obtained from II with structural information from the Ecc to yield results that improve on either used separately.
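
    The combination itself can be illustrated with a small sketch: given an integral invariant signature sampled along the boundary (values assumed to lie in [0, 1], as in a disc-area formulation) and the eccentricity transform values inside the shape, build one descriptor from the two normalized histograms and compare shapes with a chi-squared distance. This is only a schematic of the “combine II and Ecc” idea, with assumed inputs and names, not the paper's exact descriptor or matching score.

    ```python
    import numpy as np

    def combined_signature(ii_values, ecc_values, bins=32):
        """ii_values: integral invariant sampled along the shape boundary;
        ecc_values: eccentricity transform values over the shape interior.
        Returns one descriptor combining both normalized histograms."""
        h_ii, _ = np.histogram(np.asarray(ii_values, dtype=float),
                               bins=bins, range=(0.0, 1.0))
        ecc = np.asarray(ecc_values, dtype=float)
        h_ecc, _ = np.histogram(ecc / ecc.max(), bins=bins, range=(0.0, 1.0))
        h_ii = h_ii / max(h_ii.sum(), 1)
        h_ecc = h_ecc / max(h_ecc.sum(), 1)
        return np.concatenate([h_ii, h_ecc])

    def chi2_distance(s1, s2, eps=1e-12):
        """Chi-squared distance between two combined signatures."""
        return 0.5 * np.sum((s1 - s2) ** 2 / (s1 + s2 + eps))
    ```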