51 research outputs found

    Piecewise Affine Registration of Biological Images for Volume Reconstruction

    This manuscript tackles the reconstruction of 3D volumes via mono-modal registration of series of 2D biological images (histological sections, autoradiographs, cryosections, etc.). The process of acquiring these images typically induces composite transformations that we model as a number of rigid or affine local transformations embedded in an elastic one. We propose a registration approach closely derived from this model. Given a pair of input images, we first compute a dense similarity field between them with a block matching algorithm. As a similarity measure we use an extension of the classical correlation coefficient that improves the consistency of the field. A hierarchical clustering algorithm then automatically partitions the field into a number of classes from which we extract independent pairs of sub-images. Our clustering algorithm relies on the Earth Mover's Distance metric and is additionally guided by robust least-squares estimation of the transformations associated with each cluster. Finally, the pairs of sub-images are independently registered with affine transformations, and a hybrid affine/non-linear interpolation scheme is used to compose the output registered image. We investigate the behavior of our approach on several batches of histological data and discuss its sensitivity to parameters and noise.
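
    A minimal sketch of the block-matching step described above, using the classical correlation coefficient as the similarity measure; the block size, search radius and function names are illustrative assumptions, and the consistency-improving extension of the correlation coefficient, the EMD-based clustering and the piecewise affine composition are not shown.

```python
import numpy as np

def correlation_coefficient(a, b):
    """Classical (Pearson) correlation coefficient between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def block_matching(fixed, moving, block=16, search=8):
    """For each block of `fixed`, search a neighbourhood of `moving` for the
    displacement that maximises the correlation coefficient.  Returns a list of
    (block centre, best displacement, best similarity) entries, i.e. a sparse
    similarity/displacement field sampled at block centres."""
    h, w = fixed.shape
    field = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = fixed[y:y + block, x:x + block]
            best, best_d = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        c = correlation_coefficient(ref, moving[yy:yy + block, xx:xx + block])
                        if c > best:
                            best, best_d = c, (dy, dx)
            field.append(((y + block // 2, x + block // 2), best_d, best))
    return field
```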

    Contour matching using ant colony optimization and curve evolution

    Shape retrieval is a very important topic in computer vision. Image retrieval consists of selecting images that fulfil specific criteria from a collection of images. This thesis concentrates on contour-based image retrieval, in which we only explore the information located on the shape contour. There are many different kinds of shape retrieval methods. Most of the research in this field has so far concentrated on matching methods and on how to achieve a meaningful correspondence. The matching process consists of finding correspondences between the points located on the contours. However, the large number of points involved in the correspondence makes the matching process complex, and the correspondence cannot be computed reliably without accounting for noise and distortions. Hence, heuristic methods are invoked to find an acceptable solution. Moreover, some research focuses on improving polygonal modelling methods so that the resulting polygon is a good approximation of the original contour, which can be used to reduce the number of points involved in the matching. In this thesis, a novel Ant Colony Optimization (ACO) approach to contour matching, which can be used to find an acceptable matching between contour shapes, is developed. A previously proposed polygonal evolution method is selected to simplify the extracted contour; the main reason for selecting this method is its use of a predetermined stopping criterion. The matching process is formulated as a Quadratic Assignment Problem (QAP) and solved using ACO. An approximate similarity is computed using the original shape context descriptor and the Euclidean metric. The experimental results show that the proposed approach is invariant to noise and distortions, and that it is more robust to them than the previously introduced Dominant Point (DP) approach. This work serves as a fundamental study towards assessing the Bender Test to diagnose dyslexic and non-dyslexic symptoms in children.
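
    For the descriptor mentioned above, the following is a minimal sketch of a shape context computation for contour points; the bin counts and radial range are assumed defaults rather than the thesis' exact parameters, and the ACO solver for the QAP is not shown.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12, r_inner=0.125, r_outer=2.0):
    """Log-polar histogram of the relative positions of all other contour
    points, computed at every contour point (one descriptor per point).
    Distances are normalised by the mean pairwise distance; points falling
    outside the radial range are simply ignored."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]            # diff[i, j] = pts[j] - pts[i]
    dist = np.hypot(diff[..., 0], diff[..., 1])
    dist = dist / dist[dist > 0].mean()                 # scale normalisation
    angle = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(r_inner), np.log10(r_outer), n_r + 1)
    t_edges = np.linspace(0.0, 2 * np.pi, n_theta + 1)
    descriptors = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i                        # exclude the point itself
        hist, _, _ = np.histogram2d(dist[i, mask], angle[i, mask],
                                    bins=[r_edges, t_edges])
        descriptors[i] = hist
    return descriptors.reshape(n, -1)

# Matching costs between two contours can then be taken as distances between
# descriptors, and the point-to-point assignment optimised, e.g. formulated as
# a QAP and solved with an ACO metaheuristic.
```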

    Matching sets of features for efficient retrieval and recognition

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 145-153). In numerous domains it is useful to represent a single example by the collection of local features or parts that comprise it. In computer vision in particular, local image features are a powerful way to describe images of objects and scenes. Their stability under variable image conditions is critical for success in a wide range of recognition and retrieval applications. However, many conventional similarity measures and machine learning algorithms assume vector inputs. Comparing and learning from images represented by sets of local features is therefore challenging, since each set may vary in cardinality and its elements lack a meaningful ordering. In this thesis I present computationally efficient techniques to handle comparisons, learning, and indexing with examples represented by sets of features. The primary goal of this research is to design and demonstrate algorithms that can effectively accommodate this useful representation in a way that scales with both the representation size and the number of images available for indexing or learning. I introduce the pyramid match algorithm, which efficiently forms an implicit partial matching between two sets of feature vectors. The matching has linear time complexity, naturally forms a Mercer kernel, and is robust to clutter or outlier features, a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. I provide bounds on the expected error relative to the optimal partial matching. For very large databases, even extremely efficient pairwise comparisons may not offer adequately responsive query times. I show how to perform sub-linear time retrievals under the matching measure with randomized hashing techniques, even when input sets have varying numbers of features. My results are focused on several important vision tasks, including applications to content-based image retrieval, discriminative classification for object recognition, kernel regression, and unsupervised learning of categories. I show how the dramatic increase in performance enables accurate and flexible image comparisons to be made on large-scale data sets, and removes the need to artificially limit the number of local descriptions used per image when learning visual categories. by Kristen Lorraine Grauman. Ph.D.
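
    The following is a minimal, unoptimised sketch of the pyramid match idea described above (multi-resolution histograms whose intersections implicitly count partial matches); the number of levels and the integer-grid binning are simplifying assumptions, and the hashing and learning machinery are not shown.

```python
import numpy as np

def pyramid_match(X, Y, num_levels=5):
    """Approximate the optimal partial matching between two feature sets by
    intersecting multi-resolution histograms.  X and Y are (n, d) arrays of
    non-negative features; the bin side length doubles at every level, and
    matches that first appear at a level are weighted by 1 / 2**level so that
    coarser (less precise) matches count less."""
    score, prev_intersection = 0.0, 0.0
    for level in range(num_levels):
        cell = 2 ** level                               # bin side at this level
        hx, hy = {}, {}
        for hist, data in ((hx, X), (hy, Y)):
            for f in np.asarray(data, dtype=float):
                key = tuple((f // cell).astype(int))    # grid cell of this feature
                hist[key] = hist.get(key, 0) + 1
        intersection = sum(min(hx[k], hy.get(k, 0)) for k in hx)
        new_matches = intersection - prev_intersection  # matches formed at this level
        score += new_matches / (2 ** level)
        prev_intersection = intersection
    return score
```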

    Probabilistic approaches to matching and modelling shapes


    Low Complexity Image Recognition Algorithms for Handheld devices

    Content-Based Image Retrieval (CBIR) has gained a lot of interest over the last two decades. The need to search and retrieve images from databases, based on information (“features”) extracted from the image itself, is becoming increasingly important. CBIR can be useful for handheld image recognition devices in which the image to be recognized is acquired with a camera, and thus there is no additional metadata associated with it. However, most CBIR systems require substantial computation, preventing their use in handheld devices. In this PhD work, we have developed low-complexity algorithms for content-based retrieval of camera-acquired images on handheld devices. Two novel image retrieval algorithms, ‘Color Density Circular Crop’ (CDCC) and ‘DCT-Phase Match’ (DCTPM), are presented, along with a two-stage image retrieval algorithm that combines CDCC and DCTPM to achieve the low complexity required in handheld devices. The image recognition algorithms run on a handheld device over a large database with fast retrieval times, high accuracy and precision, and robustness to environmental variations. Three algorithms for Rotation, Scale, and Translation (RST) compensation of images were also developed in this PhD work to be used in conjunction with the two-stage image retrieval algorithm. The developed algorithms are implemented on a commercial fixed-point Digital Signal Processor (DSP) in a device called ‘PictoBar’, in the domain of Alternative and Augmentative Communication (AAC). The PictoBar is intended to be used as an electronic aid for disabled people, in areas such as speech rehabilitation therapy and education. It is able to recognize pictograms and pictures contained in a database; once an image is found in the database, a corresponding speech message is played. A methodology for optimal implementation and systematic testing of the developed image retrieval algorithms on a fixed-point DSP is also established as part of this PhD work.
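
    The abstract does not detail the CDCC or DCTPM algorithms, so the sketch below only illustrates the generic idea of comparing images by the signs (“phase”) of their low-frequency DCT coefficients; the function name and the number of retained coefficients are assumptions, not the thesis' method.

```python
import numpy as np
from scipy.fft import dctn

def dct_sign_similarity(img_a, img_b, keep=16):
    """Compare two equally sized grayscale images by the signs ("phase") of
    their low-frequency 2-D DCT coefficients; returns the fraction of the
    retained coefficients whose signs agree (1.0 = identical sign pattern)."""
    A = dctn(np.asarray(img_a, dtype=float), norm="ortho")[:keep, :keep]
    B = dctn(np.asarray(img_b, dtype=float), norm="ortho")[:keep, :keep]
    signs_a = np.sign(A).ravel()[1:]    # drop the DC term (overall brightness)
    signs_b = np.sign(B).ravel()[1:]
    return float(np.mean(signs_a == signs_b))
```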

    An Unsupervised Cluster: Learning Water Customer Behavior Using Variation of Information on a Reconstructed Phase Space

    The unsupervised clustering algorithm described in this dissertation addresses the need to divide a population of water utility customers into groups based on their similarities and differences, using only the measured flow data collected by water meters. After clustering, the groups represent customers with similar consumption behavior patterns and provide insight into ‘normal’ and ‘unusual’ customer behavior patterns. This research focuses upon individually metered water utility customers and includes both residential and commercial customer accounts serviced by utilities within North America. The contributions of this dissertation not only represent a novel academic work, but also solve a practical problem for the utility industry. This dissertation introduces a method of agglomerative clustering using information-theoretic distance measures on Gaussian mixture models within a reconstructed phase space. The clustering method accommodates a utility’s limited human, financial, computational, and environmental resources. The proposed weighted variation of information distance measure for comparing Gaussian mixture models places emphasis upon behaviors whose statistical distributions are compact over those with large variation, and contributes a novel addition to existing comparison options.
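
    As an illustration of the modelling steps named above, the sketch below reconstructs a phase space by time-delay embedding and fits a Gaussian mixture model to it; the embedding dimension, lag and component count are assumptions, and the dissertation's weighted variation of information measure for comparing the fitted models is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(series, dim=2, lag=1):
    """Time-delay (Takens) embedding: map a 1-D metered-flow series into a
    `dim`-dimensional reconstructed phase space."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag: i * lag + n] for i in range(dim)])

def model_customer(flow, dim=2, lag=1, components=3, seed=0):
    """Fit a Gaussian mixture model to one customer's reconstructed phase
    space; the fitted models are what the clustering then compares."""
    X = delay_embed(flow, dim=dim, lag=lag)
    return GaussianMixture(n_components=components, random_state=seed).fit(X)
```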

    Multiple human tracking in RGB-depth data: A survey

    © The Institution of Engineering and Technology. Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-depth devices has led to many new approaches to MHT, and many of these integrate colour and depth cues to improve each stage of the process. In this survey, the authors present the common processing pipeline of these methods and review their methodology based (a) on how they implement this pipeline and (b) on what role depth plays within each stage of it. They identify and introduce existing, publicly available, benchmark datasets and software resources that fuse colour and depth data for MHT. Finally, they present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.

    A SDK improvement towards gesture support

    Human-Computer Interaction has been one of the main focuses of the technological community, especially the Natural User Interfaces (NUI) field of research, as, since the launch of the Kinect sensor, the goal of achieving fully natural interfaces has come much closer to reality. Taking advantage of these conditions, the following research work proposes to compute the hand skeleton in order to recognize sign language shapes. The proposed solution uses the Kinect sensor to achieve a good segmentation, and image analysis algorithms to extend the skeleton through the extraction of high-level features. In order to recognize complex hand shapes, the current research work proposes a redefinition of the hand contour that makes it invariant to translation, rotation and scaling operations, together with a set of tools to achieve good recognition. To validate the proposed solution, the Kinect Software Development Kit was extended to give developers access to the new set of inferred points, and a template-matching-based platform that uses the contour to define the hand shape was created. This prototype was tested under a set of predefined conditions, showed a good success ratio, and proved suitable for real-time scenarios.
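
    As one standard way to obtain the invariances mentioned above (not necessarily the thesis' exact contour redefinition), the sketch below builds a translation-, rotation- and scale-invariant signature from the centroid-distance function of a contour and matches it against stored templates; the sample count and distance metric are assumptions.

```python
import numpy as np

def invariant_contour_signature(contour, n_samples=64):
    """Signature of a closed contour built from the centroid-distance
    function: subtracting the centroid removes translation, distances to the
    centroid are unchanged by rotation, the magnitude of the Fourier spectrum
    removes the dependence on the starting point, and dividing by the DC
    component removes scale."""
    pts = np.asarray(contour, dtype=float)
    idx = np.linspace(0, len(pts) - 1, n_samples).astype(int)   # crude resampling
    pts = pts[idx]
    radii = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    spectrum = np.abs(np.fft.fft(radii))
    return spectrum[1:n_samples // 2] / spectrum[0]

def match_shape(query_signature, template_signatures):
    """Template matching: index of the stored signature closest to the query."""
    distances = [np.linalg.norm(query_signature - t) for t in template_signatures]
    return int(np.argmin(distances))
```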

    Ideal Reference Point in Planning and Control for Automated Car-Like Vehicles

    The choice of the reference point in automated vehicles impacts the vehicle's driving behavior. However, this influence is often not considered for planning and control tasks. To find out where the reference point should best be located, we first consider its position to be ideal if the needed lane width on the left and right side of the planned path is equal when cornering with constant curvature. For constantly curved paths we derive the ideal reference point as a function of the curvature, using the kinematics of a slip-angle-free bicycle model. For non-stationary cornering, we analyze different maneuvers and finally select the reference point on the front axle. Utilizing this knowledge, the extent of a forward-moving vehicle can be reduced to a point model, which does not require the orientation of the vehicle. This enables a simple and still promising approach to collision checking, in which the vehicle's needed space is approximated by only one circle around the reference point. Finally, we analyze the influence of the reference point on a lateral feed-forward controller. This confirms the previously chosen reference point on the front axle with respect to the equally distributed needed lane width, and we therefore recommend its use.
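
    A minimal sketch of the single-circle collision check built on a front-axle reference point, as described above; the pose convention, wheelbase parameter and circle radius are assumptions rather than values from the paper.

```python
import numpy as np

def front_axle_point(x_rear, y_rear, yaw, wheelbase):
    """Reference point on the front axle, given the rear-axle pose of a
    slip-angle-free kinematic bicycle model."""
    return (x_rear + wheelbase * np.cos(yaw),
            y_rear + wheelbase * np.sin(yaw))

def collides(ref_xy, obstacle_points, radius):
    """Single-circle collision check: the vehicle's needed space is
    approximated by one circle of the given radius around the reference
    point, so no vehicle orientation is required."""
    obstacles = np.atleast_2d(np.asarray(obstacle_points, dtype=float))
    d = np.hypot(obstacles[:, 0] - ref_xy[0], obstacles[:, 1] - ref_xy[1])
    return bool(np.any(d < radius))
```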