
    Feature-Based Correspondences to Infer the Location of Anatomical Landmarks

    A methodology has been developed for automatically determining inter-image correspondences between cliques of features extracted from a reference and a query image. Cliques consist of up to three features, and correspondences between them are determined via a hierarchy of similarity metrics based on the inherent properties of the features and the geometric relationships between those features. As opposed to approaches that determine correspondences solely by voxel intensity, features that also include shape description are used. Specifically, medial-based features are employed because they are sparse compared to the number of image voxels and can be automatically extracted from the image. The correspondence framework has been extended to automatically estimate the location of anatomical landmarks in the query image by adding landmarks to the cliques. Anatomical landmark locations are then inferred from the reference image by maximizing landmark correspondences. The ability to infer landmark locations has provided a means to validate the correspondence framework in the presence of structural variation between images. Moreover, automated landmark estimation imparts the user with anatomical information and can hypothetically be used to initialize and constrain the search space of segmentation and registration methods. Methods developed in this dissertation were applied to simulated MRI brain images, synthetic images, and images constructed from several variations of a parametric model. Results indicate that the methods are invariant to global translation and rotation and can operate in the presence of structural variation between images. The automated landmark placement method was shown to be accurate as compared to ground truth that was established both parametrically and manually. It is envisioned that these automated methods could prove useful for alleviating time-consuming and tedious tasks in applications that currently require manual input, and for eliminating intra-user subjectivity
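    As a rough illustration of the clique idea, the sketch below scores correspondences between cliques of three point features by comparing their sorted pairwise distances, a quantity invariant to global translation and rotation, as the abstract requires. The function names and the simple distance-based metric are illustrative assumptions, not the dissertation's actual hierarchy of similarity metrics (which also uses medial-feature properties).

```python
import itertools
import math

def pairwise_distances(clique):
    """Sorted distances between all feature pairs in a clique of (x, y, z) points."""
    return sorted(math.dist(a, b) for a, b in itertools.combinations(clique, 2))

def clique_similarity(ref_clique, query_clique):
    """Hypothetical geometric similarity: compare sorted inter-feature
    distances, which are invariant to global translation and rotation."""
    d_ref = pairwise_distances(ref_clique)
    d_qry = pairwise_distances(query_clique)
    return -sum(abs(a - b) for a, b in zip(d_ref, d_qry))

def best_correspondence(ref_features, query_features, k=3):
    """Exhaustively score all k-cliques and return the best-matching pair."""
    best, best_score = None, -float("inf")
    for rc in itertools.combinations(ref_features, k):
        for qc in itertools.combinations(query_features, k):
            s = clique_similarity(rc, qc)
            if s > best_score:
                best, best_score = (rc, qc), s
    return best, best_score
```

    A translated copy of a clique scores a perfect (zero) dissimilarity under this metric, matching the translation-invariance claim; the exhaustive search stands in for whatever pruning the dissertation uses.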

    A graph theoretic approach to scene matching

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors
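    The association-graph construction and relaxation update described above can be sketched as follows. The merit and compatibility functions are placeholders for the paper's actual weights, and the averaging update is an assumed form of fuzzy relaxation, not the paper's exact scheme.

```python
import itertools

def build_association_graph(regions, objects, merit, compat):
    """Nodes are (region, object) mappings weighted by an initial merit;
    arcs connect consistent mappings (no shared region or object) and
    carry a compatibility weight."""
    nodes = {(r, o): merit(r, o) for r in regions for o in objects}
    arcs = {}
    for m1, m2 in itertools.combinations(nodes, 2):
        if m1[0] != m2[0] and m1[1] != m2[1]:  # consistent mappings only
            arcs[(m1, m2)] = compat(m1, m2)
    return nodes, arcs

def relax(nodes, arcs, iterations=10):
    """Assumed relaxation step: pull each node weight toward the average
    support it receives from compatible neighbouring mappings."""
    for _ in range(iterations):
        updated = {}
        for n, w in nodes.items():
            support = [c * nodes[m2 if m1 == n else m1]
                       for (m1, m2), c in arcs.items() if n in (m1, m2)]
            avg = sum(support) / len(support) if support else 0.0
            updated[n] = 0.5 * (w + avg)
        nodes = updated
    return nodes
```

    After relaxation, mutually supporting mappings keep high weights while inconsistent ones decay, which is what makes the subsequent clique search cheaper.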

    LANDSAT-D investigations in snow hydrology

    Work undertaken during the contract and its results are described. Many of the results from this investigation are available in journal or conference proceedings literature - published, accepted for publication, or submitted for publication. For these the reference and the abstract are given. Those results that have not yet been submitted separately for publication are described in detail. Accomplishments during the contract period are summarized as follows: (1) analysis of the snow reflectance characteristics of the LANDSAT Thematic Mapper, including spectral suitability, dynamic range, and spectral resolution; (2) development of a variety of atmospheric models for use with LANDSAT Thematic Mapper data. These include a simple but fast two-stream approximation for inhomogeneous atmospheres over irregular surfaces, and a doubling model for calculation of the angular distribution of spectral radiance at any level in a plane-parallel atmosphere; (3) incorporation of digital elevation data into the atmospheric models and into the analysis of the satellite data; and (4) textural analysis of the spatial distribution of snow cover

    Efficient Point-Cloud Processing with Primitive Shapes

    This thesis presents methods for efficient processing of point-clouds based on primitive shapes. The set of considered simple parametric shapes consists of planes, spheres, cylinders, cones and tori. The algorithms developed in this work are targeted at scenarios in which the occurring surfaces can be well represented by this set of shape primitives, which is the case in many man-made environments such as industrial compounds, cities or building interiors. A primitive subsumes a set of corresponding points in the point-cloud and serves as a proxy for them. Therefore primitives are well suited to directly address the unavoidable oversampling of large point-clouds and lay the foundation for efficient point-cloud processing algorithms. The first contribution of this thesis is a novel shape primitive detection method that is efficient even on very large and noisy point-clouds. Several applications for the detected primitives are subsequently explored, resulting in a set of novel algorithms for primitive-based point-cloud processing in the areas of compression, recognition and completion. Each of these applications directly exploits and benefits from one or more of the detected primitives' properties such as approximation, abstraction, segmentation and continuability
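    The primitive-detection step can be illustrated with a RANSAC-style plane detector, the simplest member of the plane/sphere/cylinder/cone/torus family. This is a generic sketch under assumed parameters (`iters`, `tol`), not the thesis's more efficient algorithm.

```python
import random
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Sample 3 points, fit a plane through them, count points within
    `tol` of the plane, and keep the plane with the most inliers."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers, best_plane = np.array([], dtype=int), None
    for _ in range(iters):
        i, j, k = rng.sample(range(len(pts)), 3)
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        dist = np.abs((pts - pts[i]) @ n)  # point-to-plane distances
        inliers = np.nonzero(dist < tol)[0]
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (n, pts[i])
    return best_plane, best_inliers
```

    The returned inlier index set is exactly the "sub-point-cloud subsumed by a primitive" idea: the plane acts as a proxy for its inliers, which is what later compression and completion stages exploit.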

    Machine vision systems for on line quality monitoring in industrial applications


    Confocal Microscopy and Three-Dimensional Reconstruction of Thick, Transparent, Vital Tissue

    The three-dimensional visualization of the 400 micron thick, transparent, in situ cornea is described to demonstrate the use of confocal light microscopy for noninvasive imaging of living cells and thick tissues in their normal, vital conditions. Specimen preparation and physiological stability, as well as light attenuation corrections, are critical to data acquisition. The technique to provide mechanical stability of the specimen during the duration of the image acquisition is explained. A laser scanning confocal light microscope (LSCM) was used to obtain optical serial sections from rabbit eyes that were freshly removed and placed in a physiological Ringer's solution. This study demonstrates the capability of the confocal light microscope to obtain a series of high contrast images, with a depth resolution of one micron, across the full thickness of living, transparent tissue. The problems of nonisotropic sampling and the limited eight-bit dynamic range are discussed. The three-dimensional reconstructions were obtained by computer graphics using the volume visualization projection technique. The three-dimensional visualization of the cornea in the in situ eye is presented as an example of image understanding of thick, viable biological cells and tissues. Finally, the criterion of image fidelity is explained. The techniques of confocal light microscopy, with enhanced lateral and axial resolution, improved image contrast, and volume visualization, provide microscopists with new techniques for the observation of vital cells and tissues, both in vivo and in vitro
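    The light-attenuation correction mentioned above can be sketched as a depth-dependent exponential gain on the z-stack, paired with a maximum-intensity projection as a minimal stand-in for the volume-visualization projection step. The Beer-Lambert-style model and the coefficient `mu` are assumptions for illustration, not the study's calibrated correction.

```python
import numpy as np

def correct_attenuation(stack, mu, dz=1.0):
    """Compensate depth-dependent signal loss in a confocal z-stack by
    scaling each optical section with an exponential gain, assuming
    attenuation exp(-mu * depth). `stack` has shape (z, y, x); section 0
    is nearest the objective."""
    stack = np.asarray(stack, dtype=float)
    depths = np.arange(stack.shape[0]) * dz
    gain = np.exp(mu * depths)  # undo the assumed exp(-mu * depth) loss
    return stack * gain[:, None, None]

def max_intensity_projection(stack, axis=0):
    """A simple volume-visualization projection: brightest voxel along an axis."""
    return np.asarray(stack).max(axis=axis)
```

    With one-micron sections (`dz=1.0`), a 400-section corneal stack would be corrected section by section before projection; the eight-bit dynamic range the abstract mentions is why such gain correction must be applied with care to avoid clipping.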

    Feature-based hybrid inspection planning for complex mechanical parts

    Globalization and emerging new powers in the manufacturing world are among the many challenges major manufacturing enterprises are facing. This has resulted in increased alternatives to satisfy customers' growing needs regarding products' aesthetic and functional requirements. The complexity of part designs and engineering specifications to satisfy such needs often requires better use of advanced and more accurate tools to achieve good quality. Inspection is a crucial manufacturing function that should be further improved to cope with such challenges. Intelligent planning for inspection of parts with complex geometric shapes and free-form surfaces using contact or non-contact devices is still a major challenge. Research in segmentation and localization techniques should also enable inspection systems to utilize modern measurement technologies capable of collecting huge numbers of measured points. Advanced digitization tools can be classified as contact or non-contact sensors. The purpose of this thesis is to develop a hybrid inspection planning system that benefits from the advantages of both techniques. Moreover, the minimization of deviation of the measured part from the original CAD model is not the only characteristic that should be considered when implementing the localization process in order to accept or reject the part; geometric tolerances must also be considered. A segmentation technique that deals directly with the individual points is a necessary step in the developed inspection system, where the output is the actual measured points, not a tessellated model as commonly implemented by current segmentation tools. The contribution of this work is threefold. First, a knowledge-based system was developed for selecting the most suitable sensor, using an inspection-specific feature taxonomy in the form of a 3D matrix where each cell includes the corresponding knowledge rules, and for generating inspection tasks.
    A Traveling Salesperson Problem (TSP) formulation has been applied for sequencing these hybrid inspection tasks. A novel region-based segmentation algorithm was developed which deals directly with the measured point cloud and generates sub-point clouds, each of which represents a feature to be inspected and includes the original measured points. Finally, a new tolerance-based localization algorithm was developed to verify the functional requirements and was applied and tested using form tolerance specifications. This research enhances existing inspection planning systems for complex mechanical parts with a hybrid inspection planning model. The main benefit of the developed segmentation and tolerance-based localization algorithms is improved inspection decisions, so that good parts are not rejected due to misleading results from currently available localization techniques. The better and more accurate inspection decisions achieved will lead to less scrap, which, in turn, will reduce the product cost and improve the company's potential in the market
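    A minimal sketch of TSP-based task sequencing, using a nearest-neighbour heuristic rather than whatever solver the thesis employed; the `tasks` mapping of task id to probe location is a hypothetical stand-in for the generated hybrid inspection tasks.

```python
import math

def sequence_tasks(tasks, start=0):
    """Greedy TSP-style ordering: from the current inspection location,
    always visit the nearest unvisited task next. `tasks` maps a task id
    to an (x, y, z) probe location."""
    remaining = set(tasks) - {start}
    order, current = [start], start
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(tasks[current], tasks[t]))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order
```

    Nearest-neighbour gives no optimality guarantee, but it illustrates the objective: minimize travel between inspection features so that sensor repositioning time does not dominate the measurement cycle.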

    A survey of dextrous manipulation

    The development of mechanical end effectors capable of dextrous manipulation is a rapidly growing and quite successful field of research. It has in some sense put the focus on control issues, in particular, how to control these remarkably humanlike manipulators to perform the deft movement that we take for granted in the human hand. The kinematic and control issues surrounding manipulation research are clouded by more basic concerns such as: what is the goal of a manipulation system, is the anthropomorphic or functional design methodology appropriate, and to what degree does the control of the manipulator depend on other sensory systems. This paper examines the potential of creating a general purpose, anthropomorphically motivated, dextrous manipulation system. The discussion will focus on features of the human hand that permit its general usefulness as a manipulator. A survey of machinery designed to emulate these capabilities is presented. Finally, the tasks of grasping and manipulation are examined from the control standpoint to suggest a control paradigm which is descriptive, yet flexible and computationally efficient

    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics

    Context-aware learning for robot-assisted endovascular catheterization

    Endovascular intervention has become a mainstream treatment for cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave arrangement also removes radiation exposure for the operator. However, the integration of robotic systems into the current surgical workflow is still debatable, since repetitive, easy tasks gain little from robotic teleoperation. Current systems offer very low autonomy; autonomous features could bring further benefits such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities. This research proposes frameworks for automated catheterisation based on different machine learning algorithms, including Learning-from-Demonstration, Reinforcement Learning, and Imitation Learning. These frameworks focus on integrating context into the process of skill learning, hence achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation in the form of autonomous task planning and self-optimization with clinically relevant factors, and motivate the design of intelligent, intuitive, and collaborative robots under non-ionizing imaging modalities