3,182 research outputs found

    Image processing for plastic surgery planning

    This thesis presents image processing tools for plastic surgery planning. In particular, it presents a novel method that combines local and global context in a probabilistic relaxation framework to identify the cephalometric landmarks used in maxillofacial plastic surgery. It also presents a method that exploits global and local symmetry to identify abnormalities in frontal CT images of the human body. The proposed methodologies are evaluated on clinical data supplied by collaborating plastic surgeons.
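    The abstract does not spell out the relaxation update itself; as a minimal sketch of classic probabilistic relaxation labeling in the spirit described (iteratively reweighting each candidate's label probabilities by neighbour compatibilities), where the initialization and the compatibility model are assumptions:

```python
import numpy as np

def relaxation_labeling(p, r, n_iters=50):
    """Classic probabilistic relaxation labeling update.

    p : (N, L) initial probability of each of N objects (e.g. candidate
        landmark locations) taking each of L labels; rows sum to 1.
    r : (N, N, L, L) compatibility of object i taking label a with
        object j taking label b, encoding local/global context.
    """
    for _ in range(n_iters):
        # Support for label a at object i, accumulated from all neighbours.
        q = np.einsum('ijab,jb->ia', r, p)
        p = p * q
        p /= p.sum(axis=1, keepdims=True)  # renormalize per object
    return p
```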

    Shape/image registration for medical imaging: novel algorithms and applications

    This dissertation considers two categories of registration approaches, shape registration and image registration, and their applications in medical imaging. Shape registration is an important problem in computer vision, computer graphics, and medical imaging, and has been handled in different ways in applications such as shape-based segmentation, shape recognition, and tracking. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors; many image processing applications, such as remote sensing, fusion of medical images, and computer-aided surgery, require it. This study deals with two applications in medical image analysis. The first is shape-based segmentation of the human vertebral bodies (VBs). The vertebra consists of the VB, the spinous process, and other anatomical regions; the spinous process, pedicles, and ribs should not be included in bone mineral density (BMD) measurements. VB segmentation is not an easy task, since the ribs have similar gray-level values. This dissertation investigates two segmentation approaches, both within variational shape-based segmentation frameworks. The first approach deals with the two-dimensional (2D) case: it starts by obtaining an initial segmentation using intensity/spatial interaction models, then registers a shape model to the image domain, and finally obtains the optimal segmentation by optimizing an energy functional that integrates the shape model with the intensity information. The second is a 3D simultaneous segmentation and registration approach: intensity information is handled by embedding a Willmore flow into the level-set segmentation framework, and shape variations are estimated using a new probabilistic distance model. The experimental results show that the segmentation accuracy of the framework is much higher than that of the alternatives; applications to BMD measurements of the vertebral body illustrate the accuracy of the proposed approach. The second application is in computer-aided surgery, specifically ankle fusion surgery. The long-term goal of this work is to determine the proper size and orientation of the screws used for fusing the bones together, and to localize the best bone region in which to anchor these screws. To achieve these goals, 2D-3D registration is introduced. Its role is to improve the surgical procedure in terms of time and accuracy and to greatly reduce the need for repeated surgeries, thus saving patients time, expense, and trauma.
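    The dissertation's exact energy functional is not reproduced in this abstract; a generic variational form consistent with the description (an intensity-driven term plus a registered shape prior; the Chan-Vese-style intensity term, the symbols, and the weight $\lambda$ are all assumptions) might read:

```latex
E(\phi, T) =
  \int_\Omega \Big[ (I - c_{\mathrm{in}})^2 \, H(\phi)
                  + (I - c_{\mathrm{out}})^2 \big(1 - H(\phi)\big) \Big] \, dx
  \;+\; \lambda \int_\Omega \big( \phi(x) - \psi(T(x)) \big)^2 \, dx
```

    Here $\phi$ is the level-set function, $H$ the Heaviside function, $c_{\mathrm{in}}$ and $c_{\mathrm{out}}$ the mean intensities inside and outside the contour, $\psi$ the shape model's level set, and $T$ the registration transform; minimizing jointly over $\phi$ and $T$ is what couples segmentation with registration.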

    Fuzzy cognitive maps for stereovision matching

    This paper outlines a method for solving the stereovision matching problem using edge segments as the primitives. In stereovision matching the following constraints are commonly used: epipolar, similarity, smoothness, ordering, and uniqueness. We propose a new matching strategy under a fuzzy context onto which such constraints are mapped. The fuzzy context integrates both Fuzzy Clustering and Fuzzy Cognitive Maps. For this purpose, a network of concepts (nodes) is designed in which each concept represents a pair of primitives to be matched, and each concept has an associated fuzzy value that determines the degree of correspondence. The goal is to achieve high performance in terms of correct matches. The main findings of this paper lie in the use of the fuzzy context to build the network of concepts onto which the matching constraints are mapped. Initially, each concept value is loaded via Fuzzy Clustering and then updated within the Fuzzy Cognitive Maps framework; this updating is achieved through the influence of the remaining neighboring concepts until a good global matching solution is reached. Under this fuzzy approach we obtain both quantitative and qualitative matching correspondences. The method works as a relaxation matching approach, and its performance is illustrated by comparative analysis against some existing global matching methods.
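    The paper's exact activation function and constraint weighting are not given here; a minimal Fuzzy Cognitive Map style update in the described spirit (concept values nudged by signed neighbour influences until they settle), where the sigmoid activation, the weight encoding, and the iteration count are assumptions:

```python
import numpy as np

def fcm_update(values, weights, n_iters=30):
    """Fuzzy Cognitive Map style update for stereo-matching concepts.

    values  : (N,) initial fuzzy matching degrees in [0, 1], e.g. seeded
              by fuzzy clustering of similarity features.
    weights : (N, N) signed influence of concept j on concept i, encoding
              constraints such as smoothness (+) and ordering/uniqueness
              violations (-).
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(n_iters):
        # Each concept is driven by its own value plus neighbour influences.
        values = sigmoid(values + weights @ values)
    return values
```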

    Exploring novel ways to improve the MRI-based image segmentation in the head region

    Accurate electron density information is extremely important in positron emission tomography (PET) attenuation correction (AC) and radiotherapy treatment planning, especially in the head region, as many interesting brain regions are located near the skull. Achieving good electron density information for bone is not trivial when magnetic resonance imaging (MRI) is used as the source for the anatomical structures of the head, since many MRI sequences show bone in a similar fashion to air. Various atlas-based, emission-based, and segmentation-based methods have been explored to address this problem. In this PhD project, a pipeline for MRI-based substitute CT (sCT) creation is developed, along with novel ways to further improve the quality of bone delineation in the head region. First, a robust sCT pipeline is developed and validated; this allows modular improvements to the various aspects of head sCT in later publications. The MRI image is segmented into different tissue classes, and the final sCT image is constructed from these. The resulting sCT images had good image quality with small, non-systematic errors. Time-of-flight (TOF) information improves the accuracy of PET reconstruction. The effect of TOF with different AC maps is evaluated, on both the subject and brain-region level, to substantiate the need for accurate AC maps even for a TOF-capable system: while TOF information is helpful, it cannot negate the effect of AC map quality. The sinus region is problematic in MRI-based sCT creation, as its air is easily mis-segmented as bone. Two new methods for addressing AC in the sinus region are presented. One method finds the cuboid that covers the largest area of air incorrectly assigned as bone and then corrects the erroneous attenuation coefficients. The other method places the sinus-covering cuboid in a normalized space, converts it back to each subject's individual space, and then calculates the attenuation coefficients. Both methods improve the alignment of sCT and CT images. Finally, the possibility of improving the quality of the bone segmentation by utilizing a random forest (RF) machine learning model is explored. The RF model is used to estimate a bone likelihood, which is then used to enhance the bone segmentation and to model the attenuation coefficient. The machine learning model improves the bone segmentation and reduces the error between sCT and CT images.
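    The thesis's feature set and RF configuration are not given in this abstract; a minimal sketch of a per-voxel bone-likelihood model of the kind described, assuming hypothetical MRI-derived features and off-the-shelf scikit-learn defaults:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-voxel features from the MRI volume(s): intensities,
# local texture, spatial position, etc. Shapes: X (n_voxels, n_features),
# y (n_voxels,) with 1 = bone, 0 = non-bone from a co-registered CT.
def train_bone_likelihood(X, y):
    rf = RandomForestClassifier(n_estimators=100, max_depth=12, n_jobs=-1)
    rf.fit(X, y)
    return rf

def bone_likelihood_map(rf, X_new, volume_shape):
    # Probability of the "bone" class per voxel, reshaped into the volume;
    # this map can then refine the bone segmentation or drive AC modelling.
    prob = rf.predict_proba(X_new)[:, 1]
    return prob.reshape(volume_shape)
```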

    Technique(s) for Spike-Sorting

    Spike-sorting techniques attempt to classify a series of noisy electrical waveforms according to the identity of the neurons that generated them. Existing techniques perform this classification while ignoring several properties of actual neurons that could ultimately improve classification performance. In this chapter, after illustrating the spike-sorting problem with real data, we propose a more realistic spike train generation model. It incorporates both a description of "non-trivial" (i.e., non-Poisson) neuronal discharge statistics and a description of spike waveform dynamics (e.g., the event amplitude decays for short inter-spike intervals). We show that this spike train generation model is analogous to a one-dimensional Potts spin-glass model. We can therefore use the computational methods developed in fields where Potts models are extensively used. These methods are based on the construction of a Markov chain in the space of model parameters and spike train configurations, where a configuration is defined by specifying a neuron of origin for each spike. This Markov chain is built such that its unique stationary density is the posterior density of model parameters and configurations given the observed data. A Monte Carlo simulation of the Markov chain is then used to estimate the posterior density. The theoretical background on Markov chains is provided, and the way to build the transition matrix of the Markov chain is illustrated with a simple but realistic model of data generation. Simulated data are used to illustrate the performance of the method and to show that it can easily cope with neurons generating spikes with highly dynamic waveforms and/or generating strongly overlapping clusters on Wilson plots.
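    The chapter's actual transition matrix is not reproduced in this abstract; a toy Metropolis sampler over spike configurations in the same spirit, where the `log_posterior` function (the posterior of a configuration given the data and current parameters) and the single-spike relabelling proposal are assumptions:

```python
import numpy as np

def metropolis_spike_sorting(log_posterior, init_config, n_neurons,
                             n_sweeps=1000, rng=None):
    """Toy Metropolis sampler over spike-train configurations.

    A configuration assigns a neuron of origin to each spike;
    log_posterior(config) returns its unnormalized log posterior.
    """
    if rng is None:
        rng = np.random.default_rng()
    config = init_config.copy()
    logp = log_posterior(config)
    for _ in range(n_sweeps):
        for i in range(len(config)):
            proposal = config.copy()
            proposal[i] = rng.integers(n_neurons)  # relabel one spike
            logp_new = log_posterior(proposal)
            # Accept with probability min(1, exp(logp_new - logp)).
            if np.log(rng.random()) < logp_new - logp:
                config, logp = proposal, logp_new
    return config
```

    In practice such a chain also sweeps over model parameters, and the stationary density is estimated from many post-burn-in samples rather than the final state alone.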

    A Relaxation Scheme for Mesh Locality in Computer Vision.

    Parallel processing has been considered the key to building the computer systems of the future and has become a mainstream subject in Computer Science. Computer Vision applications are computationally intensive and require parallel approaches to exploit their intrinsic parallelism. This research addresses this problem for low-level and intermediate-level vision problems. The contributions of this dissertation are a unified scheme based on probabilistic relaxation labeling that captures localities of image data, and the use of this scheme to develop efficient parallel algorithms for Computer Vision problems. We begin by investigating the problem of skeletonization. A pattern-matching technique that exhausts all possible interaction patterns between a pixel and its neighboring pixels captures the locality of this problem and leads to an efficient One-pass Parallel Asymmetric Thinning Algorithm (OPATA8). The use of 8-distance (chessboard distance) in this algorithm not only improves the quality of the resulting skeletons but also improves the efficiency of the computation. This new algorithm plays an important role in a hierarchical route planning system that extracts high-level topological information from cross-country mobility maps, greatly speeding up route search over large areas. We generalize the neighborhood interaction description method to more complicated applications such as edge detection and image restoration. The proposed probabilistic relaxation labeling scheme exploits parallelism by discovering local interactions in neighboring areas and by describing them effectively. The scheme consists of a transformation function and a dictionary construction method. The non-linear transformation function is derived from Markov Random Field theory and efficiently combines evidence from neighborhood interactions. The dictionary construction method provides an efficient way to encode these localities. A case study applies the scheme to the problem of edge detection. The relaxation step of this edge-detection algorithm greatly reduces noise effects, achieves better edge localization at features such as line ends and corners, and plays a crucial role in refining edge outputs. Experiments on both synthetic and natural images show that our algorithm converges quickly and is robust in noisy environments.
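    OPATA8 itself is not reproduced in this abstract; as a runnable illustration of the same class of algorithm, here is a standard Zhang-Suen two-subiteration parallel thinning pass over the 8-neighborhood (a well-known stand-in, not the dissertation's algorithm):

```python
import numpy as np

def zhang_suen_thinning(img):
    """Standard Zhang-Suen parallel thinning.

    img: 2D binary numpy array (1 = foreground). Each subiteration marks
    deletable pixels in parallel, then removes them all at once.
    """
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # 8-neighbors clockwise from north: N NE E SE S SW W NW
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)  # number of foreground neighbors
                    # number of 0 -> 1 transitions around the pixel
                    a = sum(p[k] == 0 and p[(k + 1) % 8] == 1
                            for k in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            if to_delete:
                changed = True
                for y, x in to_delete:
                    img[y, x] = 0
    return img
```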

    Computer vision

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and hardware and systems architecture are also discussed.
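    As a concrete instance of the recognition-by-correlation idea the survey mentions, a minimal normalized cross-correlation template match (the brute-force formulation; real systems use FFT-based or pyramid implementations):

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide a template over an image and return the NCC score map;
    the best match is at the location of the maximum coefficient."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y+th, x:x+tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * t_norm
            out[y, x] = (wc * t).sum() / denom if denom > 0 else 0.0
    return out
```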

    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    When capturing a non-cooperative satellite during an on-orbit satellite servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel are required in order to guide the vessel's robotic arm towards the satellite. The main objective of this research is the development of a machine-vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features (circles, lines, and points) and merges data from two monocular cameras and three algorithms (one per feature type) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (used to attach the satellite to its launch vehicle) and that the cameras are mounted on the robot end effector, which carries the capture tool used to grapple the satellite. The three algorithms are based on a feature extraction and detection scheme that provides the detected geometric features in the camera images belonging to the satellite, whose geometry is assumed to be known. Since the projection of a circle onto the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the center of the interface ring and its normal vector from the corresponding detected ellipse in the image plane. Sensor and data fusion is performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function based on the re-projection error of the detected features, which reduces the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system and gives the final estimated pose. The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end effector on which the cameras are mounted. Virtual and real experiments using a full-scale realistic satellite mockup and a 7-DOF robotic manipulator were performed to evaluate system performance, under two different lighting conditions and three scenarios, each with a different set of features. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm, and the total rotation error is between 2 deg and 3 deg when the target is 0.7 m from the end effector.
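    The thesis's cost function is not reproduced in this abstract; a minimal sketch of the re-projection error such a pose solver typically minimizes (standard pinhole model; the pose parametrization and feature correspondences are assumptions):

```python
import numpy as np

def reprojection_error(pose, points_3d, points_2d, K):
    """Sum of squared re-projection errors for a candidate satellite pose.

    pose      : (R, t), rotation matrix and translation of the satellite
                frame expressed in the camera frame.
    points_3d : (N, 3) model feature points in the satellite frame.
    points_2d : (N, 2) detected feature locations in the image.
    K         : (3, 3) camera intrinsic matrix.
    """
    R, t = pose
    cam = points_3d @ R.T + t          # transform into the camera frame
    proj = cam @ K.T                   # apply intrinsics
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division to pixels
    residuals = proj - points_2d
    return (residuals ** 2).sum()
```

    With a minimal pose parametrization, a conjugate-gradient routine such as scipy.optimize.minimize(..., method='CG') could drive this cost, with the result feeding the extended Kalman filter as a measurement.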

    Aeronautical Engineering: A special bibliography with indexes, supplement 67, February 1976

    This bibliography lists 341 reports, articles, and other documents introduced into the NASA scientific and technical information system in January 1976.

    Aeronautical Engineering: a Continuing Bibliography with Indexes (Supplement 244)

    This bibliography lists 465 reports, articles, and other documents introduced into the NASA scientific and technical information system in September 1989. Subject coverage includes: design, construction, and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.