
    A Stereo Vision Framework for 3-D Underwater Mosaicking


    Deep learning cardiac motion analysis for human survival prediction

    Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < 0.0001) for our model (C = 0.73, 95% CI: 0.68 - 0.78) than for the human benchmark (C = 0.59, 95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
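    The Cox partial likelihood loss mentioned above can be illustrated with a generic NumPy formulation; the sketch below follows the standard risk-set convention for right-censored data and is not the authors' 4Dsurvival implementation (function and argument names are illustrative).

```python
import numpy as np

def neg_cox_partial_log_likelihood(risk_scores, event_times, event_observed):
    """Negative Cox partial log-likelihood for right-censored outcomes.

    risk_scores    : predicted log-risk per patient (higher = higher hazard)
    event_times    : follow-up time for each patient
    event_observed : 1 if the event was observed, 0 if censored
    """
    order = np.argsort(-event_times)          # sort patients by descending time
    scores = risk_scores[order]
    events = event_observed[order]

    # For each patient, log-sum-exp of risk scores over the risk set
    # (all patients still under observation at that patient's event time).
    log_risk_set = np.logaddexp.accumulate(scores)

    # Only patients with an observed event contribute to the partial likelihood.
    log_lik = np.sum((scores - log_risk_set) * events)
    return -log_lik / max(events.sum(), 1.0)

# Example: three patients, the second one censored.
scores = np.array([0.2, -0.1, 1.3])
times = np.array([5.0, 2.0, 1.0])
observed = np.array([1, 0, 1])
loss = neg_cox_partial_log_likelihood(scores, times, observed)
```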

    Automatic Image to Model Alignment for Photo-Realistic Urban Model Reconstruction

    We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters, and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping.

    Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing-point-based calibration refinement and video-stream-based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post-processing steps such as automatic model enhancement and fly-through model visualization.

    Traditionally, photo-realistic urban reconstruction has been approached with purely image-based or model-based methods. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
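    As a rough illustration of estimating model-relative external camera parameters by minimizing an objective function, the sketch below refines a 6-DoF pose by minimizing reprojection error over assumed facade-corner correspondences. This is a generic formulation, not the paper's facade-extraction objective; the names (refine_camera_pose, K, etc.) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def reprojection_error(pose, model_points, image_points, K):
    """Sum of squared pixel errors for a 6-DoF camera pose.

    pose         : [rx, ry, rz, tx, ty, tz] rotation vector + translation
    model_points : (N, 3) facade corner points in model coordinates
    image_points : (N, 2) corresponding detected corners in the image
    K            : (3, 3) intrinsic camera matrix
    """
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    cam = model_points @ R.T + t            # model -> camera coordinates
    proj = cam @ K.T                        # apply intrinsics
    pix = proj[:, :2] / proj[:, 2:3]        # perspective division
    return np.sum((pix - image_points) ** 2)

def refine_camera_pose(initial_pose, model_points, image_points, K):
    # Start from the approximate external parameters and minimize the error.
    result = minimize(reprojection_error, initial_pose,
                      args=(model_points, image_points, K),
                      method="Nelder-Mead")
    return result.x
```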

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making process for maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time. Accordingly, their virtual representations need to be regularly updated in a timely manner to allow for accurate analyses and simulated results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. Therefore, this thesis proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is on the implementation of an implicit regularization method which imposes data-driven building regularity on the noisy boundaries of roof planes for reconstructing 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models. The refinement process is conducted in the framework of MDL combined with HAT. Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions by not only reconstructing 3D rooftop models accurately, but also by updating the models using multi-sensor data.
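    To illustrate the global optimization step mentioned above, here is a minimal, generic Metropolis-style simulated annealing loop over model hypotheses. It assumes user-supplied propose and energy functions (e.g. an MDL-style description-length score, lower is better) and is not the thesis' actual MCMC/SA implementation.

```python
import math
import random

def simulated_annealing(initial_model, propose, energy,
                        t_start=1.0, t_end=1e-3, steps=10000):
    """Generic Metropolis-style simulated annealing over model hypotheses.

    initial_model : starting rooftop-model hypothesis
    propose       : function returning a perturbed copy of a model
    energy        : scoring function (lower is better)
    """
    current, current_e = initial_model, energy(initial_model)
    best, best_e = current, current_e
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = propose(current)
        cand_e = energy(candidate)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-(cand_e - current_e) / t).
        if cand_e < current_e or random.random() < math.exp((current_e - cand_e) / t):
            current, current_e = candidate, cand_e
            if current_e < best_e:
                best, best_e = current, current_e
    return best
```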

    Segmentation of corpus callosum using diffusion tensor imaging: validation in patients with glioblastoma

    Abstract Background This paper presents a three-dimensional (3D) method for segmenting the corpus callosum in normal subjects and brain cancer patients with glioblastoma. Methods Nineteen patients with histologically confirmed treatment-naïve glioblastoma and eleven normal control subjects underwent DTI on a 3T scanner. Based on the information inherent in diffusion tensors, a similarity measure was proposed and used in the proposed algorithm. In this algorithm, the diffusion pattern of the corpus callosum was used as prior information. Subsequently, the corpus callosum was automatically divided into Witelson subdivisions. We simulated the potential rotation of the corpus callosum under tumor pressure and studied the reproducibility of the proposed segmentation method in such cases. Results Dice coefficients, estimated to compare automatic and manual segmentation results for Witelson subdivisions, ranged from 94% to 98% for control subjects and from 81% to 95% for tumor patients, illustrating the closeness of automatic and manual segmentations. Studying the effect of corpus callosum rotation by different Euler angles showed that although segmentation results were more sensitive to azimuth and elevation than to skew, rotations caused by brain tumors do not have major effects on the segmentation results. Conclusions The proposed method and similarity measure segment the corpus callosum by propagating a hyper-surface inside the structure (resulting in high sensitivity), without penetrating into neighboring fiber bundles (resulting in high specificity).
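    The Dice coefficient used above to compare automatic and manual segmentations is a standard overlap measure; a minimal NumPy version is sketched below (the mask names are illustrative and not taken from the authors' code).

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice similarity between two binary segmentation masks.

    Returns 2*|A ∩ B| / (|A| + |B|), i.e. 1.0 for perfect overlap.
    """
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    total = auto_mask.sum() + manual_mask.sum()
    return 2.0 * intersection / total if total > 0 else 1.0
```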

    WELDMAP: A Photogrammetric Suite Applied to the Inspection of Welds

    This paper presents a new tool for external quality control of welds using close-range photogrammetry. The main contribution of the developed approach is the automatic assessment of welds based on 3D photogrammetric models, enabling objective and accurate analyses through an in-house tool, WELDMAP, that was developed for this purpose. As a result, inspectors can perform external quality control of welds in a simple and efficient way without requiring visual inspections or external tools, thus avoiding the subjectivity and imprecision of the classical protocol. The tool was validated with a large dataset in laboratory tests as well as in real scenarios. Funding: Ministry of Science and Innovation, Government of Spain.