
    Simultaneous Multiple Surface Segmentation Using Deep Learning

    Full text link
    The task of automatically segmenting 3-D surfaces representing boundaries of objects is important for quantitative analysis of volumetric images, and plays a vital role in biomedical image analysis. Recently, graph-based methods with a global optimization property have been developed and optimized for various medical imaging applications. Despite their widespread use, these methods require human experts to design transformations, image features and surface smoothness priors, and require re-design for each different tissue, organ or imaging modality. Here, we propose a deep learning-based approach for segmentation of surfaces in volumetric medical images that learns the essential features and transformations from training data, without any human expert intervention. We employ a regional approach to learn the local surface profiles. The proposed approach was evaluated on simultaneous intraretinal layer segmentation of optical coherence tomography (OCT) images of normal retinas and retinas affected by age-related macular degeneration (AMD). The proposed approach was validated on 40 retina OCT volumes including 20 normal and 20 AMD subjects. The experiments showed a statistically significant improvement in accuracy for our approach compared to state-of-the-art graph-based optimal surface segmentation with convex priors (G-OSC). A single convolutional neural network (CNN) was used to learn the surfaces for both normal and diseased images. The mean unsigned surface positioning error obtained by the G-OSC method, 2.31 voxels (95% CI 2.02-2.60 voxels), was improved to 1.27 voxels (95% CI 1.14-1.40 voxels) using our new approach. On average, our approach takes 94.34 s and requires 95.35 MB of memory, which is much faster than the 2837.46 s and 6.87 GB of memory required by the G-OSC method on the same computer system. Comment: 8 pages
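
    No code accompanies this entry, so the following PyTorch sketch is only a hypothetical illustration of the regional idea described above: a small CNN that looks at a patch of OCT columns and regresses one depth value per column for each surface. The patch size, layer widths and two-surface output head are assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class SurfaceRegressor(nn.Module):
            """Toy CNN: maps a 2D OCT patch (depth x width) to one depth value
            per column for each of n_surfaces retinal surfaces. All sizes are
            illustrative assumptions, not the architecture used in the paper."""
            def __init__(self, n_surfaces=2, patch_width=32):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d((2, 1)),                     # pool depth, keep columns
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d((1, patch_width)),   # collapse depth, keep per-column resolution
                )
                self.head = nn.Conv2d(32, n_surfaces, kernel_size=1)  # depth per column

            def forward(self, x):                # x: (batch, 1, depth, patch_width)
                h = self.features(x)
                return self.head(h).squeeze(2)   # (batch, n_surfaces, patch_width)

        # Training would minimise, e.g., nn.L1Loss() between predicted and
        # manually traced surface positions, patch by patch.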

    Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene

    Full text link
    The goal of this paper is to take a single 2D image of a scene and recover the 3D structure in terms of a small set of factors: a layout representing the enclosing surfaces as well as a set of objects represented in terms of shape and pose. We propose a convolutional neural network-based approach to predict this representation and benchmark it on a large dataset of indoor scenes. Our experiments evaluate a number of practical design questions, demonstrate that we can infer this representation, and quantitatively and qualitatively demonstrate its merits compared to alternate representations. Comment: Project URL with code: https://shubhtuls.github.io/factored3
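
    The released code lives at the project URL above; the sketch below is only a hypothetical illustration of what a factored prediction might look like: a shared image encoder feeding separate heads for the scene layout, per-object shape codes and per-object poses. Every size and head here is a made-up placeholder, not the authors' model.

        import torch
        import torch.nn as nn

        class FactoredSceneNet(nn.Module):
            """Illustrative multi-head network for a factored scene representation:
            layout + per-object shape + per-object pose. Placeholder sizes only."""
            def __init__(self, feat_dim=256, n_objects=5, shape_dim=64):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, feat_dim), nn.ReLU(),
                )
                self.layout_head = nn.Linear(feat_dim, 6)                     # coarse room extents
                self.shape_head = nn.Linear(feat_dim, n_objects * shape_dim)  # per-object shape code
                self.pose_head = nn.Linear(feat_dim, n_objects * 7)           # translation + quaternion

            def forward(self, img):              # img: (batch, 3, H, W)
                f = self.encoder(img)
                return {"layout": self.layout_head(f),
                        "shapes": self.shape_head(f),
                        "poses": self.pose_head(f)}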

    3D Well-composed Polyhedral Complexes

    Full text link
    A binary three-dimensional (3D) image $I$ is well-composed if the boundary surface of its continuous analog is a 2D manifold. Since 3D images are often not well-composed, there are several voxel-based methods ("repairing" algorithms) for turning them into well-composed ones, but these methods either do not guarantee topological equivalence between the original image and its well-composed counterpart or involve sub-sampling the whole image. In this paper, we present a method to locally "repair" the cubical complex $Q(I)$ (embedded in $\mathbb{R}^3$) associated to $I$ to obtain a polyhedral complex $P(I)$ homotopy equivalent to $Q(I)$ such that the boundary of every connected component of $P(I)$ is a 2D manifold. The repair is performed via a new encoding of $P(I)$ in the form of a 3D grayscale image that allows efficient access to cells and their faces.
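
    The repair algorithm itself is not reproduced in this listing; as a small NumPy illustration of the property being repaired, the sketch below tests a binary 3D image for the standard critical configurations whose absence makes it well-composed. It follows the usual textbook definition rather than this paper's code.

        import numpy as np
        from itertools import product

        def is_well_composed(img):
            """True if the binary 3D image contains no critical configuration:
            C1 -- a 2x2 square (in any axis-aligned plane) whose two diagonal voxels
                  differ in value from the two anti-diagonal voxels;
            C2 -- a 2x2x2 cube in which exactly two diagonally opposite voxels are
                  foreground (or exactly two are background).
            Standard definition, for illustration; not the paper's repair code."""
            I = np.asarray(img, dtype=bool)

            def window(offsets, free_axis=None):
                # Sliding-window view of I, shifted by `offsets` on the listed axes.
                sl = [slice(0, n - 1) for n in I.shape]
                for ax, off in offsets.items():
                    sl[ax] = slice(off, I.shape[ax] - 1 + off)
                if free_axis is not None:
                    sl[free_axis] = slice(None)
                return I[tuple(sl)]

            # C1: all 2x2 squares in the three plane orientations.
            for a, b in ((0, 1), (0, 2), (1, 2)):
                other = ({0, 1, 2} - {a, b}).pop()
                p00 = window({a: 0, b: 0}, free_axis=other)
                p01 = window({a: 0, b: 1}, free_axis=other)
                p10 = window({a: 1, b: 0}, free_axis=other)
                p11 = window({a: 1, b: 1}, free_axis=other)
                if np.any((p00 == p11) & (p01 == p10) & (p00 != p01)):
                    return False

            # C2: all 2x2x2 cubes, checked for an isolated diagonal pair of either value.
            corners = {d: window(dict(enumerate(d))) for d in product((0, 1), repeat=3)}
            for cube in (corners, {d: ~v for d, v in corners.items()}):
                for d in product((0, 1), repeat=3):
                    opp = tuple(1 - x for x in d)
                    others = [cube[e] for e in cube if e not in (d, opp)]
                    if np.any(cube[d] & cube[opp] & ~np.logical_or.reduce(others)):
                        return False
            return True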

    Hybrid model for vascular tree structures

    Get PDF
    This paper proposes a new representation scheme for the cerebral blood vessels. The model provides information on the semantics of the vascular structure: the topological relationships between vessels and the labeling of vascular anomalies such as aneurysms and stenoses. In addition, the model keeps information about the inner surface geometry as well as the volume properties of the vascular map, i.e. the tissue density, the blood flow velocity and the vessel wall elasticity. The model can be constructed automatically, in a pre-process, from a set of segmented MRA images. Its memory requirements are optimized on the basis of the sparseness of the vascular structure. It allows fast queries and efficient traversal and navigation. Visualizations of the vessel surface can be performed at different levels of detail. Direct rendering of the volume is fast because the model provides a natural way to skip over empty data. The paper analyzes the memory requirements of the model along with the costs of the most important operations on it.
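
    No code accompanies this entry; the Python sketch below is a loose, hypothetical illustration of the kind of hybrid structure described: a graph of vessel segments carrying semantic labels and surface geometry, plus a sparse block grid of volume properties so that empty space costs no memory and can be skipped while rendering. All field names are invented.

        from dataclasses import dataclass, field
        from typing import Dict, List, Optional, Tuple

        import numpy as np

        @dataclass
        class VesselSegment:
            """One branch of the vascular tree: topology, semantics and geometry.
            Field names are illustrative, not the paper's data model."""
            segment_id: int
            parent_id: Optional[int]                            # topological relationship
            labels: List[str] = field(default_factory=list)     # e.g. ["aneurysm"], ["stenosis"]
            centerline: Optional[np.ndarray] = None             # (n, 3) points along the vessel axis
            surface_mesh: Optional[Tuple[np.ndarray, np.ndarray]] = None  # (vertices, triangles)

        @dataclass
        class VascularModel:
            """Hybrid model: segment graph + sparse block grid of volume properties
            (density, flow velocity, wall elasticity)."""
            segments: Dict[int, VesselSegment] = field(default_factory=dict)
            blocks: Dict[Tuple[int, int, int], np.ndarray] = field(default_factory=dict)
            block_size: int = 16                                 # voxels per block edge

            def children(self, segment_id: int) -> List[VesselSegment]:
                return [s for s in self.segments.values() if s.parent_id == segment_id]

            def block_at(self, voxel: Tuple[int, int, int]) -> Optional[np.ndarray]:
                key = tuple(v // self.block_size for v in voxel)
                return self.blocks.get(key)                      # None means "empty space, skip it"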

    Path-tracing Monte Carlo Library for 3D Radiative Transfer in Highly Resolved Cloudy Atmospheres

    Full text link
    Interactions between clouds and radiation are at the root of many difficulties in numerically predicting future weather and climate and in retrieving the state of the atmosphere from remote sensing observations. The large range of issues related to these interactions, and in particular to three-dimensional interactions, motivated the development of accurate radiative tools able to compute all types of radiative metrics, from monochromatic, local and directional observables to integrated energetic quantities. In continuation of this community effort, we propose here an open-source library for general use in Monte Carlo algorithms. This library is devoted to the acceleration of path-tracing in complex data, typically high-resolution large-domain grounds and clouds. The main algorithmic advances embedded in the library are those related to the construction and traversal of hierarchical grids accelerating the tracing of paths through heterogeneous fields in null-collision (maximum cross-section) algorithms. We show that with these hierarchical grids, the computing time is only weakly sensitive to the refinement of the volumetric data. The library is tested with a rendering algorithm that produces synthetic images of cloud radiances. Two other examples are given as illustrations: they are used, respectively, to analyse the transmission of solar radiation under a cloud together with its sensitivity to an optical parameter, and to assess a parametrization of the 3D radiative effects of clouds. Comment: Submitted to JAMES, revised and submitted again (this is v2)
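
    The library itself is not reproduced here; as a minimal, hypothetical illustration of the null-collision (maximum cross-section) idea the abstract refers to, the sketch below samples a free path through a heterogeneous extinction field with plain single-majorant Woodcock tracking. The library's contribution is precisely to replace the single global majorant with hierarchical grids of local majorants; none of the names below come from the library.

        import numpy as np

        def woodcock_free_path(origin, direction, sigma_of, sigma_max, t_max, rng):
            """Sample a collision distance through a heterogeneous extinction field.
            sigma_of(x) is the true extinction coefficient at position x; sigma_max is
            a majorant >= sigma_of(x) along the ray. Returns the collision distance,
            or None if the ray leaves [0, t_max] without a real collision."""
            t = 0.0
            while True:
                # Tentative collision against the homogeneous majorant medium.
                t -= np.log(1.0 - rng.random()) / sigma_max
                if t >= t_max:
                    return None                       # escaped the domain
                x = origin + t * direction
                # Accept as a real collision with probability sigma(x)/sigma_max;
                # otherwise it is a "null collision" and tracking continues.
                if rng.random() < sigma_of(x) / sigma_max:
                    return t

        # Example: a Gaussian "cloud" blob of extinction inside the unit cube.
        rng = np.random.default_rng(0)
        sigma = lambda x: 5.0 * np.exp(-np.sum((x - 0.5) ** 2) / 0.02)
        d = woodcock_free_path(np.zeros(3), np.full(3, 1.0 / np.sqrt(3.0)),
                               sigma, sigma_max=5.0, t_max=np.sqrt(3.0), rng=rng)
        print("sampled collision distance:", d)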

    Volume-Enclosing Surface Extraction

    Full text link
    In this paper we present a new method that allows for the construction of triangular isosurfaces from three-dimensional data sets, such as 3D image data and/or numerical simulation data based on regularly shaped, cubic lattices. This novel volume-enclosing surface extraction technique, named VESTA, can produce up to six different results due to the nature of the discretized 3D space under consideration. VESTA is neither template-based nor restricted to operating only on 2x2x2 voxel cell neighborhoods. The surface tiles are determined with a very fast and robust construction technique, while potential ambiguities are detected and resolved. Here, we provide for the first time an in-depth comparison between VESTA and various versions of the well-known and very popular Marching Cubes algorithm. In an application section, we demonstrate the extraction of VESTA isosurfaces for various data sets ranging from computed tomography scan data to simulation data of relativistic hydrodynamic fireball expansions. Comment: 24 pages, 33 figures, 4 tables, final version
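
    VESTA itself is not bundled with this listing; for readers who want to reproduce the kind of baseline it is compared against, the sketch below extracts a triangular isosurface from a regular cubic lattice with scikit-image's Marching Cubes implementation. It is a stand-in for comparison only, not VESTA.

        import numpy as np
        from skimage import measure

        # Synthetic scalar field on a regular cubic lattice: squared distance from the centre.
        n = 64
        g = np.linspace(0.0, 1.0, n)
        x, y, z = np.meshgrid(g, g, g, indexing="ij")
        volume = (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2

        # Marching Cubes baseline (the template-based, 2x2x2-cell method VESTA is
        # benchmarked against); isolevel = radius^2 for a sphere of radius 0.3.
        verts, faces, normals, values = measure.marching_cubes(volume, level=0.3 ** 2)
        print(f"{len(verts)} vertices, {len(faces)} triangles")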

    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    Get PDF
    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to the temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and to track motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve "protrusions", i.e., high-curvature regions of the 3D volume of articulated shapes, while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised and temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, and clusters are propagated in time to ensure coherence and are merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, and demonstrate its robustness to sampling density and shape quality and its potential for bottom-up model construction. Comment: 31 pages, 26 figures
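
    This entry ships no code either; the scikit-learn sketch below only illustrates the spectral step described above: embed a voxel-coordinate set with locally linear embedding, where protrusions spread apart, then cluster in the embedded space. Neighbour counts and cluster numbers are arbitrary, and the paper's temporal propagation and split/merge handling are omitted.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.manifold import LocallyLinearEmbedding

        def segment_voxel_set(voxels, n_parts=3, n_neighbors=10):
            """Embed an (n, 3) voxel-coordinate set with LLE and cluster it into body
            parts. Illustrative only: the paper additionally propagates clusters over
            time and splits/merges them when the body's topology changes."""
            lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=3)
            embedded = lle.fit_transform(voxels)     # protrusions spread apart here
            return KMeans(n_clusters=n_parts, n_init=10).fit_predict(embedded)

        # Toy voxel set: a "torso" blob with two offset "limb" blobs.
        rng = np.random.default_rng(0)
        torso = rng.normal(0.0, 1.0, size=(300, 3))
        limb1 = rng.normal(0.0, 0.3, size=(100, 3)) + [3.0, 0.0, 0.0]
        limb2 = rng.normal(0.0, 0.3, size=(100, 3)) + [0.0, 3.0, 0.0]
        labels = segment_voxel_set(np.vstack([torso, limb1, limb2]))
        print(np.bincount(labels))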
