
    3D fusion of intravascular ultrasound and coronary computed tomography for in-vivo wall shear stress analysis: A feasibility study

    Wall shear stress, the force per unit area acting on the lumen wall due to blood flow, is an important biomechanical parameter in the localization and progression of atherosclerosis. To calculate shear stress and relate it to atherosclerosis, a 3D description of the lumen and vessel wall is required. We present a framework to obtain the 3D reconstruction of human coronary arteries by the fusion of intravascular ultrasound (IVUS) and coronary computed tomography angiography (CT). We imaged 23 patients with IVUS and CT. The images from both modalities were registered for 35 arteries, using bifurcations as landmarks. The IVUS images, together with IVUS-derived lumen and wall contours, were positioned on the 3D centerline derived from CT. The resulting 3D lumen and wall contours were transformed to a surface for calculation of shear stress and plaque thickness. We applied variations in the selection of landmarks and investigated whether these variations influenced the relation between shear stress and plaque thickness. Fusion was successfully achieved in 31 of the 35 arteries. The average length of the fused segments was 36.4 ± 15.7 mm. The lengths of the fused parts in IVUS and CT showed excellent correlation (R² = 0.98). For both a mildly diseased and a severely diseased coronary artery, shear stress was calculated and related to plaque thickness. Variations in the selection of the landmarks for these two arteries did not affect the relationship between shear stress and plaque thickness. This new framework can therefore successfully be applied for shear stress analysis in human coronary arteries.
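The core fusion step described above, placing IVUS-derived 2D lumen contours onto planes perpendicular to a CT-derived 3D centerline, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the frame construction and the toy straight-vessel geometry are assumptions made for the example.

```python
import numpy as np

def frames_along_centerline(centerline):
    """Build an orthonormal frame (tangent, normal, binormal) at each
    centerline point from finite-difference tangents."""
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    frames = []
    for t in tangents:
        # pick any reference vector not parallel to the tangent
        ref = np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        n = np.cross(t, ref)
        n /= np.linalg.norm(n)
        b = np.cross(t, n)
        frames.append((t, n, b))
    return frames

def place_contour(contour_2d, origin, frame):
    """Map a 2D lumen contour (N x 2, in-plane mm) onto the plane
    perpendicular to the centerline tangent at `origin`."""
    _, n, b = frame
    return origin + contour_2d[:, :1] * n + contour_2d[:, 1:2] * b

# toy example: a straight centerline along x with a circular lumen contour
centerline = np.stack([np.linspace(0, 30, 16),
                       np.zeros(16), np.zeros(16)], axis=1)
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
contour = np.stack([1.5 * np.cos(theta), 1.5 * np.sin(theta)], axis=1)

frames = frames_along_centerline(centerline)
ring = place_contour(contour, centerline[5], frames[5])
```

Stacking such rings along the centerline yields the 3D lumen surface on which shear stress can then be computed by a flow solver.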

    Geometry Processing of Conventionally Produced Mouse Brain Slice Images

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as an application we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. This work represents a significant contribution to this subfield of neuroscience, as it provides tools to neuroanatomists for analyzing and processing histological data.
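The final registration step the abstract mentions, solving Laplace's equation with Dirichlet boundary conditions, is commonly done by iterative relaxation on a grid: boundary correspondences are held fixed and interior values are repeatedly averaged until they become harmonic. A minimal Jacobi-relaxation sketch, with a toy boundary configuration chosen purely for illustration:

```python
import numpy as np

def solve_laplace(boundary, mask, iters=5000, tol=1e-6):
    """Jacobi relaxation for Laplace's equation with Dirichlet conditions.
    `boundary` holds fixed values where `mask` is True; interior values are
    iterated (4-neighbour averaging) until the update falls below `tol`."""
    u = boundary.astype(float).copy()
    for _ in range(iters):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        new[mask] = boundary[mask]          # re-impose Dirichlet values
        if np.max(np.abs(new - u)) < tol:
            u = new
            break
        u = new
    return u

# toy example: left edge held at 1, the other edges at 0
n = 16
bc = np.zeros((n, n))
bc[:, 0] = 1.0
mask = np.zeros((n, n), bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
u = solve_laplace(bc, mask)
```

In registration, the same relaxation is typically applied to each component of the displacement field, with the matched contour points supplying the Dirichlet values.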

    A framework for digital sunken relief generation based on 3D geometric models

    Sunken relief is a special art form of sculpture whereby the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often utilize engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features that would otherwise remain hidden. In other types of reliefs, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with a shallow depth which conveys the presence of 3D figures. Such methods unfortunately do not serve the art form of sunken reliefs, as they omit the presence of feature lines. We propose a framework to produce sunken reliefs from a known 3D geometry, which transforms the 3D objects into three layers of input to incorporate the contour lines seamlessly with the smooth surfaces. The three input layers take advantage of the geometric information and the visual cues to assist the relief generation. This framework adapts existing techniques in line drawing and relief generation, and then combines them organically for this particular purpose.
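The two ingredients the abstract combines, a depth-compressed smooth surface and engraved feature lines sunk below it, can be sketched on a height field as follows. This is a simplified stand-in for the paper's three-layer pipeline; the function, parameters, and hemisphere test shape are all illustrative assumptions.

```python
import numpy as np

def sunken_relief(depth, line_mask, relief_depth=0.05, groove=0.02):
    """Compress a normalized depth map into a shallow relief sunk below the
    background plane, then engrave feature lines a further `groove` deeper."""
    d = (depth - depth.min()) / max(np.ptp(depth), 1e-12)  # normalize to [0, 1]
    relief = -relief_depth * d                  # sink the figure into the slab
    relief[line_mask] -= groove                 # engrave feature lines deeper
    return relief

# toy example: a hemispherical bump with its silhouette as the line layer
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
depth = np.where(r2 < 1, np.sqrt(np.clip(1 - r2, 0, None)), 0.0)
edge = np.abs(r2 - 1) < 0.05                    # crude silhouette contour
relief = sunken_relief(depth, edge)
```

The background plane stays at height zero, the figure occupies a shallow negative range, and the line layer cuts visibly deeper, mimicking the carved strokes of a traditional sunken relief.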

    Active modelling of virtual humans

    This thesis provides a complete framework that enables the creation of photorealistic 3D human models in real-world environments. The approach allows a non-expert user to use any digital capture device to obtain four images of an individual and create a personalised 3D model for multimedia applications. To achieve this, the system must be automatic, and the reconstruction process must be flexible enough to account for information that is missing or incorrectly captured. In this approach the individual is automatically extracted from the environment using constrained active B-spline templates that are scaled and automatically initialised using only image information. These templates incorporate the energy-minimising framework of Active Contour Models, providing a suitable and flexible method to deal with the adjustments in pose an individual can adopt. The final states of the templates describe the individual’s shape. The contours in each view are combined to form a 3D B-spline surface that characterises an individual’s maximal silhouette equivalent. The surface provides a mould that contains sufficient information to allow for the active deformation of an underlying generic human model. This modelling approach is performed using a novel technique that evolves active meshes to 3D for deforming the underlying human model, while adaptively constraining it to preserve its existing structure. The active-mesh approach incorporates internal constraints that maintain the structural relationship of the vertices of the human model, while external forces deform the model congruent with the 3D surface mould. The strength of the internal constraints can be reduced to allow the model to adopt the exact shape of the bounding volume, or strengthened to preserve the internal structure, particularly in areas of high detail. This novel implementation provides a uniform framework that can be simply and automatically applied to the entire human model.
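The energy-minimising Active Contour Model framework underlying these B-spline templates is classically iterated with a semi-implicit update: internal elasticity and rigidity terms form a banded system, and each step balances them against external image forces. A minimal sketch of one such update on a closed contour (a generic snake step, not the thesis's constrained B-spline variant; parameters are illustrative):

```python
import numpy as np

def snake_step(pts, ext_force, alpha=0.1, beta=0.01, gamma=1.0):
    """One semi-implicit active-contour step:
    (A + gamma*I) x_new = gamma*x_old + f_ext, where A encodes the
    elasticity (alpha) and rigidity (beta) terms on a closed contour."""
    n = len(pts)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    M = np.linalg.inv(A + gamma * np.eye(n))
    return M @ (gamma * pts + ext_force)

# toy example: with zero external force the internal energy smooths a
# noisy circle while preserving its centroid
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
noisy = np.stack([np.cos(theta), np.sin(theta)], 1)
noisy = noisy + 0.1 * np.random.RandomState(0).randn(40, 2)
smoothed = noisy
for _ in range(50):
    smoothed = snake_step(smoothed, np.zeros_like(smoothed))
```

In practice the external force would be derived from image gradients around the silhouette, pulling the template toward the individual's boundary while the internal terms keep it smooth.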

    Developing a methodology for three-dimensional correlation of PET–CT images and whole-mount histopathology in non-small-cell lung cancer

    Background: Understanding the three-dimensional (3D) volumetric relationship between imaging and functional or histopathologic heterogeneity of tumours is a key concept in the development of image-guided radiotherapy. Our aim was to develop a methodologic framework to enable the reconstruction of resected lung specimens containing non-small-cell lung cancer (NSCLC), to register the result in 3D with diagnostic imaging, and to import the reconstruction into a radiation treatment planning system. Methods and Results: We recruited 12 patients for an investigation of radiology-pathology correlation (RPC) in NSCLC. Before resection, imaging by positron emission tomography (PET) or computed tomography (CT) was obtained. Resected specimens were formalin-fixed for 1-24 hours before sectioning at 3-mm to 10-mm intervals. To try to retain the original shape, we embedded the specimens in agar before sectioning. Consecutive sections were laid out for photography and manually adjusted to maintain shape. Following embedding, the tissue blocks underwent whole-mount sectioning (4-μm sections) and staining with hematoxylin and eosin. Large histopathology slides were used to whole-mount entire sections for digitization. The correct sequence was maintained to assist in subsequent reconstruction. Using Photoshop (Adobe Systems Incorporated, San Jose, CA, U.S.A.), contours were placed on the photographic images to represent the external borders of the section and the extent of macroscopic disease. Sections were stacked in sequence and manually oriented in Photoshop. The macroscopic tumour contours were then transferred to MATLAB (The Mathworks, Natick, MA, U.S.A.) and stacked, producing 3D surface renderings of the resected specimen and embedded gross tumour. 
To evaluate the microscopic extent of disease, customized "tile-based" and commercial confocal panoramic laser scanning (TISSUEscope: Biomedical Photometrics, Waterloo, ON) systems were used to generate digital images of whole-mount histopathology sections. Using the digital whole-mount images and imaging software, we contoured the gross and microscopic extent of disease. Two methods of registering pathology and imaging were used. First, selected PET and CT images were transferred into Photoshop, where they were contoured, stacked, and reconstructed. After importing the pathology and the imaging contours to MATLAB, the contours were reconstructed, manually rotated, and rigidly registered. In the second method, MATLAB tumour renderings were exported to a software platform for manual registration with the original PET and CT images in multiple planes. Data from this software platform were then exported to the Pinnacle radiation treatment planning system in DICOM (Digital Imaging and Communications in Medicine) format. Conclusions: There is no one definitive method for 3D volumetric RPC in NSCLC. An innovative approach to the 3D reconstruction of resected NSCLC specimens incorporates agar embedding of the specimen and whole-mount digital histopathology. The reconstructions can be rigidly and manually registered to imaging modalities such as CT and PET and exported to a radiation treatment planning system.
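The contour-stacking step at the heart of this workflow (closed tumour contours drawn per slice, stacked at the known sectioning interval) lends itself to a simple Cavalieri-style volume estimate: sum the cross-sectional areas and multiply by the slice spacing. A minimal sketch, using a cylinder of circular contours as a stand-in for real tumour outlines (the study itself used Photoshop and MATLAB; this Python version is purely illustrative):

```python
import numpy as np

def polygon_area(poly):
    """Area of a closed 2D polygon (N x 2) via the shoelace formula."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def stacked_volume(contours, spacing_mm):
    """Cavalieri estimate: sum of contour cross-sections times slice spacing."""
    return sum(polygon_area(c) for c in contours) * spacing_mm

# toy example: five identical circular contours 3 mm apart, radius 10 mm,
# approximating a cylinder of volume pi * 10^2 * 15 mm^3
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], 1)
contours = [circle] * 5
vol = stacked_volume(contours, 3.0)
```

The same stacked contours, triangulated between adjacent slices, yield the 3D surface renderings that are then rigidly registered to the PET and CT contours.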

    Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View

    We propose a method for predicting the 3D shape of a deformable surface from a single view. By contrast with previous approaches, we do not need a pre-registered template of the surface, and our method is robust to the lack of texture and partial occlusions. At the core of our approach is a geometry-aware deep architecture that tackles the problem as usually done in analytic solutions: first perform 2D detection of the mesh and then estimate a 3D shape that is geometrically consistent with the image. We train this architecture in an end-to-end manner using a large dataset of synthetic renderings of shapes under different levels of deformation, material properties, textures and lighting conditions. We evaluate our approach on a test split of this dataset and on available real benchmarks, consistently improving on state-of-the-art solutions with a significantly lower computational time.
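The geometric-consistency idea in this two-stage design (detect mesh vertices in 2D, then choose 3D coordinates that reproject onto those detections) reduces, for a calibrated camera, to back-projecting each detection along its viewing ray at a predicted depth. A minimal sketch with hypothetical intrinsics and detections, not the paper's network:

```python
import numpy as np

def backproject(uv, depth, K):
    """Lift 2D detections to 3D points consistent with the image:
    X = depth * K^{-1} [u, v, 1]^T, so projecting X reproduces uv exactly."""
    ones = np.ones((len(uv), 1))
    rays = np.linalg.inv(K) @ np.hstack([uv, ones]).T   # 3 x N viewing rays
    return (rays * depth).T                             # N x 3 points

def project(X, K):
    """Pinhole projection back to pixel coordinates."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]

# toy example: assumed intrinsics and four mesh-vertex detections with
# depths standing in for the network's 3D-branch output
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
uv = np.array([[300., 220.], [340., 220.], [300., 260.], [340., 260.]])
depth = np.array([2.0, 2.1, 2.0, 1.9])
X = backproject(uv, depth, K)
```

Because each 3D vertex lies on its detection's ray by construction, only the per-vertex depth remains to be learned, which is what makes the architecture "geometry-aware" rather than a free-form 3D regressor.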
