
    Imaging of temporomandibular joint: Approach by direct volume rendering

    Background: The purpose of this study was to conduct a morphological analysis of the temporomandibular joint, a highly specialized synovial joint that permits movement and function of the mandible. Materials and Methods: We studied the anatomy of the temporomandibular joint directly in living subjects, from 3D images obtained by Computed Tomography (CT) and Magnetic Resonance (MR) acquisitions and subsequent 3D surface-rendering and volume-rendering techniques. The data were analysed with the goal of isolating, identifying and distinguishing the anatomical structures of the joint, and of extracting as much information as possible with post-processing software. Results: It was possible to reproduce the anatomy of the skeletal structures; through the MR acquisitions it was also possible to visualize the vascular, muscular, ligamentous and tendinous components of the articular complex, as well as the capsule and the fibrocartilaginous disc. With surface rendering and volume rendering we obtained not only three-dimensional images comparable in colour and resolution to conventional anatomical preparations, but also a considerable number of finer anatomical details, by zooming, rotating and cutting the images while adjusting their colour, transparency and opacity. Conclusion: These results are encouraging and should stimulate further studies in other anatomical regions.
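Direct volume rendering of the kind used in the study above casts a ray through the voxel grid and composites samples front to back, with a transfer function mapping each scalar value to colour and opacity; the interactive control of transparency and opacity mentioned in the abstract corresponds to editing that function. A minimal sketch in Python/NumPy (function names and the toy transfer function are illustrative, not from the paper):

```python
import numpy as np

def composite_ray(samples, transfer_function):
    """Front-to-back alpha compositing of scalar samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        rgb, a = transfer_function(s)          # map scalar -> (RGB, opacity)
        color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
    return color, alpha

# toy transfer function: brighter and more opaque with intensity
tf = lambda s: ((s, s, s), 0.3 * s)
rgb, a = composite_ray([0.2, 0.8, 0.5], tf)
```

Early ray termination stops compositing once accumulated opacity approaches 1, a standard optimization in interactive volume renderers.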

    Utilizing FEM-Software to quantify pre- and post-interventional cardiac reconstruction data based on modelling data sets from surgical ventricular repair therapy (SVRT) and cardiac resynchronisation therapy (CRT)

    BACKGROUND: Left ventricle (LV) 3D structural data can be easily obtained using standard transesophageal echocardiography (TEE) devices, but quantitative pre- and intraoperative volumetry and geometry analysis of the LV is presently not feasible in the cardiac operating room (OR). Finite element method (FEM) modelling is necessary to carry out precise and individual volume analysis and will in future form the basis for simulation of cardiac interventions. METHOD: A Philips/HP Sonos 5500 ultrasound device stores volume data as time-resolved 4D volume data sets. In this prospective study, TomTec LV Analysis TEE© software was used for semi-automatic endocardial border detection, reconstruction, and volume rendering of the clinical 3D echocardiographic data. With the software FemCoGen©, a quantification of partial volumes and surface directions of the LV was carried out for two patients' data sets. One patient underwent surgical ventricular repair therapy (SVR) and the other cardiac resynchronisation therapy (CRT). RESULTS: For both patients a detailed volume and surface-direction analysis is provided. Partial volumes as well as normal directions to the LV surface are compared pre- and post-interventionally. CONCLUSION: The operative results for both patients are quantified. The quantification shows treatment details for both interventions (e.g. the elimination of discontinuities for the CRT intervention and the segments treated for the SVR intervention). LV quantification is feasible in the cardiac OR and gives detailed and immediate quantitative feedback on the quality of the intervention to the medical
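The volumetry step underlying such FEM analyses reduces, for a closed surface mesh of the ventricle, to summing signed tetrahedra spanned between each surface triangle and a reference point (an application of the divergence theorem). A self-contained sketch in Python/NumPy (an assumed generic formulation, not the FemCoGen implementation):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh,
    computed as the sum of signed tetrahedra spanned with the origin."""
    v = np.asarray(vertices, dtype=float)
    signed = sum(np.dot(v[i], np.cross(v[j], v[k])) for i, j, k in faces) / 6.0
    return abs(signed)

# unit right tetrahedron: enclosed volume is 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, tris)
```

Evaluating the same formula on pre- and post-interventional endocardial meshes would yield the kind of volume differences such studies report; partial volumes follow by restricting the face list to a segment of the surface.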

    A high-level 3D visualization API for Java and ImageJ

    BACKGROUND: Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from a 3D representation of the data set. RESULTS: Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image-processing package with a vast collection of community-developed biological image analysis tools. It enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that hides the low-level details allows software development efforts to concentrate on the algorithmic parts. CONCLUSIONS: Our framework enables biomedical image analysis software with 3D visualization capabilities to be built with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de

    Showing their true colors: a practical approach to volume rendering from serial sections

    Background: In comparison to more modern imaging methods, conventional light microscopy still offers a range of substantial advantages with regard to contrast options, accessible specimen size, and resolution. Currently, tomographic image data in particular is most commonly visualized in three dimensions using volume rendering. To date, this method has only very rarely been applied to image stacks taken from serial sections, whereas surface rendering is still the most prevalent method for presenting such data sets three-dimensionally. The aim of this study was to develop standard protocols for volume rendering of image stacks of serial sections, while retaining the benefits of light microscopy such as resolution and color information. Results: Here we provide a set of protocols for acquiring high-resolution 3D images of diverse microscopic samples through volume rendering based on serial light-microscopical sections, using the 3D reconstruction software Amira (Visage Imaging Inc.). We overcome several technical obstacles and show that these renderings are comparable in quality and resolution to 3D visualizations using other methods. This practical approach for visualizing 3D micro-morphology in full color takes advantage of both the sub-micron resolution of light microscopy and the specificity of histological stains, by combining conventional histological sectioning techniques, digital image acquisition, three-dimensional image filtering, and 3D image manipulation and visualization technologies. Conclusions: We show that this method can yield "true"-colored high-resolution 3D views of tissues as well as cellular and sub-cellular structures and thus represents a powerful tool for morphological, developmental, and comparative investigations. We conclude that the presented approach fills an important gap in the field of micro-anatomical 3D imaging and visualization methods by combining histological resolution and differentiation of details with 3D rendering of whole tissue samples. We demonstrate the method on selected invertebrate and vertebrate specimens, and propose that reinvestigation of historical serial-section material may be regarded as a special benefit.
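One of the technical obstacles in volume rendering from serial sections is registering consecutive section images before stacking them into a volume. For purely translational misalignment this can be sketched with phase correlation; the snippet below is a generic Python/NumPy illustration under that assumption, not part of the Amira workflow used in the paper:

```python
import numpy as np

def translation_offset(ref, moving):
    """Estimate the integer (row, col) shift that maps `ref` onto `moving`
    using phase correlation (normalized cross-power spectrum)."""
    cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks in the upper half back to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
section_a = rng.random((64, 64))
section_b = np.roll(section_a, (3, -5), axis=(0, 1))   # simulated misalignment
shift = translation_offset(section_a, section_b)
```

Real section series additionally suffer rotation and non-rigid distortion, which is why dedicated reconstruction packages use more elaborate registration than this translational sketch.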

    PG-NeuS: Robust and Efficient Point Guidance for Multi-View Neural Surface Reconstruction

    Recently, learning multi-view neural surface reconstruction with the supervision of point clouds or depth maps has become a promising approach. However, due to under-utilization of prior information, current methods still struggle with limited accuracy and excessive time complexity. In addition, perturbation of the prior data is an important but rarely considered issue. To address these challenges, we propose a novel point-guided method named PG-NeuS, which achieves accurate and efficient reconstruction while coping robustly with point noise. Specifically, the aleatoric uncertainty of the point cloud is modeled to capture the distribution of noise, leading to noise robustness. Furthermore, a Neural Projection module connecting points and images is proposed to add geometric constraints to the implicit surface, achieving precise point guidance. To better compensate for the geometric bias between volume rendering and point modeling, high-fidelity points are filtered into a Bias Network to further improve detail representation. Benefiting from the effective point guidance, even with a lightweight network, the proposed PG-NeuS achieves fast convergence with an impressive 11x speedup compared to NeuS. Extensive experiments show that our method yields high-quality surfaces with high efficiency, especially for fine-grained details and smooth regions, outperforming state-of-the-art methods. Moreover, it exhibits strong robustness to noisy and sparse data.
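NeuS-family methods such as the one above build on the standard volume-rendering quadrature, where each sample along a ray receives a compositing weight from its density and the transmittance accumulated in front of it. A compact reference version in Python/NumPy (the generic formulation, not the PG-NeuS code):

```python
import numpy as np

def render_weights(densities, deltas):
    """Compositing weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j)."""
    sigma = np.asarray(densities, dtype=float)
    delta = np.asarray(deltas, dtype=float)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    return trans * alpha

w = render_weights([0.5, 2.0, 4.0], [0.1, 0.1, 0.1])
# the weights sum to 1 - exp(-sum(sigma * delta)), the ray's total opacity
```

The same weights composite colours and depths; one source of the "geometric bias" mentioned in the abstract is that the weight maximum along a ray need not coincide exactly with the zero level set of the learned surface representation.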

    Drishti, a volume exploration and presentation tool

    Among the several rendering techniques for volumetric data, direct volume rendering is a powerful visualization tool for a wide variety of applications. This paper describes the major features of Drishti, a hardware-based volume exploration and presentation tool. The word Drishti means vision or insight in Sanskrit, an ancient Indian language. Drishti is a cross-platform, open-source volume rendering system that delivers high-quality, state-of-the-art renderings. Its features include, but are not limited to, production-quality rendering, volume sculpting, multi-resolution zooming, transfer-function blending, profile generation, measurement tools, mesh generation, and stereo/anaglyph/cross-eye renderings. Ultimately, Drishti provides an intuitive and powerful interface for choreographing animations.
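At its core, the transfer-function editing listed above maps scalar voxel values through user-defined RGBA curves. A minimal piecewise-linear version in Python/NumPy (illustrative only; Drishti's own transfer functions are considerably more elaborate):

```python
import numpy as np

def apply_transfer_function(values, node_x, node_rgba):
    """Map scalar values to RGBA by piecewise-linear interpolation
    between control nodes, as in a transfer-function editor."""
    rgba = np.asarray(node_rgba, dtype=float)
    return np.stack(
        [np.interp(values, node_x, rgba[:, c]) for c in range(4)], axis=-1
    )

# transparent black at 0 blending to opaque red at 1 (toy node set)
voxels = np.array([0.0, 0.5, 1.0])
colors = apply_transfer_function(voxels, [0.0, 1.0], [[0, 0, 0, 0], [1, 0, 0, 1]])
```

Blending several such functions, as a tool like Drishti supports, amounts to combining the resulting RGBA maps before compositing.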

    Virtual liver biopsy: image processing and 3D visualization

    Get PDF