
    Cross-Platform Presentation of Interactive Volumetric Imagery

    Volume data is useful across many disciplines, not just medicine. Thus, it is important that researchers have a simple and lightweight method of sharing and reproducing such volumetric data. In this paper, we explore some of the challenges associated with volume rendering, both in the classical sense and in the context of Web3D technologies. We describe and evaluate the proposed X3D Volume Rendering Component and its associated styles for their suitability in the visualization of several types of image data. Additionally, we examine the ability of a minimal X3D node set to capture provenance and semantic information from outside ontologies in metadata and integrate it with the scene graph.
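    To make the component concrete, the fragment below is a minimal sketch of a scene using the X3D Volume Rendering Component's VolumeData node with an opacity-map render style; the file names are illustrative assumptions, not part of the abstract.

    ```xml
    <!-- Minimal sketch (X3D 3.3 Volume Rendering Component); url values are hypothetical. -->
    <X3D profile='Immersive' version='3.3'>
      <head>
        <component name='VolumeRendering' level='2'/>
      </head>
      <Scene>
        <VolumeData dimensions='1 1 1'>
          <!-- Voxel data supplied as a 3D texture. -->
          <ImageTexture3D containerField='voxels' url='"brain.nrrd"'/>
          <!-- Opacity-map style: voxel intensities pass through a transfer function. -->
          <OpacityMapVolumeStyle containerField='renderStyle'>
            <ImageTexture containerField='transferFunction' url='"transfer.png"'/>
          </OpacityMapVolumeStyle>
        </VolumeData>
      </Scene>
    </X3D>
    ```

    A scene like this can be served to any Web3D-capable viewer supporting the component, which is the cross-platform sharing scenario the abstract describes.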

    A Sketch-Based Interface for Annotation of 3D Brain Vascular Reconstructions

    Within the medical imaging community, 3D models of anatomical structures are now widely used to establish more accurate diagnoses than those based on 2D images. Many research efforts focus on automatically building such 3D models. However, automatic reconstruction induces many artifacts when the anatomical structure exhibits tortuous and thin parts (such as vascular networks), and correcting these artifacts requires 3D-modeling skills and time that radiologists do not have. This article presents a semi-automatic approach to building a correct topology of vascular networks from 3D medical images. The user interface is based on sketching; user strokes define both a command and the part of the geometry to which the command is applied. Moreover, the speed of the user's gesture is taken into account to adjust the command: a slow, precise gesture corrects a local part of the topology, while a fast gesture corrects a larger part. Our system relies on an automatic segmentation that provides an initial guess, which the user can interactively modify using the proposed set of commands. This allows the anatomical aberrations or ambiguities that appear in the segmented model to be corrected in a few strokes.

    Three-dimensional visualization software assists learning in students with diverse spatial intelligence in medical education

    This study evaluated the effect of mental rotation (MR) training on learning outcomes and explored the effectiveness of teaching via three-dimensional (3D) software among medical students with diverse spatial intelligence. Data from n = 67 student volunteers were included. A preliminary test was conducted to obtain a baseline level of MR competency and was used to assign participants to two experimental conditions: a trained group (n = 25) and an untrained group (n = 42). Data on the effectiveness of training were collected to measure participants' speed and accuracy in performing various MR activities. Six weeks later, a large class format (LCF) session was conducted for all students using 3D software. The usefulness of technology-assisted learning at the LCF was evaluated via a pre- and post-test. Students' feedback regarding MR training and the use of 3D software was acquired through questionnaires. MR scores of the trainees improved from 25.9±4.6 to 28.1±4.4 points (P = 0.011), while the time taken to complete the tasks decreased from 20.9±3.9 to 12.2±4.4 minutes. Males scored higher than females in all components (P = 0.016). Further, higher pre- and post-test scores were observed in the trained group (9.0±1.9 and 12.3±1.6) versus the untrained group (7.8±1.8 and 10.8±1.8). Although a mixed-design analysis of variance indicated a significant difference in their test scores (P < 0.001), both groups showed a similar trend of improvement by means of the 3D software (P = 0.54). Ninety-seven percent of students reported technology-assisted learning as an effective means of instruction and found the use of 3D software superior to plastic models. Software based on 3D technologies could be adopted as an effective teaching pedagogy to support learning across students with diverse levels of mental rotation ability.
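    As an illustration of how the reported gain can be summarized as an effect size, the sketch below computes an approximate Cohen's d for the trainees' MR score improvement from the means and standard deviations quoted above. Treating the pre- and post-training scores as two independent samples with a simple pooled SD is an assumption made here for illustration; the study itself may have used a paired analysis.

    ```python
    import math

    # Reported trainee MR scores (mean, SD) before and after training.
    pre_mean, pre_sd = 25.9, 4.6
    post_mean, post_sd = 28.1, 4.4

    # Pooled standard deviation (equal-weight approximation; an assumption,
    # since the abstract does not report paired statistics).
    pooled_sd = math.sqrt((pre_sd**2 + post_sd**2) / 2)

    # Cohen's d: standardized mean difference.
    cohens_d = (post_mean - pre_mean) / pooled_sd
    print(f"pooled SD = {pooled_sd:.2f}, Cohen's d = {cohens_d:.2f}")
    # → pooled SD = 4.50, Cohen's d = 0.49
    ```

    A d of roughly 0.5 is conventionally read as a medium effect, consistent with the abstract's report of a statistically significant improvement (P = 0.011).
    
    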

    The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain

    The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable to the international research community. This Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing, and formulate standards, guidelines, and policy recommendations.

Our first objective has been the preparation of a detailed document that includes the vision and a specific description of an infrastructure, systems, and methods capable of serving the scientific goals of the community, as well as practical issues in achieving those goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current and desired states of digital atlasing, along with specific recommendations for achieving these goals.

    Object-based representation and analysis of light and electron microscopic volume data using Blender

    BACKGROUND: Rapid improvements in light and electron microscopy imaging techniques and the development of 3D anatomical atlases necessitate new approaches for the visualization and analysis of image data. Pixel-based representations of raw light microscopy data suffer from limitations in the number of channels that can be visualized simultaneously. Complex electron microscopic reconstructions from large tissue volumes are also challenging to visualize and analyze. RESULTS: Here we exploit the advanced visualization capabilities and flexibility of the open-source platform Blender to visualize and analyze anatomical atlases. We use light-microscopy-based gene expression atlases and electron microscopy connectome volume data from larval stages of the marine annelid Platynereis dumerilii. We build object-based larval gene expression atlases in Blender and develop tools for annotation and coexpression analysis. We also represent and analyze connectome data, including neuronal reconstructions and the underlying synaptic connectivity. CONCLUSIONS: We demonstrate the power and flexibility of Blender for visualizing and exploring complex anatomical atlases. The resources we have developed for Platynereis will facilitate data sharing and the standardization of anatomical atlases for this species. The flexibility of Blender, particularly its embedded Python application programming interface, means that our methods can be easily extended to other organisms. The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/European Research Council Grant Agreement 260821.

    Medical Image Data and Datasets in the Era of Machine Learning-Whitepaper from the 2016 C-MIMI Meeting Dataset Session.

    At the first annual Conference on Machine Intelligence in Medical Imaging (C-MIMI), held in September 2016, a conference session on medical image data and datasets for machine learning identified multiple issues. The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, and a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities. High-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products should be better described. NIH and other government agencies should promote and, where applicable, enforce access to medical image datasets. We should improve communication among medical imaging domain experts, medical imaging informaticists, academic clinical and basic science researchers, government and industry data scientists, and interested commercial entities.

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstructions require information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently of the mother, so external trackers such as electromagnetic or optical tracking systems cannot track the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net, and the output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness. Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image analysis (PIPPI), 201