
    Sectioned images and surface models of a cadaver head with reference to botulinum neurotoxin injection

    Background: The aim of this study is to elucidate the anatomical considerations with reference to botulinum neurotoxin type A (BTX) injection, on sectioned images and surface models, using Visible Korean. These can be used for medical education and clinical training in the field of facial surgery. Materials and methods: Serially sectioned images of the head were obtained from a cadaver. Significant anatomic structures in the sectioned images were outlined and assembled to create a surface model. Results: The PDF file (27.8 MB) of the stacked models can be accessed for free. The file can also be obtained from the authors by email. Using this file, important anatomical structures associated with BTX injection can be investigated in the sectioned images. All surface models and stereoscopic structures related to BTX injection are described in real time. Conclusions: We hope that these state-of-the-art sectioned images, outlined images, and surface models will assist students and trainees in acquiring a better understanding of the anatomy associated with BTX injection.

    Visual Quality Enhancement in Optoacoustic Tomography using Active Contour Segmentation Priors

    Segmentation of biomedical images is essential for studying and characterizing anatomical structures and for the detection and evaluation of pathological tissues. Segmentation has further been shown to enhance reconstruction performance in many tomographic imaging modalities by accounting for heterogeneities of the excitation field and tissue properties in the imaged region. This is particularly relevant in optoacoustic tomography, where discontinuities in the optical and acoustic tissue properties, if not properly accounted for, may result in deterioration of the imaging performance. Efficient segmentation of optoacoustic images is often hampered by the relatively low intrinsic contrast of large anatomical structures, which is further impaired by the limited angular coverage of some commonly employed tomographic imaging configurations. Herein, we analyze the performance of active contour models for boundary segmentation in cross-sectional optoacoustic tomography. The segmented mask is employed to construct a two-compartment model for the acoustic and optical parameters of the imaged tissues, which is subsequently used to improve the accuracy of the image reconstruction routines. The performance of the suggested segmentation and modeling approach is showcased in tissue-mimicking phantoms and small-animal imaging experiments. Comment: Accepted for publication in IEEE Transactions on Medical Imaging.
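
    As an illustration of the approach sketched in this abstract (not the authors' code), a tissue boundary could be extracted from a cross-sectional optoacoustic reconstruction with a morphological Chan-Vese active contour and then turned into a two-compartment parameter map; the function names, iteration count, and speed-of-sound values below are assumptions made for the sketch.

```python
# Minimal sketch (not the authors' implementation): segment the tissue
# boundary in a cross-sectional optoacoustic image with an active contour,
# then build a two-compartment map of acoustic parameters from the mask.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

def two_compartment_model(recon, sos_tissue=1520.0, sos_background=1495.0):
    """recon: 2D reconstructed optoacoustic image (arbitrary units)."""
    smoothed = gaussian(recon, sigma=3)   # suppress noise before segmentation
    # 200 iterations of morphological Chan-Vese; the second positional
    # argument is the iteration count (its keyword name varies by version).
    mask = morphological_chan_vese(smoothed, 200, smoothing=3).astype(bool)
    # Two-compartment speed-of-sound map: one value inside the segmented
    # tissue, another for the coupling medium (values here are placeholders).
    sos_map = np.where(mask, sos_tissue, sos_background)
    return mask, sos_map
```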

    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the import of Digital Imaging and Communications in Medicine (DICOM) images on a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be 3D printed or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique, as it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to 3D visualization of medical images; however, those are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that the installation of this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
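
    The DICOM-to-mesh portion of such a pipeline can be sketched as follows. This is an illustrative outline using pydicom and marching cubes, not the Nextmed implementation; the threshold value, file handling, and function name are assumptions.

```python
# Illustrative sketch of a DICOM-to-mesh step like the one the platform
# automates (not the Nextmed code): load a CT series, get a rough lung mask
# by thresholding, and extract a surface mesh with marching cubes.
import glob
import numpy as np
import pydicom
from skimage import measure

def dicom_series_to_mesh(dicom_dir, hu_threshold=-400):
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along z
    volume = np.stack([s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
                       for s in slices])                          # convert to HU
    lung_mask = volume < hu_threshold                             # crude air/lung threshold
    # Marching cubes over the binary mask; a real pipeline would add
    # morphology, connected-component filtering and smoothing before this.
    verts, faces, normals, values = measure.marching_cubes(
        lung_mask.astype(np.uint8), level=0.5)
    return verts, faces
```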

    MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images

    This paper introduces an innovative methodology for producing high-quality 3D lung CT images guided by textual information. While diffusion-based generative models are increasingly used in medical imaging, current state-of-the-art approaches are limited to low-resolution outputs and underutilize the abundant information in radiology reports. The radiology reports can enhance the generation process by providing additional guidance and offering fine-grained control over the synthesis of images. Nevertheless, expanding text-guided generation to high-resolution 3D images poses significant challenges in memory usage and in preserving anatomical detail. To address the memory issue, we introduce a hierarchical scheme that uses a modified UNet architecture: we first synthesize low-resolution images conditioned on the text, which serve as a foundation for subsequent generators that produce the complete volumetric data. To ensure the anatomical plausibility of the generated samples, we provide further guidance by generating vascular, airway, and lobular segmentation masks in conjunction with the CT images. The model demonstrates the capability to use textual input and segmentation tasks to generate synthesized images. Comparative assessments indicate that our approach outperforms the most advanced GAN- and diffusion-based models, especially in accurately retaining crucial anatomical features such as fissure lines, airways, and vascular structures. This study focuses on two main objectives: (1) the development of a method for creating images based on textual prompts and anatomical components, and (2) the capability to generate new images conditioned on anatomical elements. These advancements in image generation can be applied to enhance numerous downstream tasks.
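
    The coarse-to-fine generation flow described above can be summarized with a minimal skeleton. The module interfaces (`text_encoder`, `coarse_model.sample`, `refine_model.sample`) and the volume shapes are placeholders for illustration, not the MedSyn architecture.

```python
# Skeleton of a hierarchical (coarse-to-fine) text-conditioned generation
# flow; module names, shapes and sampling calls are placeholders, not the
# paper's architecture.
import torch

def synthesize_ct(report_text, text_encoder, coarse_model, refine_model):
    """report text -> low-res volume conditioned on text -> full-res volume."""
    with torch.no_grad():
        cond = text_encoder(report_text)                # text embedding, e.g. (1, D)
        low_res = coarse_model.sample(                  # stage 1: e.g. a 64^3 volume
            cond, shape=(1, 1, 64, 64, 64))
        # Stage 2 is conditioned on both the text and the upsampled coarse volume.
        upsampled = torch.nn.functional.interpolate(
            low_res, scale_factor=4, mode="trilinear", align_corners=False)
        full_res = refine_model.sample(
            cond, low_res=upsampled, shape=(1, 1, 256, 256, 256))
    return full_res
```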

    Theme C: Medical information systems and databases - results and future work

    This paper presents the activities of Theme C, “medical information systems and databases”, of the GDR Stic Santé. Six one-day workshops were organized during the period 2011–2012. They were devoted to 1) sharing anatomical and physiological object models for the simulation of clinical medical images, 2) advantages and limitations of data warehouses for biological data, 3) medical information engineering, 4) systems for sharing medical images for research, 5) knowledge engineering for semantic interoperability in e-health applications, and 6) using context in health. In the future, our activities will continue with a specific interest in information systems for translational medicine and the role of electronic healthcare reports in decision-making. Workshops with other research groups will be organized, in particular with the e-health research group.

    One-shot Localization and Segmentation of Medical Images with Foundation Models

    Recent advances in Vision Transformer (ViT) and Stable Diffusion (SD) models, with their ability to capture rich semantic features of the image, have been used for image correspondence tasks on natural images. In this paper, we examine the ability of a variety of pre-trained ViT (DINO, DINOv2, SAM, CLIP) and SD models, trained exclusively on natural images, to solve correspondence problems on medical images. While many works have made a case for in-domain training, we show that models trained on natural images can offer good performance on medical images across different modalities (CT, MR, ultrasound) sourced from various manufacturers, over multiple anatomical regions (brain, thorax, abdomen, extremities), and on a wide variety of tasks. Further, we leverage the correspondence with respect to a template image to prompt a Segment Anything (SAM) model, arriving at single-shot segmentation and achieving a Dice range of 62%-90% across tasks using just one image as reference. We also show that our single-shot method outperforms the recently proposed few-shot segmentation method UniverSeg (Dice range 47%-80%) on most of the semantic segmentation tasks (six out of seven) across medical imaging modalities. Comment: Accepted at NeurIPS 2023 R0-FoMo Workshop.
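
    The prompting step described above, mapping a template landmark to a point in the target image and feeding it to SAM, can be sketched with the `segment_anything` package. The checkpoint path and the upstream correspondence step are assumptions for the sketch; this is not the authors' released code.

```python
# Sketch of the prompting step: once a template landmark has been mapped to a
# point (x, y) in the target image via feature correspondence, that point is
# used to prompt SAM for a single-shot mask.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def segment_from_correspondence(target_rgb, point_xy,
                                checkpoint="sam_vit_b.pth", model_type="vit_b"):
    sam = sam_model_registry[model_type](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(target_rgb)                      # HxWx3 uint8 RGB image
    masks, scores, _ = predictor.predict(
        point_coords=np.array([point_xy]),               # (1, 2) foreground point
        point_labels=np.array([1]),                      # 1 = foreground
        multimask_output=True)
    return masks[np.argmax(scores)]                      # keep the highest-scoring mask
```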

    Total Variation meets Sparsity: statistical learning with segmenting penalties

    Prediction from medical images is a valuable aid to diagnosis. For instance, anatomical MR images can reveal certain disease conditions, while their functional counterparts can predict neuropsychiatric phenotypes. However, a physician will not rely on predictions by black-box models: understanding the anatomical or functional features that underpin the decision is critical. Generally, the weight vectors of classifiers are not easily amenable to such an examination: often there is no apparent structure. Indeed, this is not only a prediction task, but also an inverse problem that calls for adequate regularization. We address this challenge by introducing a convex region-selecting penalty. Our penalty combines total-variation regularization, enforcing spatial contiguity, and ℓ1 regularization, enforcing sparsity, into one group: voxels are either active, with non-zero value and non-zero spatial derivative, or inactive, with zero value and zero spatial derivative. This leads to segmenting contiguous spatial regions (inside which the signal can vary freely) against a background of zeros. Such segmentation of medical images in a target-informed manner is an important analysis tool. On several prediction problems from brain MRI, the penalty yields good segmentation. Given the size of medical images, computational efficiency is key. With this in mind, we contribute an efficient optimization scheme that brings significant computational gains.
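
    The combined penalty described above can be written in a standard TV-ℓ1 form (our notation, consistent with the description but not copied from the paper): a convex combination of a sparsity term and an isotropic total-variation term added to the data-fit loss.

```latex
% TV-\ell_1 penalty: the \ell_1 term enforces sparsity, the total-variation
% term enforces spatial contiguity. \mathcal{L} is the prediction loss,
% X the imaging data, y the target, \rho \in [0,1] balances the two terms
% and \alpha sets the overall regularization strength.
\min_{w}\; \mathcal{L}(y, Xw)
  \;+\; \alpha \Big( \rho \,\lVert w \rVert_{1}
  \;+\; (1-\rho) \sum_{v} \big\lVert (\nabla w)_{v} \big\rVert_{2} \Big)
```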