Anatomy Completor: A Multi-class Completion Framework for 3D Anatomy Reconstruction
In this paper, we introduce a completion framework to reconstruct the
geometric shapes of various anatomies, including organs, vessels and muscles.
Our work targets a scenario where one or multiple anatomies are missing in the
imaging data due to surgical, pathological or traumatic factors, or simply
because these anatomies are not covered by image acquisition. Automatic
reconstruction of the missing anatomies benefits many applications, such as
organ 3D bio-printing, whole-body segmentation, animation realism,
paleoradiology and forensic imaging. We propose two paradigms based on a 3D
denoising auto-encoder (DAE) to solve the anatomy reconstruction problem: (i)
the DAE learns a many-to-one mapping between incomplete and complete instances;
(ii) the DAE learns directly a one-to-one residual mapping between the
incomplete instances and the target anatomies. We apply a loss aggregation
scheme that enables the DAE to learn the many-to-one mapping more effectively
and further enhances the learning of the residual mapping. On top of this, we
extend the DAE to a multi-class completor by assigning a unique label to each
anatomy involved. We evaluate our method using a CT dataset with whole-body
segmentations. Results show that our method produces reasonable anatomy
reconstructions given instances with different levels of incompleteness (i.e.,
one or multiple random anatomies are missing). Code and pretrained models are
publicly available at
https://github.com/Jianningli/medshapenet-feedback/tree/main/anatomy-completor
Comment: 15 pages
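The two paradigms above can be illustrated with a toy example (this is a sketch with synthetic label volumes, not the authors' code): paradigm (i) maps the incomplete volume directly to the complete one, while paradigm (ii) predicts only the residual, i.e. the missing anatomy, which is then added back to the input.

```python
import numpy as np

# Toy multi-class label volume: label 1 and label 2 are two anatomies.
complete = np.zeros((4, 4, 4), dtype=np.int64)
complete[0] = 1  # first anatomy
complete[2] = 2  # second anatomy

# Incomplete instance: the anatomy with label 2 is missing.
incomplete = complete.copy()
incomplete[incomplete == 2] = 0

# Paradigm (i): the DAE learns incomplete -> complete directly.
target_direct = complete

# Paradigm (ii): the DAE learns only the residual (the missing part);
# the completion is recovered by adding the residual to the input.
residual = complete - incomplete
reconstruction = incomplete + residual

assert np.array_equal(reconstruction, complete)
```

The residual is non-zero only where anatomy is missing, which is why a one-to-one residual mapping can be easier to learn than the full many-to-one mapping.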
Multilingual Natural Language Processing Model for Radiology Reports -- The Summary is all you need!
The impression section of a radiology report summarizes important radiology
findings and plays a critical role in communicating these findings to
physicians. However, the preparation of these summaries is time-consuming and
error-prone for radiologists. Recently, numerous models for radiology report
summarization have been developed. Nevertheless, there is currently no model
that can summarize these reports in multiple languages. Such a model could
greatly improve future research and the development of Deep Learning models
that incorporate data from patients with different ethnic backgrounds. In this
study, the generation of radiology impressions in different languages was
automated by fine-tuning a publicly available model based on a multilingual
text-to-text Transformer to summarize findings available in English,
Portuguese, and German radiology reports. In a blind test, two board-certified
radiologists indicated that for at least 70% of the system-generated summaries,
the quality matched or exceeded the corresponding human-written summaries,
suggesting substantial clinical reliability. Furthermore, this study showed
that the multilingual model outperformed other models that specialized in
summarizing radiology reports in only one language, as well as models that were
not specifically designed for summarizing radiology reports, such as ChatGPT.
Holographic Augmented Reality for DIEP Flap Harvest
Background: During a deep inferior epigastric perforator (DIEP) flap harvest, the identification and localization of the epigastric arteries and their perforators are crucial. Holographic augmented reality is an innovative technique that can be used to visualize this patient-specific anatomy, extracted from a computed tomography scan, directly on the patient. This study describes an innovative workflow to achieve this.
Methods: A software application for the Microsoft HoloLens was developed to visualize the anatomy as a hologram. Using abdominal nevi as natural landmarks, the anatomy hologram is registered to the patient. To ensure that the anatomy hologram remains correctly positioned when the patient or the user moves, real-time patient tracking is obtained with a quick-response marker attached to the patient.
Results: Holographic augmented reality can be used to visualize the epigastric arteries and their perforators in preparation for a deep inferior epigastric perforator flap harvest.
Conclusions: Potentially, this workflow can be used to visualize the vessels intraoperatively. Furthermore, this workflow is intuitive to use and could be applied to other flaps or other types of surgery.
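The core technical step in the workflow above is registering the hologram to paired landmarks on the patient. A minimal sketch of landmark-based rigid registration (the Kabsch algorithm on synthetic points, not the authors' HoloLens code) looks like this:

```python
import numpy as np

def kabsch(src, dst):
    """Return rotation R and translation t that map src points onto dst."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic landmarks: dst is src rotated 90 degrees about z and shifted.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = src @ Rz.T + np.array([5., 2., 1.])

R, t = kabsch(src, dst)
assert np.allclose(src @ R.T + t, dst)
```

With at least three non-collinear landmark pairs (here, the abdominal nevi identified in both the CT-derived model and on the patient), this yields the rigid transform that places the hologram on the patient.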
3D-COSI ~ 3D Collection of Surgical Instruments
COSI - 3D STL Collection of Surgical Instruments
Due to long file names, the archives were created with 7-Zip (https://www.7-zip.org/), which is free and compatible with WinZip. If you encounter an error using WinZip, it is likely due to the long file names; please use 7-Zip instead.
Inside the repository, you will find an information overview ("Overview.docx"), a showcase video ("Example video 3D instruments.mp4"), STL files of 103 surgical instruments ("Surgical Instruments.7z"), and examples of variations of the surgical instruments generated with a Blender add-on or a Python script built on the Trimesh library ("Blender Part x of 8 ... .7z" and "Trimesh part x of 9 ... .7z"). You will also find the Blender add-on that was used ("MultiMesh.zip"), measurements of the virtual instruments and settings for the add-on (.xlsx), and the script used to perform these measurements, all inside a single folder ("Scripts, measurements and used Blender settings.7z").
The collection consists of 103 medical instruments from clinical routine, 3D-scanned with structured light scanners; it includes, for example, retractors, forceps, and clamps. The collection is augmented with similar models generated in 3D software, resulting in an enlarged dataset for analysis. The collection can be used for general instrument detection and tracking in operating-room settings, for marker-less freeform instrument registration for tool tracking in augmented reality, and for medical simulation or training scenarios in virtual or mixed reality.
The data within this work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/).
Related article: Luijten, G., Gsaxner, C., Li, J. et al. 3D surgical instrument collection for computer vision and extended reality. Sci Data 10, 796 (2023). https://doi.org/10.1038/s41597-023-02684-0
This work was supported by the REACT-EU project KITE (grant number: EFRE-0801977, Plattform für KI-Translation Essen, https://kite.ikim.nrw/), FWF enFaced 2.0 (grant number: KLI-1044, https://enfaced2.ikim.nrw/), and the Clinician Scientist Program of the Faculty of Medicine RWTH Aachen University. Instruments were provided by the Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen. The STL files of the surgical instruments are also available within MedShapeNet (https://medshapenet.ikim.nrw)
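The repository's measurement script uses Trimesh on the real STL files; as a hedged illustration only, the same kind of quantity (axis-aligned bounding-box dimensions of an instrument mesh) can be computed from a vertex array, shown here on synthetic vertices in millimeters:

```python
import numpy as np

# Synthetic vertex array standing in for vertices loaded from an STL file
# (e.g. via trimesh). Coordinates in millimeters.
vertices = np.array([[0.0,   0.0, 0.0],
                     [120.0, 4.0, 2.0],   # a long, thin instrument
                     [60.0,  8.0, 1.0]])

# Axis-aligned bounding-box extents: per-axis max minus per-axis min.
extents = vertices.max(axis=0) - vertices.min(axis=0)
length = extents.max()  # longest dimension, here 120 mm
```

For real instruments, the same reduction over all mesh vertices yields the length, width, and height recorded in the measurement spreadsheet.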
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of brain tumor classification, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
Comment: 16 pages
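The abstract lists voxel grids, meshes, point clouds, and implicit surfaces as the shape representations in play. As a small self-contained sketch (synthetic points, not MedShapeNet API code; real shapes would be loaded from the STL files, e.g. with a mesh library such as trimesh), converting a surface point cloud into a binary voxel occupancy grid looks like this:

```python
import numpy as np

# Synthetic surface point cloud with coordinates normalized to [0, 1).
points = np.array([[0.1, 0.2, 0.9],
                   [0.5, 0.5, 0.5],
                   [0.9, 0.1, 0.2]])

resolution = 4
# Map each coordinate to a voxel index in [0, resolution).
idx = np.clip((points * resolution).astype(int), 0, resolution - 1)

# Mark the voxels that contain at least one point as occupied.
grid = np.zeros((resolution,) * 3, dtype=bool)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
```

Occupancy grids like this are a common input format for the discriminative and reconstructive benchmarks the dataset targets, since they can be fed directly to 3D convolutional networks.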