
    3D-COSI ~ 3D Collection of Surgical Instruments

COSI - 3D STL Collection of Surgical Instruments

Due to the long file names, we have chosen to use 7-Zip (https://www.7-zip.org/), which is free and compatible with WinZip. If you encounter an error with WinZip, it is likely caused by the long file names; please use 7-Zip instead.

Inside the repository you will find an information overview ("Overview.docx"), a showcase video ("Example video 3D instruments.mp4"), STL files of 103 surgical instruments ("Surgical Instruments.7z"), and example variations of the surgical instruments generated with a Blender add-on or a Python script built on the Trimesh library ("Blender Part x of 8 ... .7z" and "Trimesh part x of 9 ... .7z"). You will also find the Blender add-on that was used ("MultiMesh.zip"), measurements of the virtual instruments together with the settings used for the add-on (.xlsx), and the script used to perform these measurements, collected in a single folder ("Scripts, measurements and used Blender settings.7z").

The data collection consists of 103 medical instruments from clinical routine, 3D-scanned with structured light scanners. It includes, for example, retractors, forceps, and clamps. The collection is augmented by generating similar models with 3D software, resulting in an enlarged dataset for analysis. The collection can be used for general instrument detection and tracking in operating room settings, for freeform marker-less instrument registration for tool tracking in augmented reality, and for medical simulation or training scenarios in virtual or mixed reality.

The data within this work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/).

Related article:
Luijten, G., Gsaxner, C., Li, J. et al. 3D surgical instrument collection for computer vision and extended reality. Sci Data 10, 796 (2023). https://doi.org/10.1038/s41597-023-02684-0

This work was supported by the REACT-EU project KITE (grant number: EFRE-0801977, Plattform für KI-Translation Essen, https://kite.ikim.nrw/), FWF enFaced 2.0 (grant number: KLI-1044, https://enfaced2.ikim.nrw/), and the Clinician Scientist Program of the Faculty of Medicine, RWTH Aachen University. Instruments were provided by the Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen. The STL files of the surgical instruments are also available within MedShapeNet (https://medshapenet.ikim.nrw).
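The description above mentions that the instrument variations were generated with a Blender add-on or a Python script built on the Trimesh library; the actual scripts and settings are contained in the archives. As a minimal sketch of the underlying idea (not the published script), the following Python snippet loads one instrument STL with Trimesh and exports a slightly scaled and rotated variant. The file name and transformation parameters are illustrative assumptions.

```python
# Minimal sketch: producing a simple variation of a scanned instrument mesh
# with the Trimesh library. File name and parameters are placeholders, not
# the settings used for the published dataset.
import numpy as np
import trimesh

# Load one of the 103 instrument STL files (hypothetical file name).
mesh = trimesh.load("Surgical Instruments/retractor_01.stl")

# Apply a mild anisotropic scale to vary the instrument's proportions.
scale = np.array([1.05, 0.95, 1.00])              # x, y, z scale factors
mesh.apply_transform(np.diag(np.append(scale, 1.0)))

# Rotate by 15 degrees around the vertical axis through the centroid.
rotation = trimesh.transformations.rotation_matrix(
    angle=np.radians(15.0), direction=[0, 0, 1], point=mesh.centroid
)
mesh.apply_transform(rotation)

# Export the modified mesh as a new STL file.
mesh.export("retractor_01_variant.stl")
```

Repeating such transformations with randomized parameters over all 103 instruments is one way an augmented collection like the one described here could be produced.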

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the numerous shape-related publications at premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
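The abstract states that the shapes are accessible via a web interface and a Python API, but the API itself is not documented in this listing. As a rough, assumed sketch of how locally downloaded MedShapeNet STL files might be prepared for a discriminative benchmark, the snippet below samples fixed-size, normalized point clouds from the meshes with Trimesh; the folder layout, file names, and point count are placeholders rather than part of the dataset.

```python
# Minimal sketch (assumption: STL shapes have already been downloaded into
# class-named sub-folders; the official MedShapeNet Python API is not used
# here because its exact interface is not described in this listing).
from pathlib import Path
import numpy as np
import trimesh

def mesh_to_point_cloud(stl_path: Path, n_points: int = 2048) -> np.ndarray:
    """Sample a fixed-size, unit-sphere-normalized point cloud from a mesh surface."""
    mesh = trimesh.load(stl_path, force="mesh")
    points, _ = trimesh.sample.sample_surface(mesh, n_points)
    points = np.asarray(points, dtype=np.float32)
    points -= points.mean(axis=0)                    # center at the origin
    points /= np.linalg.norm(points, axis=1).max()   # scale into the unit sphere
    return points

# Hypothetical layout: one sub-folder per shape class, each holding STL files.
root = Path("medshapenet_download")
dataset = [
    (mesh_to_point_cloud(stl_file), class_dir.name)
    for class_dir in sorted(root.iterdir()) if class_dir.is_dir()
    for stl_file in sorted(class_dir.glob("*.stl"))
]
print(f"Loaded {len(dataset)} shapes from {root}")
```

Normalizing each sampled cloud to the unit sphere is a common preprocessing step so that shapes of very different physical sizes (e.g., skulls versus vessels) can be fed to the same model.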
