MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are widely used. This is evident from the numerous shape-related publications at premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present MedShapeNet, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality and 3D printing. As examples, we present use cases in brain tumor classification, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
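The abstract mentions access via a web interface and a Python API. The sketch below is purely illustrative and does not use MedShapeNet's actual endpoints or API: the download URL, file name, and voxel resolution are placeholder assumptions. It shows how a single downloaded anatomical mesh might be loaded and voxelized as input for a discriminative benchmark.

# Illustrative sketch only: the URL below is a placeholder, not a real
# MedShapeNet endpoint; consult https://medshapenet.ikim.nrw/ and the
# project's Python API for the actual access paths.
import requests
import trimesh
import numpy as np

PLACEHOLDER_URL = "https://example.org/medshapenet/sample_skull.stl"  # hypothetical

# Download one shape file (MedShapeNet distributes anatomical shapes as 3D models).
response = requests.get(PLACEHOLDER_URL, timeout=60)
response.raise_for_status()
with open("sample_skull.stl", "wb") as f:
    f.write(response.content)

# Load the mesh and convert it to an occupancy voxel grid, a common input
# representation for discriminative (classification) benchmarks.
mesh = trimesh.load("sample_skull.stl")
voxel_grid = mesh.voxelized(pitch=mesh.extents.max() / 64)  # ~64 voxels along the longest axis
occupancy = voxel_grid.matrix.astype(np.float32)

print("occupancy grid shape:", occupancy.shape)

Whether shapes are fetched this way or through the official interfaces, the resulting voxel grids, meshes, or point clouds can then feed standard vision pipelines.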
Highly Cancellous Titanium Alloy (TiAl6V4) Surfaces on Three-Dimensionally Printed, Custom-Made Intercalary Tibia Prostheses: Promising Short- to Intermediate-Term Results
Custom-made, three-dimensionally (3D) printed bone prostheses are gaining importance in the reconstruction of bone defects after musculoskeletal tumor resections. They may allow preservation of the little remaining bone stock and ensure joint or limb salvage. However, we believe that by constructing anatomy-imitating implants with highly cancellous titanium alloy (TiAl6V4) surfaces using 3D printing technology, further benefits such as functional enhancement and a reduction of complications may be achieved. We present a case series of four patients treated between 2016 and 2020 with custom-made, 3D-printed intercalary monobloc tibia prostheses. The mean patient age at operation was 30 years. Tumor resections were performed for Ewing sarcoma (n = 2), high-grade undifferentiated pleomorphic bone sarcoma (n = 1) and adamantinoma (n = 1). Mean resection length was 17.5 cm and mean operation time 147 min. All patients achieved full weight-bearing and limb salvage at a mean follow-up of 21.25 months. One patient developed a non-union at the proximal bone-implant interface; an alteration of the implant design prevented non-union in later patients. Mean Musculoskeletal Tumor Society (MSTS) and Toronto Extremity Salvage Score (TESS) scores were 23.5 and 88, respectively. 3D-printed, custom-made intercalary tibia prostheses achieved joint and limb salvage in this case series despite the high published complication rates for biological and endoprosthetic reconstructions of the diaphyseal and distal tibia. Ingrowth of soft tissue into the highly cancellous implant surface structure reduces dead space, enhances function, and appears promising in reducing complication rates.
Variation in response rates to isolated limb perfusion in different soft-tissue tumour subtypes: an international multi-centre study
Objective: The aim of this study was to investigate the response rates of different extremity soft-tissue sarcoma (eSTS) subtypes after isolated limb perfusion (ILP), based on an international multi-centre study. Materials and methods: The retrospective cohort comprised eSTS patients from 17 specialised ILP centres who underwent melphalan-based ILP, with or without recombinant human tumour necrosis factor (rhTNFα) (TM-ILP and M-ILP, respectively). Response was measured on imaging (magnetic resonance imaging) and/or by clinical response, for which M-ILPs were excluded. Results: A total of 1109 eSTS patients were included. The three most common histological subtypes were undifferentiated pleomorphic sarcoma (17%, n = 184), synovial sarcoma (16%, n = 175) and myxofibrosarcoma (8%, n = 87). rhTNFα was used in 93% of procedures (TM-ILP) and resulted in a significantly better overall response rate (ORR, p = 0.031) and more complete responses (CR, p < 0.001) compared with M-ILP, without significant differences among histological subgroups. The ORR of TM-ILP was 68%, including 17% CR. Also, 80% showed progressive disease. Significantly higher response rates were shown for Kaposi sarcoma (KS), with 42% CR and 96% ORR (both p < 0.001), and significantly higher CR rates for angiosarcoma (AS, 45%, p < 0.001) and clear cell sarcoma (CCS, 31%, p = 0.049). ILP was followed by resection within 6 months in 80% of the patients. The overall limb salvage rate was 88%, without significant differences among histological subgroups, but it was significantly higher for ILP responders than for non-responders (93% versus 76%, p < 0.001). Conclusion: ILP resulted in high response and limb salvage rates among all eSTS subtypes, albeit with significant differences between subtypes; the most promising results were seen for KS, AS and CCS.
Technical considerations for isolated limb perfusion: A consensus paper
Background: Isolated limb perfusion (ILP) is a well-established surgical procedure for administering high-dose chemotherapy to a limb for the treatment of advanced extremity malignancy. Although the technique was first described over 60 years ago, ILP is utilised in relatively few specialist centres, co-located with tertiary or quaternary cancer centres. The combination of high-dose cytotoxic chemotherapy and the cytokine tumour necrosis factor alpha (TNFα) mandates leakage monitoring to prevent potentially serious systemic toxicity. Since the procedure is performed at relatively few specialist centres, an ILP working group was formed with the aim of producing technical consensus guidelines to streamline practice and to provide guidance for new centres commencing the technique. Methods: Between October 2021 and October 2023, a series of face-to-face, online, and hybrid meetings was held in which a modified Delphi process was used to develop a unified consensus document. After each meeting the document was modified, recirculated, and rediscussed at subsequent meetings until greater than 90% consensus was achieved for all recommendations. Results: The completed consensus document comprised 23 topics on which greater than 90% consensus was achieved, with 83% of recommendations reaching 100% consensus across all members of the working group. The consensus recommendations covered all areas of the procedure, including pre-operative assessment, drug dosing and administration, perfusion parameters, hyperthermia, leakage monitoring, theatre logistics, and practical surgical strategies, as well as post-operative care, response evaluation, and staff training. Conclusion: We present the first joint expert-based consensus statement on the technical aspects of ILP, which can serve as a reference point for both existing and new centres providing ILP.