10 research outputs found

    From bones to bytes: do manipulable 3D models have added value in osteology education compared to static images?

    No full text
    Background: Over the past few years, anatomy education has been revolutionized by digital media, resulting in innovative computer-based 3D models that supplement or even replace traditional learning materials. However, the added value of these models for learning performance remains unclear, and multiple mechanisms may contribute to the inconclusive findings. This study focuses on the impact of active manipulation on learning performance and on the influence that posttest design features may have on the outcome measurement. Methods: Participants were randomly assigned to one of two research conditions: studying with a computer-based manipulable pelvic bone model versus online static images of the same model. Pretests measured students' baseline anatomy knowledge and spatial ability. Three knowledge posttests were administered: a test based on a physical pelvic bone model, and two computer-based tests based on static images and a manipulable model. Mental effort was measured with the Paas mental effort rating scale. Results: In the static images-based posttest, participants who studied in the static images condition attained significantly higher knowledge scores (p = 0.043). No other significant knowledge-related differences were observed. In the manipulable model-based posttest, spatial ability rather than the research condition appeared to influence the outcome scores (r = 0.18, p = 0.049). Mental effort scores showed no difference between the two conditions. Conclusion: The results are counter-intuitive, especially because no significant differences were found in the physical model-based posttest among students who studied with the manipulable model. One explanation builds on differences in how much active manipulation an anatomical model requires to process its spatial information: the manipulable pelvic bone model, and by extension osteology models in general, may be insufficiently complex to provide added value over static images. Moreover, the posttest modality should be chosen with care, since it may measure spatial ability rather than anatomy knowledge.
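
    The abstract reports group differences and a correlation but does not name the statistical tests used. Purely as an illustration, the sketch below runs the same style of analysis on hypothetical data, assuming an independent-samples t-test for the group comparison and a Pearson correlation for the spatial-ability effect; neither the test choices nor any of the numbers come from the paper.

    ```python
    # Illustration of the reported analysis style on hypothetical data.
    # The t-test and Pearson correlation are assumptions; the paper does
    # not state which tests were used.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical posttest scores for the two research conditions.
    static_group = rng.normal(14.0, 3.0, 40)       # static-images condition
    manipulable_group = rng.normal(12.5, 3.0, 40)  # manipulable-model condition

    # Between-group comparison (cf. the static images-based posttest result).
    t, p = stats.ttest_ind(static_group, manipulable_group)
    print(f"group difference: t = {t:.2f}, p = {p:.3f}")

    # Association between spatial ability and manipulable-model posttest scores.
    spatial_ability = rng.normal(0.0, 1.0, 80)
    posttest = 0.2 * spatial_ability + rng.normal(0.0, 1.0, 80)
    r, p = stats.pearsonr(spatial_ability, posttest)
    print(f"spatial ability vs. posttest: r = {r:.2f}, p = {p:.3f}")
    ```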

    Digital body preservation : technique and applications

    No full text
    High-fidelity anatomical models can be produced with three-dimensional (3D) scanning techniques and thereby digitally preserved, archived, and subsequently rendered through various media. Here, a novel methodology, digital body preservation, is presented for combining and matching scan geometry with radiographic imaging. The technique encompasses joining layers of 3D surface scans in an anatomically correct spatial relationship. To do so, a computed tomography (CT) volume is used as a template to join and merge different surface scan geometries into a single environment by means of non-rigid registration. In addition, the use and applicability of the generated 3D models in digital learning modalities is presented. Finally, as computational expense is usually the main bottleneck in extended 3D applications, the influence of mesh simplification combined with texture mapping on the quality of the 3D models was investigated. The physical fidelity of the simplified meshes was evaluated in relation to their resolution and with respect to key anatomical features. Large- and medium-scale features were well preserved despite extensive 3D mesh simplification; subtle fine-scale features, particularly in curved areas, were the major limitation to extensive mesh size reduction. Depending on the local topography, workable mesh sizes ranging from 10% down to 3% of the original size could be obtained, making the models usable in various learning applications and environments.
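
    The abstract does not name the software used for mesh simplification. As a minimal sketch of the step it describes, the following uses the open-source Open3D library (an assumption, not the authors' tooling) to decimate a scanned mesh to roughly 5% of its triangle count, within the 10%-3% range the study found workable; texture mapping, which the paper combines with simplification, is handled separately and not shown. The input file name is hypothetical.

    ```python
    # Mesh-simplification sketch with Open3D (assumed tooling, not the
    # authors'). Quadric decimation reduces the triangle count while
    # preserving large- and medium-scale geometry.
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("pelvis_scan.ply")  # hypothetical file
    mesh.compute_vertex_normals()

    target = int(len(mesh.triangles) * 0.05)  # keep ~5% of the triangles
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)

    print(f"{len(mesh.triangles)} -> {len(simplified.triangles)} triangles")
    o3d.io.write_triangle_mesh("pelvis_scan_simplified.ply", simplified)
    ```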

    Digital taxidermy

    No full text
    Detailed anatomical models can be produced with three-dimensional (3D) scanning techniques and thereby digitally preserved, archived, and subsequently rendered through various media. A novel methodology, digital taxidermy, is presented for combining and matching scan geometry with radiographic imaging. The technique encompasses joining layers of 3D surface scans in an anatomically correct spatial relationship. To do so, a computed tomography (CT) volume is used as a template to join and merge different surface scan geometries into a single environment by means of non-rigid registration. This results in a digital model that can be used in multiple digital learning environments. Finally, as computational expense is usually the main bottleneck in extended 3D applications, the influence of mesh simplification combined with texture mapping on the quality of the 3D models was investigated. The fidelity of the simplified meshes was evaluated in relation to their resolution and with respect to key anatomical features. Large- and medium-scale features were well preserved despite extensive 3D mesh simplification; subtle fine-scale features, particularly in curved areas, were the major limitation to extensive mesh size reduction. Depending on the local topography, workable mesh sizes ranging from 10% down to 3% of the original size could be obtained, making the models usable in various learning applications and environments.
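
    The registration step joins surface scans to a CT template. Open3D does not provide non-rigid registration out of the box, so the sketch below is a simplified stand-in: a rigid ICP alignment of a surface scan to a CT-derived surface point cloud, the kind of initialization a non-rigid method would then refine. File names, units, and thresholds are assumptions.

    ```python
    # Rigid ICP alignment as a simplified stand-in for the papers'
    # non-rigid registration (initialization only; file names hypothetical).
    import numpy as np
    import open3d as o3d

    scan = o3d.io.read_point_cloud("skin_scan.ply")            # 3D surface scan
    ct_surface = o3d.io.read_point_cloud("ct_isosurface.ply")  # surface from CT

    # Point-to-point ICP with a 5 mm correspondence threshold (assumed: mm units).
    result = o3d.pipelines.registration.registration_icp(
        scan, ct_surface, 5.0, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    scan.transform(result.transformation)
    print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
    ```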

    The Open Anatomy Explorer, a new platform for anatomical education

    No full text
    The Open Anatomy Explorer is a web-based anatomical platform built on 3D scans of real human specimens. It was developed as an educational tool to help students improve their anatomical knowledge. The application runs across platforms and is accessible to any student with an internet connection. It can import 3D models and visualize them interactively, and the framework provides painting tools with which teachers can label the models in a live 3D view. In addition, it offers a flash-card-like quiz system that allows students and lecturers to set up quizzes about the annotated anatomical specimens, with question types such as locating or identifying features. Ideas for future improvements include the integration of additional information websites, volume rendering of CT scans, examination tools, compatibility with AR devices, and user-interface optimisation.
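
    The abstract does not describe the platform's internals; as a purely hypothetical illustration of how the flash-card quiz could be modeled, here is a small Python data structure covering the two question types it mentions, locating and identifying features. All names are invented.

    ```python
    # Hypothetical data model for a flash-card quiz over annotated specimens
    # (illustrative only; not the platform's actual schema).
    from dataclasses import dataclass
    from enum import Enum

    class QuestionType(Enum):
        LOCATE = "locate"      # click the named feature on the 3D model
        IDENTIFY = "identify"  # name the highlighted feature

    @dataclass
    class Annotation:
        label: str             # e.g. "acetabulum"
        region_id: int         # painted region on the mesh surface

    @dataclass
    class QuizQuestion:
        specimen: str          # model identifier, e.g. "pelvis_01"
        annotation: Annotation
        kind: QuestionType

        def prompt(self) -> str:
            if self.kind is QuestionType.LOCATE:
                return f"On {self.specimen}, locate the {self.annotation.label}."
            return f"Identify the highlighted feature on {self.specimen}."

    q = QuizQuestion("pelvis_01", Annotation("acetabulum", 7), QuestionType.LOCATE)
    print(q.prompt())
    ```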

    From bones to bytes: turning the human body into a 3D model library

    No full text
    Purpose: Creation of a digital library of three-dimensional (3D) models using human cadaveric material. Methods: A blue-light surface scanner (Artec® Space Spider) was used to scan anatomical specimens such as human bones, brains, livers, hearts, and prosection specimens. In addition, a Thiel-embalmed human cadaver was dissected layer by layer and scanned. Scan processing was done with Artec® Studio 15 software, using eraser tools, an outlier-removal algorithm, and a global registration process. Afterwards, multiple scans were aligned and the model was finalized with smooth or sharp fusion and texture mapping. Results: 3D digital models were successfully created from human cadaveric material. Detailed structures were well preserved thanks to high-resolution scanning, and realistic colors and shapes were preserved through texture mapping. Soft-tissue scanning posed the biggest challenge, as movement and deformation of the tissue could lead to misalignment and noise during processing; fixation of the body parts before scanning was therefore crucial. Deformable tissues were frozen, and body parts were positioned so that they could be scanned from as many angles as possible without repositioning. Conclusion: This method can be used to create a 3D model library representing different categories of human cadaveric material, such as bones, internal organs, and a full-body multi-layer model. The models are based on true anatomy and colors and can be integrated into digital learning environments.
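
    Artec Studio 15 is proprietary and has no public scripting API, so the sketch below reproduces the described steps (outlier removal, global registration of partial scans, fusion into one mesh) with the open-source Open3D library. It is an analogue under assumed parameters and file names, not the authors' actual workflow, and texture mapping is not shown.

    ```python
    # Open-source analogue of the scan-processing pipeline described above
    # (Open3D instead of Artec Studio 15; file names and parameters assumed).
    import open3d as o3d

    def preprocess(path, voxel=1.0):
        """Load a partial scan, remove outliers, downsample, compute features."""
        pcd = o3d.io.read_point_cloud(path)
        pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
        pcd = pcd.voxel_down_sample(voxel)
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=12.5, max_nn=100))
        return pcd, fpfh

    scan_a, fpfh_a = preprocess("femur_front.ply")  # hypothetical partial scans
    scan_b, fpfh_b = preprocess("femur_back.ply")

    # Global registration: RANSAC over FPFH feature correspondences.
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        scan_a, scan_b, fpfh_a, fpfh_b, True, 3.0,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3.0)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    scan_a.transform(result.transformation)

    # Fuse the aligned scans into a single surface mesh.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        scan_a + scan_b, depth=9)
    o3d.io.write_triangle_mesh("femur_fused.ply", mesh)
    ```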

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    No full text
    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen in the numerous shape-related publications at premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we model the majority of shapes directly on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). The data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality and 3D printing. As examples, we present use cases in classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedbac
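
    The abstract mentions a Python API but gives no calls, so the sketch below sticks to generic tooling: loading a shape downloaded via the web interface with the trimesh library and normalizing it, a common preprocessing step for vision models. The file name is hypothetical.

    ```python
    # Working with a downloaded MedShapeNet shape using generic tooling
    # (trimesh); the dataset's own Python API is not shown here.
    import trimesh

    mesh = trimesh.load("medshapenet_liver_0001.stl")  # hypothetical file
    print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
    print("watertight:", mesh.is_watertight)

    # Center and scale to the unit cube before feeding the shape to a model.
    mesh.apply_translation(-mesh.bounds.mean(axis=0))
    mesh.apply_scale(1.0 / mesh.extents.max())
    ```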
