25 research outputs found

    Mesh-based vs. Image-based Statistical Appearance Model of the Human Femur: a Preliminary Comparison Study for the Creation of Finite Element Meshes

    Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across several populations. A fundamental aspect of the construction of statistical models is the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods, such as mesh morphing and image registration algorithms, have been proposed to solve this problem. The objective of this study is to compare a mesh-based and an image-based statistical appearance model approach for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of the volume, size, and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that of the meshes obtained with the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
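
    The abstract does not define how element volume, size, and shape were scored. As a minimal, illustrative sketch (not the authors' code), the snippet below computes two common per-element quality measures for a 4-node tetrahedron: its volume from the scalar triple product and a longest-to-shortest edge aspect ratio. The function name and example coordinates are assumptions.

```python
import numpy as np
from itertools import combinations

def tet_quality(nodes):
    """Quality metrics for a single 4-node tetrahedral element.

    nodes: (4, 3) array of vertex coordinates.
    Returns (volume, aspect_ratio), where aspect_ratio is the
    longest-to-shortest edge length ratio (1.0 for a regular tetrahedron).
    """
    a, b, c, d = nodes
    # Element volume from the scalar triple product.
    volume = abs(np.dot(b - a, np.cross(c - a, d - a))) / 6.0
    # Lengths of all six edges.
    edges = [np.linalg.norm(nodes[i] - nodes[j])
             for i, j in combinations(range(4), 2)]
    return volume, max(edges) / min(edges)

# Example: a unit right-angled tetrahedron.
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(tet_quality(tet))  # approx. (0.1667, 1.414)
```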

    ORMIR_XCT: A Python package for high resolution peripheral quantitative computed tomography image processing

    High resolution peripheral quantitative computed tomography (HR-pQCT) is an imaging technique capable of imaging trabecular bone in vivo. HR-pQCT has a wide range of applications, primarily focused on bone, to improve our understanding of musculoskeletal diseases, assess epidemiological associations, and evaluate the effects of pharmaceutical interventions. Processing HR-pQCT images has largely been supported by the scanner manufacturer's scripting language (Image Processing Language, IPL, Scanco Medical). However, by expanding image processing workflows outside of the scanner manufacturer's software environment, users gain the flexibility to apply more advanced mathematical techniques and to leverage modern software packages to improve image processing. The ORMIR_XCT Python package was developed to reimplement some existing IPL workflows and to provide an open and reproducible package allowing for the development of advanced HR-pQCT data processing workflows.
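
    The abstract describes moving HR-pQCT processing from IPL into open Python tooling but shows no code. The sketch below is a generic illustration of that idea, not the ORMIR_XCT API: it smooths and thresholds a scan with SimpleITK and reports a bone volume fraction. The file name, smoothing sigma, and threshold value are placeholder assumptions.

```python
import SimpleITK as sitk

# Illustrative HR-pQCT-style workflow: smooth, threshold, measure BV/TV.
image = sitk.ReadImage("radius_hrpqct.nii.gz")  # hypothetical input file

# Light Gaussian smoothing to suppress noise before segmentation
# (sigma in physical units, placeholder value).
smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=0.1)

# Fixed-threshold bone segmentation; the cutoff is a placeholder,
# not a calibrated density threshold.
bone_mask = sitk.BinaryThreshold(smoothed,
                                 lowerThreshold=400, upperThreshold=1e6,
                                 insideValue=1, outsideValue=0)

# Bone volume fraction (BV/TV) over the whole field of view:
# the mean of a 0/1 mask is the fraction of bone voxels.
stats = sitk.StatisticsImageFilter()
stats.Execute(bone_mask)
print(f"BV/TV = {stats.GetMean():.3f}")
```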

    Evidence Based Development of a Novel Lateral Fibula Plate (VariAx Fibula) Using a Real CT Bone Data Based Optimization Process During Device Development

    Development of novel implants in orthopaedic trauma surgery is based on limited datasets from cadaver trials or artificial bone models. A method has been developed whereby implants can be constructed in an evidence-based manner, founded on a large anatomical database consisting of more than 2,000 bone datasets extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on this anatomical database.

    pyKNEEr: An image analysis workflow for open and reproducible research on femoral knee cartilage.

    Transparent research in musculoskeletal imaging is fundamental to reliably investigate diseases such as knee osteoarthritis (OA), a chronic disease impairing femoral knee cartilage. To study cartilage degeneration, researchers have developed algorithms to segment femoral knee cartilage from magnetic resonance (MR) images and to measure cartilage morphology and relaxometry. The majority of these algorithms are not publicly available or require advanced programming skills to be compiled and run. However, to accelerate discoveries and findings, it is crucial to have open and reproducible workflows. We present pyKNEEr, a framework for open and reproducible research on femoral knee cartilage from MR images. pyKNEEr is written in Python, uses Jupyter notebooks as a user interface, and is available on GitHub with a GNU GPLv3 license. It is composed of three modules: 1) image preprocessing to standardize spatial and intensity characteristics; 2) femoral knee cartilage segmentation for intersubject, multimodal, and longitudinal acquisitions; and 3) analysis of cartilage morphology and relaxometry. Each module contains one or more Jupyter notebooks with narrative, code, visualizations, and dependencies to reproduce computational environments. pyKNEEr facilitates transparent image-based research of femoral knee cartilage because of its ease of installation and use, and its versatility for publication and sharing among researchers. Finally, due to its modular structure, pyKNEEr favors code extension and algorithm comparison. We tested our reproducible workflows with experiments that also constitute an example of transparent research with pyKNEEr, and we compared pyKNEEr's performance to existing algorithms in literature review visualizations. We provide links to executed notebooks and executable environments for immediate reproducibility of our findings.
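
    The third pyKNEEr module analyzes cartilage relaxometry; the notebooks themselves are on GitHub and are not reproduced here. As a generic, hedged illustration of such an analysis (not pyKNEEr's implementation), the sketch below fits a mono-exponential decay S(TE) = S0 * exp(-TE / T2) to a voxel's signal across echo times; the echo times and signal values are synthetic.

```python
import numpy as np

def fit_t2(echo_times_ms, signals):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2),
    linearized as log(S) = log(S0) - TE / T2."""
    te = np.asarray(echo_times_ms, dtype=float)
    s = np.asarray(signals, dtype=float)
    slope, intercept = np.polyfit(te, np.log(s), 1)
    return -1.0 / slope, np.exp(intercept)  # (T2 in ms, S0)

# Synthetic example: a cartilage-like voxel with T2 = 40 ms.
te = np.array([10, 20, 30, 40, 50], dtype=float)
signal = 1000 * np.exp(-te / 40.0)
t2, s0 = fit_t2(te, signal)
print(f"T2 = {t2:.1f} ms, S0 = {s0:.0f}")  # ~40.0 ms, ~1000
```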

    Image-based vs. mesh-based statistical appearance models of the human femur: Implications for finite element simulations.

    Statistical appearance models have recently been introduced in bone mechanics to investigate bone geometry and mechanical properties in population studies. The establishment of accurate anatomical correspondences is a critical aspect of the construction of reliable models. Depending on whether a bone is represented as an image or a mesh, correspondences are detected using image registration or mesh morphing. The objective of this study was to compare image-based and mesh-based statistical appearance models of the femur for finite element (FE) simulations. To this aim, (i) we compared correspondence detection methods on the bone surface and in the bone volume; (ii) we created an image-based and a mesh-based statistical appearance model from 130 images, which we validated using compactness, representation, and generalization, and we analyzed the FE results on 50 recreated bones vs. the original bones; and (iii) we created 1000 new instances and compared the quality of the FE meshes. Results showed that the image-based approach was more accurate in volume correspondence detection and in the quality of FE meshes, whereas the mesh-based approach was more accurate for surface correspondence detection and model compactness. Based on our results, we recommend the use of image-based statistical appearance models for FE simulations of the femur.
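
    Compactness, representation, and generalization are standard statistical-model validation metrics but are not defined in the abstract. As a hedged sketch of compactness alone, the snippet below computes the cumulative fraction of training-set variance captured by the first k principal components; the training matrix X (one flattened bone per row) is random stand-in data, not the study's dataset.

```python
import numpy as np

def compactness(X, k):
    """Cumulative variance explained by the first k modes of a
    PCA-based statistical model built from training matrix X
    (one sample per row, e.g. flattened node coordinates or voxel intensities)."""
    Xc = X - X.mean(axis=0)
    # Singular values relate to per-mode variance: var_i = s_i**2 / (n - 1).
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return var[:k].sum() / var.sum()

# Example with random data standing in for 130 registered femurs.
rng = np.random.default_rng(0)
X = rng.normal(size=(130, 3000))
print(compactness(X, 10))
```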

    Combined statistical model of bone shape and mechanical properties for bone and implant modeling

    The current design process for orthopedic implants relies on limited knowledge about the shape and mechanical properties of the bones of interest. Nowadays, implant design is based on reduced information derived from the literature (e.g. bone angles and lengths) and does not take into account differences among populations (ethnicity, height, gender, etc.). For these reasons, we propose a method to model bone shape variations for different populations. The proposed model is based on recent advances in image processing to create compact mathematical representations of bone anatomy. About 140 CT images were collected. These images were segmented and registered to find the anatomical correspondences among them. Based on this registration, a statistical shape model was built to compute the average bone and the variations of bone shape across the population, and to generate new bone instances. The finite element method was then used to calculate the mechanical performance of the generated instances. The developed model was used to compare bone stiffness between male and female populations. Results showed no significant difference between male and female bone stiffness; this was affected by the fact that all instances had the same dimensions and the same mechanical properties. The combination of a statistical shape model and finite element calculation has the potential to become a powerful tool to assess orthopedic procedures and implant design.
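
    As a minimal sketch of the pipeline the abstract outlines (average bone, modes of variation across the population, and generation of new instances), assuming the registered bones are already stacked as rows of corresponding coordinates; the data below are synthetic and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder training data: 140 registered bones, each flattened to
# 3 * n_nodes corresponding coordinates (small synthetic example).
shapes = rng.normal(size=(140, 3 * 500))

# 1) Average bone.
mean_shape = shapes.mean(axis=0)

# 2) Modes of variation via PCA (SVD of the centered data).
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
std_per_mode = s / np.sqrt(shapes.shape[0] - 1)  # mode standard deviations

# 3) Generate a new bone instance by sampling the first k mode weights
#    within +/- 3 standard deviations.
k = 5
weights = rng.uniform(-3, 3, size=k) * std_per_mode[:k]
new_instance = mean_shape + weights @ Vt[:k]
print(new_instance.shape)  # (1500,) flattened coordinates of the new bone
```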

    The Virtual Skeleton Database: An Open Access Repository for Biomedical Research and Collaboration

    Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in duplication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration within the medical imaging community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.