3,010 research outputs found

    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Get PDF
    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper provides a comprehensive review of recent trends in and possibilities of CAOS systems. There are three types of surgical planning systems: those based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound), those that use 2D or 3D fluoroscopic images, and those that use kinetic information about the joints together with morphological information about the target bones. The review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools they employ. We also outline the possibilities of ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems. (Web of Science; 1923; art. no. 519)

    IMAGE ANALYSIS FOR SPINE SURGERY: DATA-DRIVEN DETECTION OF SPINE INSTRUMENTATION & AUTOMATIC ANALYSIS OF GLOBAL SPINAL ALIGNMENT

    Get PDF
    Spine surgery is a therapeutic modality for treatment of spine disorders, including spinal deformity, degeneration, and trauma. Such procedures benefit from accurate localization of surgical targets, precise delivery of instrumentation, and reliable validation of surgical objectives – for example, confirming that the surgical implants are delivered as planned and desired changes to the global spinal alignment (GSA) are achieved. Recent advances in surgical navigation have helped to improve the accuracy and precision of spine surgery, including intraoperative imaging integrated with real-time tracking and surgical robotics. This thesis aims to develop two methods for improved image-guided surgery using image analytic techniques. The first provides a means for automatic detection of pedicle screws in intraoperative radiographs – for example, to streamline intraoperative assessment of implant placement. The algorithm achieves a precision and recall of 0.89 and 0.91, respectively, with localization accuracy within ~10 mm. The second develops two algorithms for automatic assessment of GSA in computed tomography (CT) or cone-beam CT (CBCT) images, providing a means to quantify changes in spinal curvature and reduce the variability in GSA measurement associated with manual methods. The algorithms demonstrate GSA estimates with 93.8% of measurements within a 95% confidence interval of manually defined truth. Such methods support the goals of safe, effective spine surgery and provide a means for more quantitative intraoperative quality assurance. In turn, the ability to quantitatively assess instrument placement and changes in GSA could represent important elements of retrospective analysis of large image datasets, improved clinical decision support, and improved patient outcomes
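    The reported precision and recall follow the standard detection-evaluation definitions. As a minimal sketch (the true/false positive and false negative counts below are hypothetical, not the thesis's actual tallies):

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from matched detection counts."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of true screws that are found
    return precision, recall

# Illustrative counts of the same order as the reported 0.89 / 0.91.
p, r = precision_recall(tp=89, fp=11, fn=9)
```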

    Automatic Calculation of Cervical Spine Parameters Using Deep Learning: Development and Validation on an External Dataset

    Full text link
    STUDY DESIGN Retrospective data analysis. OBJECTIVES This study aims to develop a deep learning model for the automatic calculation of several important spine parameters from lateral cervical radiographs. METHODS We collected two datasets from two different institutions. The first dataset of 1498 images was used to train and optimize the model and to find the best hyperparameters, while the second dataset of 79 images was used as an external validation set to evaluate the robustness and generalizability of our model. The performance of the model was assessed by calculating the median absolute errors between the model prediction and the ground truth for the following parameters: T1 slope, C7 slope, C2-C7 angle, C2-C6 angle, Sagittal Vertical Axis (SVA), C0-C2, Redlund-Johnell distance (RJD), cranial tilting (CT), and the craniocervical angle (CCA). RESULTS Regarding the angles, we found median errors of 1.66° (SD 2.46°), 1.56° (SD 1.95°), 2.46° (SD 2.55°), 1.85° (SD 3.93°), 1.25° (SD 1.83°), 0.29° (SD 0.31°), and 0.67° (SD 0.77°) for T1 slope, C7 slope, C2-C7, C2-C6, C0-C2, CT, and CCA, respectively. As concerns the distances, we found median errors of 0.55 mm (SD 0.47 mm) and 0.47 mm (SD 0.62 mm) for SVA and RJD, respectively. CONCLUSIONS In this work, we developed a model able to accurately predict cervical spine parameters from lateral cervical radiographs. In particular, the performance on the external validation set demonstrates the robustness and high degree of generalizability of our model on images acquired in a different institution.
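    For context, the median absolute error used to score each parameter can be sketched as follows (the measurement values are invented for illustration, not taken from the study):

```python
import statistics

def median_absolute_error(predicted, ground_truth):
    """Median of the absolute differences between paired measurements."""
    errors = [abs(p - g) for p, g in zip(predicted, ground_truth)]
    return statistics.median(errors)

# Hypothetical T1-slope measurements (degrees) for five radiographs.
pred = [24.1, 30.5, 18.9, 27.0, 22.3]
truth = [25.0, 29.0, 20.0, 26.5, 23.9]
mae = median_absolute_error(pred, truth)
```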

    Cube-Cut: Vertebral Body Segmentation in MRI-Data through Cubic-Shaped Divergences

    Full text link
    In this article, we present a graph-based method using a cubic template for volumetric segmentation of vertebrae in magnetic resonance imaging (MRI) acquisitions. The user can define the degree of deviation from a regular cube via a smoothness value Δ. The Cube-Cut algorithm generates a directed graph with two terminal nodes (an s-t network), where the nodes of the graph correspond to a cubic-shaped subset of the image's voxels. The weights of the graph's terminal edges, which connect every node with a virtual source s or a virtual sink t, represent the affinity of a voxel to the vertebra (source) and to the background (sink). Furthermore, a set of infinitely weighted non-terminal edges implements the smoothness term. After graph construction, a minimal s-t cut is calculated in polynomial time, splitting the nodes into two disjoint sets. The segmentation result is then derived from the source set. A quantitative evaluation of a C++ implementation of the algorithm yielded an average Dice Similarity Coefficient (DSC) of 81.33% and a running time of less than a minute. Comment: 23 figures, 2 tables, 43 references, PLoS ONE 9(4): e9338
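    The s-t min-cut at the heart of this family of methods can be illustrated on a toy graph. The sketch below uses a plain Edmonds-Karp max-flow; the paper's actual cubic graph construction and edge weights are far more elaborate, and every node index and capacity here is invented for illustration:

```python
from collections import deque

def min_cut(n, s, t, cap):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side
    of a minimum s-t cut. `cap` maps (u, v) -> edge capacity."""
    res, adj = {}, {i: set() for i in range(n)}
    for (u, v), c in cap.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        adj[u].add(v); adj[v].add(u)
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        bottleneck = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck
    # Source side = nodes still reachable from s in the residual graph.
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in side and res[(u, v)] > 0:
                side.add(v); q.append(v)
    return side

# Toy 1-D "image": voxels 2..5; node 0 = source s, node 1 = sink t.
# Bright voxels get strong s-links (vertebra affinity), dark ones strong
# t-links (background affinity); neighbor edges encode smoothness.
S, T = 0, 1
caps = {(S, 2): 9, (S, 3): 8, (S, 4): 1, (S, 5): 1,
        (2, T): 1, (3, T): 1, (4, T): 8, (5, T): 9,
        (2, 3): 3, (3, 4): 3, (4, 5): 3,
        (3, 2): 3, (4, 3): 3, (5, 4): 3}
vertebra = min_cut(6, S, T, caps)  # voxels on the source (vertebra) side
```

The minimum cut severs the cheap edges between the bright and dark halves, so the source set ends up containing s plus the two bright voxels.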

    Spinal alignment shift between supine and prone CT imaging occurs frequently and regardless of the anatomic region, risk factors, or pathology

    Get PDF
    Computer-assisted spine surgery based on preoperative CT imaging may be hampered by sagittal alignment shifts due to an intraoperative switch from the supine to the prone position. In the present study, we systematically analyzed the occurrence and pattern of sagittal spinal alignment shift between corresponding preoperative (supine) and intraoperative (prone) CT imaging in patients who underwent navigated posterior instrumentation between 2014 and 2017. Sagittal alignment across the levels of instrumentation was determined according to the C2 fracture gap (C2-F) and C2 translation (C2-T) in odontoid type 2 fractures, and according to the modified Cobb angle (CA), plumbline (PL), and translation (T) in subaxial pathologies. One hundred twenty-one patients (C1/C2: n = 17; C3-S1: n = 104) with degenerative (39/121; 32%), oncologic (35/121; 29%), traumatic (34/121; 28%), or infectious (13/121; 11%) pathologies were identified. In the subaxial spine, significant shift occurred in 104/104 (100%) cases (CA: *p = .044; T: *p = .021), compared to only 10/17 (59%) cases that exhibited shift at the C1/C2 level (C2-F: **p = .002; C2-T: *p < .05). The shift was more pronounced in instrumentations spanning more than 5 segments ("ΔPL > 5 segments": 4.5 ± 1.8 mm; "ΔPL ≤ 5 segments": 2 ± 0.6 mm; *p = .013) or in revision surgery with pre-existing instrumentation ("ΔPL presence": 5 ± 2.6 mm; "ΔPL absence": 2.4 ± 0.7 mm; **p = .007). Interestingly, typical morphological instability risk factors did not influence the degree of shift. In conclusion, intraoperative spinal alignment shift due to a change in patient position should be considered a cause of inaccuracy during computer-assisted spine surgery and when correcting spinal alignment according to parameters planned in other patient positions.

    Deformable Multisurface Segmentation of the Spine for Orthopedic Surgery Planning and Simulation

    Get PDF
    Purpose: We describe a shape-aware multisurface simplex deformable model for the segmentation of healthy as well as pathological lumbar spine in medical image data. Approach: This model provides an accurate and robust segmentation scheme for the identification of intervertebral disc pathologies, enabling minimally supervised planning and patient-specific simulation of spine surgery; it combines multisurface and shape-statistics-based variants of the deformable simplex model. Statistical shape variation within the dataset is captured by principal component analysis and incorporated during the segmentation process to refine results. Where shape statistics hinder detection of the pathological region, user assistance is allowed to disable the prior shape influence during deformation. Results: Validation against user-assisted expert segmentation shows excellent boundary agreement and prevention of spatial overlap between neighboring surfaces. We also characterize the statistical shape model in terms of compactness, generalizability, and specificity as a function of the number of modes used to represent the family of shapes. Finally, we demonstrate a proof-of-concept deformation application based on the open-source Simulation Open Framework Architecture (SOFA) surgery-simulation toolkit. Conclusions: In summary, we present a deformable multisurface model that embeds a shape-statistics force, with applications to surgery planning and simulation.
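    The PCA-based statistical shape model underlying this approach can be sketched on toy landmark data (the four "shapes" below are invented, and this is only the linear-algebra core, not the paper's full deformable-model pipeline):

```python
import numpy as np

# Toy training "shapes": each row is a flattened set of 2-D landmarks.
shapes = np.array([
    [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0],
    [0.1, 0.0, 1.1, 0.1, 1.0, 1.2, 0.0, 1.1],
    [-0.1, 0.1, 0.9, 0.0, 1.1, 0.9, 0.1, 0.9],
    [0.0, -0.1, 1.0, 0.1, 0.9, 1.1, -0.1, 1.0],
])

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centered data; rows of vt are the shape modes.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenvalues = s ** 2 / (len(shapes) - 1)  # variance captured per mode

def synthesize(b):
    """Generate a shape from mode weights b (one weight per mode kept)."""
    return mean_shape + b @ vt[: len(b)]

# A plausible shape half a standard deviation along the first mode.
new_shape = synthesize(np.array([0.5 * np.sqrt(eigenvalues[0])]))
```

Zero weights reproduce the mean shape exactly, which is a handy sanity check on any shape-model implementation.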

    Anatomy-Aware Inference of the 3D Standing Spine Posture from 2D Radiographs

    Full text link
    An important factor for the development of spinal degeneration, pain and the outcome of spinal surgery is known to be the balance of the spine. It must be analyzed in an upright, standing position to ensure physiological loading conditions and visualize load-dependent deformations. Despite the complex 3D shape of the spine, this analysis is currently performed using 2D radiographs, as all frequently used 3D imaging techniques require the patient to be scanned in a prone position. To overcome this limitation, we propose a deep neural network to reconstruct the 3D spinal pose in an upright standing position, loaded naturally. Specifically, we propose a novel neural network architecture, which takes orthogonal 2D radiographs and infers the spine’s 3D posture using vertebral shape priors. In this work, we define vertebral shape priors using an atlas and a spine shape prior, incorporating both into our proposed network architecture. We validate our architecture on digitally reconstructed radiographs, achieving a 3D reconstruction Dice of 0.95, indicating an almost perfect 2D-to-3D domain translation. Validating the reconstruction accuracy of a 3D standing spine on real data is infeasible due to the lack of a valid ground truth. Hence, we design a novel experiment for this purpose, using an orientation invariant distance metric, to evaluate our model’s ability to synthesize full-3D, upright, and patient-specific spine models. We compare the synthesized spine shapes from clinical upright standing radiographs to the same patient’s 3D spinal posture in the prone position from CT
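    The Dice score used to report the 2D-to-3D reconstruction quality is a standard overlap measure; a minimal sketch on toy binary masks (the masks are invented):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two overlapping toy vertebra masks on a 6x6 grid.
m1 = np.zeros((6, 6), dtype=bool); m1[1:5, 1:5] = True  # 16 voxels
m2 = np.zeros((6, 6), dtype=bool); m2[2:6, 1:5] = True  # 16 voxels
d = dice(m1, m2)  # 2 * 12 / (16 + 16)
```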

    Shape/image registration for medical imaging : novel algorithms and applications.

    Get PDF
    This dissertation looks at two categories of registration approaches, shape registration and image registration, and considers their applications in the medical imaging field. Shape registration is an important problem in computer vision, computer graphics, and medical imaging; it has been handled in different ways in applications such as shape-based segmentation, shape recognition, and tracking. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Many image-processing applications, such as remote sensing, fusion of medical images, and computer-aided surgery, require image registration. This study deals with two applications in the field of medical image analysis. The first is shape-based segmentation of the human vertebral bodies (VBs). A vertebra consists of the VB, the spinous process, and other anatomical regions; the spinous process, pedicles, and ribs should not be included in bone mineral density (BMD) measurements. VB segmentation is not an easy task, since the ribs have similar gray-level information. This dissertation investigates two segmentation approaches, both of which follow variational shape-based segmentation frameworks. The first handles the two-dimensional (2D) case: it starts by obtaining an initial segmentation using intensity/spatial interaction models; a shape model is then registered to the image domain; finally, the optimal segmentation is obtained by optimizing an energy functional that integrates the shape model with the intensity information. The second is a 3D simultaneous segmentation and registration approach in which the intensity information is handled by embedding a Willmore flow into the level-set segmentation framework. The shape variations are then estimated using a new distance probabilistic model. Experimental results show that the segmentation accuracy of the framework is much higher than that of the alternatives, and applications to BMD measurements of the vertebral body illustrate the accuracy of the proposed segmentation approach. The second application is in the field of computer-aided surgery, specifically ankle fusion surgery. The long-term goal of this work is to determine the proper size and orientation of the screws used to fuse the bones together, and to localize the best bone region in which to fix these screws. To achieve these goals, 2D-3D registration is introduced. Its role is to enhance the quality of the surgical procedure in terms of time and accuracy, and it would greatly reduce the need for repeated surgeries, thus saving the patient time, expense, and trauma.
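    The core loop of intensity-based 2D-3D registration is to simulate a radiograph from the 3D volume at a candidate pose and score its similarity against the real 2D image. A crude sketch under strong simplifying assumptions (a parallel-ray projection as the simulated radiograph, normalized cross-correlation as the similarity, and an invented toy volume; real pipelines use proper perspective projection and an optimizer over the pose):

```python
import numpy as np

def drr(volume):
    """Crude digitally reconstructed radiograph: sum along one axis."""
    return volume.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

# Toy volume and a target radiograph taken at the "true" orientation.
vol = np.zeros((4, 5, 5))
vol[1:3, 1:4, 2] = 1.0
target = drr(vol)

# Candidate poses: identity vs. a 90-degree rotation about the x-axis.
score_true = ncc(drr(vol), target)
score_rot = ncc(drr(np.rot90(vol, axes=(1, 2))), target)
```

An optimizer would search the pose space for the candidate maximizing this score; here the correct pose scores strictly higher than the rotated one.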

    Advanced Algorithms for 3D Medical Image Data Fusion in Specific Medical Problems

    Get PDF
    Image fusion is one of today's most common and still challenging tasks in medical imaging, and it plays a crucial role in all areas of medical care, such as diagnosis, treatment, and surgery. Three projects crucially dependent on image fusion are introduced in this thesis. The first deals with 3D CT subtraction angiography of the lower limbs; it combines pre-contrast and contrast-enhanced data to extract the blood-vessel tree. The second fuses DTI and T1-weighted MRI brain data, with the aim of combining structural and functional information of the brain to provide improved knowledge of intrinsic brain connectivity. The third project deals with a time series of CT spine data in which metastases occur; the progression of metastases within the vertebrae is studied based on fusion of the successive elements of the image series, and the thesis introduces a new methodology for classifying metastatic tissue. All the projects mentioned in this thesis were carried out by the medical image analysis group led by Prof. Jiří Jan. This dissertation covers primarily the registration part of the first project and the classification part of the third project; the second project is described completely. The remaining parts of the first and third projects, including the specific preprocessing of the data, are described in detail in the dissertation thesis of my colleague Roman Peter, M.Sc.
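    The subtraction step of the first project (which only works once the registration the thesis focuses on has aligned the two acquisitions) reduces to a voxelwise difference. A toy 1-D sketch with invented intensity values:

```python
import numpy as np

# Toy 1-D "profiles" through aligned pre-contrast and contrast scans.
pre = np.array([100, 102, 98, 101, 99], dtype=float)   # bone, soft tissue
post = pre + np.array([0, 0, 55, 60, 0], dtype=float)  # contrast in vessels
# Subtracting the registered pre-contrast data leaves only the vessels;
# clipping discards small negative residues from noise or misalignment.
vessels = np.clip(post - pre, 0, None)
```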