
    An integrated approach for reconstructing a surface model of the proximal femur from sparse input data and a multi-resolution point distribution model: an in vitro study

    Background: Accurate reconstruction of a patient-specific surface model of the proximal femur from preoperatively or intraoperatively available sparse data plays an important role in planning and supporting various computer-assisted surgical procedures. Methods: In this paper, we present an integrated approach using a multi-resolution point distribution model (MR-PDM) to reconstruct a patient-specific surface model of the proximal femur from sparse input data, which may consist of sparse point data or a limited number of calibrated X-ray images. Depending on the modality of the input data, our approach chooses different PDMs. When 3D sparse points are used, which may be obtained intraoperatively via pointer-based digitization or from calibrated ultrasound, a fine-level point distribution model (FL-PDM) is used in the reconstruction process. In contrast, when calibrated X-ray images are used, which may be obtained preoperatively or intraoperatively, a coarse-level point distribution model (CL-PDM) is used. Results: The present approach was verified on 31 femurs. Three different types of input data, i.e., sparse points, calibrated fluoroscopic images, and calibrated X-ray radiographs, were used in our experiments to reconstruct a surface model of the associated bone. Our experimental results demonstrate promising accuracy of the present approach. Conclusions: A multi-resolution point distribution model facilitates the reconstruction of a patient-specific surface model of the proximal femur from sparse input data.
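The core fitting step of such a reconstruction can be sketched as a regularized least-squares fit of the model's principal-mode coefficients to the digitized points. The example below is a generic PDM fit, not the paper's MR-PDM; all array shapes, names, and the regularization weight are illustrative assumptions.

```python
import numpy as np

def fit_pdm(mean_shape, modes, obs_idx, obs_pts, reg=1e-3):
    """Fit PDM coefficients b so that mean + modes @ b matches the
    observed sparse points at the given vertex indices.

    mean_shape: (N, 3) mean surface vertices
    modes:      (3N, K) principal modes of variation
    obs_idx:    indices of the observed (digitized) vertices
    obs_pts:    (M, 3) observed sparse points
    """
    # Rows of the linear system restricted to the observed vertices
    rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in obs_idx])
    A = modes[rows]                               # (3M, K)
    r = (obs_pts - mean_shape[obs_idx]).ravel()   # residual to the mean shape
    # Tikhonov-regularized least squares keeps the recovered shape plausible
    b = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ r)
    return mean_shape + (modes @ b).reshape(-1, 3)
```

The regularization term keeps the recovered shape close to the model's mean when only a few points are digitized.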

    Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications

    This article discusses a method that uses a small number (e.g., five) of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then used to fit the reconstructed landmark vertices and reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
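Pairing calibrated 2D views to obtain sparse 3D landmarks is, at its core, triangulation. A minimal linear (DLT) sketch, assuming ideal 3x4 projection matrices for the calibrated X-ray views; the matrices and points are illustrative, not from the article.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated
    views. P1, P2: 3x4 projection matrices; x1, x2: matched 2D image
    points of the same landmark."""
    # Each view contributes two linear constraints on the homogeneous 3D point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```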

    Accuracy and Reliability of Pelvimetry Measures Obtained by Manual or Automatic Labeling of Three-Dimensional Pelvic Models.

    (1) Background: The morphology of the pelvic cavity is important for decision-making in obstetrics. This study aimed to estimate the accuracy and reliability of pelvimetry measures obtained when radiologists manually label anatomical landmarks on three-dimensional (3D) pelvic models. A second objective was to design an automatic labeling method. (2) Methods: Three operators segmented 10 computed tomography scans each. Three radiologists then labeled 12 anatomical landmarks on the pelvic models, which allowed for the calculation of 15 pelvimetry measures. Additionally, an automatic labeling method was developed based on a reference pelvic model, including reference anatomical landmarks, that is matched to each individual pelvic model. (3) Results: Heterogeneity among landmarks in radiologists' labeling accuracy was observed, with some landmarks being rarely mislabeled by more than 4 mm and others being frequently mislabeled by 10 mm or more. The propagation to the pelvimetry measures was limited; only one of the 15 measures reported a median error above 5 mm or 5°, and all measures showed moderate to excellent inter-radiologist reliability. The automatic method outperformed manual labeling. (4) Conclusions: This study confirmed the suitability of pelvimetry measures based on manual labeling of 3D pelvic models. Automatic labeling offers promising perspectives to decrease the demand on radiologists, standardize the labeling, and describe the pelvic cavity in more detail.
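A common way to implement such automatic labeling is to rigidly align the reference model to each individual model and carry the reference landmarks across. The sketch below uses the Kabsch algorithm on corresponding surface points; this is a simplification of the paper's model matching, and all names and shapes are illustrative.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def transfer_landmarks(ref_pts, tgt_pts, ref_landmarks):
    """Map reference anatomical landmarks onto a target pelvic model
    after rigid alignment of corresponding surface points."""
    R, t = kabsch(ref_pts, tgt_pts)
    return ref_landmarks @ R.T + t
```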

    Accurate 3D reconstruction of bony surfaces using ultrasonic synthetic aperture techniques for robotic knee arthroplasty

    Robotically guided knee arthroplasty systems generally require an individualized, preoperative 3D model of the knee joint. This is typically measured using Computed Tomography (CT), which provides the accuracy required for preoperative surgical intervention planning. Ultrasound imaging presents an attractive alternative to CT, allowing for reductions in cost and the elimination of doses of ionizing radiation, whilst maintaining the accuracy of the 3D model reconstruction of the joint. Traditional phased array ultrasound imaging methods, however, are susceptible to poor resolution and signal-to-noise ratios (SNR). Synthetic aperture methods, which alleviate these weaknesses by offering superior focusing power, have been investigated extensively within ultrasonic non-destructive testing. Despite this, they have yet to be fully exploited in medical imaging. In this paper, the ability of a robotically deployed ultrasound imaging system based on synthetic aperture methods to accurately reconstruct bony surfaces is investigated. Employing the Total Focussing Method (TFM) and the Synthetic Aperture Focussing Technique (SAFT), two samples representative of the bones of the knee joint were imaged: a human-shaped composite distal femur and a bovine distal femur. Data were captured using a 5 MHz, 128-element 1D phased array, which was manipulated around the samples using a robotic positioning system. Three-dimensional surface reconstructions were then produced and compared with reference models measured using a precision laser scanner. Mean errors of 0.82 mm and 0.88 mm were obtained for the composite and bovine samples, respectively, thus demonstrating the feasibility of the approach to deliver the sub-millimetre accuracy required for the application.
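The Total Focusing Method itself is a delay-and-sum over the full matrix of transmit-receive pairs: for every image pixel, each trace is sampled at the transmit-to-pixel plus pixel-to-receive time of flight, and the contributions are summed. A minimal numpy sketch with nearest-sample lookup and no apodisation, far simpler than a production implementation; the array geometry and sampling parameters are illustrative.

```python
import numpy as np

def tfm(fmc, elems, xs, zs, c, fs):
    """Total Focusing Method image from full-matrix-capture data.

    fmc:   (n_el, n_el, n_t) transmit/receive time traces
    elems: (n_el,) element x positions (linear array at z = 0)
    xs, zs: image pixel coordinates; c: sound speed; fs: sample rate
    """
    n_el = len(elems)
    n_t = fmc.shape[2]
    tx = np.arange(n_el)[:, None]
    rx = np.arange(n_el)[None, :]
    img = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            d = np.hypot(elems - x, z)               # element-to-pixel distance
            tof = (d[:, None] + d[None, :]) / c      # tx -> pixel -> rx
            idx = np.minimum((tof * fs).astype(int), n_t - 1)
            img[iz, ix] = abs(fmc[tx, rx, idx].sum())
    return img
```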

    Use of a CT statistical deformation model for multi-modal pelvic bone segmentation

    We present a segmentation algorithm using a statistical deformation model constructed from CT data of adult male pelves coupled to MRI appearance data. The algorithm allows the semi-automatic segmentation of bone for a limited population of MRI data sets. Our application is pelvic bone delineation from pre-operative MRI for image-guided pelvic surgery; specifically, we are developing image guidance for prostatectomies using the daVinci telemanipulator, hence the use of male pelves only. The algorithm takes advantage of the high contrast of bone in CT data, allowing a robust shape model to be constructed relatively easily. This shape model can then be applied to a population of MRI data sets using a single data set that contains both CT and MRI data. The model is constructed automatically using fluid-based non-rigid registration between a set of CT training images, followed by principal component analysis. MRI appearance data are imported using CT and MRI data from the same patient. Registration optimisation is performed using differential evolution. Based on our limited validation to date, the algorithm may outperform segmentation using non-rigid registration between MRI images without the use of shape data. The mean surface registration error achieved was 1.74 mm. The algorithm shows promise for segmentation of pelvic bone from MRI, though further refinement and validation are required. We envisage that the algorithm presented could be extended to allow the rapid creation of application-specific models in various imaging modalities using a shape model based on CT data.
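The registration optimisation step can be illustrated with a minimal differential evolution loop. This is a generic rand/1/bin sketch driving a toy cost function, not the paper's shape-to-MRI cost; the population size, mutation factor F, and crossover rate CR are illustrative defaults.

```python
import numpy as np

def differential_evolution(cost, bounds, pop=20, gens=100, F=0.6, CR=0.9, rng=None):
    """Minimal differential evolution (rand/1/bin) minimiser, of the kind
    that could drive a model-to-image registration cost."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Pick three distinct population members other than i
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            # Binomial crossover between the mutant a + F*(b - c) and X[i]
            trial = np.where(rng.random(len(lo)) < CR, a + F * (b - c), X[i])
            trial = np.clip(trial, lo, hi)
            ft = cost(trial)
            if ft < f[i]:                 # greedy selection
                X[i], f[i] = trial, ft
    return X[f.argmin()], f.min()
```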

    A Novel Imaging System for Automatic Real-Time 3D Patient-Specific Knee Model Reconstruction Using Ultrasound RF Data

    This dissertation introduces a novel imaging method and system for automatic real-time 3D patient-specific knee model reconstruction using ultrasound RF data. The developed method uses ultrasound to transcutaneously digitize a point cloud representing the bone's surface. This point cloud is then used to reconstruct a 3D bone model using the deformable models method. In this work, three systems were developed for 3D knee bone model reconstruction using ultrasound RF data. The first system uses a tracked single-element ultrasound transducer and was tested on 12 knee phantoms; an average reconstruction accuracy of 0.98 mm was obtained. The second system was developed using an ultrasound machine that provides real-time access to the ultrasound RF data and was tested on two cadaveric distal femurs and a proximal tibia; an average reconstruction accuracy of 0.976 mm was achieved. The third system was developed as an extension of the second and was used in a clinical study to further assess the system's accuracy and repeatability. A knee scanning protocol was developed to scan the different articular surfaces of the knee bones and reconstruct a 3D model of the bone without the need for bone-implanted motion tracking reference probes. The clinical study was performed on six volunteers' knees; an average reconstruction accuracy of 0.88 mm was achieved with 93.5% repeatability. Three extensions to the developed system were investigated for future work. The first extension is a 3D knee injection guidance system; a prototype was developed to demonstrate the feasibility of the idea. The second extension is a knee kinematics tracking system using A-mode ultrasound; a simulation framework was developed to study the feasibility of the idea and to find the number of single-element ultrasound transducers, and their spatial distribution, that yield the highest kinematics tracking accuracy. The third extension is 3D cartilage model reconstruction; a preliminary method for cartilage echo detection from ultrasound RF data was developed and tested on the distal femur scans of one of the clinical study's volunteers to reconstruct a 3D point cloud of the cartilage.
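A minimal sketch of the echo-detection idea underlying such RF-based bone digitization: compute the envelope of an RF A-line via the analytic signal, then take the last strong echo as the bone surface, since bone reflects strongly and little signal returns from beyond it. The threshold and signal parameters are illustrative assumptions, not from the dissertation.

```python
import numpy as np

def envelope(rf):
    """Analytic-signal envelope of an RF A-line via an FFT-based
    Hilbert transform."""
    n = len(rf)
    F = np.fft.fft(rf)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(F * h))

def bone_depth(rf, fs, c, threshold=0.5):
    """Depth of the last strong echo in an RF line -- a crude stand-in
    for bone-surface detection. fs: sample rate; c: sound speed."""
    env = envelope(rf)
    strong = np.flatnonzero(env >= threshold * env.max())
    t = strong[-1] / fs          # two-way travel time of the last strong echo
    return c * t / 2
```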

    3D shape instantiation for intra-operative navigation from a single 2D projection

    Unlike traditional open surgery, where surgeons can see the operation area clearly, in robot-assisted Minimally Invasive Surgery (MIS) a surgeon's view of the region of interest is usually limited. Currently, 2D images from fluoroscopy, Magnetic Resonance Imaging (MRI), endoscopy or ultrasound are used for intra-operative guidance, as real-time 3D volumetric acquisition is not always possible due to acquisition speed or exposure constraints. 3D reconstruction, however, is key to navigation in complex in vivo geometries and can help resolve this issue. Novel 3D shape instantiation schemes are developed in this thesis, which can reconstruct the high-resolution 3D shape of a target from limited 2D views, especially a single 2D projection or slice. To achieve a complete and automatic 3D shape instantiation pipeline, segmentation schemes based on deep learning are also investigated. These include normalization schemes for training U-Nets and the network architecture design of Atrous Convolutional Neural Networks (ACNNs). For U-Net normalization, four popular normalization methods are reviewed, then Instance-Layer Normalization (ILN) is proposed. It uses a sigmoid function to linearly weight the feature map after instance normalization and layer normalization, and cascades group normalization after the weighted feature map. Detailed validation results demonstrate the practical advantages of the proposed ILN for effective and robust segmentation of different anatomies. For network architecture design in training Deep Convolutional Neural Networks (DCNNs), the newly proposed ACNN is compared to the traditional U-Net, where max-pooling and deconvolutional layers are essential. Only convolutional layers, with different atrous rates, are used in the proposed ACNN, and it is shown that the method provides a fully-covered receptive field with a minimum number of atrous convolutional layers.
    ACNN enhances the robustness and generalizability of the analysis scheme by cascading multiple atrous blocks. Validation results show that the proposed method achieves results comparable to the U-Net for medical image segmentation, whilst reducing the trainable parameters and thus improving convergence and real-time instantiation speed. For 3D shape instantiation of soft, deforming organs during MIS, Sparse Principal Component Analysis (SPCA) is used to analyse a 3D Statistical Shape Model (SSM) and to determine the most informative scan plane. Synchronized 2D images are then scanned at the most informative scan plane and expressed in a 2D SSM. Kernel Partial Least Squares Regression (KPLSR) is applied to learn the relationship between the 2D and 3D SSMs. It is shown that the KPLSR-learned model developed in this thesis is able to predict the intra-operative 3D target shape from a single 2D projection or slice, thus permitting real-time 3D navigation. Validation results show the intrinsic accuracy achieved and the potential clinical value of the technique. The proposed 3D shape instantiation scheme is further applied to intra-operative stent graft deployment for the robot-assisted treatment of aortic aneurysms. Mathematical modelling is first used to simulate the stent graft characteristics; this is then followed by the Robust Perspective-n-Point (RPnP) method to instantiate the 3D pose of the graft's fiducial markers. Here, an Equally-weighted Focal U-Net is proposed with a cross-entropy and an additional focal loss function. Detailed validation has been performed on patient-specific stent grafts, with an accuracy between 1 and 3 mm. Finally, the relative merits and potential pitfalls of all the methods developed in this thesis are discussed, followed by potential future research directions and additional challenges that need to be tackled.
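The 2D-to-3D prediction step can be illustrated with a linear stand-in for the thesis's KPLSR: learn a regularized linear map from flattened 2D contour coordinates to 3D shape coordinates. All data shapes and names below are synthetic and illustrative, and ridge regression is a deliberate simplification of kernel PLS.

```python
import numpy as np

def fit_shape_regressor(X2d, Y3d, reg=1e-6):
    """Learn a linear map from flattened 2D contour coordinates to 3D
    shape coordinates via ridge regression (a linear stand-in for KPLSR).

    X2d: (n_samples, d2) training 2D shapes
    Y3d: (n_samples, d3) corresponding 3D shapes
    """
    Xc, Yc = X2d - X2d.mean(0), Y3d - Y3d.mean(0)
    W = np.linalg.solve(Xc.T @ Xc + reg * np.eye(X2d.shape[1]), Xc.T @ Yc)
    return W, X2d.mean(0), Y3d.mean(0)

def predict_shape(W, x_mean, y_mean, x2d):
    """Predict a 3D shape from a single new 2D projection/slice shape."""
    return (x2d - x_mean) @ W + y_mean
```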

    Virtual reality surgery simulation: A survey on patient specific solution

    For surgeons, precise anatomical structure and its dynamics are important in surgical interaction, which is critical for generating an immersive experience in VR-based surgical training applications. At present, a standard therapeutic scheme may not be straightforwardly applicable to a specific patient, because diagnostic results are based on population averages, which yield only a rough solution. Patient Specific Modeling (PSM), using patient-specific medical image data (e.g. CT, MRI, or ultrasound), can deliver a computational anatomical model. It offers surgeons the potential to practice the operative procedure for a particular patient, which can improve the accuracy of diagnosis and treatment, enhance the predictive ability of the VR simulation framework, and improve patient care. This paper presents a general review, based on the existing literature, of patient-specific surgical simulation covering data acquisition, medical image segmentation, computational mesh generation, and real-time soft tissue simulation.
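Of the pipeline stages surveyed, real-time soft tissue simulation is most often realised with mass-spring models. A minimal symplectic-Euler sketch; the stiffness, damping, and time step are illustrative values, not taken from any surveyed system.

```python
import numpy as np

def step(pos, vel, springs, rest, k, mass, dt, damping=0.98):
    """One symplectic-Euler step of a mass-spring system, the classic
    real-time soft-tissue approximation used in surgery simulators.

    pos, vel: (N, 3) vertex positions and velocities
    springs:  (S, 2) vertex index pairs; rest: (S,) rest lengths
    """
    force = np.zeros_like(pos)
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law along each spring direction (guard against zero length)
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-12)
    np.add.at(force, i, f)
    np.add.at(force, j, -f)
    vel = (vel + dt * force / mass) * damping   # update velocity, then position
    return pos + dt * vel, vel
```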