
    Intra-operative fiducial-based CT/fluoroscope image registration framework for image-guided robot-assisted joint fracture surgery

    Purpose: Joint fractures must be accurately reduced while minimising soft tissue damage to avoid negative surgical outcomes. To this end, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also revealed key issues that precluded its use in a clinical application. This work proposes a redesign of the RAFS navigation system that overcomes the earlier version's issues, aiming to move the RAFS system into a surgical environment. Methods: The navigation system is improved through an image registration framework that allows the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The pose of the bone fragment can then be updated in real time using an optical tracker, enabling the image guidance. Results: Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about 0.88 ± 0.2 mm (phantom) and 1.15 ± 0.8 mm (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error 1.2 ± 0.3 mm, 2 ± 1°). Conclusion: Experiments showed the feasibility of the image registration framework. It was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.
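The core step of such a fiducial-based registration is a least-squares rigid fit between corresponding marker points, with target registration error (TRE) as the accuracy measure. A minimal generic sketch (not the RAFS implementation; function names are illustrative):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch/Horn).

    src, dst: (n, 3) arrays of corresponding fiducial positions.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Sign correction keeps R a proper rotation (det = +1), not a reflection.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def target_registration_error(R, t, targets_src, targets_dst):
    """RMS distance between transformed target points and their true positions."""
    mapped = targets_src @ R.T + t
    return np.sqrt(np.mean(np.sum((mapped - targets_dst) ** 2, axis=1)))
```

With noiseless correspondences the fit is exact; in practice the residual TRE reflects marker localisation error in both modalities.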

    NucTools: analysis of chromatin feature occupancy profiles from high-throughput sequencing data

    Background: Biomedical applications of high-throughput sequencing methods generate a vast amount of data in which numerous chromatin features are mapped along the genome. The results are frequently analysed by creating binary data sets that link the presence/absence of a given feature to specific genomic loci. However, the nucleosome occupancy or chromatin accessibility landscape is essentially continuous. It is currently a challenge in the field to cope with continuous distributions of deep sequencing chromatin readouts and to integrate the different types of discrete chromatin features to reveal linkages between them. Results: Here we introduce the NucTools suite of Perl scripts as well as MATLAB- and R-based visualization programs for a nucleosome-centred downstream analysis of deep sequencing data. NucTools accounts for the continuous distribution of nucleosome occupancy. It allows calculations of nucleosome occupancy profiles averaged over several replicates, comparisons of nucleosome occupancy landscapes between different experimental conditions, and the estimation of the changes of integral chromatin properties such as the nucleosome repeat length. Furthermore, NucTools facilitates the annotation of nucleosome occupancy with other chromatin features like binding of transcription factors or architectural proteins, and epigenetic marks like histone modifications or DNA methylation. The applications of NucTools are demonstrated for the comparison of several datasets for nucleosome occupancy in mouse embryonic stem cells (ESCs) and mouse embryonic fibroblasts (MEFs). Conclusions: The typical workflows of data processing and integrative analysis with NucTools reveal information on the interplay of nucleosome positioning with other features such as for example binding of a transcription factor CTCF, regions with stable and unstable nucleosomes, and domains of large organized chromatin K9me2 modifications (LOCKs). 
As potential limitations, we discuss how the inter-replicate variability of MNase-seq experiments can be addressed.
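NucTools itself is a suite of Perl scripts; purely as an illustration of the kind of downstream step it performs, binning read midpoints into a continuous occupancy profile and averaging profiles across replicates might look like this (hypothetical names, not NucTools code):

```python
import numpy as np

def bin_occupancy(read_midpoints, chrom_length, bin_size=100):
    """Histogram read midpoints into fixed-width bins along one chromosome."""
    n_bins = chrom_length // bin_size + 1
    counts, _ = np.histogram(read_midpoints, bins=n_bins,
                             range=(0, n_bins * bin_size))
    return counts.astype(float)

def average_profile(replicate_profiles):
    """Per-bin mean occupancy and coefficient of variation across replicates.

    High-CV bins flag regions where replicates disagree (unstable nucleosomes
    or technical MNase-seq variability).
    """
    stack = np.vstack(replicate_profiles)
    mean = stack.mean(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        cv = np.where(mean > 0, stack.std(axis=0) / mean, 0.0)
    return mean, cv
```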

    Acquiring observation models through reverse plan monitoring

    Abstract. We present a general-purpose framework for updating a robot's observation model within the context of planning and execution. Traditional plan execution relies on monitoring plan-step transitions through accurate state observations obtained from sensory data. Gathering meaningful state data from sensors, however, often requires tedious and time-consuming calibration. To address this problem we introduce Reverse Monitoring, a process of learning an observation model through the use of plans composed of scripted actions. The automatically acquired observation models allow the robot to adapt to changes in the environment and robustly execute arbitrary plans. We have fully implemented the method on our AIBO robots, and our empirical results demonstrate its effectiveness.
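The idea of reverse monitoring — logging sensor readings against states that are known by construction while a scripted plan executes — can be reduced to building a conditional frequency model P(observation | state). A toy sketch (our own simplification, not the paper's implementation):

```python
from collections import Counter, defaultdict

class ObservationModel:
    """Frequency-based estimate of P(observation | state)."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # state -> Counter of observations

    def record(self, true_state, observation):
        # During a scripted plan the true state is known by construction,
        # so each raw sensor reading can be logged against it directly,
        # without any manually calibrated sensing pipeline.
        self.counts[true_state][observation] += 1

    def prob(self, observation, state):
        """Relative frequency of `observation` given `state` (0 if unseen)."""
        total = sum(self.counts[state].values())
        return self.counts[state][observation] / total if total else 0.0
```

Re-running the scripted plans after an environment change simply refreshes the counts, which is what lets the model adapt without recalibration.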

    3D Surface Reconstruction and Registration for Image Guided Medialization Laryngoplasty

    The purpose of our project is to develop an image guided system for medialization laryngoplasty. One of the fundamental challenges in our system is to accurately register the preoperative 3D CT data to the intraoperative 3D surfaces of the patient. In this paper, we present a combined surface- and fiducial-based registration method to register the preoperative 3D CT data to the intraoperative surface of the larynx. To accurately model the exposed surface area, an active-illumination-based stereo vision technique is used for the surface reconstruction. To register the point clouds from the intraoperative stage to the preoperative 3D CT data, a shape-prior-based ICP method is proposed to quickly register the two surfaces. The proposed approach is capable of tracking the fiducial markers and reconstructing the surface of the larynx with no damage to the anatomical structure. Although the proposed method is specifically designed for image guided laryngoplasty, it can be applied to other image guided surgical areas. We used off-the-shelf digital cameras, an LCD projector and a rapid 3D prototyper to develop our experimental system. The final RMS error in the registration is less than 1 mm. © Springer-Verlag Berlin Heidelberg 2006
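The shape-prior-based ICP described above is a variant of the standard iterative closest point algorithm. As a generic illustration (assuming NumPy/SciPy; this is not the authors' code, and the shape prior is reduced here to an optional initial transform):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, dc - R @ sc

def icp(src, dst, n_iter=30, init=None):
    """Point-to-point ICP; a shape prior could supply the initial (R, t)."""
    R, t = init if init is not None else (np.eye(3), np.zeros(3))
    tree = cKDTree(dst)                    # reuse one k-d tree for all queries
    for _ in range(n_iter):
        cur = src @ R.T + t
        _, idx = tree.query(cur)           # closest-point correspondences
        R, t = best_rigid(src, dst[idx])   # re-fit the transform on matches
    return R, t
```

Plain point-to-point ICP only converges from a good starting pose, which is exactly why the paper combines it with fiducials and a shape prior.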

    Numerical Investigation of an Axis-based Approach to Rigid Registration

    The term rigid registration identifies the process that optimally aligns different data sets whose information has to be merged, as in the case of robot calibration, image-guided surgery or patient-specific gait analysis. One of the most common approaches to rigid registration relies on the identification of a set of fiducial points in each data set to be registered, from which the rototranslation matrix that optimally aligns them is computed. Both measurement and human errors directly affect the final accuracy of the process. Increasing the number of fiducials may improve registration accuracy, but it will also increase the time and complexity of the whole procedure, since correspondence must be established between fiducials in different data sets. The aim of this paper is to present a new approach that resorts to axes instead of points as fiducial features. The fundamental advantage is that any axis can be easily identified in each data set by least-squares linear fitting of multiple, unsorted measured data. This provides a way to filter the measurement error within each data set, improving the registration accuracy with reduced effort. In this work, a closed-form solution for the optimal axis-based rigid registration is presented. The accuracy of the method is compared with standard point-based rigid registration through a numerical test. Axis-based registration proves to be one order of magnitude more accurate than point-based registration.
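The two ingredients the abstract describes — least-squares axis fitting and a closed-form rigid fit from corresponding axes — can be sketched as follows. This is our own reconstruction under standard assumptions (Kabsch on the axis directions for the rotation, linear least squares on point-to-line offsets for the translation), not necessarily the paper's closed form:

```python
import numpy as np

def fit_axis(points):
    """Least-squares line fit: returns (centroid, unit direction) via PCA."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return c, Vt[0]   # dominant right singular vector = line direction

def register_axes(src_axes, dst_axes):
    """Rigid (R, t) aligning corresponding (point, unit direction) axis pairs."""
    D = np.array([d for _, d in src_axes])   # source directions, rows
    E = np.array([e for _, e in dst_axes])   # target directions, rows
    # Rotation: Kabsch on direction vectors (no centering, they are unit).
    U, _, Vt = np.linalg.svd(D.T @ E)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    # Translation: least squares on the components of (R p + t - q)
    # orthogonal to each target axis (an axis is invariant along itself).
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for (p, _), (q, e) in zip(src_axes, dst_axes):
        P = np.eye(3) - np.outer(e, e)       # projector orthogonal to e
        A += P
        b += P @ (q - R @ p)
    return R, np.linalg.solve(A, b)
```

At least two non-parallel axes are needed: the rotation is then determined, and the sum of orthogonal projectors A becomes invertible.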