
    Computer assisted surgery for fracture reduction and deformity correction of the pelvis and long bones

    Many orthopaedic operations, for example osteotomies, are not planned preoperatively; the result then depends largely on the experience of the operating surgeon. In industry, new developments are no longer carried out without CAD planning or computer simulation, yet in medicine the operative technique of corrective osteotomies has remained in its infancy over the last 30 years. Two-dimensional analysis is not accurate enough, which leads to errors in the operating room. The surgeon usually obtains preoperative information about the current state of the bone from radiographs. Complex operations, including the insertion of implants, require planning, but planning based on radiographs has inherent disadvantages: limited accuracy, time-consuming correction of projection distortions, and restrictions when complex corrections are necessary. Today computed tomography is used as a solution; it is the only modality that provides the accuracy and resolution required for good 3D planning. Its serious disadvantage, however, is the high radiation dose for the patient, so in the dilemma between a low dose and adequate planning the former is often preferred. In the future, however, good operative results are expected to be guaranteed only with 3D planning. MR systems also provide image information from which bone can be extracted indirectly, but because of their large distortions (susceptibility, non-homogeneity of the magnetic field), low spatial resolution and high cost, MRI is not expected to become an alternative in the near future. The solution is the use of other imaging modalities; ultrasound is a good compromise between cost and accuracy. In this work I developed an algorithm that produces 3D bone models from ultrasound data. These models have good resolution and accuracy compared with CT and can therefore be used for 3D planning. The work realises an improved procedure for segmenting bone surfaces, combined with methods for fusing the segmentations into a three-dimensional model. The novelty of the presented work lies in new approaches to realising an operation planning system based on 3D computations and in implementing intraoperative control with a guided ultrasound system for bone tracking. To realise these ideas, the following tasks are solved: bone modelling from CT data; real-time extraction of bone surfaces from ultrasound images; tracking of the bone with respect to the CT bone model; and integration of these results into an operation planning system for corrective osteotomies that supports on-line measurements, different types of deformity correction, bone geometry design and a high level of automation. The developed osteotomy planning system allows the pathology to be investigated and analysed, finds an optimal way to perform the surgery, and provides visual and quantitative information about the result of the virtual operation. The implementation of the proposed system can therefore be considered a significant additional tool for diagnosis and orthopaedic surgery. The major parts of the planning system are: bone modelling from 3D data derived from CT, MRI or other modalities; real-time visualisation of the elements of the 3D scene; and geometric design of bone elements. A high level of automation allows the surgeon to significantly reduce the time needed to develop the operation plan.
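    The abstract names tracking the bone with respect to the CT bone model as one of the solved tasks. Below is a minimal sketch of the kind of rigid surface registration (standard iterative closest point) such tracking could rely on; the function names, array layouts and convergence threshold are illustrative assumptions, not the implementation described in the thesis.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            # Least-squares rigid transform (Kabsch/SVD) mapping src onto dst (both N x 3).
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, c_dst - R @ c_src

        def track_bone(us_points, ct_model_points, iterations=50, tol=1e-6):
            # Align ultrasound bone-surface points (N x 3) to the CT bone model (M x 3).
            tree = cKDTree(ct_model_points)
            R_total, t_total = np.eye(3), np.zeros(3)
            moved, prev_err = us_points.copy(), np.inf
            for _ in range(iterations):
                dist, idx = tree.query(moved)              # closest model point per US point
                R, t = best_rigid_transform(moved, ct_model_points[idx])
                moved = moved @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
                if abs(prev_err - dist.mean()) < tol:      # stop when the mean error settles
                    break
                prev_err = dist.mean()
            return R_total, t_total

    In an intraoperative setting, the pose estimated for the previous frame would typically serve as the initial guess for the next one.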

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised during the last two decades. Based on the tracked pose of the target bone, the anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance to reach the goal with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and manually calibrated to it, which introduces the risk of additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic Optical see-through (OST) Head-mounted display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented by a synthetic dataset. Our segmentation network outperforms the state of the art regarding occlusion robustness, device-agnostic behaviour, and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The improved markerless tracking achieves a clinically acceptable error of 0.95 deg and 2.17 mm in a phantom test. OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial to ensure locationally coherent surgical guidance. Current calibration methods are either prone to human error or hardly applicable to commercial devices. We therefore propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods prove superior to similar state-of-the-art (SOTA) approaches regarding calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verified the usability of the designed OST system in a cadaver study with an experienced orthopaedic surgeon. The test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
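    A concrete, if simplified, illustration of the markerless pipeline's first stage is the conversion of an RGBD frame plus a bone segmentation mask into a point cloud that a pose-registration step (such as the thesis's BICP) could consume. The pinhole back-projection below is generic; the function name, intrinsic parameters and depth scale are assumptions for the sketch, not values from the thesis.

        import numpy as np

        def backproject_bone(depth, mask, fx, fy, cx, cy, depth_scale=0.001):
            # depth: H x W raw depth image; mask: H x W boolean bone segmentation.
            # Returns an N x 3 point cloud of the segmented bone in the camera frame.
            v, u = np.nonzero(mask)                       # pixel coordinates labelled as bone
            z = depth[v, u].astype(np.float64) * depth_scale
            u, v, z = u[z > 0], v[z > 0], z[z > 0]        # drop missing depth readings
            x = (u - cx) * z / fx                         # pinhole camera model
            y = (v - cy) * z / fy
            return np.column_stack([x, y, z])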

    The state-of-the-art in ultrasound-guided spine interventions.

    During the last two decades, intra-operative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of pre-operative computed tomography or magnetic resonance images with iUS images is a key element in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors, including the lack of a standard methodology for assessing the accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of the state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps involved in the surgical workflow: pre-processing, registration initialization, estimation of the required patient-to-image transformation, and visualization. We provide a detailed analysis of the measurements of accuracy, robustness, reliability, and usability that need to be made during the evaluation of a spinal IGS framework. Although this review is focused on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.
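    Accuracy is one of the four evaluation criteria the review analyses; in spinal IGS it is commonly reported as a target registration error over anatomical landmarks. The short sketch below shows that measurement under an assumed rigid patient-to-image transform (R, t); the function and variable names are illustrative, not taken from the review.

        import numpy as np

        def target_registration_error(R, t, preop_landmarks, intraop_landmarks):
            # Map pre-operative landmarks (N x 3) with the estimated rigid transform and
            # compare them with their intra-operative counterparts (N x 3, same order).
            mapped = preop_landmarks @ R.T + t
            errors = np.linalg.norm(mapped - intraop_landmarks, axis=1)
            return errors.mean(), errors.max()            # mean and worst-case TRE in mm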

    Sub-pixel Registration In Computational Imaging And Applications To Enhancement Of Maxillofacial Ct Data

    In computational imaging, data acquired by sampling the same scene or object at different times or from different orientations result in images in different coordinate systems. Registration is a crucial step in order to be able to compare, integrate and fuse the data obtained from different measurements. Tomography is the method of imaging a single plane or slice of an object. A Computed Tomography (CT) scan, also known as a CAT scan (Computed Axial Tomography scan), is a helical tomography technique that traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays, which are ionizing radiation; although the actual dose is typically low, repeated scans should be limited. In dentistry, and in implant dentistry in particular, there is a need for 3D visualization of internal anatomy, which is mainly based on CT scanning technologies. The most important technological advancement that has dramatically enhanced the clinician's ability to diagnose, treat, and plan dental implants has been the CT scan. Advanced 3D modeling and visualization techniques permit highly refined and accurate assessment of the CT scan data. However, in addition to imperfections of the instrument and the imaging process, it is not uncommon to encounter unwanted artifacts in the form of bright regions, flares and erroneous pixels due to dental bridges, metal braces, etc. Currently, cleaning the data of acquisition and backscattering imperfections and unwanted artifacts is performed manually, so the result is only as good as the experience of the technician; the process is also error-prone, since the editing has to be performed image by image. We address some of these issues by proposing novel registration methods and by using stone-cast models of the patient's dental imprint as reference ground-truth data. Stone-cast models were originally used by dentists to make complete or partial dentures. The CT scan of such a stone-cast model can be used to automatically guide the cleaning of the patient's CT scan from defects and unwanted artifacts; an automatic segmentation system for outliers in the CT scan data that does not require stone-cast models is also proposed. The segmented data are subsequently used to clean the scans of artifacts using a newly proposed 3D inpainting approach.
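    As an illustration of the artifact clean-up idea, the sketch below fills voxels flagged as metal artifacts by repeated neighbourhood averaging, a crude diffusion-style stand-in for the 3D inpainting approach proposed in the dissertation; the mask construction and all parameter values are assumptions.

        import numpy as np
        from scipy import ndimage

        def inpaint_artifacts(volume, artifact_mask, iterations=200):
            # volume: 3D CT array; artifact_mask: boolean array marking voxels to replace.
            vol = volume.astype(np.float64).copy()
            vol[artifact_mask] = 0.0                      # clear the corrupted voxels
            kernel = np.zeros((3, 3, 3))
            for offset in ((0, 1, 1), (2, 1, 1), (1, 0, 1), (1, 2, 1), (1, 1, 0), (1, 1, 2)):
                kernel[offset] = 1.0 / 6.0                # 6-neighbourhood average
            for _ in range(iterations):
                smoothed = ndimage.convolve(vol, kernel, mode="nearest")
                vol[artifact_mask] = smoothed[artifact_mask]   # update only the masked voxels
            return vol

        # The mask could come, e.g., from thresholding very bright metal voxels:
        # artifact_mask = volume > 3000   # illustrative intensity threshold, not from the thesis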

    3D registration of MR and X-ray spine images using an articulated model

    Presentation: This article was published in the journal Computerized Medical Imaging and Graphics (CMIG). The aim of the article is to register vertebrae extracted from MR images with vertebrae extracted from X-ray images of scoliotic patients, taking into account the non-rigid deformations due to the change of posture between the two modalities. To this end, a registration method based on an articulated model is proposed. The method was compared with rigid registration by computing the error on landmark points as well as the difference in Cobb angle before and after registration. An additional validation of the registration method presented here is given in Appendix A. This work serves as a first step towards the fusion of MR, X-ray and TP images of the whole trunk; the article therefore addresses hypothesis 1 described in Section 3.2.1. Abstract: This paper presents a magnetic resonance image (MRI)/X-ray spine registration method that compensates for the change in the curvature of the spine between standing and prone positions for scoliotic patients. MRIs in the prone position and X-rays in the standing position were acquired for 14 patients with scoliosis. The 3D reconstructions of the spine are then aligned using an articulated model that calculates intervertebral transformations. Results show a significant decrease in registration error when the proposed articulated model is compared with rigid registration. The method can be used as a basis for full-body MRI/X-ray registration incorporating soft tissues for surgical simulation. Canadian Institutes of Health Research (CIHR)
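    To make the articulated model concrete: if each vertebra's pose is expressed relative to the vertebra below it, the absolute pose of any vertebra is the product of the intervertebral transforms along the chain. The sketch below (4 x 4 homogeneous matrices, illustrative names) shows this composition; it is not the authors' code.

        import numpy as np

        def absolute_poses(intervertebral_T):
            # intervertebral_T: list of 4 x 4 rigid transforms, one per joint going up the spine.
            poses, current = [], np.eye(4)
            for T in intervertebral_T:
                current = current @ T        # pose of vertebra i = chain of transforms below it
                poses.append(current.copy())
            return poses

        def map_points(pose, points):
            # Map an N x 3 vertebra model (e.g., its MRI reconstruction) into the X-ray frame.
            homog = np.hstack([points, np.ones((len(points), 1))])
            return (homog @ pose.T)[:, :3]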

    Patient-specific modelling in orthopedics: from image to surgery

    In orthopedic surgery, to decide upon an intervention and how it can be optimized, surgeons usually rely on subjective analysis of medical images of the patient, obtained from computed tomography, magnetic resonance imaging, ultrasound or other techniques. Recent advancements in computational performance, image analysis and in silico modeling techniques have started to revolutionize clinical practice through the development of quantitative tools, including patient-specific models aimed at improving clinical diagnosis and surgical treatment. The extraction of anatomical and surgical landmarks and other features can be automated, allowing for the creation of general or patient-specific models based on statistical shape models. Preoperative virtual planning and rapid prototyping tools allow the implementation of customized surgical solutions in real clinical environments. In the present chapter we discuss the applications of some of these techniques in orthopedics and present new computer-aided tools that can take us from image analysis to customized surgical treatment.
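    Where the chapter mentions statistical shape models, the underlying construction is typically a principal component analysis of aligned training shapes; a new (or patient-specific) instance is then the mean shape plus a weighted sum of the principal modes. A minimal sketch, with assumed array layouts and names:

        import numpy as np

        def build_ssm(training_shapes):
            # training_shapes: (m, 3N) matrix of m aligned shapes, each with N stacked 3D points.
            mean = training_shapes.mean(axis=0)
            U, s, Vt = np.linalg.svd(training_shapes - mean, full_matrices=False)
            std = s / np.sqrt(len(training_shapes) - 1)   # per-mode standard deviation
            return mean, Vt.T, std                        # mean shape, modes, mode scales

        def ssm_instance(mean, modes, weights):
            # New shape = mean + sum_i b_i * mode_i; weights are usually kept within +/- 3 std.
            return (mean + modes @ weights).reshape(-1, 3)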

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades and continues to transform surgical interventions, enabling safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to the preoperative images. A major limitation of this approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT (CBCT)) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions, thereby improving the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation encompasses: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones and an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
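    For the deformable part, the classic Thirion demons update gives a flavour of what Demons-style registration computes at each iteration: an intensity-difference-driven force field regularised by Gaussian smoothing. The sketch below is the generic update, not the dissertation's physics-based or learning-based variants; the names and the smoothing parameter are assumptions.

        import numpy as np
        from scipy import ndimage

        def demons_step(fixed, warped_moving, displacement, sigma=2.0):
            # fixed, warped_moving: 3D images of equal shape; displacement: (3, D, H, W) field.
            diff = warped_moving - fixed
            grad = np.array(np.gradient(fixed))                    # fixed-image gradient, (3, D, H, W)
            denom = np.sum(grad ** 2, axis=0) + diff ** 2 + 1e-8   # avoid division by zero
            update = -diff * grad / denom                          # demons force per axis
            displacement = displacement + update                   # accumulate the force
            return np.array([ndimage.gaussian_filter(d, sigma) for d in displacement])

    Between iterations the moving image would be re-warped with the updated field, e.g., using scipy.ndimage.map_coordinates.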

    Fusion of interventional ultrasound & X-ray

    In an ageing population, the treatment of structural heart disease is becoming more and more important. Constant improvements in medical imaging technology and the introduction of new catheter devices have led to more and more conventional open-heart operations being replaced by minimally invasive interventions. These advanced interventions need to be guided by different imaging modalities; the two main systems here are X-ray fluoroscopy and transesophageal echocardiography (TEE). While X-ray provides good visualization of the inserted catheters, which is essential for catheter navigation, TEE can display soft tissue and is used above all to image anatomical structures such as heart valves. Both modalities provide real-time imaging and are indispensable for the success of minimally invasive heart surgery. Usually, the two systems are separate and not connected. It is conceivable that a fusion of both worlds can create a strong benefit for the physicians: it can lead to better communication within the clinical team and may enable new surgical workflows. Because of the completely different characteristics of the image data, a direct fusion seems impossible; therefore, an indirect registration of ultrasound and X-ray images is used. The TEE probe is usually visible in the X-ray image during the described minimally invasive interventions, so it becomes possible to register the TEE probe in the fluoroscopic images and to establish its 3D position. The relationship of the ultrasound image to the ultrasound probe is known from calibration. To register the TEE probe on 2D X-ray images, a 2D-3D registration approach is chosen in this thesis. Several contributions are presented that improve the common 2D-3D registration algorithm, both for the task of ultrasound/X-ray fusion and for general 2D-3D registration problems. One contribution is the introduction of planar parameters, which increase robustness and speed during the registration of an object on two non-orthogonal views. Another is to replace the conventional generation of digitally reconstructed radiographs, an integral part of 2D-3D registration but also a performance bottleneck, with fast triangular mesh rendering, resulting in a significant speed-up. It is also shown that combining fast learning-based detection algorithms with 2D-3D registration increases the accuracy and the capture range compared to employing either alone for the registration/detection of the TEE probe. Finally, a first clinical prototype that employs the presented approaches is described and first clinical results are shown.
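    The core 2D-3D registration loop described above can be summarised as a pose search that renders the probe model (by DRR or fast mesh rendering) and compares the rendering with the fluoroscopic image under a similarity metric. The sketch below uses normalised cross-correlation and a derivative-free optimiser; the render callable, pose parameterisation and metric choice are assumptions for illustration, not the thesis's exact components.

        import numpy as np
        from scipy.optimize import minimize

        def ncc(a, b):
            # Normalised cross-correlation between two images of equal shape (higher = more similar).
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float(np.mean(a * b))

        def register_probe(xray, render, pose0):
            # pose0: initial 6-DoF guess (3 rotations, 3 translations), e.g. from a learning-based
            # detector; render(pose) is assumed to return a 2D projection of the probe at that pose.
            cost = lambda pose: -ncc(render(pose), xray)
            result = minimize(cost, pose0, method="Powell")        # derivative-free pose search
            return result.x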

    A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom

    Purpose: Benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. Materials and methods: A two-phase evaluation process was adopted in a simulated small tumor resection adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training in performing spatial judgment tasks. In Phase II, three surgeons were involved in assessing the effectiveness of the AR-neuronavigator in performing brain tumor targeting on a patient-specific head phantom. Results: Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potential of the AR-neuronavigator to aid the determination of the optimal surgical access to the surgical target. Conclusions: The AR-neuronavigator is intuitive, easy to use, and provides three-dimensional augmented information in a perceptually correct way. The system proved to be effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumor resection procedures.