303 research outputs found

    Advancing Intra-operative Precision: Dynamic Data-Driven Non-Rigid Registration for Enhanced Brain Tumor Resection in Image-Guided Neurosurgery

    Full text link
    During neurosurgery, medical images of the brain are used to locate tumors and critical structures, but brain tissue shifts make pre-operative images unreliable for accurate removal of tumors. Intra-operative imaging can track these deformations but is not a substitute for pre-operative data. To address this, we use Dynamic Data-Driven Non-Rigid Registration (NRR), a complex and time-consuming image processing operation that adjusts the pre-operative image data to account for intra-operative brain shift. Our review explores a specific NRR method for registering brain MRI during image-guided neurosurgery and examines various strategies for improving the accuracy and speed of the NRR method. We demonstrate that our implementation enables NRR results to be delivered within clinical time constraints while leveraging Distributed Computing and Machine Learning to enhance registration accuracy by identifying optimal parameters for the NRR method. Additionally, we highlight challenges associated with its use in the operating room
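    The parameter-selection idea mentioned above (Distributed Computing plus Machine Learning identifying optimal NRR settings) can be pictured as a parallel sweep that scores candidate settings by registration error. The sketch below is illustrative only: the parameter names (block-matching radius, outlier fraction, mesh size) and the error function are hypothetical placeholders, not the authors' pipeline.

```python
# Minimal sketch (not the paper's implementation): a parallel sweep over
# hypothetical NRR parameters, scoring each setting by a registration-error
# surrogate. Parameter names and the error function are illustrative only.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def registration_error(block_radius, outlier_fraction, mesh_size):
    """Placeholder for running the NRR pipeline and returning, e.g.,
    the mean landmark error in mm for one parameter setting."""
    return abs(block_radius - 3) + abs(outlier_fraction - 0.25) + abs(mesh_size - 5)

def evaluate(params):
    return params, registration_error(*params)

if __name__ == "__main__":
    grid = product([2, 3, 4],           # block-matching radius (voxels)
                   [0.15, 0.25, 0.35],  # rejected outlier fraction
                   [3, 5, 8])           # target mesh edge length (mm)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate, grid))
    best_params, best_err = min(results, key=lambda r: r[1])
    print("best parameters:", best_params, "surrogate error:", best_err)
```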

    Adaptive Physics-Based Non-Rigid Registration for Immersive Image-Guided Neuronavigation Systems

    Get PDF
    Objective: In image-guided neurosurgery, co-registered preoperative anatomical, functional, and diffusion tensor imaging can be used to facilitate a safe resection of brain tumors in eloquent areas of the brain. However, the brain deforms during surgery, particularly in the presence of tumor resection. Non-Rigid Registration (NRR) of the preoperative image data can be used to create a registered image that captures the deformation in the intraoperative image while maintaining the quality of the preoperative image. Using clinical data, this paper reports the results of a comparison of the accuracy and performance among several non-rigid registration methods for handling brain deformation. A new adaptive method that automatically removes mesh elements in the area of the resected tumor, thereby handling deformation in the presence of resection, is presented. To improve the user experience, we also present a new way of using mixed reality with ultrasound, MRI, and CT. Materials and methods: This study focuses on 30 glioma surgeries performed at two different hospitals, many of which involved the resection of significant tumor volumes. An Adaptive Physics-Based Non-Rigid Registration method (A-PBNRR) registers preoperative and intraoperative MRI for each patient. The results are compared with three other readily available registration methods: a rigid registration implemented in 3D Slicer v4.4.0; a B-Spline non-rigid registration implemented in 3D Slicer v4.4.0; and PBNRR implemented in ITKv4.7.0, upon which A-PBNRR was based. Three measures were employed to facilitate a comprehensive evaluation of the registration accuracy: (i) visual assessment, (ii) a Hausdorff Distance-based metric, and (iii) a landmark-based approach using anatomical points identified by a neurosurgeon. Results: The A-PBNRR using multi-tissue mesh adaptation improved the accuracy of deformable registration by more than five times compared to rigid and traditional physics-based non-rigid registration, and four times compared to B-Spline interpolation methods, which are part of ITK and 3D Slicer. Performance analysis showed that A-PBNRR could be applied, on average, in <2 min, achieving desirable speed for use in a clinical setting. Conclusions: The A-PBNRR method performed significantly better than other readily available registration methods at modeling deformation in the presence of resection. Both the registration accuracy and performance proved sufficient to be of clinical value in the operating room. A-PBNRR, coupled with the mixed reality system, presents a powerful and affordable solution compared to current neuronavigation systems
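    As a rough illustration of measures (ii) and (iii) above, the sketch below computes a percentile Hausdorff distance between two surface point sets and a mean landmark error with NumPy/SciPy. The point sets are synthetic and the exact metric definitions used in the paper are not reproduced here.

```python
# Illustrative sketch of the two quantitative measures named above, computed
# with SciPy on synthetic point sets; not the paper's exact HD-based metric.
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(a, b, percentile=95):
    """Symmetric (percentile) Hausdorff distance between point clouds a, b of shape (N, 3)."""
    d = cdist(a, b)
    return max(np.percentile(d.min(axis=1), percentile),
               np.percentile(d.min(axis=0), percentile))

def mean_landmark_error(fixed_pts, warped_pts):
    """Mean Euclidean distance between corresponding anatomical landmarks (mm)."""
    return float(np.linalg.norm(fixed_pts - warped_pts, axis=1).mean())

rng = np.random.default_rng(0)
surface_pre = rng.normal(size=(500, 3)) * 30.0
surface_post = surface_pre + rng.normal(scale=1.0, size=(500, 3))
print("HD95 (mm):", round(hausdorff(surface_pre, surface_post), 2))
print("mean landmark error (mm):",
      round(mean_landmark_error(surface_pre[:10], surface_post[:10]), 2))
```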

    A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation

    Full text link
    A reliable Ultrasound (US)-to-US registration method to compensate for brain shift would substantially improve Image-Guided Neurological Surgery. Developing such a registration method is very challenging, due to factors such as missing correspondence in images, the complexity of brain pathology and the demand for fast computation. We propose a novel feature-driven active framework. Here, landmarks and their displacement are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, the user can actively add new landmarks based on the image context and visualization of the uncertainty measure provided by the GP to further improve the result. We retrospectively demonstrate our registration framework as a robust and accurate brain shift compensation solution on clinical data acquired during neurosurgery
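    The interpolation step described above, in which a Gaussian Process turns sparse landmark displacements into a dense deformation field with an uncertainty map, can be sketched as follows. Note that the paper estimates GP kernels from variograms and a discrete grid search; the scikit-learn marginal-likelihood fit used here is a stand-in, and all data are synthetic.

```python
# Sketch of the GP interpolation step only: sparse landmark displacements are
# regressed to a dense field. Kernel estimation via variograms (as in the paper)
# is replaced here by scikit-learn's default marginal-likelihood fit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, size=(40, 3))        # landmark positions (mm)
displacements = rng.normal(0, 2.0, size=(40, 3))     # matched displacements (mm)

kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=0.5)
gps = [GaussianProcessRegressor(kernel=kernel).fit(landmarks, displacements[:, k])
       for k in range(3)]                            # one GP per displacement component

grid = np.mgrid[0:100:10j, 0:100:10j, 0:100:10j].reshape(3, -1).T
dense_field = np.stack([gp.predict(grid) for gp in gps], axis=1)
uncertainty = np.stack([gp.predict(grid, return_std=True)[1] for gp in gps], axis=1)
# High-uncertainty regions are where a user would actively add new landmarks.
print(dense_field.shape, round(float(uncertainty.max()), 2))
```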

    Deep Multimodality Image-Guided System for Assisting Neurosurgery

    Get PDF
    Intracranial brain tumors are among the ten most common malignant cancers and are responsible for considerable morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which have an extremely heterogeneous appearance and are difficult to distinguish radiologically from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy. However, brain tumor surgery faces major challenges in achieving maximal tumor removal while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are addressed here. First, manual delineation of the glioma, including its subregions, is difficult because of its infiltrative character and the presence of heterogeneous contrast enhancement. Second, the brain deforms, the so-called "brain shift", in response to surgical manipulation, swelling caused by osmotic drugs, and anesthesia, which limits the usefulness of preoperative image data for guiding the procedure. Image-guided systems offer clinicians invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided tools are mainly computer-assisted systems that use computer-vision methods to facilitate perioperative surgical procedures. However, surgeons must still mentally fuse the surgical plan from preoperative images with real-time information while manipulating the surgical instruments inside the body and monitoring progress toward the target. The need for image guidance during neurosurgical procedures has therefore always been a major concern of clinicians. The aim of this research is the development of a novel system for perioperative image-guided neurosurgery (IGN), namely DeepIGN, with which the expected outcomes of brain tumor surgery can be achieved, thereby maximizing overall survival and minimizing postoperative neurological morbidity. In this work, novel methods are first proposed for the core components of the DeepIGN system, brain tumor segmentation in MRI and multimodal registration of preoperative MRI to intraoperative ultrasound (iUS) images, using recent developments in deep learning. The predictions of the deep learning networks employed are then further interpreted and examined by generating human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software responsible for integrating information from tracking systems, image visualization and fusion, and the display of real-time updates of the instruments relative to the patient space. The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room.
For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when applying advanced deep learning approaches such as 3D convolutions in all layers, region-based training, on-the-fly data augmentation techniques, and ensemble methods. To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted on two multi-location databases: BITE and RESECT. Two experienced neurosurgeons performed an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Moreover, the proposed iRegNet delivers competitive results even on images it was not trained on, demonstrating its generality, and can therefore be useful for intraoperative neurosurgical guidance. For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in the application of AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps for understanding the deep network's predictions. In addition, visualization maps are generated to reveal the information flow in the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about the tumor segmentation results and thus help them understand how the deep learning model processes MRI data successfully. Furthermore, an interactive neurosurgical display for procedure guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multimodal DeepIGN system were established with the ability to integrate (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested using a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that system assembly is straightforward, can be completed within a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy.
In this work, a multimodal IGN system was developed that uses recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient image data as well as interventional devices into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, to facilitate sharing across multiple research groups, and to enable continuous development by the community. The experimental results are very promising for the application of deep learning models to support interventional procedures, a decisive step toward improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes
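    As a small illustration of the segmentation metric reported above (a Dice coefficient of 0.84 for the gross tumor volume), the sketch below computes the Dice overlap of two binary masks; it is independent of DeepSeg itself and uses synthetic masks.

```python
# Minimal sketch of the Dice coefficient used to report segmentation accuracy;
# inputs are synthetic binary masks, not DeepSeg outputs.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice overlap of two binary volumes of identical shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64, 64), dtype=bool); a[20:40, 20:40, 20:40] = True
b = np.zeros_like(a);                   b[22:42, 20:40, 20:40] = True
print("Dice:", round(dice(a, b), 3))
```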

    Finite Element Modeling Driven by Health Care and Aerospace Applications

    Get PDF
    This thesis concerns the development, analysis, and computer implementation of mesh generation algorithms encountered in finite element modeling in health care and aerospace. The finite element method can reduce a continuous system to a discrete idealization that can be solved in the same manner as a discrete system, provided the continuum is discretized into a finite number of simple geometric shapes (e.g., triangles in two dimensions or tetrahedra in three dimensions). In health care, namely anatomic modeling, a discretization of the biological object is essential to compute tissue deformation for physics-based simulations. This thesis proposes an efficient procedure to convert 3-dimensional imaging data into adaptive lattice-based discretizations of well-shaped tetrahedra or mixed elements (i.e., tetrahedra, pentahedra, and hexahedra). This method operates directly on segmented images, thus skipping the surface reconstruction required by traditional Computer-Aided Design (CAD)-based meshing techniques, a step that is convoluted, especially for complex anatomic geometries. Our approach utilizes proper mesh gradation and tissue-specific multi-resolution, without sacrificing fidelity and while maintaining a smooth surface to reflect a certain degree of visual reality. Image-to-mesh conversion can facilitate accurate computational modeling for biomechanical registration of Magnetic Resonance Imaging (MRI) in image-guided neurosurgery. Neuronavigation with deformable registration of preoperative MRI to intraoperative MRI allows the surgeon to view the location of surgical tools relative to the preoperative anatomical (MRI) or functional data (DT-MRI, fMRI), thereby avoiding damage to eloquent areas during tumor resection. This thesis presents a deformable registration framework that utilizes multi-tissue mesh adaptation to map preoperative MRI to intraoperative MRI of patients who have undergone a brain tumor resection. Our enhancements with mesh adaptation improve the accuracy of the registration by more than 5 times compared to rigid and traditional physics-based non-rigid registration, and by more than 4 times compared to publicly available B-Spline interpolation methods. The adaptive framework is parallelized for shared-memory multiprocessor architectures. Performance analysis shows that this method could be applied, on average, in less than two minutes, achieving desirable speed for use in a clinical setting. The last part of this thesis focuses on finite element modeling of CAD data. This is an integral part of the design and optimization of components and assemblies in industry. We propose a new parallel mesh generator for efficient tetrahedralization of piecewise linear complex domains in aerospace. CAD-based meshing algorithms typically improve the shape of the elements in a post-processing step due to the high complexity and cost of the operations involved. On the contrary, our method optimizes the shape of the elements throughout the generation process to obtain maximum quality, and it utilizes high performance computing to reduce the overheads and improve end-user productivity. The proposed mesh generation technique is a combination of Advancing Front type point placement, direct point insertion, and parallel multi-threaded connectivity optimization schemes. The mesh optimization is based on a speculative (optimistic) approach that has been proven to perform well on shared-memory hardware. 
The experimental evaluation indicates that the quality and performance of this method are substantially improved over the existing state-of-the-art unstructured grid technology currently incorporated in several commercial systems. The proposed mesh generator will be part of an Extreme-Scale Anisotropic Mesh Generation Environment to meet industry's expectations and NASA's CFD vision
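    Mesh optimizers of the kind described in this thesis drive a per-element shape quality toward that of the regular tetrahedron. The sketch below shows one common normalized quality measure (volume over RMS edge length cubed); it is an illustrative assumption, not the generator's actual cost function.

```python
# Illustrative tetrahedral element quality: 1.0 for a regular tetrahedron,
# approaching 0 for a sliver. Not the mesh generator's actual objective.
import numpy as np
from itertools import combinations

def tet_quality(p):
    """p: (4, 3) vertex array. Returns 6*sqrt(2)*V / l_rms^3, in [0, 1]."""
    v = abs(np.linalg.det(p[1:] - p[0])) / 6.0                    # tetrahedron volume
    edges = [np.linalg.norm(p[i] - p[j]) for i, j in combinations(range(4), 2)]
    l_rms = np.sqrt(np.mean(np.square(edges)))                    # RMS edge length
    return 6.0 * np.sqrt(2.0) * v / l_rms**3

regular = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
sliver = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.5, 0.5, 0.01]], float)
print(round(tet_quality(regular), 3), round(tet_quality(sliver), 3))
```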

    Comparison of Physics-Based Deformable Registration Methods for Image-Guided Neurosurgery

    Get PDF
    This paper compares three finite element-based methods used in a physics-based non-rigid registration approach and reports on the progress made over the last 15 years. Large brain shifts caused by brain tumor removal affect registration accuracy by creating point and element outliers. A combination of approximation- and geometry-based point and element outlier rejection improves the rigid registration error by 2.5 mm and meets the real-time constraints (4 min). In addition, the paper raises several questions and presents two open problems for the robust estimation and improvement of registration error in the presence of outliers due to sparse, noisy, and incomplete data. It concludes with preliminary results on leveraging Quantum Computing, a promising new technology for computationally intensive problems such as Feature Detection and Block Matching, in addition to the finite element solver; together, these three components account for 75% of the computing time in deformable registration
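    The approximation-based part of the outlier rejection discussed above can be caricatured as iterative trimming: matches whose displacements disagree most with a simple fitted model are discarded. The sketch below uses an affine least-squares fit on synthetic matches and is a stand-in, not the paper's rejection scheme.

```python
# Hedged sketch of approximation-based outlier rejection: block matches whose
# displacements disagree most with a least-squares affine fit are dropped over
# a few trimming rounds. Illustrative only; not the paper's method.
import numpy as np

def trim_outliers(src, dst, keep=0.75, rounds=3):
    """src, dst: (N, 3) matched points. Returns a boolean inlier mask."""
    mask = np.ones(len(src), dtype=bool)
    src_h = np.hstack([src, np.ones((len(src), 1))])              # homogeneous coords
    for _ in range(rounds):
        X, *_ = np.linalg.lstsq(src_h[mask], dst[mask], rcond=None)  # affine fit on inliers
        resid = np.linalg.norm(src_h @ X - dst, axis=1)              # residual of every match
        mask = resid <= np.quantile(resid[mask], keep)               # keep the best fraction
    return mask

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (200, 3))
dst = src + np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.2, (200, 3))
dst[:20] += rng.uniform(10, 30, (20, 3))           # simulated bad matches near the resection
print("inliers kept:", int(trim_outliers(src, dst).sum()), "of", len(src))
```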

    Registration of 3D Ultrasound Volumes with Applications in Neurosurgery and Prostate Radiotherapy

    Get PDF
    Brain tissue deforms significantly after opening the dura and during tumor resection, invalidating pre-operative imaging data. Ultrasound is a popular imaging modality for providing the neurosurgeon with real-time updated images of brain tissue. Interpretation of post-resection ultrasound images is difficult due to large brain shift and tissue resection. Furthermore, several factors degrade the quality of post-resection ultrasound images, such as strong reflection of waves at the interface of saline water and brain tissue in resection cavities, air bubbles, and the application of blood-clotting agents around the edges of the resection. Image registration allows comparison of post-resection ultrasound images with higher-quality pre-resection images, assists in interpretation of post-resection images, and may help identify residual tumor; as such, it is of significant clinical importance. Prostate motion is known to reduce the precision of prostate radiotherapy. This motion can be categorized into intrafraction and interfraction motion. Interfraction motion introduces large systematic errors into the treatment and is the largest contributor to prostate planning treatment volume (PTV) margins. Conventional solutions to interfraction motion all have their respective drawbacks. The Clarity Autoscan system provides continuous ultrasound imaging of the prostate for interfraction motion correction; however, it is time-consuming and can have large interobserver errors. The intention of accurately targeting the prostate and reducing treatment side effects calls for a faster and more accurate registration framework for interfraction motion correction. In this thesis, we first propose a registration framework called Nonrigid Symmetric Registration (NSR) for accurate alignment of pre- and post-resection volumetric ultrasound images in near real-time. An outlier detection algorithm is proposed and utilized in this framework to identify non-corresponding regions (outliers) and therefore improve the robustness and accuracy of registration. We use an Efficient Second-order Minimization (ESM) method for fast and robust optimization. A symmetric and inverse-consistent method is exploited to generate realistic deformation fields. The results show that NSR significantly improves the quality of alignment between pre- and post-resection ultrasound images. Then, based on this framework, we develop a rigid registration framework called Prostate Registration Framework (PRF) for alignment of the prostate region in simulation and treatment volumes. PRF is trained using two 3D transperineal ultrasound (TPUS) images of an ultrasound prostate phantom and 20 3D TPUS images from 11 patients receiving Clarity Autoscan. Algorithm performance is evaluated using a further 21 TPUS images from a total of 8 patients by comparing the PRF with manual matching of landmarks and Clarity-based estimation of interfraction motion performed by three observers. The results show that PRF produces more accurate alignment of the prostate region in simulation and treatment volumes than Clarity and, furthermore, enables efficient and accurate repositioning of the prostate in treatment images
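    One way to picture the inverse-consistency property exploited by NSR is to compose the forward and backward displacement fields and measure how far the composition is from the identity. The sketch below does this on synthetic fields with SciPy interpolation; it is not the NSR implementation, and the backward field used here is only an approximate inverse.

```python
# Rough sketch of an inverse-consistency check behind symmetric registration:
# composing forward and backward displacement fields should return (near) zero.
# Fields are synthetic; interpolation uses SciPy, not the NSR code.
import numpy as np
from scipy.ndimage import map_coordinates

shape = (32, 32, 32)
grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")).astype(float)
u_fwd = 1.5 * np.sin(grid / 8.0)        # forward displacement field, shape (3, D, H, W)
u_bwd = -u_fwd                          # approximate (not exact) inverse displacement

warped = grid + u_fwd                   # sampling locations x + u_fwd(x)
u_bwd_at_warped = np.stack([map_coordinates(u_bwd[k], warped, order=1, mode="nearest")
                            for k in range(3)])
ic_error = np.linalg.norm(u_fwd + u_bwd_at_warped, axis=0)   # ||u_fwd(x) + u_bwd(x + u_fwd(x))||
print("mean inverse-consistency error (voxels):", round(float(ic_error.mean()), 3))
```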

    iRegNet: Non-rigid Registration of MRI to Interventional US for Brain-Shift Compensation using Convolutional Neural Networks

    Get PDF
    Accurate and safe neurosurgical intervention can be affected by intra-operative tissue deformation, known as brain-shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain-shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. The pre-operative MRI (as the moving image) and the iUS (as the fixed image) are first fed into our convolutional neural network, after which a non-rigid transformation field is estimated. The MRI image is then transformed to the iUS coordinate system using the output displacement field. Extensive experiments have been conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark error from pre-registration values of 4.18 ± 1.84 mm and 5.35 ± 4.19 mm to 1.47 ± 0.61 mm and 0.84 ± 0.16 mm for the BITE and RESECT datasets, respectively. Additional qualitative validation was conducted by two expert neurosurgeons, who overlaid MRI-iUS pairs before and after deformable registration. Experimental findings show that the proposed iRegNet is fast and achieves state-of-the-art accuracy, outperforming existing approaches. Furthermore, iRegNet delivers competitive results even on images it was not trained on, demonstrating its generality, and can therefore be valuable in intra-operative neurosurgical guidance
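    The registration pattern described above (moving MRI and fixed iUS concatenated, a CNN predicting a displacement field, and the MRI warped into the iUS coordinate system) can be sketched with a toy network as below. The architecture, shapes, and random inputs are illustrative assumptions; this is not iRegNet.

```python
# Skeleton of the registration pattern described above: moving MRI + fixed iUS
# are concatenated, a CNN predicts a displacement field, and the MRI is warped
# with it. The toy network is NOT iRegNet; layers and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1))           # 3-channel displacement (voxels)

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(moving, disp):
    """Warp `moving` (N,1,D,H,W) by `disp` (N,3,D,H,W) with trilinear sampling."""
    n, _, d, h, w = moving.shape
    base = torch.stack(torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij"), dim=0).float()
    coords = base.unsqueeze(0) + disp                 # sampling locations in voxel units
    # grid_sample expects normalized (x, y, z) order in [-1, 1]
    norm = torch.stack([2 * coords[:, 2] / (w - 1) - 1,
                        2 * coords[:, 1] / (h - 1) - 1,
                        2 * coords[:, 0] / (d - 1) - 1], dim=-1)
    return F.grid_sample(moving, norm, align_corners=True)

mri = torch.rand(1, 1, 32, 32, 32)                    # pre-operative MRI (moving)
ius = torch.rand(1, 1, 32, 32, 32)                    # intra-operative US (fixed)
model = ToyRegNet()
registered = warp(mri, model(mri, ius))               # MRI resampled in iUS space
print(registered.shape)
```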