143 research outputs found

    Automated and robust geometric and spectral fusion of multi-sensor, multi-spectral satellite images

    Get PDF
    Earth observation satellite data acquired in recent years and decades provide an ideal basis for accurate long-term monitoring and mapping of the Earth's surface and atmosphere. However, the vast diversity of sensor characteristics often prevents synergetic use. Hence, there is an urgent need to combine heterogeneous multi-sensor data into geometrically and spectrally harmonized time series of analysis-ready satellite data. This dissertation provides a mainly methodical contribution by presenting two newly developed open-source algorithms for sensor fusion, both of which are thoroughly evaluated, tested, and validated in practical applications. AROSICS, a novel algorithm for multi-sensor image co-registration and geometric harmonization, provides robust, automated detection and correction of positional shifts and aligns the data to a common coordinate grid. The second algorithm, SpecHomo, was developed to unify differing spectral sensor characteristics. It relies on separate material-specific regressors for different land cover classes, enabling higher transformation accuracies as well as the estimation of unilaterally missing spectral bands. Based on these algorithms, a third study investigated the added value of synthesized red edge bands and of dense time series, enabled by sensor fusion, for estimating burn severity and mapping fire damage from Landsat. The results illustrate the effectiveness of the developed algorithms in reducing multi-sensor, multi-temporal data inconsistencies and demonstrate the added value of geometric and spectral harmonization for subsequent products. Synthesized red edge information proved valuable for retrieving vegetation-related parameters such as burn severity. Moreover, using sensor fusion to combine multi-sensor time series was shown to offer great potential for more accurate monitoring and mapping of quickly evolving environmental processes.
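Co-registration tools of this kind typically detect per-window positional shifts in the Fourier domain. The sketch below illustrates that general principle (FFT-based phase correlation between a reference and a target patch); it is a minimal, illustrative example, not the actual AROSICS implementation.

```python
# Minimal sketch of phase-correlation shift detection between two co-located
# image patches -- the general principle behind window-based co-registration;
# not the actual AROSICS implementation.
import numpy as np

def estimate_shift(ref_patch: np.ndarray, tgt_patch: np.ndarray):
    """Estimate the integer (dy, dx) shift to apply to tgt_patch to align it with ref_patch."""
    # Normalized cross-power spectrum of the two patches (phase only)
    F_ref = np.fft.fft2(ref_patch)
    F_tgt = np.fft.fft2(tgt_patch)
    cross_power = F_ref * np.conj(F_tgt)
    cross_power /= np.abs(cross_power) + 1e-12
    # The peak of the inverse transform marks the translational offset
    correlation = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap offsets larger than half the patch size to negative shifts
    if dy > ref_patch.shape[0] // 2:
        dy -= ref_patch.shape[0]
    if dx > ref_patch.shape[1] // 2:
        dx -= ref_patch.shape[1]
    return dy, dx
```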

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    Get PDF
    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. Progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and with virtual models of the delivery instruments, tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies were conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the visualization and navigation necessary to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
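The guidance described above amounts to expressing a magnetically tracked instrument in the same coordinate frame as the pre-operative models and the live ultrasound by chaining calibration and registration transforms. A minimal sketch of such a transform chain, with purely illustrative matrix names and values (the actual calibration chain used in the thesis may differ):

```python
# Minimal sketch: mapping a tracked tool tip into the pre-operative model frame
# by composing homogeneous transforms. Transform names and values are illustrative only.
import numpy as np

def to_homogeneous(point_xyz):
    return np.append(np.asarray(point_xyz, dtype=float), 1.0)

# Hypothetical calibration/registration matrices (4x4), e.g. obtained from
# tool calibration and landmark-based registration of tracker space to image space.
T_tool_to_tracker = np.eye(4)      # tool sensor pose reported by the magnetic tracker
T_tracker_to_image = np.eye(4)     # registration of tracker space to the model/image space

tool_tip_in_tool = np.array([0.0, 0.0, 0.0])   # tip at the tool sensor origin (illustrative)

# Chain the transforms: tool -> tracker -> image/model space
tip_in_image = T_tracker_to_image @ T_tool_to_tracker @ to_homogeneous(tool_tip_in_tool)
print(tip_in_image[:3])   # coordinates used to render the virtual tool model
```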

    Tailored for Real-World: A Whole Slide Image Classification System Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology Workload.

    Get PDF
    The standard-of-care diagnostic procedure for suspected skin cancer is microscopic examination of hematoxylin & eosin stained tissue by a pathologist. Areas of high inter-pathologist discordance and rising biopsy rates necessitate higher efficiency and diagnostic reproducibility. We present and validate a deep learning system that classifies digitized dermatopathology slides into 4 categories. The system is developed using 5,070 images from a single lab, and tested on an uncurated set of 13,537 images from 3 test labs, using whole slide scanners manufactured by 3 different vendors. The system's use of deep-learning-based confidence scoring as a criterion for accepting a result yields an accuracy of up to 98%, and makes it adoptable in a real-world setting. Without confidence scoring, the system achieved an accuracy of 78%. We anticipate that our deep learning system will serve as a foundation enabling faster diagnosis of skin cancer, identification of cases for specialist review, and targeted diagnostic classifications.
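The confidence-scoring step can be pictured as accepting a slide-level prediction only when the model's confidence clears a threshold and deferring everything else to a pathologist. A minimal sketch of that triage logic; the class names and threshold are assumptions for illustration, not the paper's actual configuration:

```python
# Minimal sketch of confidence-gated classification: accept the model's call
# only above a confidence threshold, otherwise defer to manual review.
# The class names and threshold are illustrative assumptions.
import numpy as np

CLASSES = ["class_a", "class_b", "class_c", "class_d"]   # assumed 4 categories
CONFIDENCE_THRESHOLD = 0.9

def triage(softmax_probs: np.ndarray):
    """Return (label, confidence) if confident enough, else flag for review."""
    idx = int(np.argmax(softmax_probs))
    confidence = float(softmax_probs[idx])
    if confidence >= CONFIDENCE_THRESHOLD:
        return CLASSES[idx], confidence          # automated classification
    return "defer_to_pathologist", confidence    # low confidence: manual review

print(triage(np.array([0.02, 0.95, 0.02, 0.01])))   # ('class_b', 0.95)
print(triage(np.array([0.40, 0.35, 0.15, 0.10])))   # ('defer_to_pathologist', 0.4)
```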

    Spatiotemporal Identification of Cell Divisions Using Symmetry Properties in Time-Lapse Phase Contrast Microscopy

    Get PDF
    A variety of biological and pharmaceutical studies, such as studies of anti-cancer drugs, require the quantification of cell responses over long periods of time. This is performed with time-lapse video microscopy, which yields long sequences of frames. For this purpose, phase contrast imaging is commonly used since it is minimally invasive. The cell responses of interest in this study are the mitotic cell divisions. Measuring them manually is tedious, subjective, and restrictive. This study introduces an automated method for these measurements. The method starts with preprocessing for restoration and reconstruction of the phase contrast time-lapse sequences. The data are first corrected for intensity non-uniformities. Subsequently, the circular symmetry of the contour of the mitotic cells in phase contrast images is exploited by applying a Circle Hough Transform (CHT) to reconstruct the entire cells. The CHT is also enhanced with the ability to “vote” exclusively towards the center of curvature. The CHT image sequence is then registered to compensate for misplacements between successive frames. The sequence is subsequently processed to detect cell centroids in individual frames and use them as starting points to form spatiotemporal cell trajectories along both the positive and the negative time direction, that is, anti-causally. The connectivities of the different trajectories, reinforced by the symmetry of the daughter-cell trajectories, yield the cell-division events as topological by-products, together with the corresponding entries into mitosis and exits from cytokinesis. The experiments use several video sequences from three different cell lines with many cells undergoing mitoses and divisions. The quantitative validations of the results demonstrate the high performance and efficiency of the method.
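The voting step of the Circle Hough Transform mentioned above can be sketched as follows: each edge pixel casts votes for candidate circle centers at a fixed radius, and local maxima of the accumulator indicate likely (rounded, mitotic) cells. This is a minimal, single-radius illustration rather than the enhanced, direction-restricted variant used in the study.

```python
# Minimal sketch of a Circle Hough Transform accumulator for a fixed radius:
# edge pixels vote for candidate circle centers. The directed ("vote toward the
# center of curvature") variant described above would restrict each vote to the
# gradient direction; this sketch casts votes all around for simplicity.
import numpy as np

def circle_hough(edge_map: np.ndarray, radius: int, n_angles: int = 64) -> np.ndarray:
    """Accumulate votes for circle centers at a single radius."""
    h, w = edge_map.shape
    accumulator = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)                       # edge pixel coordinates
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for theta in angles:
        # Each edge pixel votes for the center lying `radius` away along theta
        cy = np.round(ys - radius * np.sin(theta)).astype(int)
        cx = np.round(xs - radius * np.cos(theta)).astype(int)
        valid = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(accumulator, (cy[valid], cx[valid]), 1)
    return accumulator   # local maxima correspond to likely circle centers
```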

    Volumetric MRI Reconstruction from 2D Slices in the Presence of Motion

    Get PDF
    Despite recent advances in acquisition techniques and reconstruction algorithms, magnetic resonance imaging (MRI) remains challenging in the presence of motion. To mitigate this, ultra-fast two-dimensional (2D) MRI sequences are often used in clinical practice to acquire thick, low-resolution (LR) 2D slices to reduce in-plane motion. The resulting stacks of thick 2D slices typically provide high-quality visualizations when viewed in the in-plane direction. However, the low spatial resolution in the through-plane direction, in combination with motion commonly occurring between individual slice acquisitions, gives rise to stacks with overall limited geometric integrity. As a consequence, an accurate and reliable diagnosis may be compromised when using such motion-corrupted, thick-slice MRI data. This thesis presents methods to volumetrically reconstruct geometrically consistent, high-resolution (HR) three-dimensional (3D) images from motion-corrupted, possibly sparse, low-resolution 2D MR slices. It focuses on volumetric reconstruction techniques using inverse problem formulations applicable to a broad range of clinical applications in which the associated motion patterns differ inherently, but in which the use of thick-slice MR data is current clinical practice. In particular, volumetric reconstruction frameworks are developed based on slice-to-volume registration with inter-slice transformation regularization and robust, complete-outlier rejection for the reconstruction step, which can either avoid or efficiently deal with potential slice misregistrations. Additionally, this thesis describes efficient Forward-Backward Splitting schemes for image registration for any combination of differentiable (not necessarily convex) similarity measure and convex (not necessarily smooth) regularization with a tractable proximal operator. Experiments are performed on fetal and upper abdominal MRI, and on historical, printed brain MR films associated with a uniquely long-term study dating back to the 1980s. The results demonstrate the broad applicability of the presented frameworks to achieve robust reconstructions with the potential to improve disease diagnosis and patient management in clinical practice.
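Forward-Backward Splitting alternates a gradient (forward) step on the differentiable term with a proximal (backward) step on the convex regularizer. The sketch below shows the scheme on a toy sparse least-squares problem with an L1 regularizer, whose proximal operator is soft-thresholding; the thesis applies the same splitting to registration energies rather than to this toy problem.

```python
# Minimal sketch of Forward-Backward Splitting (proximal gradient):
# minimize f(x) + g(x) with f differentiable and g convex with a cheap prox.
# Here f(x) = 0.5*||A x - b||^2 (toy data term) and g(x) = lam*||x||_1,
# whose proximal operator is soft-thresholding. Illustrative only.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, lam, step, n_iter=200):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # forward (gradient) step on f
        x = soft_threshold(x - step * grad,   # backward (proximal) step on g
                           step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))
x_true = np.zeros(50); x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2        # step <= 1/L, L = Lipschitz constant of grad f
print(np.nonzero(np.abs(forward_backward(A, b, lam=0.1, step=step)) > 0.1)[0])
```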

    Prostate biopsy tracking with deformation estimation

    Full text link
    Transrectal biopsies under 2D ultrasound (US) control are the current clinical standard for prostate cancer diagnosis. The isoechogenic nature of prostate carcinoma makes it necessary to sample the gland systematically, resulting in a low sensitivity. Also, it is difficult for the clinician to follow the sampling protocol accurately under 2D US control, and the exact anatomical location of the biopsy cores is unknown after the intervention. Tracking systems for prostate biopsies make it possible to generate biopsy distribution maps for intra- and post-interventional quality control and 3D visualisation of histological results for diagnosis and treatment planning. They can also guide the clinician toward non-ultrasound targets. In this paper, a volume-swept 3D US-based tracking system for fast and accurate estimation of prostate tissue motion is proposed. The entirely image-based system solves the patient motion problem with an a priori model of rectal probe kinematics. Prostate deformations are estimated with elastic registration to maximize accuracy. The system is robust, with only 17 registration failures out of 786 biopsy volumes (2%) acquired from 47 patients during biopsy sessions. Accuracy was evaluated as 0.76 ± 0.52 mm using manually segmented fiducials on 687 registered volumes stemming from 40 patients. A clinical protocol for assisted biopsy acquisition was designed and implemented as a biopsy assistance system, which makes it possible to overcome the drawbacks of the standard biopsy procedure. Comment: Medical Image Analysis (2011), epub ahead of print.
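The reported accuracy can be read as the mean ± standard deviation of distances between corresponding, manually segmented fiducials after registration. A minimal sketch of that evaluation with placeholder fiducial coordinates:

```python
# Minimal sketch of fiducial-based registration accuracy: mean +/- std of the
# Euclidean distances between corresponding fiducials after registration.
# The arrays below are placeholders for manually segmented fiducial positions.
import numpy as np

def registration_accuracy(fiducials_ref: np.ndarray, fiducials_registered: np.ndarray):
    """Both inputs are (N, 3) arrays of corresponding 3D fiducial positions in mm."""
    distances = np.linalg.norm(fiducials_ref - fiducials_registered, axis=1)
    return distances.mean(), distances.std()

ref = np.array([[10.0, 22.1, 5.3], [14.2, 20.0, 7.8], [11.7, 25.4, 6.1]])
reg = ref + np.random.default_rng(1).normal(scale=0.5, size=ref.shape)  # simulated residual error
mean_err, std_err = registration_accuracy(ref, reg)
print(f"{mean_err:.2f} ± {std_err:.2f} mm")
```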

    Ultra-High Field Magnetic Resonance Imaging for Stereotactic Neurosurgery

    Get PDF
    Stereotactic neurosurgery is a subspecialty within neurosurgery concerned with accurate targeting of brain structures. Deep brain stimulation (DBS) is a specific type of stereotaxy in which electrodes are implanted in deep brain structures. It has proven therapeutic efficacy in Parkinson’s disease and Essential Tremor, but with an expanding number of indications under evaluation including Alzheimer’s disease, depression, epilepsy, and obesity, many more Canadians with chronic health conditions may benefit. Accurate surgical targeting is crucial, as millimeter deviations can result in unwanted side effects, including muscle contractions or, worse, vessel injury. Lack of adequate visualization of surgical targets at conventional lower field strengths (1.5/3 Tesla) has meant that standard-of-care surgical treatment has relied on indirect targeting, using standardized landmarks to find a correspondence with a histological "template" of the brain. For this reason, these procedures routinely require awake testing and microelectrode recording, which increases operating room time, patient discomfort, and risk of complications. Advances in ultra-high field (≥ 7 Tesla, or 7T) imaging have important potential implications for targeting, enabling better visualization of these structures as a result of increased (sub-millimeter) spatial resolution, tissue contrast, and signal-to-noise ratio. The work in this thesis explores ways in which ultra-high field magnetic resonance imaging can be integrated into the practice of stereotactic neurosurgery. In Chapter 2, an ultra-high field MRI template is integrated into the surgical workflow to assist with planning for deep brain stimulation surgery cases. Chapter 3 describes a novel anatomical fiducial placement protocol that is developed, validated, and used prospectively to quantify the limits of template-assisted surgical planning. In Chapter 4, geometric distortions at 7T that may impede the ability to perform accurate surgical targeting are characterized in participant data, and generally noted to be away from areas of interest for stereotactic targeting. Finally, Chapter 5 discusses a number of important stereotactic targets that are directly visualized and described for the first time in vivo, paving the way for patient-specific surgical planning using ultra-high field MRI.
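A fiducial-based workflow like the one in Chapter 3 ultimately rests on point-based rigid alignment between patient and template landmark sets. As a minimal, illustrative sketch (not the thesis' actual pipeline), a Kabsch/Procrustes alignment with placeholder anatomical fiducials:

```python
# Minimal sketch of rigid (Kabsch) alignment of patient-space anatomical
# fiducials to their template-space counterparts; the point lists are placeholders.
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t such that R @ src_i + t ~= dst_i."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

patient_afids = np.array([[1.0, 2.0, 3.0], [4.0, 6.0, 1.0], [7.0, 2.0, 5.0], [3.0, 8.0, 2.0]])
rotation_90z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
template_afids = patient_afids @ rotation_90z.T + np.array([5.0, -2.0, 1.0])
R, t = kabsch(patient_afids, template_afids)
residual = np.linalg.norm((patient_afids @ R.T + t) - template_afids, axis=1).mean()
print(f"mean residual after alignment: {residual:.3f} mm")
```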