820 research outputs found
Simultaneous Multiparametric and Multidimensional Cardiovascular Magnetic Resonance Imaging
No abstract available
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched in the upper broad wall so that the structure radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
Integrated Geophysical Analysis of Passive Continental Margins: Insights into the Crustal Structure of the Namibian Margin from Magnetotelluric, Gravity, and Seismic Data
Passive continental margin research amalgamates the investigation of many broad topics, such as the emergence of oceanic crust, lithospheric stress patterns and plume-lithosphere interaction, reservoir potential, the methane cycle, and general global geodynamics. Central tasks in this field of research are geophysical investigations of the structure, composition, and dynamics of the passive-margin crust and upper mantle. A key practice for improving geophysical models and their interpretation is the integrated analysis of multiple data sets, or the integration of complementary models and data. In this thesis, I compare four different inversion results based on data from the Namibian passive continental margin. These are (a) a single-method MT inversion; (b) constrained inversion of MT data, cross-gradient coupled with a fixed structural density model; (c) cross-gradient coupled joint inversion of MT and satellite gravity data; and (d) constrained inversion of MT data, cross-gradient coupled with a fixed gradient velocity model. To bridge the formal analysis of geophysical models with geological interpretations, I define a link between the physical parameter models and geological units. To this end, the results from the joint MT and gravity inversion (c) are correlated through a user-unbiased clustering analysis. This clustering analysis reveals a distinct difference in the signature of the transitional crust south of, and along, the supposed hot-spot track, the Walvis Ridge. I ascribe this contrast to an increase in magmatic activity above the volcanic center along the Walvis Ridge. Furthermore, the analysis helps to clearly identify areas of interlayered massive and weathered volcanic flows, which are usually identified only as seaward-dipping reflectors in reflection seismic studies. Lastly, the clustering helps to differentiate two types of sediment cover: one of near-shore, thick, clastic sediments, and one of more biogenic marine sediments located further offshore.
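The user-unbiased clustering described above groups model cells by their inverted physical parameters. A minimal sketch of such a parameter-space clustering, using a small hand-rolled k-means on hypothetical cells described by log-resistivity and density (the actual analysis uses the full joint inversion models; all values and names here are illustrative):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means for grouping model cells by physical parameters.
    Centers are initialized deterministically from evenly spaced samples."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assign each cell to the nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical model cells: (log10 resistivity [ohm*m], density [g/cm^3])
cells = np.array([
    [0.5, 2.20], [0.6, 2.30], [0.4, 2.25],   # sediment-like signature
    [3.0, 2.90], [3.2, 2.95], [2.9, 3.00],   # magmatic-crust-like signature
])
labels, centers = kmeans(cells, 2)
print(labels)  # the first three cells share one label, the last three the other
```

The clusters recovered this way can then be mapped back onto the model geometry and interpreted as geological units.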
Geometric Data Analysis: Advancements of the Statistical Methodology and Applications
Data analysis has become fundamental to our society and comes in multiple facets and approaches. Nevertheless, in research and applications, the focus has primarily been on data from Euclidean vector spaces. Consequently, the majority of methods applied today are not suited for more general data types. Driven by needs from fields like image processing, (medical) shape analysis, and network analysis, more and more attention has recently been given to data from non-Euclidean spaces, particularly (curved) manifolds. This has led to the field of geometric data analysis, whose methods explicitly take the structure (for example, the topology and geometry) of the underlying space into account.
This thesis contributes to the methodology of geometric data analysis by generalizing several fundamental notions from multivariate statistics to manifolds. We thereby focus on two different viewpoints.
First, we use Riemannian structures to derive a novel regression scheme for general manifolds that relies on splines of generalized Bézier curves. It can accurately model non-geodesic relationships, for example, time-dependent trends with saturation effects or cyclic trends. Since Bézier curves can be evaluated with the constructive de Casteljau algorithm, working with data from manifolds of high dimensions (for example, a hundred thousand or more) is feasible. Relying on the regression, we further develop
a hierarchical statistical model for an adequate analysis of longitudinal data in manifolds, and a method to control for confounding variables.
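As a toy illustration of the constructive de Casteljau evaluation mentioned above, the following sketch evaluates a generalized Bézier curve on the unit sphere, with geodesic (great-circle) interpolation replacing the straight-line interpolation of the classical algorithm. The sphere and all names are illustrative choices, not the thesis's implementation:

```python
import numpy as np

def slerp(p, q, t):
    """Geodesic (great-circle) interpolation between unit vectors p and q."""
    dot = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-12:
        return p.copy()
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

def de_casteljau_sphere(control_points, t):
    """Evaluate a generalized Bezier curve on S^2 by recursively replacing
    straight-line interpolation with geodesic interpolation."""
    pts = [p / np.linalg.norm(p) for p in control_points]
    while len(pts) > 1:
        pts = [slerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Three control points on the unit sphere define a quadratic Bezier curve.
b0 = np.array([1.0, 0.0, 0.0])
b1 = np.array([0.0, 1.0, 0.0])
b2 = np.array([0.0, 0.0, 1.0])
p = de_casteljau_sphere([b0, b1, b2], 0.5)
print(np.linalg.norm(p))  # the evaluated point stays on the sphere (norm ~ 1.0)
```

Because each step only needs geodesics between pairs of points, the same recursion applies on any manifold where geodesics can be computed, which is what makes high-dimensional settings tractable.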
Second, we focus on data that is not only manifold- but even Lie-group-valued, which is frequently the case in applications. This is only possible by endowing the group with an affine connection structure that is, in general, not Riemannian. Utilizing it, we derive generalizations of several well-known dissimilarity measures between data distributions that can be used for various tasks, including hypothesis testing. Invariance under data translations is proven, and a connection to continuous distributions is given for one measure.
A further central contribution of this thesis is that it shows use cases for all notions in real-world applications, particularly in problems from shape analysis in medical imaging and archaeology. We can replicate or further quantify several known findings for shape changes of the femur and the right hippocampus under osteoarthritis and Alzheimer's, respectively. Furthermore, in an archaeological application, we obtain new insights into the construction principles of ancient sundials. Last but not least, we use the geometric structure underlying human brain connectomes to predict cognitive scores. Utilizing a sample selection procedure, we obtain state-of-the-art results.
Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease
Prostate cancer is one of the most prevalent types of cancer in males in the United States. Bone is a common site of metastases for metastatic prostate cancer. However, bone metastases are often considered “unmeasurable” using standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria based on the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of the therapy response of skeletal metastases. Quantitative bone SPECT (QBSPECT) may provide the ability to estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. To begin, we developed registration methods to generate a dataset of realistic and anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we developed supervised, computer-automated segmentation methods to minimize intra- and inter-observer variation in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for the development of QBSPECT methods for assessing the therapy response of bone metastases.
Deep Multimodality Image-Guided System for Assisting Neurosurgery
Intracranial brain tumors are among the ten most common malignant cancers and are responsible for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which have an extremely heterogeneous appearance and are radiologically difficult to distinguish from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy.
However, brain tumor surgery faces major challenges in achieving maximal tumor resection while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are addressed here. First, manual delineation of a glioma, including its subregions, is difficult because of its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms, the so-called "brain shift", in response to surgical manipulation, swelling caused by osmotic drugs, and anesthesia, which limits the usefulness of preoperative imaging data for guiding the procedure.
Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided tools are mainly computer-assisted systems that use computer-vision methods to facilitate perioperative surgical procedures. However, surgeons still have to mentally fuse the surgical plan derived from preoperative images with real-time information while manipulating surgical instruments inside the body and monitoring progress toward the target. Hence, the need for image guidance during neurosurgical procedures has always been a major concern for physicians.
The aim of this research is to develop a novel system for perioperative image-guided neurosurgery (IGN), namely DeepIGN, that can achieve the expected outcomes of brain tumor surgery, thereby maximizing overall survival and minimizing postoperative neurological morbidity. In this thesis, novel methods are first proposed for the core components of the DeepIGN system: brain tumor segmentation in MRI, and multimodal registration of preoperative MRI to intraoperative US (iUS) images, using recent developments in deep learning. The outcome predictions of the employed deep-learning networks are then further interpreted and examined by generating human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software, which is responsible for integrating information from tracking systems, image visualization and fusion, and displaying real-time updates of the instruments relative to the patient space.
The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep-learning framework for the automatic delineation of gliomas in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when applying advanced deep-learning approaches such as 3D convolutions across all layers, region-based training, on-the-fly data augmentation techniques, and ensemble methods.
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted on two multi-location databases: BITE and RESECT. Two experienced neurosurgeons carried out an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Moreover, the proposed iRegNet delivers competitive results even on images it was not trained on, demonstrating its generality, and may therefore be useful for intraoperative neurosurgical guidance.
For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep-learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps for understanding the deep network's predictions. In addition, visualization maps are generated to reveal the information flow in the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about tumor segmentation results and thus help them understand how the deep-learning model processes MRI data successfully.
Furthermore, an interactive neurosurgical display for procedure guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multimodal DeepIGN system were established with the capability of integrating (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested using a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that the system setup is straightforward, can be completed in a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy.
In this work, a multimodal IGN system was developed that exploits recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient imaging data, as well as interventional devices, into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, facilitate sharing among multiple research groups, and enable continuous development by the community. The experimental results are very promising for the application of deep-learning models to support interventional procedures, a crucial step toward improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes.
Quantum chemistry meets astrobiology: Approximate vibrational spectral data for potential biosignatures
The chemical characterisation of exoplanet atmospheres plays a crucial role in profoundly enriching our comprehension of exoplanets. By deciphering the array of molecular species shaping the chemical composition of a given exoplanet atmosphere, we unlock invaluable insights into its chemical evolution, climate, physical dynamics, and even its potential for harbouring life.
High-resolution molecular spectroscopy provides the fundamental data needed to robustly identify molecules in exoplanet atmospheric spectra recorded from ground- and/or space-based telescopes. However, the availability of high-resolution molecular spectroscopic data is limited, given its intensive and time-consuming generation process, which involves costly quantum chemistry calculations and exhaustive experimental measurements. By the time this thesis was submitted, the repository of high-resolution infrared molecular spectroscopic data encompassed around 100 molecular species. This constraint considerably hinders the scope of new molecular detections in exoplanet atmospheres: if there is no spectroscopic data for a given molecule, we simply cannot find it.
To address this challenge, this thesis introduces a pioneering approach that complements the traditional method for generating high-resolution molecular spectroscopic data. By leveraging routine quantum chemistry calculations, specifically harmonic frequency calculations, this research provides a high-throughput method to rapidly generate approximate vibrational spectral data for thousands of potential biosignature molecules.
This thesis is organised into two main themes. First, it focuses on refining harmonic frequency calculations for large-scale spectral data generation. Previous research lacked comprehensive benchmarking, making it difficult for users to choose appropriate level-of-theory and basis-set pairs (known as model chemistries) for these calculations. This work addresses this limitation by performing an extensive evaluation of over 600 model chemistries using a newly developed vibrational frequency benchmark data set. The findings highlight the B97-1/def2-TZVPD model chemistry for its exceptional balance between accuracy and computational cost. Indeed, a median error of 10 cm-1 is expected in the calculated harmonic frequencies after scaling, along with good transition intensity predictions due to its accurate dipole moment calculations.
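The scaling mentioned above is a simple multiplicative correction applied to harmonic frequencies to approximate anharmonic fundamentals. A minimal sketch follows; the factor 0.96 and the frequency values are placeholders for illustration only, not the benchmarked value for B97-1/def2-TZVPD:

```python
# Hypothetical scaling factor; real factors are benchmarked per model chemistry.
SCALING_FACTOR = 0.96

def scale_frequencies(harmonic_cm1, factor=SCALING_FACTOR):
    """Apply a multiplicative scaling factor to harmonic frequencies (cm^-1)
    to approximate the anharmonic fundamentals observed experimentally."""
    return [factor * f for f in harmonic_cm1]

# Hypothetical harmonic frequencies of a small molecule (cm^-1)
harmonic = [1650.0, 3800.0, 3900.0]
print(scale_frequencies(harmonic))  # approximately [1584.0, 3648.0, 3744.0]
```

A single global factor is crude but cheap, which is what makes it viable at the scale of thousands of molecules.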
With the optimised harmonic frequency calculations in place, the second theme of the thesis focuses on generating approximate spectral data for thousands of astrochemistry-relevant molecules, specifically potential biosignatures. Employing an automated high-throughput approach, this thesis produces approximate vibrational spectra for 2743 molecules, most of which had limited, or completely absent, spectroscopic data in the literature. While these approximate spectral data are not accurate enough to enable definitive molecular detections in exoplanet atmospheres, and cannot replace the generation of high-resolution spectroscopic data, they have powerful applications in identifying potential molecular candidates responsible for unknown spectral features. This application is first explored using the SO2 detection in the atmospheric spectrum of WASP-39b as a proof of concept, and then applied to shortlist potential molecular candidates for the 4.25 micron (2352 cm-1) spectral feature in the same spectrum, which, by the time this thesis was submitted, had not been assigned to any molecular species.
Beyond screening potential molecular candidates for unknown spectral features, this large-scale approximate spectral data generation offers broader applications, such as identifying molecules with strong absorption features that may be detectable at low abundances, and serving as a training set for machine-learning predictions of vibrational frequencies.
The approximate spectral data generated in this thesis will play a crucial role in supporting our understanding of the chemical composition of exoplanet atmospheres. By highlighting potential candidates for unknown spectral features, this approach complements the generation of high-resolution molecular spectroscopic data, directing attention towards prioritised molecules that warrant meticulous data acquisition. This synergy between approximate and high-resolution spectroscopic data will amplify our potential to unveil the chemical composition of exoplanet atmospheres, providing direction towards possible initial identifications of the more unusual molecules.
Advanced Characterization and On-Line Process Monitoring of Additively Manufactured Materials and Components
This reprint is concerned with the microstructural characterization and defect analysis of metallic additively manufactured (AM) materials and parts. Special attention is paid to the determination of residual stress in such parts and to online monitoring techniques devised to predict the appearance of defects. Finally, several non-destructive testing techniques are employed to assess the quality of AM materials and parts.
Elastic shape analysis of geometric objects with complex structures and partial correspondences
In this dissertation, we address the development of elastic shape analysis frameworks for the registration, comparison and statistical shape analysis of geometric objects with complex topological structures and partial correspondences. In particular, we introduce a variational framework and several numerical algorithms for the estimation of geodesics and distances induced by higher-order elastic Sobolev metrics on the space of parametrized and unparametrized curves and surfaces. We extend our framework to the setting of shape graphs (i.e., geometric objects with branching structures where each branch is a curve) and surfaces with complex topological structures and partial correspondences. To do so, we leverage the flexibility of varifold fidelity metrics in order to augment our geometric objects with a spatially-varying weight function, which in turn enables us to indirectly model topological changes and handle partial matching constraints via the estimation of vanishing weights within the registration process. In the setting of shape graphs, we prove the existence of solutions to the relaxed registration problem with weights, which is the main theoretical contribution of this thesis. In the setting of surfaces, we leverage our surface matching algorithms to develop a comprehensive collection of numerical routines for the statistical shape analysis of sets of 3D surfaces, which includes algorithms to compute Karcher means, perform dimensionality reduction via multidimensional scaling and tangent principal component analysis, and estimate parallel transport across surfaces (possibly with partial matching constraints).
Moreover, we also address the development of numerical shape analysis pipelines for large-scale data-driven applications with geometric objects. Towards this end, we introduce a supervised deep learning framework to compute the square-root velocity (SRV) distance for curves. Our trained network provides fast and accurate estimates of the SRV distance between pairs of geometric curves, without the need to find optimal reparametrizations. As a proof of concept for the suitability of such approaches in practical contexts, we use it to perform optical character recognition (OCR), achieving comparable performance in terms of computational speed and accuracy to other existing OCR methods.
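The SRV distance approximated by the trained network above can be sketched directly for discretized curves. The version below omits the reparametrization optimization, so it gives an upper bound on the true elastic distance; the uniform-sampling discretization and all names are illustrative, not the dissertation's implementation:

```python
import numpy as np

def srv_transform(curve):
    """Square-root velocity representation of a discretized curve (n x d array),
    q(t) = c'(t) / sqrt(|c'(t)|), assuming uniform sampling in t."""
    vel = np.gradient(curve, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(speed, 1e-12))

def srv_distance(c1, c2):
    """L2 distance between SRV representations, without optimizing over
    reparametrizations (hence an upper bound on the elastic distance)."""
    q1, q2 = srv_transform(c1), srv_transform(c2)
    return np.sqrt(np.sum((q1 - q2) ** 2) / len(c1))

# Two hypothetical planar curves: a straight segment and a shallow arc.
t = np.linspace(0.0, 1.0, 100)
line = np.stack([t, np.zeros_like(t)], axis=1)
arc = np.stack([t, 0.2 * np.sin(np.pi * t)], axis=1)
print(srv_distance(line, line))  # 0.0
print(srv_distance(line, arc) > 0)  # True
```

The expensive part in practice is the optimization over reparametrizations that this sketch skips, which is precisely the step the supervised network is trained to sidestep.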
Lastly, we address the difficulty of extracting high-quality shape structures from imaging data in the field of astronomy. To do so, we present a state-of-the-art expectation-maximization approach for the challenging task of multi-frame astronomical image deconvolution and super-resolution. We leverage our approach to obtain a high-fidelity reconstruction of the night sky, from which high-quality shape data can be extracted using appropriate segmentation and photometric techniques.
Fast Non-Rigid Radiance Fields from Monocularized Data
The reconstruction and novel view synthesis of dynamic scenes recently gained
increased attention. As reconstruction from large-scale multi-view data
involves immense memory and computational requirements, recent benchmark
datasets provide collections of single monocular views per timestamp sampled
from multiple (virtual) cameras. We refer to this form of inputs as
"monocularized" data. Existing work shows impressive results for synthetic
setups and forward-facing real-world data, but is often limited in the training
speed and angular range for generating novel views. This paper addresses these
limitations and proposes a new method for full 360° inward-facing novel
view synthesis of non-rigidly deforming scenes. At the core of our method are:
1) An efficient deformation module that decouples the processing of spatial and
temporal information for accelerated training and inference; and 2) A static
module representing the canonical scene as a fast hash-encoded neural radiance
field. In addition to existing synthetic monocularized data, we systematically
analyze the performance on real-world inward-facing scenes using a newly
recorded challenging dataset sampled from a synchronized large-scale multi-view
rig. In both cases, our method is significantly faster than previous methods,
converging in less than 7 minutes and achieving real-time framerates at 1K
resolution, while obtaining a higher visual accuracy for generated novel views.
Our source code and data are available at our project page:
https://graphics.tu-bs.de/publications/kappel2022fast
Comment: 18 pages, 14 figures