
    "Mariage des Maillages": A new numerical approach for 3D relativistic core collapse simulations

    We present a new 3D general relativistic hydrodynamics code for simulations of stellar core collapse to a neutron star, as well as of pulsations and instabilities of rotating relativistic stars. It uses spectral methods for solving the metric equations, assuming the conformal flatness approximation for the three-metric, while the matter equations are solved by high-resolution shock-capturing schemes. We demonstrate that a finite difference grid and a spectral grid can be successfully combined. This "Mariage des Maillages" (French for grid wedding) approach results in high accuracy of the metric solver, allows fully 3D applications with computationally affordable resources, and ensures long-term numerical stability of the evolution. We compare our new approach to two other, finite-difference-based, methods for solving the metric equations. A variety of tests in 2D and 3D is presented, involving highly perturbed neutron star spacetimes and (axisymmetric) stellar core collapse, demonstrating the ability to handle spacetimes with and without symmetries in strong gravity. These tests are also employed to assess gravitational waveform extraction, which is based on the quadrupole formula. Comment: 29 pages, 16 figures; added more information about convergence tests and grid setup.
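    For orientation, waveform extraction via the quadrupole formula (as used in the tests above) is conventionally written as below; this is the standard Newtonian quadrupole expression, not necessarily the exact variant implemented in the code:

        % Transverse-traceless strain at distance D from the source, with Q_jk the
        % trace-free mass quadrupole moment of the matter distribution.
        h^{TT}_{jk}(t) \simeq \frac{2G}{c^4 D}\,\ddot{Q}_{jk}\!\left(t - D/c\right),
        \qquad
        Q_{jk} = \int \rho \left( x_j x_k - \tfrac{1}{3}\,\delta_{jk} r^2 \right) \mathrm{d}^3x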

    Investigation of Sparsifying Transforms in Compressed Sensing for Magnetic Resonance Imaging with FastTestCS

    The goal of this contribution is to achieve higher reduction factors for faster Magnetic Resonance Imaging (MRI) scans with better Image Quality (IQ) by using Compressed Sensing (CS). This can be accomplished by adopting, and better understanding, sparsifying transforms for CS in MRI. There is a tremendous number of transforms and optional settings potentially available. Additionally, the amount of research in CS is growing, with possible duplication and difficult practical evaluation and comparison. However, no in-depth analysis of the effectiveness of different redundant sparsifying transforms on MRI images with CS had been undertaken until this work. New theoretical sparsity bounds for the dictionary restricted isometry property constants in CS are presented with mathematical proof. In order to verify the sparsifying transforms in this setting, the experiments focus on several redundant transforms, contrasting them with orthogonal transforms. The transforms investigated are Wavelet (WT), Cosine (CT), contourlet, curvelet, k-means singular value decomposition, and Gabor. Several variations of these transforms with corresponding filter options are developed and tested in compression and CS simulations. Translation Invariance (TI) in transforms is found to be a key contributing factor in producing good IQ, because a translation of the signal then does not affect its transform representation. Some transforms tested here are TI, and many others are made TI by transforming small overlapping image patches. These transforms are tested by comparing different under-sampling patterns and reduction ratios with varying image types, including MRI data. Radial, spiral, and various random patterns are implemented and demonstrate that the TIWT is very robust across all under-sampling patterns. Results of the TIWT simulations show improvements in de-noising and artifact suppression over individual orthogonal wavelets and total-variation ℓ1 minimization in CS simulations. Some of these transforms add considerable time to the CS simulations and prohibit extensive testing of large 3D MRI datasets. Therefore, the FastTestCS software simulation framework is developed and customized for testing images, under-sampling patterns, and sparsifying transforms. This novel software is offered as a practical, robust, universal framework for evaluating and developing simulations in order to quickly test sparsifying transforms for CS MRI.
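    As an illustration of the translation-invariance idea discussed above, the sketch below averages orthogonal-wavelet soft thresholding over circular shifts ("cycle spinning"). It is not the FastTestCS implementation; the wavelet, threshold, and number of shifts are illustrative assumptions.

        # Translation-invariant wavelet denoising via cycle spinning (illustrative
        # sketch, not FastTestCS): average shift-threshold-unshift results so the
        # outcome no longer depends on the signal's alignment with the wavelet grid.
        import numpy as np
        import pywt

        def ti_wavelet_denoise(img, wavelet="db4", level=3, thresh=0.05, shifts=4):
            acc = np.zeros(img.shape, dtype=float)
            for dy in range(shifts):
                for dx in range(shifts):
                    shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
                    coeffs = pywt.wavedec2(shifted, wavelet, level=level)
                    # Soft-threshold detail coefficients only; keep the approximation.
                    denoised = [coeffs[0]] + [
                        tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
                        for band in coeffs[1:]
                    ]
                    rec = pywt.waverec2(denoised, wavelet)[: img.shape[0], : img.shape[1]]
                    acc += np.roll(rec, shift=(-dy, -dx), axis=(0, 1))
            return acc / (shifts * shifts)

        # Example: denoise a noisy test image.
        noisy = np.random.rand(128, 128)
        clean = ti_wavelet_denoise(noisy)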

    Acceleration Methods for MRI

    Acceleration methods are a critical area of research for MRI. Two of the most important acceleration techniques involve parallel imaging and compressed sensing. These advanced signal processing techniques have the potential to drastically reduce scan times and provide radiologists with new information for diagnosing disease. However, many of these new techniques require solving difficult optimization problems, which motivates the development of more advanced algorithms to solve them. In addition, acceleration methods have not reached maturity in some applications, which motivates the development of new models tailored to these applications. This dissertation makes advances in three different areas of acceleration. The first is the development of a new algorithm (called the B1-based Adaptive Restart Iterative Soft Thresholding Algorithm, or BARISTA) that solves a parallel MRI optimization problem with compressed sensing assumptions. BARISTA is shown to be 2-3 times faster and more robust to parameter selection than current state-of-the-art variable splitting methods. The second contribution is the extension of BARISTA ideas to non-Cartesian trajectories, which also leads to a 2-3 times acceleration over previous methods. The third contribution is the development of a new model for functional MRI that enables a factor of 3-4 acceleration in effective temporal resolution for functional MRI scans. Several variations of the new model are proposed, with an ROC curve analysis showing that a combined low-rank/sparsity model gives the best performance in identifying the resting-state motor network. PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120841/1/mmuckley_1.pd
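    For context, the iterative soft thresholding family on which BARISTA builds can be sketched as below for a single-coil Cartesian case. This is generic ISTA under simplifying assumptions (unitary FFT forward model, sparsity in the image domain); it omits BARISTA's B1-based preconditioning, adaptive restart, and coil sensitivities.

        # Generic ISTA for compressed-sensing MRI (sketch, not BARISTA): recover an
        # image from undersampled k-space by alternating a gradient step on the
        # data-fit term with soft thresholding (the proximal step for an l1 penalty).
        import numpy as np

        def ista_cs_mri(kspace, mask, lam=0.01, step=1.0, n_iter=100):
            x = np.zeros_like(kspace)
            for _ in range(n_iter):
                # Gradient of 0.5 * || mask * F(x) - kspace ||^2 with unitary FFT F.
                resid = mask * np.fft.fft2(x, norm="ortho") - kspace
                x = x - step * np.fft.ifft2(mask * resid, norm="ortho")
                # Magnitude soft threshold, preserving phase.
                x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - step * lam, 0.0)
            return x

        # Example: reconstruct a synthetic image from 25% random k-space samples.
        img = np.zeros((64, 64), dtype=complex)
        img[24:40, 24:40] = 1.0
        mask = np.random.rand(64, 64) < 0.25
        recon = ista_cs_mri(mask * np.fft.fft2(img, norm="ortho"), mask)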

    Numerical Approaches Towards the Galactic Synchrotron Emission

    The Galactic synchrotron emission contains abundant physics of the magnetized Galactic interstellar medium and has a non-negligible influence on detecting the B-mode polarization of the cosmic microwave background and on understanding the physics of the re-ionization epoch. To keep up with the growing precision of astrophysical measurements, we need not only better theoretical models but also more powerful numerical simulations and analysis pipelines, in order to acquire a deeper understanding of both the Galactic environment and the origin of the Universe. In this dissertation we focus on the Galactic synchrotron emission, which involves the turbulent, magnetized interstellar medium and energetic cosmic-ray electrons. To study the Galactic synchrotron emission consistently we need a non-trivial Bayesian analyzer with a specially designed likelihood function, a fast and precise radiative transfer simulator, and a cosmic-ray electron propagation solver. We first present version X of the hammurabi package, the HEALPix-based numerical simulator for Galactic polarized emission. Two fast methods are proposed for realizing divergence-free Gaussian random magnetic fields, either on the Galactic scale, where a field alignment and strength modulation are imposed, or on a local scale, where more physically motivated models such as parameterized magneto-hydrodynamic turbulence can be applied. Secondly, we present our effort in using the finite element method to solve the cosmic-ray (electron) transport equation in a phase-space domain with between two and six dimensions. The numerical package BIFET is developed on top of the deal.ii library with support for adaptive mesh refinement. Our first aim with BIFET is to build a basic framework that supports high-dimensional PDE solving. Finally, we introduce the work related to the complete design of IMAGINE, which is proposed in particular with the ensemble likelihood for inferring the distributions of Galactic components.
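    One standard way to realize a divergence-free Gaussian random magnetic field, as mentioned above, is to draw Gaussian Fourier modes and project out the component parallel to the wave vector. The sketch below shows only this general idea; hammurabi X's generators additionally impose field alignment, strength modulation, or MHD-motivated spectra, which are not reproduced here.

        # Divergence-free Gaussian random field via Fourier-space projection
        # (illustrative sketch, not the hammurabi X implementation).
        import numpy as np

        def divergence_free_grf(n=64, slope=-11.0 / 3.0, seed=0):
            rng = np.random.default_rng(seed)
            k = np.fft.fftfreq(n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0                    # avoid division by zero at k = 0
            amp = k2 ** (slope / 4.0)            # power spectrum P(k) ~ k^slope
            amp[0, 0, 0] = 0.0                   # no mean field from the random term
            b = np.array([amp * (rng.normal(size=k2.shape) + 1j * rng.normal(size=k2.shape))
                          for _ in range(3)])
            # Project each mode onto the plane perpendicular to k, so k . B(k) = 0.
            k_vec = np.array([kx, ky, kz])
            b -= k_vec * np.sum(k_vec * b, axis=0) / k2
            return np.real(np.fft.ifftn(b, axes=(1, 2, 3)))

        B = divergence_free_grf()                # shape (3, 64, 64, 64), numerically div-free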

    Astronomy with integral field spectroscopy: observation, data analysis and results

    With a new generation of facility instruments being commissioned for 8 metre telescopes, integral field spectroscopy will soon be a standard tool in astronomy, opening a range of exciting new research opportunities. It is clear, however, that reducing and analyzing integral field data is a complex problem, which will need considerable attention before the full potential of the hardware can be realized. The purpose of this thesis is therefore to explore some of the scientific capabilities of integral field spectroscopy, developing the techniques needed to produce astrophysical results from the data. Two chapters are dedicated to the problem of analyzing observations from the densely-packed optical fibre instruments pioneered at Durham. It is shown that, in the limit where each spectrum is sampled by only one detector row, data may be treated in a similar way to those from an image slicer. The properties of raw fibre data are considered in the context of the Sampling Theorem and methods for three dimensional image reconstruction are discussed. These ideas are implemented in an IRAF data reduction package for the Thousand Element Integral Field Unit (TEIFU), with source code provided on the accompanying compact disc. Two observational studies are also presented. In the first case, the 3D infrared image slicer has been used to test for the presence of a super-massive black hole in the giant early-type galaxy NGC 1316. Measurements of the stellar kinematics do not reveal a black hole of mass 5 × 10^9 M☉, as predicted from the bulge luminosity using the relationship of Kormendy & Richstone (1995). The second study is an investigation into the origin of [Fe II] line emission in the Seyfert galaxy NGC 4151, using Durham University's SMIRFS-IFU. By mapping [Fe II] line strength and velocity at the galaxy centre, it is shown that the emission is associated with the optical narrow-line region, rather than the radio jet, indicating that the excitation is primarily due to photoionizing X-rays. Finally, a report is given on the performance of TEIFU, which was commissioned at the William Herschel Telescope in 1999. Measurements of throughput and fibre response variation are given and a reconstructed test observation of the radio galaxy 3C 327 is shown, demonstrating the functionality of the instrument and software.

    High-resolution diffusion-weighted brain MRI under motion

    Magnetic resonance imaging is one of the fastest developing medical imaging techniques. It provides excellent soft tissue contrast and has been a leading tool for neuroradiology and neuroscience research over the last decades. One of the possible MR imaging contrasts is the ability to visualize diffusion processes. The method, referred to as diffusion-weighted imaging, is one of the most common clinical contrasts, but it is prone to artifacts and is challenging to acquire at high resolutions. This thesis aimed to improve the resolution of diffusion-weighted imaging, both in a clinical and in a research context. While diffusion-weighted imaging has traditionally been considered a 2D technique, the manuscripts and methods presented here explore 3D diffusion acquisitions with isotropic resolution. Acquiring multiple small 3D volumes, or slabs, which are combined into one full volume, has been the method of choice in this work. The first paper presented explores a parallel imaging driven multi-echo EPI readout to enable high resolution with reduced geometric distortions. The work performed on diffusion phase correction led to an understanding that was used for the subsequent multi-slab papers. The second and third papers introduce the diffusion-weighted 3D multi-slab echo-planar imaging technique and explore its advantages and performance. As the method requires a slightly increased acquisition time, the need for prospective motion correction became apparent. The fourth paper suggests a new motion navigator using the subcutaneous fat surrounding the skull for rigid body head motion estimation, dubbed FatNav. The spatially sparse representation of the fat signal allowed for high parallel imaging acceleration factors, short acquisition times, and reduced geometric distortions of the navigator. The fifth manuscript presents a combination of the high-resolution 3D multi-slab technique and a modified FatNav module. Unlike our first FatNav implementation, which used a single sagittal slab, this modified navigator acquired orthogonal projections of the head using the fat signal alone. The combined use of both presented methods provides a promising start for fully motion-corrected high-resolution diffusion acquisition in a clinical setting.
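    As a simplified illustration of navigator-based motion estimation, the sketch below recovers a rigid in-plane translation between two navigator images by phase correlation. FatNav itself estimates full 3D rigid-body motion (rotations and translations) from fat-only images or projections; the example here is a deliberately reduced toy case.

        # Phase-correlation shift estimation between two navigator images
        # (toy sketch; not the FatNav registration pipeline).
        import numpy as np

        def estimate_shift(ref, mov):
            # Return (dy, dx) such that np.roll(ref, (dy, dx), axis=(0, 1)) ~ mov.
            cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
            cross /= np.maximum(np.abs(cross), 1e-12)   # normalized cross-power spectrum
            corr = np.abs(np.fft.ifft2(cross))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap to signed shifts.
            if dy > ref.shape[0] // 2:
                dy -= ref.shape[0]
            if dx > ref.shape[1] // 2:
                dx -= ref.shape[1]
            return int(dy), int(dx)

        # Example: a navigator shifted by (3, -5) pixels is recovered.
        ref = np.random.rand(64, 64)
        mov = np.roll(ref, shift=(3, -5), axis=(0, 1))
        print(estimate_shift(ref, mov))             # -> (3, -5)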

    Reconstructing the galactic magnetic field

    This thesis deals with the reconstruction of the magnetic field of the Milky Way (GMF, for Galactic Magnetic Field). A detailed description of the magnetic field is relevant for several problems in astrophysics. First, it plays an important role in how the structure of the Milky Way develops, as the currents of interstellar gas and cosmic rays are deflected by the GMF. Second, it interferes with the measurement and analysis of radiation from extra-galactic sources. Third, it deflects ultra-high-energy cosmic rays (UHECR) to such an extent that the assignment of measured UHECR to potential sources is not possible without correcting calculations. Fourth, the GMF can be used to study a cosmic dynamo process including its internal structures; in contrast to the GMF, normally only the outer magnetic field of stars and planets is accessible and measurable. As much influence as the GMF has on a variety of effects, it is just as difficult to determine. The reason is that the magnetic field cannot be measured directly, but only through its influence on various physical observables. Measurements of these observables yield their total accumulated value along a given line of sight. Due to the fixed position of the solar system in the Milky Way, it is therefore a challenge to assign the measured effect of the magnetic field to a spatial depth. 
    Measurements of the intensity and polarization of radio and microwaves, both for the entire sky and for individual stars whose position in space is known, serve as the main source of information. From the underlying physical processes, such as synchrotron emission and Faraday rotation, the GMF can be deduced. However, this requires three-dimensional density maps of other constituents of the Milky Way, such as the thermal electrons or the interstellar dust. Physical processes like dispersion and dust absorption are crucial for the creation of these auxiliary maps. To reconstruct the GMF from the available measurement data, there are essentially two approaches. The first is the phenomenological approach of parametric magnetic field models, in which the structure of the magnetic field is defined by analytical formulas with a limited number of parameters. These models capture the general morphology of the magnetic field, such as galactic arms and field reversals, but also local characteristics like nebulae in the solar system's neighbourhood. Given a set of measurement data, one tries to find the model parameter values that agree as closely as possible with the observables. For this purpose, Imagine, the Interstellar MAGnetic field INference Engine, was developed in the course of this doctoral thesis. Due to the relatively small number of parameters in parametric models, a fit is possible even with robust all-sky maps that contain no depth information. However, the parametric approach suffers from arbitrariness: there is a large number of models of different complexity available, which often contradict each other, and in the past the uncertainty of reconstructed parameters was frequently underestimated. In contrast, a rigorous Bayesian analysis, as implemented in Imagine, provides a reliable determination of the model parameters. 
    The second approach reconstructs the GMF non-parametrically, with two independent degrees of freedom for the magnetic field in each volume element. This type of reconstruction places much higher demands on the amount and quality of the data, on the algorithms, and on the computing capacity. Due to the high number of degrees of freedom, measurement data are required that contain direct (parallax measurements) or indirect (via the Hertzsprung-Russell diagram) depth information. In addition, strong priors are necessary for those regions of space that are only weakly covered by the data. Simple Bayesian methods are no longer sufficient here; information field theory (IFT) is needed to combine the various sources of information correctly and to obtain reliable uncertainties. The Python framework NIFTy (Numerical Information Field Theory) is predestined for this task. In its first release, however, NIFTy was not yet capable of reconstructing a magnetic field or of handling data of the required size. To process such data, d2o was developed as an independent tool for data parallelization, so that parallel code can be written without hindering the actual development work. Since essentially all numerical disciplines with large datasets that cannot be broken down into independent subsets can benefit from this, d2o has been released as a stand-alone package. In addition, NIFTy has been comprehensively revised in scope and structure, so that, among other things, high-resolution magnetic field reconstructions can now be carried out. With NIFTy it is now also possible to create maps of the thermal electron density and of the interstellar dust from new and very large datasets. This paves the way for a non-parametric reconstruction of the GMF.
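    As a concrete example of the line-of-sight observables these reconstructions are fitted against, the Faraday rotation measure is the integral RM [rad m^-2] = 0.812 ∫ n_e [cm^-3] B_parallel [µG] dl [pc]. The sketch below evaluates this integral numerically for one sight line; the grid and field values are illustrative assumptions, not part of the Imagine or NIFTy pipelines.

        # Toy line-of-sight Faraday rotation measure (illustrative only).
        import numpy as np

        def rotation_measure(n_e, b_par, dl_pc):
            # n_e [cm^-3] and b_par [uG] sampled along one sight line, step dl_pc [pc].
            return 0.812 * np.sum(n_e * b_par) * dl_pc

        # Example: 2 kpc path with constant n_e = 0.03 cm^-3 and B_parallel = 2 uG.
        n_e = np.full(2000, 0.03)
        b_par = np.full(2000, 2.0)
        print(rotation_measure(n_e, b_par, dl_pc=1.0))   # ~97 rad m^-2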

    Optimal Phase Masks for High Contrast Imaging Applications

    Phase-only optical elements can provide a number of important functions for high-contrast imaging. This thesis presents analytical and numerical optical design methods for accomplishing specific tasks, the most significant of which is the precise suppression of light from a distant point source. Instruments designed for this purpose are known as coronagraphs. Here, advanced coronagraph designs are presented that offer improved theoretical performance in comparison to the current state of the art. Applications of these systems include the direct imaging and characterization of exoplanets and circumstellar disks with high sensitivity. Several new coronagraph designs are introduced and, in some cases, experimental support is provided. In addition, two novel high-contrast imaging applications are discussed: the measurement of sub-resolution information using coronagraphic optics and the protection of sensors from laser damage. The former is based on experimental measurements of the sensitivity of a coronagraph to source displacement. The latter discussion presents the current state of ongoing theoretical work. Beyond the mentioned applications, the main outcome of this thesis is a generalized theory for the design of optical systems with one or more phase masks that provide precise control of radiation over a large dynamic range, which is relevant in various high-contrast imaging scenarios. The optimal phase masks depend on the necessary tasks, the maximum number of optics, and application-specific performance measures. The challenges and future prospects of this work are discussed in detail.
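    As a minimal illustration of the focal-plane phase-mask principle underlying such coronagraphs, the sketch below propagates an on-axis point source through a generic vortex phase mask and a Lyot stop with plain FFTs. This is a textbook-style toy model under simplifying assumptions (scalar Fourier optics, no padding or apodization), not one of the optimized designs presented in the thesis.

        # Toy vortex-coronagraph propagation: pupil -> focal plane (apply phase mask)
        # -> Lyot plane (apply stop) -> final image. Illustrative sketch only.
        import numpy as np

        def vortex_coronagraph_psf(n=512, pupil_radius=64, charge=2, lyot_frac=0.9):
            y, x = np.indices((n, n)) - n // 2
            r = np.hypot(x, y)
            pupil = (r <= pupil_radius).astype(float)        # on-axis star illuminating the pupil
            focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
            focal *= np.exp(1j * charge * np.arctan2(y, x))  # vortex phase mask exp(i*l*theta)
            lyot = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(focal)))
            lyot *= r <= lyot_frac * pupil_radius            # undersized Lyot stop
            image = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(lyot)))
            return np.abs(image) ** 2

        psf = vortex_coronagraph_psf()
        # Residual on-axis peak relative to the unmasked peak (~ pupil area squared).
        print(psf.max() / (np.pi * 64**2) ** 2)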

    The Unified-FFT Method for Fast Solution of Integral Equations as Applied to Shielded-Domain Electromagnetics

    Electromagnetic (EM) solvers are widely used within computer-aided design (CAD) to improve and ensure success of circuit designs. Unfortunately, due to the complexity of Maxwell's equations, they are often computationally expensive. While considerable progress has been made in the realm of speed-enhanced EM solvers, these fast solvers generally achieve their results through methods that introduce additional error components by way of geometric approximations, sparse-matrix approximations, multilevel decomposition of interactions, and more. This work introduces the new method, Unified-FFT (UFFT). A derivative of the method of moments, UFFT scales as O(N log N) and achieves fast analysis by the unique combination of FFT-enhanced matrix fill operations (MFO) with FFT-enhanced matrix solve operations (MSO). In this work, two versions of UFFT are developed, UFFT-Precorrected (UFFT-P) and UFFT-Grid Totalizing (UFFT-GT). UFFT-P uses precorrected FFT for MSO and allows the use of basis functions that do not conform to a regular grid. UFFT-GT uses conjugate gradient FFT for MSO and features the capability of reducing the error of the solution down to machine precision. The main contribution of UFFT-P is a fast solver, which utilizes FFT for both MFO and MSO. It is demonstrated in this work not only to provide simulation results for large problems considerably faster than state-of-the-art commercial tools, but also to be capable of simulating geometries which are too complex for conventional simulation. In UFFT-P these benefits come at the expense of a minor penalty to accuracy. UFFT-GT contains further contributions as it demonstrates that such a fast solver can be accurate to numerical precision as compared to a full, direct analysis. It is shown to provide even more algorithmic efficiency and faster performance than UFFT-P. UFFT-GT makes an additional contribution in that it is developed not only for planar geometries, but also for the case of multilayered dielectrics and metallization. This functionality is particularly useful for multi-layered printed circuit boards (PCBs) and integrated circuits (ICs). Finally, UFFT-GT contributes a 3D planar solver, which allows for current to be discretized in the z-direction. This allows for similar fast and accurate simulation with the inclusion of some 3D features, such as vias connecting metallization planes.
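    The FFT-enhanced matrix solve operations mentioned above exploit the fact that, on a uniform grid, the method-of-moments interaction matrix is (block-)Toeplitz, so its matrix-vector product can be applied as a circular convolution in O(N log N) inside an iterative solver. The 1D sketch below shows that core trick under simplifying assumptions (symmetric scalar kernel); it is not UFFT's precorrected or grid-totalizing machinery, nor its shielded-domain Green's function.

        # FFT-accelerated Toeplitz matrix-vector product, the kernel operation of
        # CG-FFT style solvers (illustrative sketch, not the UFFT implementation).
        import numpy as np

        def toeplitz_matvec_fft(first_row, x):
            # Apply the symmetric Toeplitz matrix defined by its first row to x.
            n = x.size
            circ = np.concatenate([first_row, first_row[-1:0:-1]])  # circulant embedding
            y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x, circ.size))
            return np.real(y[:n])

        # Check against a dense Toeplitz matrix built from the same kernel.
        n = 8
        row = 1.0 / (1.0 + np.arange(n))                 # toy interaction kernel g(|i-j|)
        x = np.random.rand(n)
        A = row[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
        assert np.allclose(A @ x, toeplitz_matvec_fft(row, x))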