
    The Application of Tomographic Reconstruction Techniques to Ill-Conditioned Inverse Problems in Atmospheric Science and Biomedical Imaging

    Get PDF
A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described and the properties of the imaging configurations are addressed. The presented results are particularly suited for highly ill-conditioned inverse problems in which the imaging data are restricted as a result of poor angular coverage, limited detector arrays, or insufficient access to an imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. A considerable amount of research was conducted into novel initialization techniques as a means of improving the accuracy. The main body of this paper comprises three smaller papers, which describe the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University. Reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA’s Aeronomy of Ice in the Mesosphere (AIM) satellite. The contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near-infrared light, which interacts strongly with biological tissue and results in significant optical scattering. 
In order to perform tomography on the diffused signal, simulations describing the sporadic photon migration must be incorporated into the algorithm. The third paper presents a novel Monte Carlo technique derived from the optical scattering solution for spheroidal particles designed to mimic mitochondria and deformed cell nuclei. Simulated results of optical diffusion are presented. The potential for improving existing imaging modalities through continual development of sparse tomography and optical scattering methods is discussed.
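The algebraic reconstruction techniques discussed in this abstract share a common core: the Kaczmarz update, which cycles through the projection equations and nudges the current image estimate onto each measurement hyperplane. A minimal sketch (hypothetical names and a toy system matrix, not the dissertation's code) shows where the initial guess `x0` enters, which is what makes its choice so influential for ill-conditioned, sparse-angle problems:

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (cyclic Kaczmarz sweeps).

    A: (m, n) projection matrix, b: (m,) measured projections,
    x0: (n,) initial guess -- for ill-conditioned systems the
    iterate stays close to x0 in the poorly constrained directions.
    """
    x = x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            # Project x onto the hyperplane defined by equation i.
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

For a consistent system the sweeps converge to a solution; with limited angular coverage the null-space components of the image are simply inherited from `x0`, which is why initialization research matters here.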

    Three-Dimensional Imaging of Cold Atoms in a Magneto-Optical Trap with a Light Field Microscope

    Get PDF
Imaging of trapped atoms in three dimensions utilizing a light field microscope is demonstrated in this work. Such a system is of interest in the development of atom interferometer accelerometers in dynamic systems where strictly defined focal planes may be impractical. A light field microscope was constructed utilizing a Lytro® Development Kit micro-lens array and sensor. It was used to image fluorescing rubidium atoms in a magneto-optical trap. The three-dimensional (3D) volume of the atoms is reconstructed using a modeled point spread function (PSF), taking into consideration the low magnification (1.25) of the system, which changes typical assumptions in the optics model for the PSF. The 3D reconstruction is analyzed with respect to a standard off-axis fluorescence image. Optical axis separation between two atom clouds is measured to a 100 ÎŒm accuracy in a 3 mm deep volume, with a 16 ÎŒm in-focus standard resolution and a 3.9 mm by 3.9 mm field of view. Optical axis spreading is observed in the reconstruction and discussed. Absorption imaging with the light field microscope is also analyzed. The 3D images can be used to determine properties of the atom cloud with a single camera and single atom image, which will be needed to create atom interferometers capable of inertial navigation.
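Reconstructing a volume from a modeled PSF, as described above, amounts to a deconvolution of the recorded light-field data against that PSF. As an illustrative stand-in (not the thesis's reconstruction code), a Richardson–Lucy iteration with a known, normalized PSF can be sketched in one dimension; the 3D case is identical with `fftn`:

```python
import numpy as np

def richardson_lucy(image, psf, iters=30):
    """Richardson-Lucy deconvolution with circular boundaries via FFT.

    image: blurred, non-negative data; psf: known point spread
    function (normalized internally). Returns the sharpened estimate.
    """
    psf = psf / psf.sum()
    H = np.fft.fft(psf)
    est = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iters):
        # Forward-blur the current estimate and compare with the data.
        conv = np.real(np.fft.ifft(np.fft.fft(est) * H))
        ratio = image / np.maximum(conv, 1e-12)
        # Multiply by the correlation of the ratio with the PSF (adjoint).
        est *= np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(H)))
    return est
```

The multiplicative update keeps the estimate non-negative and preserves total flux, which suits fluorescence data like the atom-cloud images discussed here.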

    Luminescence in Lithium Borates

    Get PDF
Spectrometry methods are used to identify and characterize point defects in single crystals of lithium tetraborate (Li2B4O7) and lithium triborate (LiB3O5) doped with silver or copper, and to explore the role of these point defects in luminescence. New defects are identified in Ag-doped Li2B4O7, including: lithium-vacancy substitutional-silver-ion defect pairs (hole trap); isolated lithium vacancies (hole trap); isolated oxygen vacancies (electron trap); interstitial-silver-ion substitutional-silver-ion defect pairs (electron trap); isolated interstitial silver ions (electron trap); and interstitial-silver-ion lithium-vacancy defect pairs (electron trap). Defect models are proposed, and adjustments are made to the models of known defects. Defects in Ag-doped LiB3O5 and Cu-doped LiB3O5 are identified, including: two species of interstitial silver ions (electron traps); isolated substitutional silver ions (hole trap); lithium-vacancy substitutional-silver-ion defect pairs (hole trap); interstitial-silver-ion substitutional-silver-ion defect pairs (electron trap); a species of interstitial copper ion (electron trap); isolated substitutional copper ions (hole trap); and lithium-vacancy substitutional-copper-ion defect pairs (hole trap). Based on this assessment, Ag-doped LiB3O5 is a promising TL and OSL dosimetry material while Cu-doped LiB3O5 is not.

    Focal Spot, Summer/Fall 2009

    Get PDF

    Real Time Structured Light and Applications

    Get PDF

    Signal inference in radio astronomy

    Get PDF
This dissertation addresses the problem of inferring a signal from an incomplete measurement in the field of radio astronomy. Two imaging algorithms are developed within the framework of information field theory. Both are based on Bayesian analysis; information from the incomplete measurement is complemented by a priori information. To that end, both sources of information are formulated as probability distributions and merged into an a posteriori probability distribution. The a priori information is kept minimal: it reduces to the assumption that the real signal does not fluctuate arbitrarily strongly with respect to position. This construction allows for a statistical estimation of the original signal on all scales. The first imaging algorithm calculates a three-dimensional map of the Galactic free electron density using dispersion measure data from pulsars. The dispersion of the radio-frequency electromagnetic waves that a pulsar emits is proportional to the total number of free electrons on the line of sight between pulsar and observer. Each measured line of sight therefore contains information about the distribution of free electrons in space. The reconstruction problem is a tomography problem similar to the one in medical imaging. Using a simulation, we investigate which level of detail of the free electron density can be reconstructed with data of the upcoming Square Kilometre Array (SKA). The results show that the large-scale features of the Milky Way's free electron density will be reconstructible with the SKA. The second imaging algorithm is named fastResolve. It reconstructs the radio intensity of the sky from interferometric data. fastResolve is based on Resolve, but adds the capability to separate point sources and to estimate the measurement uncertainty. Most importantly, it is about 100 times faster. A comparison of the algorithm with CLEAN, the standard imaging method for interferometric data in radio astronomy, is performed using observational data of the galaxy cluster Abell 2199 recorded with the Very Large Array. fastResolve reconstructs finer details than CLEAN while introducing fewer artifacts such as negative intensity. Furthermore, fastResolve provides an uncertainty map. This quantity is important for proper scientific use of the result, but is not available using CLEAN. In addition, a formalism is developed which allows the conversion of power spectra of Gaussian fields into the power spectra of log-normal fields and vice versa. This allows the rejuvenation of the power spectrum of the large-scale matter distribution of the Universe. We validate the approach by comparison with a perturbative method and a cosmic emulator.
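The pulsar data underlying the first algorithm are line-of-sight integrals: each dispersion measure is DM = ∫ n_e dl between observer and pulsar. A toy forward model (hypothetical grid, units, and function names, nothing from the dissertation) makes the tomographic character of the data explicit:

```python
import numpy as np

def dispersion_measure(ne_grid, origin, direction, distance, step=0.1):
    """Approximate DM = integral of n_e along a line of sight by
    sampling a gridded electron-density model (nearest-neighbour
    lookup for brevity).

    ne_grid: 3-D array of free-electron density on a unit grid,
    origin/direction: observer position and pointing (grid units),
    distance: pulsar distance (grid units).
    """
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    s = np.arange(0.0, distance, step)
    pts = np.asarray(origin, float) + s[:, None] * direction
    # Clip sample points to the grid and look up the density.
    idx = np.clip(np.round(pts).astype(int), 0,
                  np.array(ne_grid.shape) - 1)
    ne = ne_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return ne.sum() * step  # Riemann-sum approximation of the integral
```

Inverting many such integrals for the 3-D density field is exactly the ill-posed tomography problem the abstract compares to medical imaging, which is why the Bayesian prior on spatial smoothness is needed.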

    Models and image: reconstruction in electrical impedance tomography of human brain function

    Get PDF
Electrical Impedance Tomography (EIT) of brain function has the potential to provide a rapid, portable, bedside neuroimaging device. Recently, our group published the first ever EIT images of evoked activity recorded with scalp electrodes. While the raw data showed encouraging, reproducible changes of a few per cent, the images were noisy. The poor image quality was due, in part, to the use of a simplified reconstruction algorithm which modelled the head as a homogeneous sphere. The purpose of this work has been to develop new algorithms in which the model incorporates extracerebral layers and realistic geometry, and to assess their effect on image quality. An algorithm was devised which allowed fair comparison between reconstructions assuming analytical and numerical (Finite Element Method, FEM) models of the head as a homogeneous sphere and as concentric spheres representing the brain, CSF, skull and scalp. Comparison was also made between these and numerical models of the head as a homogeneous, head-shaped volume and as a head-shaped volume with internal compartments of contrasting resistivity. The models were tested on computer simulations, on spherical and head-shaped, saline-filled tanks, and on data collected during human evoked response studies. EIT also has the potential to image resistance changes which occur during neuronal depolarization in the cortex and last tens of milliseconds. Also presented in this thesis is an estimate of their magnitude made using a mathematical model, based on cable theory, of resistance changes at DC during depolarization in the cerebral cortex. Published values were used for the electrical properties and geometry of cell processes (Rall, 1975). The study was performed in order to estimate the resultant scalp signal that might be obtained and to assess the ability of EIT to produce images of neuronal depolarization.
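EIT reconstruction of the kind compared above is commonly linearized: a sensitivity (Jacobian) matrix derived from the forward head model maps small conductivity changes to boundary-voltage changes, and the severely ill-posed inverse is stabilised by regularisation. A generic one-step Tikhonov sketch (hypothetical J and names, not the thesis's algorithm) captures the structure:

```python
import numpy as np

def linearized_eit(J, dv, alpha=1e-2):
    """One-step linearized EIT reconstruction with Tikhonov
    regularisation.

    J: (m, n) sensitivity matrix from a forward model (sphere,
    concentric spheres, or head-shaped FEM), dv: (m,) measured
    boundary-voltage changes. Solves
        min ||J x - dv||^2 + alpha ||x||^2
    for the conductivity-change image x via the normal equations.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dv)
```

The choice of forward model enters entirely through J, which is why replacing the homogeneous sphere with layered and head-shaped FEM models can improve image quality without changing the inversion itself.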

    Cost-effective 3D scanning and printing technologies for outer ear reconstruction: Current status

    Get PDF
Current 3D scanning and printing technologies offer not only state-of-the-art developments in the field of medical imaging and bio-engineering, but also cost- and time-effective solutions for surgical reconstruction procedures. Besides tissue engineering, where living cells are used, bio-compatible polymers or synthetic resins can be applied. The combination of 3D handheld scanning devices or volumetric imaging, (open-source) image processing packages, and 3D printers forms a complete workflow chain that is capable of effective rapid prototyping of outer ear replicas. This paper reviews current possibilities and the latest use cases for 3D scanning, data processing, and printing of outer ear replicas, with a focus on low-cost solutions for rehabilitation engineering.

    Applied AI/ML for automatic customisation of medical implants

    Get PDF
Most knee replacement surgeries are performed using ‘off-the-shelf’ implants, supplied with a set number of standardised sizes. X-rays are taken during pre-operative assessment and used by clinicians to estimate the best options for patients. Manual templating and implant size selection have, however, been shown to be inaccurate, and frequently the generically shaped products do not adequately fit patients’ unique anatomies. Furthermore, off-the-shelf implants are typically made from solid metal and do not exhibit mechanical properties like the native bone. Consequently, the combination of these factors often leads to poor outcomes for patients. Various solutions have been outlined in the literature for customising the size, shape, and stiffness of implants for the specific needs of individuals. Such designs can be fabricated via additive manufacturing, which enables bespoke and intricate geometries to be produced in biocompatible materials. Despite this, all customisation solutions identified required some level of manual input to segment image files, identify anatomical features, and/or drive design software. These tasks are time-consuming, expensive, and require trained personnel. Almost all currently available solutions also require CT imaging, which adds further expense, incurs high levels of potentially harmful radiation, and is not as commonly accessible as X-ray imaging. This thesis explores how various levels of knee replacement customisation can be completed automatically by applying artificial intelligence, machine learning and statistical methods. The principal output is a software application, believed to be the first true ‘mass-customisation’ solution. The software is compatible with both 2D X-ray and 3D CT data and enables fully automatic and accurate implant size prediction, shape customisation and stiffness matching. 
It is therefore seen to address the key limitations associated with current implant customisation solutions and will hopefully enable the benefits of customisation to be more widely accessible.
    • 
