
    Low-dimensional data embedding for scalable astronomical imaging in the SKA telescope era

    Astronomy is one of the oldest sciences known to humanity. We have been studying celestial objects for millennia, and continue to peer deeper into space in our thirst for knowledge about our origins and the universe that surrounds us. Radio astronomy -- observing celestial objects at radio frequencies -- has helped push the boundaries on the kinds of objects we can study. Indeed, some of the most important discoveries about the structure of our universe, like the cosmic microwave background, and entire classes of objects like quasars and pulsars, were made using radio astronomy. Radio interferometers are telescopes made of multiple antennas spread over a distance. Signals detected at different antennas are combined to provide images with much higher resolution and sensitivity than a traditional single-dish radio telescope. The Square Kilometre Array (SKA) is one such radio interferometer, with plans to have antennas separated by as much as 3000 km. In its quest for ever-higher resolution and ever-wider coverage of the sky, the SKA heralds a data explosion, with an expected acquisition rate of 5 terabits per second. The high data rate fed into the pipeline can be handled with a two-pronged approach -- (i) scalable, parallel imaging algorithms that fully utilize the latest computing technologies like accelerators and distributed clusters, and (ii) dimensionality reduction methods that embed the high-dimensional telescope data into much smaller sizes without losing information, while guaranteeing accurate recovery of the images, thereby enabling imaging methods to scale to big data sizes and alleviating heavy loads on pipeline buffers without compromising the science goals of the SKA. In this thesis we propose fast and robust dimensionality reduction methods that embed data to very small sizes while preserving the information present in the original data. These methods are presented in the context of compressed sensing theory and related signal recovery techniques. The effectiveness of the reduction methods is illustrated by coupling them with advanced convex optimization algorithms to solve a sparse recovery problem. Images reconstructed from extremely low-sized embedded data are shown to have quality comparable to those obtained from full data without any reduction. Comparisons with other standard `data compression' techniques in radio interferometry (like averaging) show a clear advantage for our methods, which provide higher-quality images from much smaller data sizes. We confirm these claims on both synthetic data simulating SKA data patterns and actual telescope data from a state-of-the-art radio interferometer. Additionally, imaging with reduced data is shown to have a lighter computational load -- a smaller memory footprint owing to the reduced data size, and faster iterative image recovery owing to the fast embedding. Extensions to the work presented in this thesis are already underway. We propose an `on-line' version of our reduction methods that works on blocks of data and thus can be applied on-the-fly as data are being acquired by telescopes in real time. This is of immediate interest to the SKA, where large buffers in the data acquisition pipeline are very expensive and thus undesirable. Some directions to be probed in the immediate future are transient imaging, and imaging hyperspectral data to test computational load in a high-resolution, multi-frequency setting.
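    To make the data flow concrete, here is a minimal Python sketch of the general recipe the abstract describes: simulated Fourier "visibilities" are embedded to a much smaller size, and a sparse image is recovered from the reduced data with a proximal (ISTA) solver. The toy 1-D operators, the sizes, and the random Gaussian embedding are illustrative assumptions, not the thesis's actual pipeline.

```python
# A minimal sketch, assuming a toy 1-D setup: reduce simulated
# visibilities with a random Gaussian embedding, then recover a
# sparse "sky" from the reduced data via ISTA.
import numpy as np

rng = np.random.default_rng(0)

n = 256            # toy 1-D analogue of the sky image
m = 512            # number of simulated "visibilities"
k = 64             # embedded (reduced) data size, k << m

# Sparse toy sky: a few point sources.
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1, 3, 8)

# Measurement operator: randomly selected rows of the DFT.
rows = rng.choice(n, m, replace=True)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)
y = F @ x_true

# Dimensionality reduction: random Gaussian embedding R (k x m).
R = rng.normal(size=(k, m)) / np.sqrt(k)
A = R @ F          # reduced measurement operator
b = R @ y          # reduced data fed to the imager

# ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 on the reduced data.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = (A.conj().T @ (A @ x - b)).real
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    Note that the imager only ever touches the k-sized vector b and the reduced operator A, which is where the memory and per-iteration savings come from.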

    Sampling the Multiple Facets of Light

    The theme of this thesis revolves around three important manifestations of light, namely its corpuscular, wave and electromagnetic nature. Our goal is to exploit these principles to analyze, design and build imaging modalities by developing new signal processing and algorithmic tools, based in particular on sampling and sparsity concepts. First, we introduce a new sampling scheme called variable pulse width, which is based on the finite rate of innovation (FRI) sampling paradigm. This new framework enables us to sample and perfectly reconstruct weighted sums of Lorentzians; perfect reconstruction from the sampled signals is guaranteed by a set of theorems. Second, we turn to the reflection of light, which is grounded in its corpuscular model. More precisely, we propose to use our FRI-based model to represent bidirectional reflectance distribution functions. We develop dedicated light domes to acquire reflectance functions and use the measurements obtained to demonstrate the usefulness and versatility of our model. In particular, we concentrate on the representation of specularities, which are sharp and bright components generated by the direct reflection of light on surfaces. Third, we explore the wave nature of light through Lippmann photography, a century-old photography technique that acquires the entire spectrum of visible light. This fascinating process captures the interference patterns created by the exposed scene inside the depth of a photosensitive plate. When the developed plate is illuminated with a neutral light source, the reflected spectrum corresponds to that of the exposed scene. We propose a mathematical model that precisely explains the technique and demonstrate that the spectrum reproduction suffers from a number of distortions due to the finite depth of the plate and the choice of reflector. In addition to describing these artifacts, we describe an algorithm to invert them, essentially recovering the original spectrum of the exposed scene. Next, the wave nature of light is further generalized to the electromagnetic theory, which we invoke to leverage the concept of polarization of light. We also return to the topic of representing reflectance functions and focus this time on separating the specular component from the other reflections. We exploit the fact that the polarization of light is preserved in specular reflections and investigate camera designs with polarizing micro-filters of different orientations placed just in front of the camera sensor; the different polarizations of the filters create a mosaic image, from which we propose to extract the specular component. We apply our demosaicing method to several scenes and additionally demonstrate that our approach improves photometric stereo. Finally, we delve into the problem of retrieving the phase information of a sparse signal from the magnitude of its Fourier transform. We propose an algorithm that resolves the phase retrieval problem for sparse signals in three stages. Unlike traditional approaches that recover a discrete approximation of the underlying signal, our algorithm estimates the signal on a continuous domain, which makes it the first of its kind. The concluding chapter outlines several avenues for future research, like new optical devices such as displays and digital cameras inspired by the topic of Lippmann photography.
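    For a flavor of the FRI machinery that the variable-pulse-width scheme builds on, the sketch below implements the classical annihilating-filter recovery of a stream of Diracs from 2K+1 Fourier coefficients. The Dirac model and toy sizes are assumptions standing in for the thesis's Lorentzian pulses, which require a more elaborate treatment.

```python
# A minimal sketch of classical FRI recovery (assumed Dirac model,
# not the thesis's Lorentzians): K locations and amplitudes are
# recovered exactly from 2K+1 noiseless Fourier coefficients.
import numpy as np

rng = np.random.default_rng(1)
K = 3
t_true = np.sort(rng.uniform(0.05, 0.95, K))   # Dirac locations in (0, 1)
a_true = rng.uniform(1.0, 2.0, K)              # Dirac amplitudes

# Fourier coefficients x[m] = sum_k a_k exp(-2j*pi*m*t_k), m = -K..K.
m_idx = np.arange(-K, K + 1)
x = np.exp(-2j * np.pi * np.outer(m_idx, t_true)) @ a_true

# Annihilating filter h (length K+1): null vector of a Toeplitz system.
T = np.array([[x[i - l + K] for l in range(K + 1)] for i in range(K + 1)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                              # null-space vector

# Locations are encoded in the roots u_k = exp(-2j*pi*t_k) of h.
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1))

# Amplitudes by least squares on the Vandermonde system.
V = np.exp(-2j * np.pi * np.outer(m_idx, t_est))
a_est = np.real(np.linalg.lstsq(V, x, rcond=None)[0])

print("locations:", t_true, "->", t_est)
```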

    Signal reconstruction from incomplete measurements with application to accelerating magnetic resonance image reconstruction algorithms

    This dissertation considers the problem of reconstructing images from undersampled measurements, which has direct application in the creation of magnetic resonance images. The research proposes new regularization-based methods for image reconstruction, built on statistical Markov random field models and the theory of compressive sensing. Using the proposed signal model, which faithfully follows the statistics of images, new regularization functions are defined and four methods for the reconstruction of magnetic resonance images are derived.
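    As a rough illustration of regularized reconstruction from undersampled measurements, the sketch below recovers a 1-D signal from a random subset of its Fourier samples using a quadratic chain-MRF (smoothness) penalty. The toy signal, sampling mask, and quadratic penalty are illustrative assumptions, not the dissertation's actual MRF models or regularization functions.

```python
# A minimal sketch (assumed toy formulation): reconstruct from
# undersampled Fourier data by solving
#   min_x ||M F x - y||^2 + lam * ||D x||^2,
# where D is the first-difference operator of a chain MRF.
import numpy as np

rng = np.random.default_rng(2)
n = 128
# Piecewise-constant toy signal (jumps at ~10% of positions).
x_true = np.cumsum(rng.normal(size=n) * (rng.random(n) < 0.1))

F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
mask = rng.random(n) < 0.4                   # keep ~40% of k-space
mask[0] = True                               # always keep the DC sample
A = F[mask]                                  # undersampled Fourier operator
y = A @ x_true

D = np.diff(np.eye(n), axis=0)               # chain-MRF difference operator
lam = 0.5

# Normal equations (Re(A^H A) + lam D^T D) x = Re(A^H y).
lhs = (A.conj().T @ A).real + lam * D.T @ D
rhs = (A.conj().T @ y).real
x_rec = np.linalg.solve(lhs, rhs)

print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```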

    Mineral identification using data-mining in hyperspectral infrared imagery

    Geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, and core logging, typically carried out in situ with airborne sensors or portable instruments. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, faster and more automated systems increase the precision of identifying mineral indicators and help avoid misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could be applied in a variety of circumstances to assist geological analysis and mineral exploration. The experiments were conducted in laboratory conditions in the long-wave infrared (7.7 μm to 11.8 μm, LWIR), with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal. The approach applies Non-negative Matrix Factorization (NMF): a rank-1 factorization is extracted to estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling the spectra to be compared with spectral libraries. Afterwards, toward an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine and quartz grains. The results indicated that the unsupervised approach was more suitable because it does not depend on a training stage. Once these results were obtained, two clustering algorithms were tested for creating False Colour Composites (FCC). The results of this comparison indicate significant computational efficiency (more than 20 times faster) and promising performance for mineral identification. Finally, the reliability of the automated LWIR hyperspectral mineral identification was tested, verifying the difficulty of identifying irregular grain surfaces and mineral aggregates. The results were compared against two different ground truths (GT), rigid-GT (manual labelling of regions) and observed-GT (manual labelling of pixels), for quantitative evaluation; observed-GT improved the accuracy by up to 1.5 times compared with rigid-GT. The samples were also examined by Micro X-ray Fluorescence (XRF) and Scanning Electron Microscopy (SEM) in order to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery results were compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and were also used for GT validation. Overall, the four methods in this thesis (1. continuum removal; 2. classification or clustering methods for mineral identification; 3. two algorithms for clustering of mineral spectra; 4. reliability verification) represent beneficial methodologies for identifying minerals. These methods have the advantages of being non-destructive, relatively accurate, and computationally light, which qualifies them for identifying and assessing mineral grains in laboratory conditions or in the field.
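    A minimal sketch of the rank-1 NMF idea for continuum removal follows, run on synthetic spectra rather than calibrated LWIR radiance; the simulated continuum shape, absorption band, and noise level are assumptions for illustration only.

```python
# A minimal sketch (assumed toy data): rank-1 NMF estimates a smooth
# continuum shared across a set of LWIR spectra, which is then divided
# out to reveal the narrow absorption bands.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_pix, n_bands = 200, 100
wl = np.linspace(7.7, 11.8, n_bands)               # wavelength axis (um)

# Smooth continuum (shared shape, per-pixel scale) times a narrow band.
continuum = np.exp(-0.5 * ((wl - 10.0) / 3.0) ** 2)
scales = rng.uniform(0.5, 1.5, (n_pix, 1))
bands = 1 - 0.3 * np.exp(-0.5 * ((wl - 9.0) / 0.1) ** 2)
X = scales * continuum * bands + 0.01 * rng.random((n_pix, n_bands))

# Rank-1 NMF: X ~ W @ H with W (n_pix x 1) and H (1 x n_bands).
model = NMF(n_components=1, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

X_removed = X / np.maximum(W @ H, 1e-9)            # continuum-removed spectra
print("band depth at 9 um:", 1 - X_removed[:, np.argmin(np.abs(wl - 9.0))].mean())
```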

    Dynamics and correlations in sparse signal acquisition

    One of the most important capabilities of engineered and biological systems is the ability to acquire and interpret information from the surrounding world accurately and on time-scales relevant to the tasks critical to system performance. This classical concept of efficient signal acquisition has been a cornerstone of signal processing research, spawning traditional sampling theorems (e.g. Shannon-Nyquist sampling), efficient filter designs (e.g. the Parks-McClellan algorithm), novel VLSI chipsets for embedded systems, and optimal tracking algorithms (e.g. Kalman filtering). Traditional techniques have made minimal assumptions on the actual signals being measured and interpreted, essentially only assuming a limited bandwidth. While these assumptions underlie the foundational works in signal processing, recently the ability to collect and analyze large datasets has allowed researchers to see that many important signal classes have much more regularity than merely having finite bandwidth. One of the major advances of modern signal processing is to greatly improve on classical results by leveraging more specific signal statistics. By assuming even very broad classes of signals, signal acquisition and recovery can be greatly improved in regimes where classical techniques are extremely pessimistic. One of the most successful signal assumptions to gain popularity in recent years is the notion of sparsity. Under the sparsity assumption, the signal is assumed to be composed of a small number of atomic signals from a potentially large dictionary. This limit on the underlying degrees of freedom (the number of atoms used), as opposed to the ambient dimension of the signal, has allowed for improved signal acquisition, in particular when the number of measurements is severely limited. While techniques for leveraging sparsity have been explored extensively in many contexts, work in this regime typically concentrates on static measurement systems which result in static measurements of static signals. Many systems, however, have non-trivial dynamic components, either in the measurement system's operation or in the nature of the signal being observed. Given the promising prior work leveraging sparsity for signal acquisition and the large number of dynamical systems and signals in many important applications, it is critical to understand whether sparsity assumptions are compatible with dynamical systems. Therefore, this work seeks to understand how dynamics and sparsity can be used jointly in various aspects of signal measurement and inference. Specifically, this work looks at three different ways that dynamical systems and sparsity assumptions can interact. In terms of measurement systems, we analyze a dynamical neural network that accumulates signal information over time. We prove a series of bounds on the length of the input signal driving the network that can be recovered from the values at the network nodes [1-9]. We also analyze sparse signals that are generated via a dynamical system (i.e. a series of correlated, temporally ordered, sparse signals). For this class of signals, we present a series of inference algorithms that leverage both dynamics and sparsity information, improving the potential for signal recovery in a host of applications [10-19]. As an extension of dynamical filtering, we show how these dynamic filtering ideas can be expanded to the broader class of spatially correlated signals. Specifically, we explore how sparsity and spatial correlations can improve inference of material distributions and spectral super-resolution in hyperspectral imagery [20-25]. Finally, we analyze dynamical systems that perform optimization routines for sparsity-based inference. We analyze a networked system driven by a continuous-time differential equation and show that such a system is capable of recovering a large variety of different sparse signal classes [26-30].
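    To illustrate how dynamics and sparsity can be combined in inference, the sketch below runs a simple dynamic filter: each time step solves a LASSO augmented with a quadratic term pulling the estimate toward a dynamics-based prediction. The identity dynamics model, toy sizes, and ISTA solver are assumptions for illustration, not the exact algorithms of the cited papers.

```python
# A minimal dynamic-filtering sketch (assumed toy dynamics): at each
# step t solve
#   min_x 0.5||y_t - A x||^2 + 0.5*kappa||x - x_pred||^2 + lam||x||_1,
# which stacks into a standard LASSO handled here with ISTA.
import numpy as np

rng = np.random.default_rng(4)
n, m, T, kappa, lam = 100, 40, 10, 0.5, 0.05
A = rng.normal(size=(m, n)) / np.sqrt(m)

def ista(A_aug, b_aug, lam, iters=300):
    L = np.linalg.norm(A_aug, 2) ** 2          # Lipschitz constant
    x = np.zeros(A_aug.shape[1])
    for _ in range(iters):
        z = x - A_aug.T @ (A_aug @ x - b_aug) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)
    return x

# Slowly varying sparse state: fixed support, drifting amplitudes.
support = rng.choice(n, 5, replace=False)
x_t = np.zeros(n)
x_t[support] = rng.uniform(1, 2, 5)

x_est = np.zeros(n)
for t in range(T):
    x_t[support] += 0.05 * rng.normal(size=5)   # state drift
    y_t = A @ x_t + 0.01 * rng.normal(size=m)
    x_pred = x_est                              # identity dynamics model
    A_aug = np.vstack([A, np.sqrt(kappa) * np.eye(n)])
    b_aug = np.concatenate([y_t, np.sqrt(kappa) * x_pred])
    x_est = ista(A_aug, b_aug, lam)
    print(f"t={t}: error {np.linalg.norm(x_est - x_t):.3f}")
```

    The quadratic prediction term plays the role of the dynamics information: with only m = 40 measurements of a 100-dimensional state, the prediction from the previous step stabilizes recovery across time.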

    Recent Techniques for Regularization in Partial Differential Equations and Imaging

    Inverse problems model real world phenomena from data, where the data are often noisy and models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, incorporating the Polynomial Annihilation operator allows for accurate exploitation of the sparse representation of each function in the edge domain. This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contains false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges. Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
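    As a rough illustration of high order l1 regularization in this spirit, the sketch below denoises a piecewise-linear function with an l1 penalty on a second-order difference operator, solved with ADMM. The second-order difference is a simple stand-in for the Polynomial Annihilation operator, and the toy problem and parameter values are assumptions.

```python
# A minimal sketch (assumed toy 1-D problem; second-order differences
# stand in for the Polynomial Annihilation operator): solve
#   min_x 0.5||x - y||^2 + lam||D x||_1
# with ADMM, preserving the kink while smoothing the noise.
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0, 1, n)
x_true = np.where(t < 0.5, 2 * t, 2 - 2 * t)   # piecewise linear, kink at 0.5
y = x_true + 0.05 * rng.normal(size=n)

D = np.diff(np.eye(n), n=2, axis=0)            # second-order differences
lam, rho = 0.2, 1.0
x = y.copy()
z = D @ x
u = np.zeros_like(z)
lhs = np.eye(n) + rho * D.T @ D                # x-update system matrix

for _ in range(200):
    # x-update: quadratic subproblem.
    x = np.linalg.solve(lhs, y + rho * D.T @ (z - u))
    # z-update: soft-thresholding (prox of the l1 norm).
    w = D @ x + u
    z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0)
    # Dual update.
    u += D @ x - z

print("RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))
```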

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework in problem solving. These algorithms have the capacity to generalize and discover knowledge for themselves, and to learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.