
    Compressive Sensing for Dynamic XRF Scanning

    X-Ray Fluorescence (XRF) scanning is a widespread technique of high importance and impact, since it provides chemical composition maps crucial for many scientific investigations. There is a continuous demand for larger, faster and more highly resolved acquisitions in order to study complex structures. Some of the scientific applications that would benefit, such as wide-scale brain imaging, are prohibitively difficult due to time constraints. Overall XRF imaging performance is typically improved through technological progress on XRF detectors and X-ray sources. This paper suggests a complementary approach in which XRF scanning is performed sparsely, by skipping specific points or by dynamically varying the acquisition time or other scan settings in a conditional manner. This paves the way for Compressive Sensing in XRF scans, where data are acquired in a reduced manner, enabling challenging experiments that are currently not feasible with traditional scanning strategies. A series of compressive sensing strategies for dynamic scans are presented here. A proof-of-principle experiment was performed at the TwinMic beamline of the Elettra synchrotron. The outcome demonstrates the potential of Compressive Sensing for dynamic scans, suggesting its use in challenging scientific experiments while proposing a technical solution for beamline acquisition software. Comment: 16 pages, 7 figures, 1 table
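The conditional scanning idea described in this abstract (skip points where the signal varies slowly, then recover the skipped values afterwards) can be sketched as a toy simulation. This is an illustrative sketch only, not the TwinMic acquisition software; the skip threshold and the neighbour-averaging fill-in rule are assumptions.

```python
import numpy as np

def adaptive_scan(sample, skip_threshold=0.05):
    """Toy conditional sparse scan: sweep each row and, when the signal
    changes slowly between consecutive measured points, skip the next one.
    `sample` is a 2-D array standing in for the true fluorescence map."""
    h, w = sample.shape
    measured = np.full((h, w), np.nan)   # NaN marks skipped points
    for y in range(h):
        x, prev = 0, None
        while x < w:
            val = sample[y, x]           # "acquire" this point
            measured[y, x] = val
            # skip the next point if the local variation is small
            step = 2 if prev is not None and abs(val - prev) < skip_threshold else 1
            prev = val
            x += step
    return measured

def fill_skipped(measured):
    """Crude recovery: fill each skipped point with the mean of its
    measured horizontal neighbours."""
    filled = measured.copy()
    for y, x in zip(*np.where(np.isnan(measured))):
        left = measured[y, x - 1] if x > 0 else np.nan
        right = measured[y, x + 1] if x + 1 < measured.shape[1] else np.nan
        filled[y, x] = np.nanmean([left, right])
    return filled
```

In practice the reconstruction would use a proper compressive-sensing solver rather than neighbour averaging; the point here is only the conditional skip logic in the scan loop.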

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems, so image quality remains a barrier that curbs the full potential of pCLE, and enhancing it in real time remains a challenge. The research in this thesis responds to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that relies on mapping from the low-resolution to the high-resolution space. Due to the lack of high-resolution pCLE data, I proposed to simulate high-resolution data and use it as a ground truth, based on the pCLE acquisition physics. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach by deploying unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: a robust image quality assessment and a perceptual quality test assessed by clinical experts. The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
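The Nadaraya-Watson idea mentioned in this abstract can be illustrated outside a network: kernel regression maps irregularly placed fibre-core signals onto a regular pixel grid. The following is a minimal NumPy sketch; the Gaussian kernel and the bandwidth value are illustrative assumptions, whereas the thesis embeds this operation as a trainable layer.

```python
import numpy as np

def nadaraya_watson(grid_xy, fibre_xy, fibre_vals, bandwidth=0.5):
    """Nadaraya-Watson kernel regression: estimate the image value at each
    regular grid point as a normalised, distance-weighted average of the
    signals measured at irregular fibre-core positions."""
    # pairwise squared distances, shape (n_grid, n_fibres)
    d2 = ((grid_xy[:, None, :] - fibre_xy[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
    return (w @ fibre_vals) / w.sum(axis=1)    # convex combination of fibre values
```

Because each estimate is a convex combination of fibre signals, the output is stable under irregular sampling; making the bandwidth (or per-fibre weights) learnable is what turns this into a differentiable layer.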

    Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems

    Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300 GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications, including security sensing, industrial packaging, medical imaging, and non-destructive testing. Traditional methods for perception and imaging are being challenged by novel data-driven algorithms that offer improved resolution, localization, and detection rates. Over the past decade, deep learning technology has garnered substantial popularity, particularly in perception and computer vision applications. Whereas conventional signal processing techniques generalize more easily to various applications, hybrid approaches, in which signal processing and learning-based algorithms are interleaved, offer a promising compromise between performance and generalizability. Furthermore, such hybrid algorithms improve model training by leveraging the known characteristics of radio frequency (RF) waveforms, thus yielding more efficiently trained deep learning algorithms and offering higher performance than conventional methods. This dissertation introduces novel hybrid-learning algorithms for improved mmWave imaging systems applicable to a host of problems in perception and sensing. Various problem spaces are explored, including static and dynamic gesture classification; precise hand localization for human-computer interaction; high-resolution near-field mmWave imaging using forward synthetic aperture radar (SAR); SAR under irregular scanning geometries; mmWave image super-resolution using deep neural network (DNN) and Vision Transformer (ViT) architectures; and data-level multiband radar fusion using a novel hybrid-learning architecture. Furthermore, we introduce several novel approaches for deep learning model training and dataset synthesis. Comment: PhD Dissertation Submitted to UTD ECE Department
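One concrete example of the "known characteristics of RF waveforms" that such hybrid pipelines exploit is the classical FMCW range-FFT, which converts beat frequency to range before any learned stage sees the data. The following sketch uses illustrative radar parameters, not the dissertation's actual system values.

```python
import numpy as np

c = 3e8            # speed of light (m/s)
B = 4e9            # chirp bandwidth (Hz) -- illustrative value
T = 40e-6          # chirp duration (s)
fs = 10e6          # ADC sample rate (Hz)
slope = B / T      # chirp slope (Hz/s)
t = np.arange(int(T * fs)) / fs

def beat_signal(r):
    """Ideal dechirped (beat) signal for a point target at range r metres."""
    f_beat = 2.0 * r * slope / c
    return np.cos(2.0 * np.pi * f_beat * t)

def estimate_range(sig):
    """Classical range estimate: FFT of the windowed beat signal,
    then convert the peak frequency back to range."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    f_peak = np.argmax(spec) * fs / len(sig)
    return f_peak * c / (2.0 * slope)
```

A hybrid pipeline in the sense described above would feed such physically normalised range profiles, rather than raw ADC samples, into the learned stages.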

    Intelligent Sensing and Learning for Advanced MIMO Communication Systems


    Deep Probabilistic Models for Camera Geo-Calibration

    The ultimate goal of image understanding is to translate visual images into numerical or symbolic descriptions of the scene that are helpful for decision making. By determining when, where, and in which direction a picture was taken, geo-calibration makes it possible to use imagery to understand the world and how it changes over time. Current models for geo-calibration are mostly deterministic, which in many cases fails to capture the inherent uncertainty when the image content is ambiguous. Furthermore, without proper modeling of the uncertainty, subsequent processing can yield overly confident predictions. To address these limitations, we propose a probabilistic model for camera geo-calibration using deep neural networks. While our primary contribution is geo-calibration, we also show that learning to geo-calibrate a camera allows us to implicitly learn to understand the content of the scene.
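A common way to realise such a probabilistic output for an angular quantity, shown here purely as an illustration (the paper's actual parameterisation may differ), is to have the network predict the parameters of a von Mises distribution over the camera heading and train with its negative log-likelihood:

```python
import numpy as np

def von_mises_nll(theta, mu, kappa):
    """Negative log-likelihood of heading `theta` under a von Mises
    distribution with mean direction `mu` and concentration `kappa`.
    Predicting (mu, kappa) instead of a single angle lets the model report
    low confidence (small kappa) on ambiguous scenes."""
    # np.i0 is the modified Bessel function I0, the von Mises normaliser
    return -(kappa * np.cos(theta - mu) - np.log(2.0 * np.pi * np.i0(kappa)))
```

Minimising this loss rewards confident predictions only when they are correct: as kappa grows, a matching heading is rewarded more, but a wrong heading is penalised more, while kappa near zero recovers a uniform distribution over the circle.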