    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on combining synthetic aperture radar (SAR) with deep learning, with the aim of further advancing intelligent SAR image interpretation. SAR is an important active microwave imaging sensor whose day-and-night, all-weather operating capability gives it a prominent place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these challenges and present innovative, cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Diffusion MRI tractography for oncological neurosurgery planning: Clinical research prototype

    Compressive MRI with deep convolutional and attentive models

    Since its advent in the last century, Magnetic Resonance Imaging (MRI) has had a significant impact on modern medicine and spectroscopy and has seen widespread use in medical imaging and clinical practice, owing to its flexibility and excellent ability to visualize anatomical structures. Although it provides a non-invasive, ionizing-radiation-free means of imaging the anatomy of the human body, its long data acquisition process hinders its use in time-critical applications. To shorten scanning time and reduce patient discomfort, the sampling process can be accelerated by omitting a portion of the sampling steps and reconstructing the image from a subset of measurements. However, images created from under-sampled signals can suffer from strong aliasing artifacts, which adversely affect the quality of diagnosis and treatment. Compressed sensing (CS) methods were introduced to alleviate these artifacts by reconstructing an image from the observed measurements via model-based optimization algorithms. Despite their success, the sparsity priors assumed by CS methods may not hold in real-world practice and can struggle to capture complex anatomical structures, and the iterative optimization algorithms are often computationally expensive and time-consuming, at odds with the speed demands of modern MRI. These factors limit the quality of reconstructed images and restrict the achievable acceleration rates. This thesis focuses on developing deep learning-based methods, specifically modern over-parametrized models, for MRI reconstruction, leveraging the powerful learning ability and representation capacity of such models. Firstly, we introduce an attentive selection generative adversarial network that achieves fine-grained reconstruction through large-field contextual information integration and an attention selection mechanism.
    To incorporate domain-specific knowledge into the reconstruction procedure, an optimization-inspired deep cascaded framework is proposed, with a novel deep data consistency block that leverages domain-specific knowledge and an adaptive spatial attention selection module that captures correlations among high-resolution features, aiming to enhance the quality of the recovered images. To efficiently exploit the contextual information hidden in the spatial dimensions, a novel region-guided channel-wise attention network is introduced that incorporates spatial semantics into a channel-based attention mechanism, offering a lightweight and flexible design with improved reconstruction performance. Secondly, a coil-agnostic reconstruction framework is introduced to address the unknown forward process in parallel MRI reconstruction. To avoid estimating sensitivity maps, a novel data aggregation consistency block is proposed that approximately enforces data consistency without resorting to coil sensitivity information. A locality-aware spatial attention module is devised and embedded into the reconstruction pipeline to enhance model performance by capturing spatial contextual information via data-adaptive kernel prediction. Experiments demonstrate that the proposed coil-agnostic method is robust and resilient to different machine configurations and outperforms sensitivity-estimation-based methods. Finally, work on dynamic MRI reconstruction is presented. We introduce an optimization-inspired deep cascaded framework that recovers a sequence of MRI images, using a novel mask-guided motion feature incorporation method to explicitly extract motion information and incorporate it into the reconstruction iterations, which is shown to better preserve dynamic content.
    A spatio-temporal Fourier neural block is proposed and embedded into the network to improve model performance by efficiently retrieving useful information in both the spatial and temporal domains. The devised framework is demonstrated to surpass competing methods and to generalize well to other reconstruction models and unseen data, validating its transferability and generalization capacity.
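    The undersampling and data-consistency ideas running through this abstract admit a compact illustration. The NumPy sketch below is not the thesis's implementation; it is a minimal, generic example in which the random line mask, image size, and the blur standing in for a network's intermediate prediction are all assumptions for illustration. It simulates Cartesian undersampling of k-space, forms a zero-filled reconstruction, and applies a data-consistency step that overwrites the prediction's k-space with the measured samples wherever they were acquired.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "anatomy" standing in for an MRI slice.
x = np.zeros((64, 64))
x[16:48, 16:48] = 1.0

# Fully sampled k-space, then keep only 16 of 64 phase-encoding lines.
k_full = np.fft.fft2(x)
mask = np.zeros((64, 64))
mask[rng.choice(64, size=16, replace=False), :] = 1.0
k_obs = mask * k_full  # observed (undersampled) measurements

# Zero-filled reconstruction: inverse FFT of the undersampled k-space,
# which exhibits aliasing along the undersampled dimension.
x_zf = np.fft.ifft2(k_obs)

# Stand-in for a network's intermediate prediction: a crude local average.
x_pred = sum(np.roll(x_zf, s, axis=0) for s in (-1, 0, 1)) / 3.0

def data_consistency(x_pred, k_obs, mask):
    """Overwrite the prediction's k-space with measured samples where acquired."""
    k_pred = np.fft.fft2(x_pred)
    k_dc = (1.0 - mask) * k_pred + mask * k_obs
    return np.fft.ifft2(k_dc)

# After this step, the result agrees exactly with the data at every
# sampled k-space location, while unsampled locations keep the prediction.
x_dc = data_consistency(x_pred, k_obs, mask)
```

    In unrolled cascades of the kind the abstract describes, such a consistency operation is typically applied after every learned refinement so the output never contradicts the acquired measurements.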

    Robust Deep Learning Methods for Solving Inverse Problems in Medical Imaging

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, and detection have direct applications to medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signals for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations, while lesion/tumor detection and segmentation are obvious applications of higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements arise when solving them for medical images. Patient data is highly sensitive and usually accessible only within individual institutions, which constrains the ground truth, dataset sizes, and computational resources available to train performant models. Due to the mission-critical nature of healthcare applications, requirements such as robustness and speed are also stringent. As such, the big-data, dense-computation, supervised learning paradigm of mainstream deep learning is often insufficient for these situations. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above-mentioned constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as different medical image modalities, image quality, architecture designs, and tasks.
    The second part focuses on improving model robustness on unseen data through domain adaptation, which ameliorates the performance degradation caused by distribution shifts. The last part focuses on self-supervised learning and learning from synthetic data, with a focus on tomographic imaging; this is essential in many situations where the desired ground truth is not accessible.

    Generative Models for Inverse Imaging Problems

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In scenarios with restricted transmission or storage capacity, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ greatly, so practical implementation aspects have to be taken into account. The Special Issue collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute extremely large data arrays rich in information that can be retrieved for various applications. Another important aspect is the impact of lossy compression on image classification and segmentation, where a reasonable compromise between compression characteristics and the final data processing tasks has to be achieved. Problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have also become very important. Finally, compressive sensing approaches have been applied to remote sensing image processing with positive outcomes. We hope that readers will find this book useful and interesting.

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content a preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is designed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.