17 research outputs found

    Learning a Dilated Residual Network for SAR Image Despeckling

    Full text link
    In this paper, to break the limit of traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach that learns a non-linear end-to-end mapping between noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which enlarge the receptive field while maintaining the filter size and layer depth in a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to preserve image details and reduce the vanishing-gradient problem. The proposed method shows superior performance over both traditional despeckling methods and state-of-the-art approaches in quantitative and visual assessments, especially for strong speckle noise. Comment: 18 pages, 13 figures, 7 tables
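    As a rough illustration of the architecture described above, the sketch below shows a dilated-convolution residual despeckler in PyTorch; the layer count, dilation schedule, and channel width are illustrative assumptions, not the exact SAR-DRN configuration.

```python
# Minimal sketch of a dilated-convolution residual despeckler (PyTorch).
# Layer count, dilation schedule, and channel width are illustrative
# assumptions, not the published SAR-DRN configuration.
import torch
import torch.nn as nn

class DilatedDespeckler(nn.Module):
    def __init__(self, channels=64, dilations=(1, 2, 3, 4, 3, 2, 1)):
        super().__init__()
        layers = []
        in_ch = 1
        for d in dilations:
            # Dilation enlarges the receptive field while keeping the 3x3
            # filter size and a shallow, lightweight depth.
            layers += [nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            in_ch = channels
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning with a skip connection: the network predicts the
        # speckle component, which is subtracted from the noisy input.
        return noisy - self.body(noisy)

# Usage: denoised = DilatedDespeckler()(noisy_sar_batch)  # input shape (N, 1, H, W)
```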

    Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling

    Get PDF
    The abstract is provided in the attachment.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    Get PDF
    This reprint focuses on the combination of synthetic aperture radar and deep learning technology, and aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these challenges and to present innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Fundamental and Harmonic Ultrasound Image Joint Restoration

    Get PDF
    Ultrasound imaging retains its place among the leading imaging modalities thanks to its ability to reveal anatomy and to inspect organ motion and blood flow in real time, in a non-invasive and non-ionizing manner, at low cost, with ease of use and fast image reconstruction. Nevertheless, ultrasound imaging has intrinsic limits in terms of spatial resolution. Improving the spatial resolution of ultrasound images is an ongoing challenge, and much work has long focused on optimizing the acquisition hardware. High-resolution ultrasound imaging achieves this goal through specialized probes, but it now faces physical and technological limits. Harmonic imaging is the specialists' intuitive solution for increasing resolution at acquisition time, but it suffers from attenuation with depth. An alternative way to improve resolution is to develop post-processing techniques such as ultrasound image restoration. The objective of this thesis is to study the non-linearity of ultrasound echoes in the restoration process and to show the value of incorporating harmonic US images into that process. We therefore present a new US image restoration method that uses both the fundamental and harmonic components of the observed image. Most existing methods are based on a linear image formation model: under the first-order Born approximation, the RF image is assumed to be a 2D convolution between the tissue reflectivity function and the system point spread function (PSF). The resulting inverse problem is formulated and solved with an ADMM-type algorithm. More precisely, we propose to recover the unknown reflectivity function by minimizing a cost function composed of two data-fidelity terms, corresponding to the linear (fundamental) and non-linear (first harmonic) components of the observed image, and a sparsity-based regularization term that stabilizes the solution. To account for the depth attenuation of harmonic images, an attenuation term is introduced into the harmonic forward model, based on a spectral analysis of the observed RF signals. The proposed method was first applied in two steps, estimating the PSF and then the reflectivity function. In a second stage, a solution for jointly estimating the PSF and the reflectivity function is proposed, along with another solution that accounts for the spatial variability of the PSF. The interest of the proposed method is demonstrated on synthetic and in vivo results and compared with conventional restoration methods.
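    A compact way to write the joint restoration objective described above is sketched below; the notation is assumed for illustration and is not the thesis's own: y_f and y_h are the observed fundamental and harmonic RF images, H_f and H_h the convolutions with the corresponding PSFs, A a diagonal depth-attenuation operator, x the unknown reflectivity, and lambda a regularization weight, with the minimization carried out by an ADMM-type splitting.

```latex
% Illustrative joint fundamental/harmonic restoration objective (assumed notation):
\hat{x} \;=\; \arg\min_{x}\;
      \tfrac{1}{2}\,\lVert y_f - H_f x \rVert_2^2
  \;+\; \tfrac{1}{2}\,\lVert y_h - A\, H_h x \rVert_2^2
  \;+\; \lambda\,\lVert x \rVert_1
```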

    Machine Learning for Beamforming in Audio, Ultrasound, and Radar

    Get PDF
    Multi-sensor signal processing plays a crucial role in several everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming combines the information from several spatially distributed sensors to directionally filter information, boosting the signal from a certain direction while suppressing others. The idea of beamforming is key to the domains of audio, ultrasound, and radar. Machine learning is the other central part of this thesis. Machine learning, and especially its sub-field of deep learning, has enabled breakneck progress on several problems that were previously thought intractable. Today, machine learning powers many of the cutting-edge systems we see on the internet for image classification, speech recognition, language translation, and more. In this dissertation, we look at beamforming pipelines in audio, ultrasound, and radar through a machine learning lens and endeavor to improve different parts of the pipelines using ideas from machine learning. We start in the audio domain and derive a machine-learning-inspired beamformer to tackle the problem of ensuring the audio captured by a camera matches its visual content, a problem we term audiovisual zooming. Staying in the audio domain, we then demonstrate how deep learning can be used to improve the perceptual quality of speech by removing clipping, codec distortions, and gaps in speech. Transitioning to the ultrasound domain, we improve the performance of short-lag spatial coherence ultrasound imaging by applying robust principal component analysis to exploit the differences in tissue texture at each short-lag value. Next, we use deep learning as an alternative to beamforming in ultrasound and improve the information extraction pipeline by simultaneously generating both a segmentation map and a high-quality B-mode image directly from raw received ultrasound data. Finally, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained and applied on raw single-aperture data prior to beamforming, the approach can work with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.
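    As a minimal, concrete instance of the directional filtering described above, the sketch below implements a plain delay-and-sum beamformer for a uniform linear array in Python/NumPy; the array geometry, sampling rate, propagation speed, and steering angle are illustrative assumptions, not parameters taken from the dissertation.

```python
# Minimal delay-and-sum beamformer sketch for a uniform linear array.
# Geometry, sampling rate, and steering angle are illustrative assumptions.
import numpy as np

def delay_and_sum(signals, spacing, fs, c, angle_deg):
    """Steer a uniform linear array toward angle_deg and sum the channels.

    signals : (num_sensors, num_samples) array of time-domain channel data
    spacing : inter-sensor distance in meters
    fs      : sampling rate in Hz
    c       : propagation speed in m/s (audio ~343, soft tissue ~1540)
    """
    num_sensors, num_samples = signals.shape
    # Per-sensor delays that align a plane wave arriving from angle_deg.
    delays = np.arange(num_sensors) * spacing * np.sin(np.deg2rad(angle_deg)) / c
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Apply the alignment delays as phase shifts in the frequency domain,
    # then sum across sensors so the steered direction adds coherently.
    shifted = spectra * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(shifted.sum(axis=0), n=num_samples) / num_sensors

# Usage (hypothetical microphone array):
# output = delay_and_sum(mic_data, spacing=0.04, fs=16000, c=343.0, angle_deg=20)
```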

    Innovative Techniques for the Retrieval of Earth’s Surface and Atmosphere Geophysical Parameters: Spaceborne Infrared/Microwave Combined Analyses

    Get PDF
    With the advent of the first satellites for Earth observation, Landsat-1 in July 1972 and ERS-1 in May 1991, the discipline of environmental remote sensing has become, over time, increasingly fundamental for the study of phenomena characterizing the planet Earth. The goal of environmental remote sensing is to perform detailed analyses and to monitor the temporal evolution of different physical phenomena, exploiting the mechanisms of interaction between the objects present in an observed scene and the electromagnetic radiation detected by sensors placed at a distance from the scene and operating at different frequencies. The analyzed physical phenomena are those related to climate change, weather forecasting, global ocean circulation, greenhouse gas profiling, earthquakes, volcanic eruptions, soil subsidence, and the effects of rapid urbanization processes. Generally, remote sensing sensors are of two primary types: active and passive. Active sensors use their own source of electromagnetic radiation to illuminate and analyze an area of interest: an active sensor emits radiation toward the area to be investigated and then detects and measures the radiation backscattered by the objects contained in that area. Passive sensors, on the other hand, detect natural electromagnetic radiation (e.g., from the Sun in the visible band and from the Earth in the infrared and microwave bands) emitted or reflected by the objects contained in the observed scene. The scientific community has dedicated many resources to developing techniques to estimate, study, and analyze Earth's geophysical parameters. These techniques differ for active and passive sensors because they depend strictly on the type of the measured physical quantity. In my Ph.D. work, inversion techniques for estimating Earth's surface and atmosphere geophysical parameters are addressed, emphasizing methods based on machine learning (ML). In particular, the study of cloud microphysics and the characterization of Earth's surface change phenomena are the focal points of this work.