
    Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling

    The abstract is provided in the attachment.

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has emerged in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; monitoring of complex land ecosystems for biodiversity conservation; precision agriculture for the management of soils, crops, and pests; urban planning; disaster monitoring; and more. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, and environmental monitoring. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with high-complexity data such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented. The modeling of increasing levels of information is used to extract, represent, and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from RS images.

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Change detection by fusion of remote sensing images with different resolutions and modalities

    Change detection in a scene is one of the most challenging problems in remote sensing. It consists in detecting changes that have occurred in a given geographical area by comparing images of that area acquired at different times. The comparison is easier when the images come from the same type of sensor, i.e., correspond to the same modality (most often multi-band optical) and have identical spatial and spectral resolutions. Most unsupervised change detection techniques are designed specifically for this scenario. In that case, the images can be compared directly by computing the difference between homologous pixels, i.e., pixels corresponding to the same ground location. However, in specific cases such as emergency situations, one-off missions, defence and security, it may be necessary to exploit images of different modalities and resolutions. This heterogeneity in the processed images introduces additional problems for change detection, which are not handled by most state-of-the-art methods. When the modality is the same but the resolutions differ, it is possible to fall back to the favourable scenario by applying pre-processing steps such as resampling operations aimed at reaching the same spatial and spectral resolutions. Nevertheless, such pre-processing may discard information relevant to change detection. In particular, it is applied independently to each image and therefore ignores the strong relationships existing between the two images. The objective of this thesis is to develop change detection methods that make the best use of the information contained in a pair of observed images, without any condition on their modality or their spatial and spectral resolutions. The restrictions classically imposed in the state of the art are lifted through an approach based on the fusion of the two observed images. The first proposed strategy applies to images of identical modality but different resolutions and consists of three steps. The first step fuses the two observed images, yielding a high-resolution image of the scene that carries the information about possible changes. The second step predicts two unobserved images, with resolutions identical to those of the observed images, by spatial and spectral degradation of the fused image. Finally, the third step performs classical change detection between observed and predicted images of the same resolutions. A second strategy models the observed images as degraded versions of two unobserved images characterised by identical, high spectral and spatial resolutions. It implements a robust fusion step that exploits a sparsity prior on the observed changes. Finally, the fusion principle is extended to images of different modalities. In this case, where the pixels are not directly comparable because they correspond to different physical quantities, the comparison is carried out in a transformed domain. The two images are represented by sparse linear combinations of the atoms of two coupled dictionaries learned from the data.
    Change detection is then performed from the estimation of a coupled code under a spatial sparsity condition on the difference between the codes estimated for each image. Experiments with these methods, conducted on realistically simulated changes as well as on real changes, demonstrate the advantages of the developed methods and, more generally, the contribution of fusion to change detection.
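    The three-step strategy for same-modality, different-resolution pairs lends itself to a compact sketch. The following Python fragment is only an illustration of that pipeline under simplifying assumptions (a Gaussian blur plus decimation for the spatial degradation, band averaging for the spectral degradation, and a user-supplied fuse function standing in for the actual fusion model); it is not the thesis's implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spatial_degrade(img, factor, sigma=1.0):
        # Blur and decimate an (H, W, B) image to mimic a lower spatial resolution
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
        return blurred[::factor, ::factor, :]

    def spectral_degrade(img, band_groups):
        # Average groups of bands to mimic a coarser spectral resolution
        return np.stack([img[..., g].mean(axis=-1) for g in band_groups], axis=-1)

    def fusion_change_detection(y_spatial, y_spectral, fuse, factor, band_groups, tau):
        # Step 1: fuse the two observed images into one high-resolution image
        x_fused = fuse(y_spatial, y_spectral)
        # Step 2: predict the unobserved counterparts by degrading the fused image
        pred_spectral = spatial_degrade(x_fused, factor)       # same grid as y_spectral
        pred_spatial = spectral_degrade(x_fused, band_groups)  # same bands as y_spatial
        # Step 3: classical pixel-wise comparison at matching resolutions
        d_lo = np.linalg.norm(y_spectral - pred_spectral, axis=-1)
        d_hi = np.linalg.norm(y_spatial - pred_spatial, axis=-1)
        return d_lo > tau, d_hi > tau    # binary change maps at the two resolutions

    Because both predictions are degraded versions of the same fused image, each comparison is made at the resolution of one observed image, without resampling the two inputs independently of each other.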

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its properties of low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatments (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that quickly change over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we integrate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work compromises between the computational cost and the effectiveness of the method, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target image (i.e., the high-resolution image). We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, with a slightly higher computational cost. For both denoising and super-resolution, we evaluate the compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
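    As context for the low-rank building block mentioned above, the sketch below shows plain singular value hard-thresholding of an image patch, the family of methods to which the proposed denoiser belongs. The fixed threshold tau is a placeholder assumption; in the described framework the thresholds are learned and predicted by a model rather than set by hand.

    import numpy as np

    def svd_threshold_denoise(patch, tau):
        # SVD of the 2D patch; keep only singular values at or above the threshold
        u, s, vt = np.linalg.svd(patch, full_matrices=False)
        s_kept = np.where(s >= tau, s, 0.0)
        # Reconstruct the low-rank (denoised) approximation U diag(s) V^T
        return (u * s_kept) @ vt

    # Usage on a synthetic noisy 64x64 patch (illustrative only)
    noisy = np.random.rand(64, 64)
    denoised = svd_threshold_denoise(noisy, tau=2.0)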

    Detection and height estimation of buildings from SAR and optical images using conditional random fields

    [no abstract]

    NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems

    Inspired by the light scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology to enable a new vision of space system architecture, with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. This new concept is based on recent understanding of the physics of optical manipulation of small particles in the laboratory and on the engineering of distributed ensembles of spacecraft swarms to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor. While establishing the feasibility of constructing one single aperture out of the cloud is the main topic of this work, it is clear that multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space to enable challenging goals such as exo-planet detection. Furthermore, this effort could establish the feasibility of key issues related to material properties, remote manipulation, and autonomy characteristics of a cloud in orbit. Several types of endeavors (science missions) could be enabled by this approach: new astrophysical imaging systems, exo-planet searches, large apertures that allow unprecedented resolution to discern continents and important features of other planets, hyperspectral imaging, adaptive systems, spectroscopic imaging through the limb, and stable optical systems at Lagrange points. Furthermore, future micro-miniaturization might hold the promise of a further extension of our dust-aperture concept to other, more exciting smart-dust concepts with other associated capabilities. Our objective in Phase II was to experimentally and numerically investigate how to optically manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an adaptable ultra-lightweight surface. Our solution is based on the aperture being an engineered granular medium instead of a conventional monolithic aperture. This allows apertures to be built at reduced cost, enables extremely fault-tolerant apertures that cannot otherwise be made, and directly enables classes of missions for exoplanet detection based on Fourier spectroscopy with tight angular resolution, as well as innovative radar systems for remote sensing. In this task, we have examined the advanced feasibility of a crosscutting concept that contributes new technological approaches for space imaging systems, autonomous systems, and space applications of optical manipulation. The proposed investigation has matured the concept that we started in Phase I to TRL 3, identifying technology gaps and candidate system architectures for the space-borne cloud as an aperture.

    Novel neural network-based algorithms for urban classification and change detection from satellite imagery

    Human activity is radically changing the environmental ecosystem, compounded by the rapid demographic expansion of urban systems. Although these areas represent only a small fraction of the Earth, their impact on the demand for energy, food, water, and raw materials is enormous. Accurate and timely information is therefore essential, for example for civil protection agencies in the event of environmental disasters. In recent years, the strong development of satellite systems, in terms of spatial as well as radiometric and temporal resolution, has enabled increasingly accurate monitoring of the Earth with both optical and RADAR systems. Nevertheless, higher resolution (whether spatial, spectral, or temporal) brings as many drawbacks and limitations as advantages and improvements. This thesis discusses in detail the different aspects of, and techniques for, the classification and change monitoring of urban areas, using both optical and RADAR systems. Particular emphasis is given to the theory and use of neural networks.
    Human activity dominates the Earth's ecosystems with structural modifications. The rapid population growth over recent decades and the concentration of this population in and around urban areas have significantly impacted the environment. Although urban areas represent a small fraction of the land surface, they affect large areas due to the magnitude of the associated energy, food, water, and raw material demands. Reliable information on populated areas is essential for urban planning and strategic decision making, for example for civil protection departments in cases of emergency. Remote sensing is increasingly being used as a timely and cost-effective source of information in a wide number of applications, from environment monitoring to location-aware systems. However, mapping human settlements represents one of the most challenging areas for the remote sensing community due to their high spatial and spectral diversity. From the physical composition point of view, several different materials can be used for the same man-made element (for example, building roofs can be made of clay tiles, metal, asphalt, concrete, plastic, grass or stones). On the other hand, the same material can be used for different purposes (for example, concrete can be found in paved roads or building roofs). Moreover, urban areas are often made up of materials present in the surrounding region, making them indistinguishable from natural or agricultural areas (examples can be unpaved roads and bare soil, clay tiles and bare soil, or parks and vegetated open spaces) [1]. During the last two decades, significant progress has been made in developing and launching satellites with instruments, in both the optical/infrared and microwave regions of the spectrum, well suited for Earth observation with increasingly finer spatial, spectral and temporal resolution. Fine spatial sensors with metric or sub-metric resolution allow the detection of small-scale objects, such as elements of residential housing, commercial buildings, transportation systems and utilities. Multi-spectral and hyper-spectral remote sensing systems provide additional discriminative features for classes that are spectrally similar, due to their higher spectral resolution. The temporal component, integrated with the spectral and spatial dimensions, provides essential information, for example on vegetation dynamics.
    Moreover, the delineation of temporally homogeneous patches reduces the effect of local spatial heterogeneity that often masks larger spatial patterns. Nevertheless, higher-resolution (spatial, spectral or temporal) imagery comes with limits and challenges that equal the advantages and improvements, and this is valid for both optical and synthetic aperture radar data [2]. This thesis addresses the different aspects of mapping and change detection of human settlements, discussing the main issues related to the use of optical and synthetic aperture radar data. Novel approaches and techniques are proposed and critically discussed to cope with the challenges of urban areas, including data fusion, image information mining, and active learning. The chapters are subdivided into three main parts. Part I addresses the theoretical aspects of neural networks, including their different architectures, design, and training. The proposed neural network-based algorithms, their applications to classification and change detection problems, and the experimental results are described in Part II and Part III.
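    To make the neural classification setting above concrete, here is a minimal per-pixel classification sketch with a small multilayer perceptron. The features (raw band values), labels, and architecture are illustrative assumptions only; they are not the specific networks, data fusion, or active learning schemes developed in the thesis.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # X: (n_pixels, n_bands) spectral vectors; y: 1 = urban, 0 = non-urban
    X = np.random.rand(1000, 4)          # placeholder multispectral samples
    y = (X[:, 3] > 0.5).astype(int)      # placeholder labels

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X, y)
    urban_map = clf.predict(X)           # per-pixel class labels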