
    Desert roughness retrieval using CYGNSS GNSS-R data

    The aim of this paper is to assess the potential of data recorded by the Global Navigation Satellite System Reflectometry (GNSS-R) Cyclone Global Navigation Satellite System (CYGNSS) constellation for characterizing desert surface roughness. The study is applied over the Sahara, the largest non-polar desert in the world, and is based on a spatio-temporal analysis of variations in CYGNSS data, expressed as changes in reflectivity (G). In general, the reflectivity of each type of land surface (reliefs, dunes, etc.) encountered at the studied site is found to have high temporal stability. A grid of CYGNSS reflectivity measurements has been developed at the relatively fine resolution of 0.03° × 0.03°, and the resulting map of average reflectivity, computed over a 2.5-year period, illustrates the potential of CYGNSS data for characterizing the main types of desert land surface (dunes, reliefs, etc.). The relationship between aerodynamic or geometric roughness and CYGNSS reflectivity is discussed, and a high correlation is observed between these roughness parameters and reflectivity. The behavior of the GNSS-R reflectivity is also compared with that of the Advanced Land Observing Satellite-2 (ALOS-2) Synthetic Aperture Radar (SAR) backscattering coefficient, and the two are found to be strongly correlated. Finally, an aerodynamic roughness (Z0) map of the Sahara is proposed, using four distinct classes of terrain roughness.
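
    The gridding described above amounts to binning point-wise reflectivity observations into 0.03° × 0.03° cells and averaging each cell over the acquisition period. The Python sketch below illustrates one way to do this; the variable names and the synthetic input are assumptions for illustration, not the paper's actual processing chain.

        import numpy as np

        RES = 0.03  # grid cell size in degrees, as stated in the abstract

        def grid_mean_reflectivity(lat, lon, refl, lat0, lat1, lon0, lon1):
            """Average all reflectivity observations falling in each grid cell."""
            ny = int(np.ceil((lat1 - lat0) / RES))
            nx = int(np.ceil((lon1 - lon0) / RES))
            iy = np.clip(((lat - lat0) / RES).astype(int), 0, ny - 1)
            ix = np.clip(((lon - lon0) / RES).astype(int), 0, nx - 1)
            total = np.zeros((ny, nx))
            count = np.zeros((ny, nx))
            np.add.at(total, (iy, ix), refl)   # accumulate per-cell sums
            np.add.at(count, (iy, ix), 1)      # and per-cell observation counts
            with np.errstate(invalid="ignore"):
                return np.where(count > 0, total / count, np.nan)

        # Synthetic example over a small Saharan window
        rng = np.random.default_rng(0)
        lat = rng.uniform(20.0, 22.0, 10_000)
        lon = rng.uniform(0.0, 2.0, 10_000)
        refl_db = rng.normal(-15.0, 3.0, 10_000)  # reflectivity in dB (made up)
        grid = grid_mean_reflectivity(lat, lon, refl_db, 20.0, 22.0, 0.0, 2.0)

    Averaging per cell over a long period exploits the temporal stability noted in the abstract: with stable reflectivity, the multi-year mean is a low-noise descriptor of the surface type in each cell.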

    Supervised detection of bomb craters in historical aerial images using convolutional neural networks

    The aftermath of the air strikes of World War II is still present today. Numerous bombs dropped by planes did not explode, remain in the ground, and pose a considerable explosion hazard. Tracking down these duds can be tackled by detecting bomb craters: the existence of a dud can be inferred from the existence of a crater. This work proposes a method for the automatic detection of bomb craters in aerial wartime images. First, crater candidates are extracted from an image using a blob detector. Based on given crater references, every candidate is then checked as to whether it in fact represents a crater. Candidates from various aerial images are used to train, validate and test Convolutional Neural Networks (CNNs) in a two-class classification problem. The loss function (which controls what the CNNs learn) is adapted to the given task. The trained CNNs are then used to classify crater candidates. Our work focuses on this classification step, and we investigate whether combining data from related domains is beneficial for the classification. We achieve an F1-score of up to 65.4% when classifying crater candidates with a realistic class distribution. © Authors 2019. CC BY 4.0 License.
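
    To make the two-stage pipeline concrete, the sketch below pairs a Laplacian-of-Gaussian blob detector (scikit-image) with a small binary CNN (PyTorch). The network layout, patch size, and the class-weighted loss are illustrative assumptions; the paper's actual architecture and task-adapted loss are not reproduced here.

        import torch
        import torch.nn as nn
        from skimage.feature import blob_log

        def crater_candidates(image):
            """LoG blob detection on a grayscale image; returns (row, col, sigma)."""
            # sigma range and threshold are placeholder values
            return blob_log(image, min_sigma=3, max_sigma=30, threshold=0.1)

        class CraterCNN(nn.Module):
            """Two-class classifier for 64x64 candidate patches (assumed size)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, 2)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # A class-weighted cross entropy stands in for the task-adapted loss:
        # under a realistic class distribution, true craters are rare.
        loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))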

    Fusion of 3D point clouds with TIR images for indoor scene reconstruction

    Obtaining accurate 3D descriptions in the thermal infrared (TIR) is a challenging task due to the low geometric resolution of TIR cameras and the low number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of these limitations in 3D geometric accuracy. For dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras and profile laserscanners is suitable. Since a laserscanner is an active sensor in the visible red or near infrared (NIR), and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This contribution focuses on the fusion of point clouds from terrestrial laserscanners and RGB cameras with images from a thermal infrared camera, mounted together on a robot, for indoor 3D reconstruction. The system is geometrically calibrated, including the lever arm between the different sensors. As the fields of view of the sensors differ, they do not record the same scene points at exactly the same time. Thus, the 3D scene points of the laserscanner and the photogrammetric point cloud from the RGB camera have to be synchronized before the point clouds are fused and the thermal channel is added to the 3D points.
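
    A minimal sketch of the fusion step, assuming a calibrated pinhole model for the TIR camera: each laserscanner point is transformed by the calibrated lever arm (R, t) into the camera frame, projected with the intrinsic matrix K, and assigned the TIR value at the pixel it hits. All calibration quantities are placeholders, and synchronization is assumed to have been done beforehand.

        import numpy as np

        def add_thermal_channel(points, tir_image, R, t, K):
            """Append a TIR value to every 3D point that projects into the image."""
            cam = points @ R.T + t               # lever-arm transform into camera frame
            in_front = cam[:, 2] > 0             # keep points in front of the camera
            uvw = cam @ K.T                      # pinhole projection
            uv = uvw[:, :2] / uvw[:, 2:3]        # perspective division
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            h, w = tir_image.shape
            valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            thermal = np.full(len(points), np.nan)
            thermal[valid] = tir_image[v[valid], u[valid]]
            return np.column_stack([points, thermal])  # x, y, z, TIR value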

    Simulation Tools for Interpretation of High Resolution SAR Images of Urban Areas

    New powerful spaceborne sensors for monitoring urban areas have been designed and are ready for launch. More detailed SAR images will soon be available and, consequently, new tools for their interpretation are needed, above all when urban scenes are imaged. In this paper, the authors propose tools for the study and analysis of high-resolution SAR images, based on a SAR raw-signal simulator for urban areas. By comparing simulated SAR images with real ones, the interpretation of SAR data is improved and the fundamental support provided by the employed tools is further demonstrated.
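
    The comparison step can be as simple as a similarity score between co-registered simulated and real amplitude images. The snippet below shows a zero-mean normalized cross-correlation as one plausible choice; the paper does not specify its comparison metric, so this is an assumption.

        import numpy as np

        def ncc(sim, real):
            """Zero-mean normalized cross-correlation of two co-registered images."""
            s = sim - sim.mean()
            r = real - real.mean()
            return float((s * r).sum() / np.sqrt((s ** 2).sum() * (r ** 2).sum()))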

    Building feature extraction via a deterministic approach: application to real high resolution SAR images

    The interpretation of high-resolution SAR (synthetic aperture radar) images is still a hard task, especially when man-made objects crowd the observed scene. This paper contributes to the analysis of this kind of data by adopting a deterministic approach, based on a scattering model, for the retrieval of building heights from real SAR images, and presents first numerical results.
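
    As background for how a model-based height retrieval can work, the sketch below encodes the first-order SAR imaging geometry that links building height to the ground-range extent of the layover or shadow regions. This is only the basic geometric relation, not the scattering model actually used in the paper.

        import math

        def height_from_layover(layover_extent_m, incidence_deg):
            """h = L * tan(theta); theta is the incidence angle from vertical."""
            return layover_extent_m * math.tan(math.radians(incidence_deg))

        def height_from_shadow(shadow_extent_m, incidence_deg):
            """h = L / tan(theta) for a radar shadow of ground-range extent L."""
            return shadow_extent_m / math.tan(math.radians(incidence_deg))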

    Recursive Cluster Elimination Based Support Vector Machine for Disease State Prediction Using Resting State Functional and Effective Brain Connectivity

    Brain state classification has been accomplished using features such as voxel intensities, derived from functional magnetic resonance imaging (fMRI) data, as inputs to efficient classifiers such as support vector machines (SVMs), and is based on the spatial localization model of brain function. With the advent of the connectionist model of brain function, features from brain networks may provide increased discriminatory power for brain state classification.

    In this study, we introduce a novel framework wherein both functional connectivity (FC), based on instantaneous temporal correlation, and effective connectivity (EC), based on causal influence in brain networks, are used as features in an SVM classifier. To derive these features, we adopt an approach recently introduced by us, called correlation-purged Granger causality (CPGC), which obtains both FC and EC from fMRI data simultaneously without the instantaneous correlation contaminating Granger causality. In addition, statistical learning is accelerated and performance accuracy is enhanced by combining the recursive cluster elimination (RCE) algorithm with the SVM classifier. We demonstrate the efficacy of the CPGC-based RCE-SVM approach on a specific instance of brain state classification, exemplified by disease state prediction. We show that this approach predicts with 90.3% accuracy whether a given human subject was prenatally exposed to cocaine, even when no significant behavioral differences were found between exposed and healthy subjects.

    The framework adopted in this work is quite general, with prenatal cocaine exposure being only an illustrative example of the power of the approach. In any brain state classification approach using neuroimaging data, including directional connectivity information may prove to be a performance enhancer. When brain state classification is used for disease state prediction, our approach may aid clinicians in performing more accurate diagnoses in situations wherein non-neuroimaging biomarkers are unable to perform differential diagnosis with certainty.
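
    A rough sketch of the classification stage, assuming FC/EC features have already been extracted per subject: scikit-learn's recursive feature elimination (RFE) is used below as a stand-in for recursive cluster elimination, which removes clusters of correlated features rather than single features. The data and dimensions are synthetic.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 200))   # 60 subjects x 200 connectivity features
        y = rng.integers(0, 2, size=60)  # exposed vs. healthy labels (synthetic)

        svm = SVC(kernel="linear")       # linear kernel exposes coef_ for elimination
        selector = RFE(svm, n_features_to_select=20, step=10)
        scores = cross_val_score(selector, X, y, cv=5)
        print("mean CV accuracy:", scores.mean())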

    Spatial Language Processing in the Blind: Evidence for a Supramodal Representation and Cortical Reorganization

    Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of the representation of spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. that abstract codes represent spatial relations, yielding no activation differences between blind and sighted. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations that does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is therefore the amount of functional reorganization during language processing in our blind participants, so the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task, functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial content in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations.

    Multisensory visual–tactile object related network in humans: insights gained using a novel crossmodal adaptation approach

    Neuroimaging techniques have provided ample evidence for multisensory integration in humans. However, it is not clear whether this integration occurs at the neuronal level or whether it reflects areal convergence without such integration. To examine this issue for visuo-tactile object integration, we used the repetition suppression effect, also known as the fMRI-based adaptation paradigm (fMR-A). Under some assumptions, fMR-A can tag specific neuronal populations within an area and investigate their characteristics. This technique has been used extensively in unisensory studies. Here we applied it for the first time to study multisensory integration and identified a network of occipital (LOtv and calcarine sulcus), parietal (aIPS), and prefrontal (precentral sulcus and the insula) areas, all showing a clear crossmodal repetition suppression effect. These results provide a crucial first insight into the neuronal basis of visuo-haptic object integration in humans and highlight the power of fMR-A for studying multisensory integration with non-invasive neuroimaging techniques.
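
    At the heart of fMR-A is repetition suppression: a reduced response to repeated versus novel stimuli within a region. Assuming per-condition response estimates (e.g. GLM betas) have already been extracted for a region of interest, a simple adaptation index can be computed as below; the exact index used is an assumption for illustration.

        import numpy as np

        def adaptation_index(novel_betas, repeated_betas):
            """Positive values indicate repetition suppression (adaptation)."""
            novel = np.mean(novel_betas)
            repeated = np.mean(repeated_betas)
            return (novel - repeated) / (abs(novel) + abs(repeated))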