
    Multitemporal Very High Resolution from Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest

    In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.
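The abstract describes the teams' methods only at a high level. As a generic, hypothetical illustration of the change-detection building block (not the contest teams' actual models), pixel-wise differencing of a co-registered multitemporal image pair can be sketched as:

```python
def change_map(img_t1, img_t2, threshold):
    """Flag pixels whose absolute intensity change between two dates exceeds a threshold."""
    return [[abs(b - a) > threshold for a, b in zip(row1, row2)]
            for row1, row2 in zip(img_t1, img_t2)]

# Toy 2x2 intensity grids for two acquisition dates (illustrative values).
t1 = [[10, 10], [10, 10]]
t2 = [[10, 80], [12, 10]]
print(change_map(t1, t2, threshold=20))  # [[False, True], [False, False]]
```

Real methods, such as the random field model mentioned above, add spatial regularization and joint coregistration on top of this per-pixel comparison.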

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data, and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. We report results indicating that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data. Comment: Published in IEEE Transactions on Geoscience and Remote Sensing.
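A minimal sketch of why noisy map-derived labels can still train a classifier: at a moderate corruption rate, the large majority of pixels keep their correct label, so the aggregate training signal remains dominated by correct examples. The `flip_labels` helper and the 20% noise rate below are illustrative assumptions, not figures from the paper.

```python
import random

def flip_labels(labels, noise_rate, n_classes, seed=0):
    """Randomly corrupt a fraction of labels, mimicking noisy map-derived training data."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            noisy.append(rng.randrange(n_classes))  # replace with a random class
        else:
            noisy.append(y)
    return noisy

labels = [1] * 1000                                  # clean per-pixel labels
noisy = flip_labels(labels, noise_rate=0.2, n_classes=3)
agreement = sum(a == b for a, b in zip(labels, noisy)) / len(labels)
# Most labels still agree with the clean ground truth, which is one intuition
# for why CNNs tolerate noisy large-scale training data.
```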

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features via a series of robust data association steps yields a localisation solution with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of the geo-coding errors and improved aircraft localisation formed a robust solution for the defense mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
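The thesis's data association pipeline is not specified in the abstract. A minimal, hypothetical sketch of the core step, matching detected features to an a priori reference database by gated nearest-neighbour search, might look like this (the `associate` function and coordinates are illustrative assumptions):

```python
import math

def associate(detected, reference, gate):
    """Match each detected feature to the nearest reference feature
    within a gating distance; unmatched detections are dropped."""
    matches = {}
    for i, (dx, dy) in enumerate(detected):
        best, best_d = None, gate
        for j, (rx, ry) in enumerate(reference):
            d = math.hypot(dx - rx, dy - ry)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
    return matches

reference = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # features from satellite imagery
detected = [(0.4, -0.2), (9.7, 0.1), (50.0, 50.0)]   # onboard detections; last is spurious
print(associate(detected, reference, gate=2.0))      # {0: 0, 1: 1}
```

A real system would add robust outlier rejection across the whole match set before using the correspondences for a position fix.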

    Fusion of Urban TanDEM-X raw DEMs using variational models

    Recently, a new global Digital Elevation Model (DEM) with a pixel spacing of 0.4 arcseconds and a relative height accuracy finer than 2 m for flat areas (slopes below 20%) was created through the TanDEM-X mission. One important step of the global DEM generation chain is to mosaic and fuse multiple raw DEM tiles to reach the target height accuracy. Currently, Weighted Averaging (WA) is applied as a fast and simple method for TanDEM-X raw DEM fusion, in which the weights are computed from height error maps delivered by the Interferometric TanDEM-X Processor (ITP). However, evaluations show that WA is not the ideal DEM fusion method for urban areas, especially at edges such as building outlines. The main focus of this paper is to investigate more advanced variational approaches such as TV-L1 and Huber models. Furthermore, we also assess the performance of variational models for fusing raw DEMs produced from data takes with different baseline configurations and heights of ambiguity. The results illustrate the high efficiency of variational models for TanDEM-X raw DEM fusion in comparison to WA. Using variational models could improve the DEM quality by up to 2 m, particularly in inner-city subsets. Comment: This is the pre-acceptance version; for the final version, please see the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing on IEEE Xplore.
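The WA baseline that the paper improves upon weights each raw DEM by its height error map. A minimal sketch of that step, assuming inverse-variance weighting over two hypothetical one-row raw DEM profiles (the specific values are illustrative, not TanDEM-X data):

```python
def fuse_wa(dems, sigmas):
    """Inverse-variance weighted average of raw DEM cell values,
    with weights taken from per-pixel height-error estimates."""
    fused = []
    for cells, errs in zip(zip(*dems), zip(*sigmas)):
        w = [1.0 / e ** 2 for e in errs]
        fused.append(sum(wi * c for wi, c in zip(w, cells)) / sum(w))
    return fused

# Two overlapping raw DEM profiles; the first is noisier at the first cell.
dem_a = [100.0, 102.0]
dem_b = [104.0, 102.0]
sig_a = [2.0, 1.0]   # per-pixel height error (m), e.g. from the interferometric processor
sig_b = [1.0, 1.0]
fused = fuse_wa([dem_a, dem_b], [sig_a, sig_b])
print(fused)  # [103.2, 102.0]
```

Because such per-pixel averaging ignores neighbouring pixels, it blurs building outlines; the variational models studied in the paper add edge-preserving regularization across pixels.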

    Remote Sensing

    This dual conception of remote sensing brought us to the idea of preparing two different books: in addition to the first book, which displays recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original and representative contributions in those areas.

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
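Occupancy grid mapping is mentioned as the global fusion step. The standard formulation updates each cell's log-odds of occupancy with every sensor hit or miss; a minimal sketch follows, where the sensor probabilities `p_hit` and `p_miss` are illustrative assumptions, not values from the thesis:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(log_odds, hit, p_hit=0.7, p_miss=0.4):
    """Bayesian log-odds update of one occupancy-grid cell for a lidar hit or miss."""
    return log_odds + (logit(p_hit) if hit else logit(p_miss))

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

cell = 0.0                      # prior: unknown (p = 0.5)
for _ in range(3):              # three consecutive lidar returns hit the cell
    cell = update_cell(cell, hit=True)
print(round(probability(cell), 3))  # 0.927
```

Working in log-odds makes each update a cheap addition and keeps repeated evidence from saturating the probability numerically.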

    Landscape scale mapping of tundra vegetation structure at ultra-high resolution using UAVs and computer vision

    Climate change has the strongest impact on high-latitude ecosystems that are adapted to cool climates. In order to better understand and predict the changes in tundra vegetation observed at large scales, as well as their feedbacks onto climate, it is necessary to look at what is happening at finer scales, even in individual plants. Technological developments over the past few decades have enabled the spread of cost-effective, light and small unmanned aerial vehicles (UAVs). 
As very high resolution data (pixel size <10 cm) become more widely available, the remote sensing methods used in environmental analysis are undergoing a paradigm shift, as algorithms and analyses based on machine vision and machine learning become more common. Harnessing the new methods is attractive because they allow flexible and highly automated data collection and the production of highly accurate remote sensing products from hard-to-reach areas such as the tundra. However, obtaining reliable results requires careful planning and testing of processing algorithms and parameters. This study examined how accurately variables derived from aerial images collected with an off-the-shelf digital camera can map vegetation structure at the landscape scale. In Kilpisjärvi, northern Fennoscandia, a total of ~10,000 aerial photographs covering an area of three hundred hectares were collected by drone. In addition, the dominant vascular plants and the vegetation height were determined at 1183 points in the area. The aerial images were processed into dense three-dimensional point clouds using the Structure from Motion (SfM) method, which is based on computer vision and digital photogrammetry. From the point clouds, a terrain model and a vegetation height model were interpolated. In addition, an image mosaic covering the entire area was produced. Based on these data, predictive variables were calculated and used together with the field reference data in Geographical Object-Based Image Analysis (GEOBIA). The filtered ground points reliably corresponded to the true ground elevation throughout the area, and the produced elevation models correlated strongly with the field reference data. The terrain model error was greatest in areas with tall vegetation. Changes in lighting conditions and vegetation during the aerial surveys posed challenges in both phases of the object-based analysis, segmentation and classification, but overall accuracy improved from 0.27 to 0.54 when variables describing topography, vegetation height and texture were added to the classifier and the number of target classes was reduced. Methods based on machine vision and machine learning can produce important information about the structure of tundra vegetation, particularly vegetation height, across a landscape. However, more research is needed to determine the best algorithms and parameters in a tundra environment, where environmental conditions change rapidly and vegetation is heterogeneous and mixed, causing differences between aerial images and additional difficulties in the analyses.
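The vegetation height model described above is, in essence, the per-pixel difference between the photogrammetric surface heights and the interpolated ground heights. A minimal sketch, with hypothetical elevation values (the clamping at zero is a common convention, assumed here rather than taken from the study):

```python
def canopy_height(dsm, dtm):
    """Vegetation height as surface model minus terrain model, clamped at zero."""
    return [round(max(s - t, 0.0), 2) for s, t in zip(dsm, dtm)]

# Photogrammetric surface heights vs interpolated ground heights (m), illustrative.
dsm = [502.1, 503.4, 501.0]
dtm = [501.8, 501.9, 501.2]
print(canopy_height(dsm, dtm))  # [0.3, 1.5, 0.0]
```

The last cell shows why ground-filtering errors matter: where the terrain model overshoots the surface, the height difference goes negative and the vegetation height is lost.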

    Remote sensing applications: an overview

    Remote Sensing (RS) refers to the science of identifying earth surface features and estimating their geo-biophysical properties using electromagnetic radiation as the medium of interaction. Spectral, spatial, temporal and polarization signatures are major characteristics of the sensor/target which facilitate target discrimination. Earth surface data as seen by the sensors in different wavelengths (reflected, scattered and/or emitted) are radiometrically and geometrically corrected before extraction of spectral information. RS data, with their synoptic view, repetitive coverage with calibrated sensors to detect changes, and observations at different resolutions, provide a better alternative for natural resources management than traditional methods. The Indian Earth Observation (EO) programme has been applications-driven, with national development as its prime motivation. From Bhaskara to Cartosat, India's EO capability has increased manifold. Improvements are not only in spatial, spectral, temporal and radiometric resolutions, but also in coverage and value-added products. Some of the major operational application themes in which India has extensively used remote sensing data are agriculture, forestry, water resources, land use, urban sprawl, geology, environment, coastal zone, marine resources, snow and glacier, disaster monitoring and mitigation, and infrastructure development. The paper reviews RS techniques and applications carried out using both optical and microwave sensors. It also analyses the gap areas and discusses future perspectives.
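Spectral signatures as a basis for target discrimination can be illustrated with the classic Normalized Difference Vegetation Index (NDVI), computed from near-infrared and red reflectances. The reflectance values below are illustrative, not from the paper:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: healthy vegetation reflects strongly
    in the near-infrared and absorbs red light, so it scores high; soil and water score low."""
    return (nir - red) / (nir + red)

print(round(ndvi(nir=0.45, red=0.05), 2))  # dense vegetation: 0.8
print(round(ndvi(nir=0.30, red=0.25), 2))  # bare soil: 0.09
```

The same differencing idea underlies many of the operational themes listed above, from crop monitoring to forest mapping.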

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available