
    Vision Science and Technology at NASA: Results of a Workshop

    A broad review is given of vision science and technology within NASA. The subject is defined and its applications both within NASA and in the nation at large are noted. A survey of current NASA efforts is given, noting the strengths and weaknesses of the NASA program.

    A Global Human Settlement Layer from optical high resolution imagery - Concept and first results

    A general framework for processing high and very-high resolution imagery to create a Global Human Settlement Layer (GHSL) is presented, together with a discussion of the results of the first operational test of the production workflow. The test involved mapping 24.3 million square kilometres of the Earth's surface spread over four continents, corresponding to an estimated population of 1.3 billion people in 2010. The resolution of the input image data ranges from 0.5 to 10 metres, collected by a heterogeneous set of platforms including the SPOT (2 and 5), CBERS-2B, RapidEye (2 and 4), WorldView (1 and 2), GeoEye-1, QuickBird-2, and Ikonos-2 satellites, as well as airborne sensors. Several imaging modes were tested, including panchromatic, multispectral and pan-sharpened images. A new fully automatic image information extraction, generalization and mosaicking workflow is presented, based on multiscale textural and morphological image feature extraction. New image feature compression and optimization techniques are introduced, together with new learning and classification techniques that allow HR/VHR image data to be processed using low-resolution thematic layers as reference. A new systematic approach to quality control and validation, allowing global spatial and thematic consistency checking, is proposed and applied. The quality of the results is discussed by sensor, band, resolution, and eco-region. Critical points, lessons learned and next steps are highlighted. JRC.G.2-Global security and crisis management
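    The multiscale textural features at the heart of this kind of workflow can be illustrated with a minimal sketch: local variance maps computed at several window sizes and stacked into a feature cube. The function names and window radii below are illustrative, not the GHSL implementation.

```python
import numpy as np

def local_variance(img, radius):
    # Per-pixel variance in a (2r+1) x (2r+1) window: a crude
    # stand-in for one textural feature at one scale.
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            out[i, j] = win.var()
    return out

def multiscale_texture(img, radii=(1, 2)):
    # Stack variance maps at several window sizes into a feature cube,
    # one channel per scale.
    return np.stack([local_variance(img, r) for r in radii], axis=-1)
```

    A classifier would then consume one feature vector per pixel (or per object) drawn from this cube, alongside morphological features.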

    A Comparative Analysis of Phytovolume Estimation Methods Based on UAV-Photogrammetry and Multispectral Imagery in a Mediterranean Forest

    Management and control operations are crucial for preventing forest fires, especially in Mediterranean forest areas with dry climatic periods. One such operation is prescribed burning, for which the biomass fuel present in the controlled plot must be accurately estimated. The most widely used methods for estimating biomass are time-consuming and labour-intensive. Unmanned aerial vehicles (UAVs) carrying multispectral sensors can instead make accurate indirect measurements of terrain and vegetation morphology and their radiometric characteristics. Based on the products of a UAV-photogrammetric project, four phytovolume estimators were compared in a Mediterranean forest area, all obtained from the difference between a digital surface model (DSM) and a digital terrain model (DTM). The DSM was derived from a UAV-photogrammetric project based on the structure-from-motion algorithm. Four different methods for obtaining a DTM were used, based on an unclassified dense point cloud produced by the UAV-photogrammetric project (FFU), an unsupervised classified dense point cloud (FFC), a multispectral vegetation index (FMI), and a cloth simulation filter (FCS). Qualitative and quantitative comparisons determined the ability of the phytovolume estimators to detect vegetation and its occupied volume. The results show no significant differences in surface vegetation detection between any pairwise comparison of the four estimators at a 95% confidence level, but FMI presented the best kappa value (0.678) in an error matrix analysis against reference data obtained from photointerpretation and supervised classification. Concerning the accuracy of phytovolume estimation, only FFU and FFC differed by more than two standard deviations in a pairwise comparison, and FMI presented the best RMSE (12.3 m) when the estimators were compared to 768 observed data points grouped in four 500 m2 sample plots.
    FMI was the best of the four phytovolume estimators for low vegetation heights in a Mediterranean forest. The use of FMI based on UAV data provides accurate phytovolume estimations that can be applied to several environmental management activities, including wildfire prevention. Multitemporal phytovolume estimations based on FMI could help model the evolution of forest resources in a very realistic way.
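    The common core of the four estimators, differencing a DSM against a DTM and integrating the resulting canopy heights over the raster cells, can be sketched as follows. All grid values and the cell size below are hypothetical.

```python
import numpy as np

# Hypothetical 3x3 elevation grids in metres; real DSM/DTM rasters would
# come from the UAV-photogrammetric products described above.
dsm = np.array([[12.0, 11.5, 10.0],
                [11.0, 10.5,  9.5],
                [10.0,  9.8,  9.2]])
dtm = np.array([[10.0, 10.0,  9.5],
                [10.0,  9.5,  9.0],
                [ 9.5,  9.3,  9.0]])

cell_area = 0.25  # m^2 per cell (0.5 m ground resolution), an assumed value

# Canopy height model: vegetation height above ground, clipped at zero so
# DTM noise cannot produce negative heights.
chm = np.clip(dsm - dtm, 0.0, None)

# Phytovolume: height times cell footprint, summed over the plot.
phytovolume = float(np.sum(chm) * cell_area)
```

    The four estimators differ only in how the DTM fed into this difference is produced (unclassified cloud, classified cloud, vegetation index, or cloth simulation filter).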

    Urban tree classification using discrete-return LiDAR and an object-level local binary pattern algorithm

    © 2019 Informa UK Limited, trading as Taylor &amp; Francis Group. Urban trees have the potential to mitigate some of the harm brought about by rapid urbanization and population growth, as well as serious environmental degradation (e.g. soil erosion, carbon pollution and species extirpation), in cities. This paper presents a novel urban tree extraction modelling approach that uses discrete laser scanning point clouds and object-based textural analysis to (1) develop a model comprising four sub-models, (a) height-based split segmentation, (b) feature extraction, (c) texture analysis and (d) classification, and (2) apply this model to classify urban trees. The canopy height model is integrated with the object-level local binary pattern (LBP) algorithm to achieve high classification accuracy. The results of each sub-model reveal that classified urban tree heights ranged from 2.12 m (lowest) to 47.14 m (highest), while crown widths ranged from 2.55 m (lowest) to 22.5 m (highest). The results also indicate that the proposed urban tree modelling algorithm is effective for practical use.
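    The LBP step can be illustrated with a minimal 3x3 local binary pattern sketch. This is the generic pixel-level LBP operator, not the authors' object-level variant, and is shown only to make the texture feature concrete.

```python
import numpy as np

def lbp8(img):
    # Basic 3x3 local binary pattern: for each interior pixel, compare the
    # 8 neighbours against the centre and pack the results into one byte.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes
```

    In an object-level variant, a histogram of these codes would be computed per segmented object (rather than per pixel) and used as the texture feature for classification.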

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
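    Sparse coding as described, representing a signal with a linear combination of a few dictionary atoms, can be sketched with a small orthogonal matching pursuit routine (a standard greedy sparse solver; the function below is an illustrative toy, not the monograph's algorithms).

```python
import numpy as np

def omp(D, x, k):
    # Orthogonal matching pursuit: greedily pick up to k dictionary atoms,
    # refitting the coefficients by least squares after each pick.
    residual = x.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha = np.zeros(D.shape[1])
    alpha[support] = coeffs
    return alpha  # sparse code: at most k nonzero entries
```

    Dictionary learning then alternates between sparse coding steps like this one (with the dictionary fixed) and dictionary updates (with the codes fixed).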

    IMAGE-2006 Mosaic: Data Ingestion and Organisation v1.0

    This report details how the IMAGE-2006 data has been ingested and organised in view of creating the IMAGE-2006 mosaic. In particular, it details the method developed for merging the two coverages (delivered on a country basis) into a unique pan-European coverage. The concept of data and country regions of interest is introduced, and a method for compositing identical scenes originating from more than one country is detailed. The resulting reference coverage contains 3,533 unique scenes, starting from a total of 3,699 delivered scenes. While the number of received scenes matches the number reported by the DLR/Metria report, it is not possible to check whether the received scenes actually match the IMAGE-2006 data set since, to date, no master list is available. JRC.H.6-Digital Earth and Reference Data
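    The merging of per-country deliveries into a unique coverage can be sketched with a toy deduplication rule. The "first delivery wins" policy and the data shapes below are assumptions for illustration, not the report's actual compositing method.

```python
def merge_coverages(country_scenes):
    # Merge per-country scene deliveries into one pan-European coverage,
    # keeping each scene ID exactly once. The first country to deliver a
    # scene keeps it; duplicates from later deliveries are dropped.
    merged = {}
    for country, scenes in country_scenes.items():
        for scene_id in scenes:
            merged.setdefault(scene_id, country)
    return merged
```

    With 3,699 delivered scenes and 166 cross-border duplicates composited away, a rule of this kind would yield the 3,533 unique scenes of the reference coverage.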

    Doctor of Philosophy

    Balancing the trade-off between the spatial and temporal quality of interactive computer graphics imagery is one of the fundamental design challenges in the construction of rendering systems. Inexpensive interactive rendering hardware may deliver a high level of temporal performance if the level of spatial image quality is sufficiently constrained. In these cases, the spatial fidelity level is an independent parameter of the system and temporal performance is a dependent variable. The spatial quality parameter is selected for the system by the designer based on the anticipated graphics workload. Interactive ray tracing is one example; the algorithm is often selected for its ability to deliver a high level of spatial fidelity, and the relatively lower level of temporal performance is readily accepted. This dissertation proposes an algorithm that performs fine-grained adjustments to the trade-off between the spatial quality of images produced by an interactive renderer and the temporal performance or quality of the rendered image sequence. The approach first determines the minimum amount of sampling work necessary to achieve a certain fidelity level, and then allows the surplus capacity to be directed towards spatial or temporal fidelity improvement. The algorithm consists of an efficient parallel spatial and temporal adaptive rendering mechanism and a control optimization problem that adjusts the sampling rate based on a characterization of the rendered imagery and constraints on the capacity of the rendering system.
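    The surplus-capacity idea, giving each image region just enough sampling for the target fidelity level and spending the remaining budget where estimated error is highest, can be sketched as a simple proportional allocator. The per-tile error estimates and the allocation rule below are illustrative, not the dissertation's control optimization.

```python
def allocate_samples(errors, budget, min_spp=1):
    # Give every tile the minimum samples-per-pixel needed for the target
    # fidelity level, then spend the surplus frame budget on tiles in
    # proportion to their estimated error.
    spp = [min_spp] * len(errors)
    spare = budget - min_spp * len(errors)
    total = sum(errors)
    if spare > 0 and total > 0:
        for i, e in enumerate(errors):
            spp[i] += int(spare * e / total)
    return spp
```

    A renderer would re-run this allocation each frame as the error estimates change, which is what makes the spatial/temporal trade-off adjustable at a fine grain.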