1,991 research outputs found

    2020 NASA Technology Taxonomy

    Get PDF
    This document is an update (new photos used) of the PDF version of the 2020 NASA Technology Taxonomy that will be available to download on the OCT Public Website. The updated 2020 NASA Technology Taxonomy, or "technology dictionary", uses a technology discipline-based approach that realigns like technologies independently of their application within the NASA mission portfolio. This tool is meant to serve as a common technology discipline-based communication tool across the agency and with its partners in other government agencies, academia, industry, and across the world.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. While remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, it inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
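
    As a minimal illustration of one of the topics listed above, transfer learning (vi), the sketch below fine-tunes an ImageNet-pretrained CNN for a hypothetical remote sensing scene-classification task using PyTorch and torchvision. The number of classes, the dummy batch, and the hyperparameters are placeholder assumptions for illustration, not taken from the survey.

        # Hedged sketch: transfer learning for RS scene classification.
        # NUM_CLASSES and the dummy batch are hypothetical placeholders.
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 10  # hypothetical number of RS scene classes

        # Start from ImageNet weights and replace the classifier head.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False              # freeze the pretrained backbone
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One training step on a dummy batch of 224x224 RGB image tiles.
        images = torch.randn(4, 3, 224, 224)
        labels = torch.randint(0, NUM_CLASSES, (4,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()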

    Development of mobile agent framework in wireless sensor networks for multi-sensor collaborative processing

    Get PDF
    Recent advances in processor, memory and radio technology have enabled production of tiny, low-power, low-cost sensor nodes capable of sensing, communication and computation. Although a single node is resource constrained, with limited power, limited computation and limited communication bandwidth, these nodes deployed in large numbers form a new type of network called the wireless sensor network (WSN). One of the challenges brought by WSNs is finding an efficient computing paradigm to support the distributed nature of the applications built on these networks, considering the resource limitations of the sensor nodes. Collaborative processing among multiple sensor nodes is essential to generate fault-tolerant, reliable information about the densely sensed spatial phenomenon. The typical model used in distributed computing is the client/server model. However, this computing model is not appropriate in the context of sensor networks. This thesis develops an energy-efficient, scalable and real-time computing model for collaborative processing in sensor networks called the mobile agent computing paradigm. In this paradigm, instead of each sensor node sending data or results to a central server, as is typical in the client/server model, the information processing code is moved to the nodes using mobile agents. These agents carry the execution code and migrate from one node to another, integrating results at each node. This thesis develops the mobile agent framework on top of an energy-efficient routing protocol called directed diffusion. The mobile agent framework described has been mapped to a collaborative target classification application. This application has been tested in three field demos conducted at Twentynine Palms, CA; BAE Austin, TX; and BBN Waltham, MA.
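
    The sketch below illustrates the mobile agent idea in a highly simplified form: rather than every node shipping raw readings to a central server, a small agent carries its accumulated state from node to node and fuses each local reading in place. The readings, the itinerary, and the fusion rule (a running mean) are illustrative assumptions, not the protocol implemented in the thesis.

        # Hedged sketch of the mobile agent computing paradigm.
        # Readings and the fusion rule are made up for illustration.

        class SensorNode:
            def __init__(self, node_id, reading):
                self.node_id = node_id
                self.reading = reading   # local measurement held at this node

        class MobileAgent:
            """Carries its integration state while migrating along an itinerary."""
            def __init__(self):
                self.count = 0
                self.fused = 0.0

            def visit(self, node):
                # Integrate the local result in place; only the agent's small
                # state (count, fused) travels to the next node, not raw data.
                self.count += 1
                self.fused += (node.reading - self.fused) / self.count
                return self.fused

        nodes = [SensorNode(i, r) for i, r in enumerate([21.5, 22.1, 20.9, 23.0])]
        agent = MobileAgent()
        for node in nodes:   # itinerary: the route chosen by the routing layer
            agent.visit(node)
        print(f"fused estimate after {agent.count} hops: {agent.fused:.2f}")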

    Euler characteristic surfaces

    Get PDF
    We study the use of the Euler characteristic for multiparameter topological data analysis. The Euler characteristic is a classical, well-understood topological invariant that has appeared in numerous applications, including in the context of random fields. The goal of this paper is to extend the use of the Euler characteristic to higher-dimensional parameter spaces. While topological data analysis of higher-dimensional parameter spaces using stronger invariants such as homology continues to be the subject of intense research, the Euler characteristic is more manageable theoretically and computationally, and this analysis can be seen as an important intermediary step in multi-parameter topological data analysis. We show the usefulness of the techniques using artificially generated examples and a real-world application of detecting diabetic retinopathy in retinal images.
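
    As a toy illustration of the single-parameter case, the sketch below computes an Euler characteristic curve for a grayscale image by thresholding it at increasing levels and evaluating chi = V - E + F on the cubical complex of foreground pixels; the paper's contribution extends this idea to surfaces over two or more filtration parameters. The random test image and the thresholds are placeholders, not data from the paper.

        # Hedged sketch: Euler characteristic curve of an image filtration.
        import numpy as np

        def euler_characteristic(binary):
            """chi = V - E + F of the 2D cubical complex of True pixels."""
            vertices, edges, faces = set(), set(), 0
            rows, cols = binary.shape
            for i in range(rows):
                for j in range(cols):
                    if not binary[i, j]:
                        continue
                    faces += 1
                    # corners of pixel (i, j)
                    for v in [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]:
                        vertices.add(v)
                    # the four bounding edges, stored with a consistent vertex order
                    edges.update({
                        ((i, j), (i, j + 1)), ((i + 1, j), (i + 1, j + 1)),
                        ((i, j), (i + 1, j)), ((i, j + 1), (i + 1, j + 1)),
                    })
            return len(vertices) - len(edges) + faces

        image = np.random.rand(64, 64)                    # stand-in grayscale image
        thresholds = np.linspace(0.0, 1.0, 11)
        curve = [euler_characteristic(image >= t) for t in thresholds]
        print(list(zip(thresholds.round(1), curve)))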

    From Images to Hydrologic Networks - Understanding the Arctic Landscape with Graphs

    Get PDF
    Remote sensing-based Earth Observation plays an important role in assessing environmental changes throughout our planet. As an image-heavy domain, the evaluation of the data strongly focuses on statistical and pixel-based spatial analysis methods. However, considering the complexity of our Earth system, there are environmental structures and dependencies that cannot be accurately described with these traditional image analysis approaches. One example of such a limitation is the representation of (spatial) networks and their characteristics. In this study, we therefore propose a computer vision approach that enables the representation of semantic information gained from images as graphs. As an example, we investigate digital terrain models of Arctic permafrost landscapes with their very characteristic polygonal patterned ground. These regular patterns, which are clearly visible in high-resolution image and elevation data, are formed by subsurface ice bodies that are very vulnerable to rising temperatures in a warming Arctic. Observing these networks’ topologies and metrics in space and time with graph analysis thus allows insights into the landscape’s complex geomorphology, hydrology, and ecology and therefore helps to quantify how they interact with climate change. We show that results extracted with this analytical and highly automated approach are in line with those gathered from other manual studies or from manual validation. Thus, with this approach, we introduce a method that, for the first time, enables upscaling of such terrain and network analysis to potentially pan-Arctic scales, where collecting in-situ field data is severely limited.
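
    A minimal sketch of the image-to-graph step is given below: once junctions and connecting troughs of the polygonal network have been extracted from the terrain model, they can be represented as a weighted graph whose topological metrics are tracked in space and time. The coordinates and edges here are invented placeholders, not data from the study.

        # Hedged sketch: representing an extracted trough network as a graph.
        import networkx as nx

        # Hypothetical junctions (x, y in metres) and troughs between them.
        junctions = {0: (0.0, 0.0), 1: (12.5, 1.0), 2: (11.8, 13.2),
                     3: (-0.5, 12.0), 4: (25.0, 12.4)}
        troughs = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (2, 4)]

        G = nx.Graph()
        for node_id, (x, y) in junctions.items():
            G.add_node(node_id, pos=(x, y))
        for u, v in troughs:
            # weight each edge by the Euclidean trough length
            (x1, y1), (x2, y2) = junctions[u], junctions[v]
            G.add_edge(u, v, length=((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)

        # Graph metrics that can be compared across space and time.
        print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
        print("mean degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
        print("independent cycles:",
              G.number_of_edges() - G.number_of_nodes() + nx.number_connected_components(G))
        print("total trough length [m]:",
              round(sum(d["length"] for _, _, d in G.edges(data=True)), 1))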

    AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR

    Get PDF
    Traditional forest management relies on small field samples and interpretation of aerial photography, which not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from the LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, showing a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble features. In conclusion, the methods developed are steps toward remote, accurate quantification of large natural forests at the individual tree level.
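
    The sketch below illustrates the vertical stratification idea on a synthetic point cloud: bin normalized point heights, flag sparsely populated bins between dense vertical layers, and split the cloud into overstory and understory at a height inside that gap. The synthetic heights, bin width, and gap threshold are illustrative assumptions rather than the dissertation's calibrated parameters.

        # Hedged sketch: splitting a point cloud into canopy layers by height gaps.
        import numpy as np

        rng = np.random.default_rng(0)
        # Fake normalized heights (m above ground): an understory layer around 5 m
        # and an overstory layer around 22 m.
        heights = np.concatenate([rng.normal(5, 1.5, 2000), rng.normal(22, 3.0, 6000)])
        heights = heights[heights > 0]

        bin_size = 1.0                                   # vertical bin width in metres
        bins = np.arange(0, heights.max() + bin_size, bin_size)
        counts, _ = np.histogram(heights, bins=bins)

        # A "gap" is a bin whose point count falls well below the densest bin.
        gap_mask = counts < 0.05 * counts.max()
        gap_bins = np.flatnonzero(gap_mask[1:-1]) + 1    # ignore the lowest/highest bin
        # Rough heuristic: split at a bin from the middle of the flagged gaps.
        split_height = bins[gap_bins[len(gap_bins) // 2]] if gap_bins.size else None

        if split_height is not None:
            overstory = heights[heights >= split_height]
            understory = heights[heights < split_height]
            print(f"split at ~{split_height:.1f} m: "
                  f"{overstory.size} overstory pts, {understory.size} understory pts")
        else:
            print("no clear vertical gap found; treat as a single canopy layer")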