Learning Action Maps of Large Environments via First-Person Vision
When people observe and interact with physical spaces, they are able to
associate functionality to regions in the environment. Our goal is to automate
dense functional understanding of large spaces by leveraging sparse activity
demonstrations recorded from an ego-centric viewpoint. The method we describe
enables functionality estimation in large scenes where people have behaved, as
well as novel scenes where no behaviors are observed. Our method learns and
predicts "Action Maps", which encode the ability for a user to perform
activities at various locations. By using an egocentric camera to
observe human activities, our method scales with the size of the scene without
the need to mount multiple static surveillance cameras, and it is well-suited
to the task of observing activities up close. We demonstrate that by capturing
appearance-based attributes of the environment and associating these attributes
with activity demonstrations, our proposed mathematical framework allows for
the prediction of Action Maps in new environments. Additionally, we offer a
preliminary glance of the applicability of Action Maps by demonstrating a
proof-of-concept application in which they are used in concert with activity
detections to perform localization. Comment: To appear at CVPR 201
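The prediction step described above can be sketched as a simple regularized regression from appearance attributes to activity affordances. This is only an illustrative toy (random features, ridge regression), not the paper's actual mathematical framework; all names and dimensions here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each scene location has an appearance feature vector,
# and a sparse subset of locations carries activity demonstrations.
n_locations, n_features, n_activities = 200, 16, 3
X = rng.normal(size=(n_locations, n_features))        # appearance attributes
W_true = rng.normal(size=(n_features, n_activities))  # unknown mapping
Y = (X @ W_true > 0.5).astype(float)                  # "can this activity happen here?"

# Only a few locations are observed via egocentric activity demonstrations.
observed = rng.choice(n_locations, size=30, replace=False)

# Ridge regression from appearance to activity affordance on observed locations.
lam = 1.0
Xo, Yo = X[observed], Y[observed]
W = np.linalg.solve(Xo.T @ Xo + lam * np.eye(n_features), Xo.T @ Yo)

# Dense "Action Map": a predicted affordance score for every location x activity.
action_map = X @ W
```

Because the mapping is learned from appearance attributes rather than from the specific scene, the same `W` can be applied to a novel scene's features to predict an Action Map where no behavior was ever observed.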
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images
In hyperspectral remote sensing data mining, it is important to take into
account both spectral and spatial information, such as the spectral
signature, texture features, and morphological properties, to improve
performance, e.g., image classification accuracy. From a feature
representation point of view, a natural approach to this situation is to
concatenate the spectral and spatial features into a single, high-dimensional
vector and then apply a dimension reduction technique directly to that
concatenated vector before feeding it into the subsequent classifier. However,
multiple features from different domains have different physical meanings and
statistical properties, so such concatenation does not efficiently exploit the
complementary properties among the features, which should benefit feature
discriminability. Furthermore, it is also difficult to interpret the
transformed results of the concatenated vector. Consequently, finding a
physically meaningful, consensus low-dimensional feature representation of the
original multiple features remains a challenging task. To address these
issues, we propose a novel feature learning framework, the simultaneous
spectral-spatial feature selection and extraction algorithm, for
spectral-spatial feature representation and classification of hyperspectral
images. Specifically, the proposed method learns a latent low-dimensional
subspace by projecting the spectral-spatial features into a common feature
space, where the complementary information is effectively exploited and,
simultaneously, only the most significant original features are transformed.
Encouraging experimental results on three publicly available hyperspectral
remote sensing datasets confirm that the proposed method is effective and
efficient.
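The core idea, projecting concatenated spectral-spatial features into a common subspace while zeroing out insignificant original features, can be illustrated with a row-sparse (l2,1-regularized) projection learned by proximal gradient descent. This is a minimal sketch on synthetic data, not the authors' actual objective or algorithm; all sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: spectral + spatial features concatenated per pixel.
n_pixels, n_spectral, n_spatial, n_classes = 300, 20, 10, 4
X = rng.normal(size=(n_pixels, n_spectral + n_spatial))
Y = np.eye(n_classes)[rng.integers(0, n_classes, n_pixels)]  # one-hot labels

# Joint selection + extraction sketch: learn projection W with an l2,1 penalty,
# so whole rows of W (i.e., original features) are driven to zero (selection)
# while the surviving rows define a low-dimensional embedding (extraction).
lam, step = 2.0, 1e-3
W = np.zeros((X.shape[1], n_classes))
for _ in range(500):
    G = X.T @ (X @ W - Y)                  # gradient of the fitting term
    W -= step * G
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    W *= shrink                            # proximal step for the l2,1 norm

selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)  # surviving features
Z = X @ W                                  # shared low-dimensional representation
```

The l2,1 penalty acts on whole rows of `W`, which is what keeps the result interpretable: a zeroed row means the corresponding original spectral or spatial feature was judged insignificant and excluded from the learned subspace.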
A REVIEW ON MULTIPLE-FEATURE-BASED ADAPTIVE SPARSE REPRESENTATION (MFASR) AND OTHER CLASSIFICATION TYPES
A new technique, multiple-feature-based adaptive sparse representation (MFASR), has been demonstrated for hyperspectral image (HSI) classification. The method involves four main stages. First, four different features are extracted to reflect the spectral and spatial information of the original hyperspectral image. Second, a shape-adaptive (SA) spatial region is obtained for each pixel. Third, a sparse representation algorithm is applied to obtain the sparse coefficient matrices for each shape-adaptive region under the multiple features. Finally, for each test pixel, the class label is determined from the obtained coefficients. MFASR achieves much better classification results than other classifiers in both quantitative and qualitative terms. MFASR benefits from the strong correlations among the different extracted features, making effective use of those features and of adaptive sparse representation; as a result, very high classification performance is achieved.
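The residual-based labeling in the final stage can be illustrated with a simplified classifier: represent a test pixel over each class's sub-dictionary and assign the class with the smallest reconstruction residual. This sketch uses a ridge-regularized (collaborative) representation with a single synthetic feature rather than MFASR's adaptive sparse coding over multiple features; all dictionaries and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dictionaries: training pixels grouped by class
# (3 classes, 20 atoms each, 50-dimensional features).
n_dim, atoms_per_class, n_classes = 50, 20, 3
dicts = [rng.normal(size=(n_dim, atoms_per_class)) for _ in range(n_classes)]

def classify(y, lam=0.1):
    """Represent y over each class sub-dictionary (ridge-regularized least
    squares) and return the class with the smallest reconstruction residual."""
    residuals = []
    for D in dicts:
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
        residuals.append(np.linalg.norm(y - D @ alpha))
    return int(np.argmin(residuals))

# A pixel synthesized from class-1 atoms is reconstructed well by class 1's
# sub-dictionary and poorly by the others, so it receives label 1.
y = dicts[1] @ rng.normal(size=atoms_per_class)
label = classify(y)
```

MFASR extends this idea by coding several features jointly with adaptive sparsity over a shape-adaptive neighborhood, but the decision rule, smallest class-wise residual wins, is the same in spirit.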
Fire and Fuels: Vegetation Change Over Time in the Zuni Mountains, New Mexico
The Zuni Mountains are a region that has been dramatically changed by human interference. Anthropogenically, fire suppression practices have allowed a buildup of fuels and caused a change in the fire-adapted ponderosa pine ecosystem such that the new ecosystem now incorporates many fire-intolerant species. As a result, the low-severity fires that the ecosystem once depended on to regenerate the forest are much reduced, and these low-severity fires are now replaced by crown-level infernos that threaten the forest and nearby towns. In order to combat these effects, land managers are implementing fuel reduction practices and are striving to better understand the local ecosystem.
In this study, a predictive fire spread model (FARSITE) was implemented to predict the spatio-temporal distribution of fire in the Zuni Mountains based on change in the vegetation types that are most prone to fire. Using Landsat imagery and historical fire spread data from 2001 to 2014, the following research questions were investigated: (1) What variables are responsible for fire spread in the Zuni Mountains, New Mexico? (2) Which areas are prone to destructive, canopy-level fires? and (3) How have the fuel model types that are most conducive to fire spread changed in the past twenty years? The use of spatial modeling and remote sensing to understand the interaction of meteorological variables and vegetation in predicting fire spread in this region is a novel approach. This study showed that (i) fires are more likely to occur in the valleys and high-elevation grassland areas of the Zuni Mountains, (ii) certain vegetation types in the area, including grass and shrub lands, pose a greater risk of canopy fire than others, and (iii) these vegetation types have changed in the past sixteen years.
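FARSITE itself propagates fire fronts using Huygens' principle over real fuel models; a toy cellular automaton conveys the underlying intuition that fuel type controls spread, for instance how a fuel break produced by fuel reduction treatment halts a fire. Everything here (grid, ignition probabilities, fuel values) is illustrative and is not FARSITE.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy fuel grid: fuel[i, j] in [0, 1] is the probability that fire entering
# this cell ignites it. A zero-fuel column models a treated fuel break.
n = 20
fuel = np.full((n, n), 0.8)
fuel[:, 10] = 0.0                 # fuel break (e.g., a thinned corridor)

burning = np.zeros((n, n), dtype=bool)
burning[10, 2] = True             # ignition point west of the break
burned = burning.copy()

for _ in range(30):
    # Fire spreads to the 4-neighborhood of currently burning cells.
    frontier = np.zeros_like(burning)
    frontier[1:, :] |= burning[:-1, :]
    frontier[:-1, :] |= burning[1:, :]
    frontier[:, 1:] |= burning[:, :-1]
    frontier[:, :-1] |= burning[:, 1:]
    ignite = frontier & ~burned & (rng.random((n, n)) < fuel)
    burned |= ignite
    burning = ignite

left_burned = int(burned[:, :10].sum())    # west of the fuel break
right_burned = int(burned[:, 11:].sum())   # east of the fuel break
```

In this sketch the fire can spread freely through the high-fuel cells on the ignition side but can never cross the zero-fuel column, which is the qualitative behavior fuel-reduction treatments aim for.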
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools, and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial, and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL models. Comment:
64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Multi-task deep learning for large-scale building detail extraction from high-resolution satellite imagery
Understanding urban dynamics and promoting sustainable development require
comprehensive insights into buildings. While geospatial artificial
intelligence has advanced the extraction of such details from Earth
observational data, existing methods often suffer from computational
inefficiencies and inconsistencies when compiling unified building-related
datasets for practical applications. To bridge this gap, we introduce the
Multi-task Building Refiner (MT-BR), an adaptable neural network tailored for
simultaneous extraction of spatial and attributional building details from
high-resolution satellite imagery, exemplified by building rooftops, urban
functional types, and roof architectural types. Notably, MT-BR can be
fine-tuned to incorporate additional building details, extending its
applicability. For large-scale applications, we devise a novel spatial sampling
scheme that strategically selects limited but representative image samples.
This process optimizes both the spatial distribution of samples and the urban
environmental characteristics they contain, thus enhancing extraction
effectiveness while curtailing data preparation expenditures. We further
enhance MT-BR's predictive performance and generalization capabilities through
the integration of advanced augmentation techniques. Our quantitative results
highlight the efficacy of the proposed methods. Specifically, networks trained
with datasets curated via our sampling method demonstrate improved predictive
accuracy relative to those using alternative sampling approaches, with no
alterations to network architecture. Moreover, MT-BR consistently outperforms
other state-of-the-art methods in extracting building details across various
metrics. The real-world practicality is also demonstrated in an application
across Shanghai, generating a unified dataset that encompasses both the spatial
and attributional details of buildings.
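The spatial sampling scheme, selecting a limited but representative subset of image tiles, can be sketched as clustering candidate tiles by their descriptors and keeping one tile per cluster. This is an illustration with plain k-means on synthetic descriptors; the paper's actual scheme and descriptor definitions may well differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical candidate tiles, each described by a vector combining spatial
# position with urban-environment characteristics (sizes are illustrative).
n_tiles, n_desc, k = 500, 6, 12
descriptors = rng.normal(size=(n_tiles, n_desc))

# Plain k-means (Lloyd's algorithm) over the tile descriptors.
centroids = descriptors[rng.choice(n_tiles, size=k, replace=False)].copy()
for _ in range(20):
    d = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    for j in range(k):
        members = descriptors[labels == j]
        if len(members):
            centroids[j] = members.mean(axis=0)

# Keep one representative tile per cluster: the member nearest its centroid.
d = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
labels = d.argmin(axis=1)
representatives = []
for j in range(k):
    members = np.flatnonzero(labels == j)
    if members.size:
        representatives.append(members[d[members, j].argmin()])
representatives = np.array(representatives)
```

Because the representatives cover distinct regions of descriptor space, a network trained on them sees a spread of both locations and urban conditions, which is the stated goal of reducing labeling cost without sacrificing coverage.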