A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Although remote sensing (RS)
poses a number of unique challenges, primarily related to sensors and
applications, it inevitably draws on many of the same theories as CV, e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS, namely theories, tools, and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL models.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote
Sensing.
Online Tensor Methods for Learning Latent Variable Models
We introduce an online tensor decomposition based approach for two latent
variable modeling problems namely, (1) community detection, in which we learn
the latent communities that the social actors in social networks belong to, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent
(SGD). We optimize the multilinear operations within SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
optimized algorithms for two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to state-of-the-art algorithms such as the
variational method, and report a gain in accuracy and a speed-up of several
orders of magnitude in execution time.
Comment: JMLR 201
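The entry-wise SGD idea, updating factor rows without materializing the full tensor in any update, can be sketched on a toy CP decomposition. The sizes, learning rate, and dense synthetic tensor below are illustrative assumptions, not the paper's actual moment-tensor setup:

```python
import numpy as np

rng = np.random.default_rng(0)
I, R = 10, 3  # tensor side length and CP rank (toy values)

# Ground-truth factors of a rank-R third-order tensor T[i,j,k] = sum_r A[i,r]B[j,r]C[k,r]
A0, B0, C0 = (rng.standard_normal((I, R)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

# Randomly initialised factors learned by SGD on sampled entries; each update
# touches only three factor rows, never the full tensor.
A, B, C = (0.1 * rng.standard_normal((I, R)) for _ in range(3))
lr = 0.01
for step in range(60000):
    i, j, k = rng.integers(0, I, size=3)
    pred = np.sum(A[i] * B[j] * C[k])      # multilinear form, no tensor formed
    err = pred - T[i, j, k]
    gA, gB, gC = err * B[j] * C[k], err * A[i] * C[k], err * A[i] * B[j]
    A[i] -= lr * gA
    B[j] -= lr * gB
    C[k] -= lr * gC

approx = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(approx - T) / np.linalg.norm(T)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The per-entry updates are independent given the sampled indices, which is the property a GPU (SIMD) or sparse CPU implementation can exploit.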
Using Quantum Computers for Quantum Simulation
Numerical simulation of quantum systems is crucial to further our
understanding of natural phenomena. Many systems of key interest and
importance, in areas such as superconducting materials and quantum chemistry,
are thought to be described by models that we cannot solve with sufficient
accuracy, either analytically or numerically with classical computers. Using
a quantum computer to simulate such quantum systems has been viewed as a key
application of quantum computation from the very beginning of the field in the
1980s. Moreover, useful results beyond the reach of classical computation are
expected to be accessible with fewer than a hundred qubits, making quantum
simulation potentially one of the earliest practical applications of quantum
computers. In this paper we survey the theoretical and experimental development
of quantum simulation using quantum computers, from the first ideas to the
intense research efforts currently underway.
Comment: 43 pages, 136 references, review article. v2: major revisions in
response to referee comments; v3: significant revisions, identical to the
published version apart from format. The arXiv version has a table of contents
and references in alphabetical order.
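A standard route to simulating time evolution e^{-iHt} on a quantum computer is Trotterization, splitting H into easily exponentiated terms. A minimal classical sketch, using a toy two-term Hamiltonian H = X + Z (an illustrative assumption, not an example from the survey):

```python
import numpy as np

# Pauli matrices: a toy Hamiltonian H = X + Z whose two terms do not commute.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = X + Z

def expm_herm(M, t):
    """exp(-i t M) for a Hermitian matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

t, n = 1.0, 100  # total evolution time and number of Trotter steps
exact = expm_herm(H, t)

# First-order Trotter: (e^{-iXt/n} e^{-iZt/n})^n approximates e^{-iHt},
# with error shrinking like t^2/n.
step = expm_herm(X, t / n) @ expm_herm(Z, t / n)
trotter = np.linalg.matrix_power(step, n)

err = np.linalg.norm(trotter - exact)
print(f"Trotter error with n={n}: {err:.4f}")
```

On a quantum device each small-step factor would be compiled to gates; the classical matrices here only illustrate the approximation and its error scaling.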
Activity recognition from videos with parallel hypergraph matching on GPUs
In this paper, we propose a method for activity recognition from videos based
on sparse local features and hypergraph matching. We benefit from special
properties of the temporal domain in the data to derive a sequential and fast
graph matching algorithm for GPUs.
Traditionally, graphs and hypergraphs are frequently used to recognize
complex and often non-rigid patterns in computer vision, either through graph
matching or point-set matching with graphs. Most formulations resort to the
minimization of a difficult discrete energy function mixing geometric or
structural terms with data-attached terms involving appearance features.
Traditional methods solve this minimization problem approximately, for instance
with spectral techniques.
In this work, instead of solving the problem approximately, the exact
solution for the optimal assignment is calculated in parallel on GPUs. The
graphical structure is simplified and regularized, which allows us to derive an
efficient recursive minimization algorithm. The algorithm distributes
subproblems over the calculation units of a GPU, which solves them in parallel,
allowing the system to run faster than real time on medium-end GPUs.
Chaos in computer performance
Modern computer microprocessors are composed of hundreds of millions of
transistors that interact through intricate protocols. Their performance during
program execution may be highly variable and exhibit aperiodic oscillations. In
this paper, we apply current nonlinear time series analysis techniques to the
performance of modern microprocessors during the execution of prototypical
programs. Our results provide strong evidence that the highly variable
performance dynamics observed during the execution of several programs display
low-dimensional deterministic chaos, with sensitivity to initial conditions
comparable to textbook models. Taken together, these results show that the
instantaneous performance of modern microprocessors constitutes a complex (or
at least complicated) system and would benefit from analysis with modern tools
of nonlinear and complexity science.
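Sensitivity to initial conditions is conventionally quantified by the largest Lyapunov exponent. A minimal sketch on one of the textbook models the abstract alludes to (the logistic map at r = 4, chosen here for illustration, with known exponent ln 2), not on the microprocessor data itself:

```python
import math

# Logistic map x -> r x (1 - x) at r = 4: a textbook chaotic system whose
# largest Lyapunov exponent is ln 2.
r, x = 4.0, 0.3
n_transient, n = 1000, 100000
for _ in range(n_transient):       # discard the transient
    x = r * x * (1 - x)

# Estimate the exponent as the time average of log |f'(x)|, the local
# stretching rate along the trajectory (f'(x) = r (1 - 2x)).
acc = 0.0
for _ in range(n):
    acc += math.log(abs(r * (1 - 2 * x)))
    x = r * x * (1 - x)
lyap = acc / n
print(f"estimated Lyapunov exponent: {lyap:.3f} (ln 2 = {math.log(2):.3f})")
```

For measured time series such as performance traces, where the map is unknown, the exponent would instead be estimated from divergence of nearby delay-embedded trajectories.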