Noise Response Data Reveal Novel Controllability Gramian for Nonlinear Network Dynamics
Control of nonlinear large-scale dynamical networks, e.g., collective
behavior of agents interacting via a scale-free connection topology, is a
central problem in many scientific and engineering fields. For the linear
version of this problem, the so-called controllability Gramian has played an
important role in quantifying how effectively the dynamical states are reachable
by a suitable driving input. In this paper, we first extend the notion of the
controllability Gramian to nonlinear dynamics in terms of the Gibbs
distribution. Next, we show that, when the networks are open to environmental
noise, the newly defined Gramian is equal to the covariance matrix associated
with randomly excited, but uncontrolled, dynamical state trajectories. This
fact theoretically justifies a simple Monte Carlo simulation that can extract
effectively controllable subdynamics in nonlinear complex networks. In
addition, the result provides novel insight into the relationship between
controllability and statistical mechanics.
Comment: 9 pages, 3 figures; to appear in Scientific Reports
Challenges of Big Data Analysis
Big Data bring new opportunities to modern society and challenges to data
scientists. On one hand, Big Data hold great promise for discovering subtle
population patterns and heterogeneities that are not possible with small-scale
data. On the other hand, the massive sample size and high dimensionality of Big
Data introduce unique computational and statistical challenges, including
scalability and storage bottlenecks, noise accumulation, spurious correlation,
incidental endogeneity, and measurement errors. These challenges are
distinctive and require new computational and statistical paradigms. This
article gives an overview of the salient features of Big Data and how these
features drive a paradigm change in statistical and computational methods as
well as computing architectures. We also provide various new perspectives on
Big Data analysis and computation. In particular, we emphasize the
viability of the sparsest solution in a high-confidence set and point out that
the exogeneity assumptions in most statistical methods for Big Data cannot be
validated due to incidental endogeneity, which can lead to wrong statistical
inferences and, consequently, wrong scientific conclusions.
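One of the listed challenges, spurious correlation, is easy to demonstrate: when the number of variables far exceeds the sample size, some predictors that are truly independent of the response will nonetheless appear strongly correlated with it by chance alone. A minimal sketch (the sample sizes n and p are arbitrary choices for illustration):

```python
import numpy as np

# Spurious correlation in high dimensions: p independent predictors,
# a response independent of all of them, yet the best-looking predictor
# shows a substantial sample correlation purely by chance.
rng = np.random.default_rng(42)
n, p = 60, 5000
X = rng.standard_normal((n, p))   # p mutually independent predictors
y = rng.standard_normal(n)        # response independent of every column of X

# Sample correlation of y with each predictor
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
corr = Xc.T @ yc / n

print(f"max |corr| among {p} independent predictors: {abs(corr).max():.2f}")
```

Although every true correlation is zero, the maximum sample correlation grows roughly like sqrt(2 log p / n), so naive variable screening on Big Data can select entirely irrelevant features.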
Error Metrics for Learning Reliable Manifolds from Streaming Data
Spectral dimensionality reduction is frequently used to identify
low-dimensional structure in high-dimensional data. However, learning
manifolds, especially from streaming data, is computationally and memory
expensive. In this paper, we argue that a stable manifold can be learned using
only a fraction of the stream, and the remaining stream can be mapped to the
manifold in a significantly less costly manner. Identifying the transition
point at which the manifold is stable is the key step. We present error metrics
that allow us to identify the transition point for a given stream by
quantitatively assessing the quality of a manifold learned using Isomap. We
further propose an efficient mapping algorithm, called S-Isomap, that can be
used to map new samples onto the stable manifold. We describe experiments on a
variety of data sets that show that the proposed approach is computationally
efficient without sacrificing accuracy.
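The split the abstract describes (an expensive spectral fit on an initial portion of the stream, then a cheap out-of-sample mapping for the rest) can be sketched with off-the-shelf tools. This is a simplified stand-in using scikit-learn's Isomap and its out-of-sample `transform`, not the paper's S-Isomap algorithm or its error metrics, and the transition point of 500 samples is fixed by hand rather than detected:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Simulate a stream of points from a known manifold
X, _ = make_swiss_roll(n_samples=2000, random_state=0)
batch, rest = X[:500], X[500:]    # "transition point" fixed at 500 for this sketch

iso = Isomap(n_neighbors=10, n_components=2)
Y_batch = iso.fit_transform(batch)   # expensive: geodesic distances + eigendecomposition
Y_rest = iso.transform(rest)         # cheap: embed the remaining stream out-of-sample

print(Y_batch.shape, Y_rest.shape)
```

The fit on the initial batch dominates the cost; mapping the remaining 1500 stream points reuses the learned embedding, which is the efficiency argument the abstract makes, with the paper's error metrics deciding where the batch can safely end.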