Consistent distribution-free K-sample and independence tests for univariate random variables
A popular approach for testing if two univariate random variables are
statistically independent consists of partitioning the sample space into bins,
and evaluating a test statistic on the binned data. The partition size matters,
and the optimal partition size is data dependent. While for detecting simple
relationships coarse partitions may be best, for detecting complex
relationships a great gain in power can be achieved by considering finer
partitions. We suggest novel consistent distribution-free tests that are based
on summation or maximization aggregation of scores over all partitions of a
fixed size. We show that our test statistics based on summation can serve as
good estimators of the mutual information. Moreover, we suggest regularized
tests that aggregate over all partition sizes, and prove those are consistent
too. We provide polynomial-time algorithms, which are critical for computing
the suggested test statistics efficiently. We show that the power of the
regularized tests is excellent compared to existing tests, and almost as
powerful as the tests based on the optimal (yet unknown in practice) partition
size, in simulations as well as on a real data example.
Comment: arXiv admin note: substantial text overlap with arXiv:1308.155
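The binning idea behind such statistics can be sketched concretely. The snippet below is a minimal illustration using a single quantile partition and a plug-in mutual-information score; it is not the authors' algorithm (which aggregates scores over all partitions of a fixed size), and the helper name `binned_mi` is ours.

```python
import numpy as np

def binned_mi(x, y, m):
    """Plug-in mutual information of (x, y) after quantile-binning each
    variable into m bins (a single fixed partition, for illustration)."""
    n = len(x)
    cuts_x = np.quantile(x, np.linspace(0, 1, m + 1)[1:-1])
    cuts_y = np.quantile(y, np.linspace(0, 1, m + 1)[1:-1])
    bx, by = np.searchsorted(cuts_x, x), np.searchsorted(cuts_y, y)
    joint = np.zeros((m, m))
    np.add.at(joint, (bx, by), 1)       # joint histogram of the binned data
    joint /= n
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y_dep = np.sin(3 * x) + 0.3 * rng.normal(size=2000)  # oscillatory dependence
y_ind = rng.normal(size=2000)                        # independent of x
# a finer partition is needed to resolve the oscillatory relationship
print(binned_mi(x, y_dep, 4), binned_mi(x, y_dep, 8), binned_mi(x, y_ind, 8))
```

Under independence the statistic is close to zero (up to a small positive bias), while under dependence it estimates the mutual information of the binned variables; the dependence of its power on the partition size is what the aggregated tests are designed to avoid.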
On aggregation operators of transitive similarity and dissimilarity relations
Similarity and dissimilarity are widely used concepts. One of the most studied matters is their combination, or aggregation. However, the transitivity property, although highly important and studied by many authors from different points of view, is often ignored when aggregating. We collect here some results on preserving transitivity when aggregating, intending to clarify the relationship between aggregation and transitivity and to make it easier to design aggregation operators that keep the transitivity property. Some examples of the utility of the results are also shown.
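One classical preservation result of this kind can be checked numerically: the pointwise minimum of min-transitive similarity relations is again min-transitive, whereas other aggregation operators need not preserve transitivity in general. A small sketch (the matrices below are invented for illustration, not taken from the paper):

```python
import numpy as np

def is_min_transitive(R, tol=1e-12):
    """R(a, c) >= min(R(a, b), R(b, c)) must hold for every triple."""
    n = len(R)
    return all(R[a, c] + tol >= min(R[a, b], R[b, c])
               for a in range(n) for b in range(n) for c in range(n))

# two min-transitive similarity relations on a three-element universe
R1 = np.array([[1.0, 0.8, 0.8],
               [0.8, 1.0, 0.9],
               [0.8, 0.9, 1.0]])
R2 = np.array([[1.0, 0.6, 0.5],
               [0.6, 1.0, 0.5],
               [0.5, 0.5, 1.0]])

agg = np.minimum(R1, R2)  # aggregation by pointwise minimum
print(is_min_transitive(R1), is_min_transitive(R2), is_min_transitive(agg))
```

The minimum works because taking pointwise minima can only tighten both sides of the transitivity inequality in the same way; aggregations such as the arithmetic mean carry no such guarantee.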
Coarsening of Sand Ripples in Mass Transfer Models with Extinction
Coarsening of sand ripples is studied in a one-dimensional stochastic model,
where neighboring ripples exchange mass with algebraic rates that scale as a power $\gamma$ of the ripple mass, and ripples of zero mass are removed from the system. For $\gamma < 0$ ripples vanish through rare fluctuations and the average ripple mass grows as $\langle m \rangle(t) \sim -\gamma^{-1} \ln t$. Temporal correlations decay algebraically, with an exponent that depends on the symmetry of the mass transfer, and
asymptotically the system is characterized by a product measure. The stationary
ripple mass distribution is obtained exactly. For $\gamma > 0$ ripple evolution
is linearly unstable, and the noise in the dynamics is irrelevant. For $\gamma = 0$ the problem is solved at the mean-field level, but the mean-field theory
does not adequately describe the full behavior of the coarsening. In
particular, it fails to account for the numerically observed universality with
respect to the initial ripple size distribution. The results are not restricted
to sand ripple evolution since the model can be mapped to zero range processes,
urn models, exclusion processes, and cluster-cluster aggregation.
Comment: 10 pages, 8 figures, RevTeX4, submitted to Phys. Rev.
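The flavour of such mass-transfer-with-extinction dynamics can be sketched with a minimal Monte Carlo simulation. All choices below (unit-mass exchanges, rate $m^\gamma$ with $\gamma = -1$, symmetric transfer, the system size) are ours for illustration, not the paper's parameters:

```python
import random

def coarsen(n=100, m0=10, gamma=-1.0, steps=30000, seed=1):
    """Ring of n ripples with mass m0 each; a ripple is picked with
    probability proportional to m**gamma and donates one unit of mass
    to a random neighbour; ripples reaching zero mass are removed."""
    rng = random.Random(seed)
    masses = [m0] * n
    for _ in range(steps):
        weights = [m ** gamma for m in masses]
        i = rng.choices(range(len(masses)), weights=weights)[0]
        j = (i + rng.choice([-1, 1])) % len(masses)
        masses[j] += 1
        masses[i] -= 1
        if masses[i] == 0:
            masses.pop(i)      # extinction: the gap closes
        if len(masses) < 2:
            break
    return masses

final = coarsen()
print(len(final), sum(final) / len(final))
```

Total mass is conserved while ripples disappear, so the mean ripple mass grows: the coarsening studied in the abstract.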
Coordination of leader-follower multi-agent system with time-varying objective function
This thesis aims to introduce a new framework for the distributed control of multi-agent systems with adjustable swarm control objectives. Our goal is twofold: 1) to provide an overview of how time-varying objectives in the control of autonomous systems may be applied to the distributed control of multi-agent systems with variable autonomy level, and 2) to introduce a framework that incorporates the proposed concept into fundamental swarm behaviors such as aggregation and leader tracking. Leader-follower multi-agent systems are considered in this study, and a general form of time-dependent artificial potential function is proposed to describe the varying objectives of the system in the case of complete information exchange. Using Lyapunov methods, the stability and boundedness of the agents' trajectories under single-order and higher-order dynamics are analyzed. Illustrative numerical simulations are presented to demonstrate the validity of our results. We then extend these results to multi-agent systems with limited information exchange and switching communication topology. The first steps toward the realization of an experimental framework have been made, with the ultimate goal of verifying the simulation results in practice.
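The idea of a time-varying artificial potential can be illustrated with a toy single-integrator simulation: followers descend the gradient of a potential whose leader-attraction gain changes with time. The potential form, all gains, and the leader position below are invented for illustration and are not the thesis' design:

```python
import numpy as np

def step(pos, leader, t, dt=0.02):
    """Single-integrator agents descend a time-varying artificial potential:
    quadratic attraction to the leader, whose gain grows with time, plus a
    short-range 1/r repulsion between agents to avoid collapse."""
    k_att = 1.0 + 0.5 * min(t, 10.0)          # time-varying objective weight
    grad = k_att * (pos - leader)             # gradient of the attraction term
    for i in range(len(pos)):
        diff = pos[i] - pos                   # vectors from every agent to i
        d2 = (diff ** 2).sum(axis=1) + 1e-9
        grad[i] -= 0.2 * (diff / d2[:, None]).sum(axis=0)   # repulsion
    return pos - dt * grad

rng = np.random.default_rng(3)
pos = rng.uniform(-5.0, 5.0, size=(8, 2))     # 8 followers in the plane
leader = np.array([2.0, -1.0])
t = 0.0
for _ in range(600):
    pos = step(pos, leader, t)
    t += 0.02
print(np.linalg.norm(pos - leader, axis=1).max())
```

The swarm aggregates into a small cluster around the leader; tightening or loosening the time-varying gain is a crude stand-in for the adjustable autonomy level discussed above.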
Quick inference for log Gaussian Cox processes with non-stationary underlying random fields
For point patterns observed in natura, spatial heterogeneity is more the rule
than the exception. In numerous applications, this can be mathematically
handled by the flexible class of log Gaussian Cox processes (LGCPs); in brief,
an LGCP is a Cox process driven by an underlying log Gaussian random field (log
GRF). This allows the representation of point aggregation, point vacuum and
intermediate situations, with more or less rapid transitions between these
different states depending on the properties of the GRF. Very often, the covariance
function of the GRF is assumed to be stationary. In this article, we give two
examples where the sizes (that is, the number of points) and the spatial
extents of point clusters are allowed to vary in space. To tackle such
features, we propose parametric and semiparametric models of non-stationary
LGCPs where the non-stationarity is included in both the mean function and the
covariance function of the GRF. Thus, in contrast to most other work on
inhomogeneous LGCPs, second-order intensity-reweighted stationarity is not
satisfied and the usual two step procedure for parameter estimation based on
e.g. composite likelihood does not easily apply. Instead we propose a fast
three step procedure based on composite likelihood. We apply our modelling and
estimation framework to analyse datasets dealing with fish aggregation in a
reservoir and with dispersal of biological particles.
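To fix ideas, an LGCP with a spatially varying mean can be simulated by discretising the window, sampling the log-GRF at the grid-cell centres via a Cholesky factor of its covariance matrix, and drawing Poisson counts per cell. The covariance model, the mean function, and all parameter values below are illustrative choices of ours, not the models fitted in the paper:

```python
import numpy as np

def simulate_lgcp(n_grid=30, scale=0.1, var=0.5, seed=7):
    """LGCP on [0,1]^2, discretised: exponential-covariance GRF on the
    grid-cell centres, intensity exp(mean + field), Poisson cell counts.
    The mean 3 + 2x increases to the right (a simple non-stationarity)."""
    rng = np.random.default_rng(seed)
    xs = (np.arange(n_grid) + 0.5) / n_grid
    X, Y = np.meshgrid(xs, xs)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    C = var * np.exp(-d / scale)                 # exponential covariance
    L = np.linalg.cholesky(C + 1e-8 * np.eye(len(C)))
    field = L @ rng.standard_normal(len(C))
    mean = 3.0 + 2.0 * pts[:, 0]                 # spatially varying mean
    counts = rng.poisson(np.exp(mean + field) / n_grid**2)
    return pts, counts

pts, counts = simulate_lgcp()
left = counts[pts[:, 0] < 0.5].sum()
right = counts[pts[:, 0] >= 0.5].sum()
print(left, right)   # the higher mean on the right yields more points
```

Making the covariance parameters (not only the mean) depend on location, as in the models above, is what breaks second-order intensity-reweighted stationarity and motivates the three-step composite-likelihood procedure.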
Decorrelation of Neutral Vector Variables: Theory and Applications
In this paper, we propose novel strategies for neutral vector variable
decorrelation. Two fundamental invertible transformations, namely serial
nonlinear transformation and parallel nonlinear transformation, are proposed to
carry out the decorrelation. For a neutral vector variable, which is not
multivariate Gaussian distributed, the conventional principal component
analysis (PCA) cannot yield mutually independent scalar variables. With the two
proposed transformations, a highly negatively correlated neutral vector can be
transformed to a set of mutually independent scalar variables with the same
degrees of freedom. We also evaluate the decorrelation performances for the
vectors generated from a single Dirichlet distribution and a mixture of
Dirichlet distributions. The mutual independence is verified with the distance
correlation measurement. The advantages of the proposed decorrelation
strategies are intensively studied and demonstrated with synthesized data and
practical application evaluations.
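For the Dirichlet case, a serial (stick-breaking style) transformation can be sketched concretely: dividing each component by the mass remaining before it turns a Dirichlet vector, which is neutral, into mutually independent Beta variables. The function name below is ours, and this is a sketch of the idea rather than the authors' implementation; note also that we check plain Pearson decorrelation here for brevity, whereas the paper verifies full independence with distance correlation.

```python
import numpy as np

def serial_transform(x):
    """u_k = x_k / (1 - x_1 - ... - x_{k-1}): each component divided by
    the mass remaining before it. For Dirichlet (hence neutral) vectors
    the u_k are independent Beta variables."""
    rem = 1.0 - np.cumsum(x[:, :-1], axis=1)
    rem = np.column_stack([np.ones(len(x)), rem])   # mass left before each k
    return x / rem

rng = np.random.default_rng(0)
x = rng.dirichlet([2.0, 3.0, 4.0, 1.0], size=20000)
u = serial_transform(x)[:, :-1]      # last coordinate is identically 1

corr_x = np.corrcoef(x, rowvar=False)
corr_u = np.corrcoef(u, rowvar=False)
print(corr_x[0, 1])                              # clearly negative
print(np.abs(corr_u - np.eye(3)).max())          # near zero off-diagonal
```

The transformed coordinates keep the same degrees of freedom as the original vector (one component of a length-K Dirichlet vector is redundant, and one u_k is identically 1).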
Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
Temporal information is important both for the applications of sensor
networks as well as their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and also present and analyze
algorithms for clock synchronization. Finally, we turn to the issue of gathering
relevant information, which is what sensor networks are designed to do. One needs to
study optimal strategies for in-network aggregation of data, in order to
reliably compute a composite function of sensor measurements, as well as the
complexity of doing so. We address the issue of how such computation can be
performed efficiently in a sensor network and the algorithms for doing so, for
some classes of functions.
Comment: 10 pages, 3 figures, Submitted to the Proceedings of the IEE
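The sharp connectivity threshold for such random geometric models can be illustrated numerically: for n nodes placed uniformly in the unit square, the critical communication range scales as sqrt(log n / (pi n)), and the probability of connectivity jumps from near 0 to near 1 around it. A Monte Carlo sketch (the constants, the fixed n, and the trial count are our illustrative choices):

```python
import numpy as np

def connected(pts, r):
    """Is the geometric graph (edge iff distance <= r) connected? BFS from 0."""
    n = len(pts)
    adj = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) <= r
    seen = np.zeros(n, dtype=bool)
    seen[0] = True
    stack = [0]
    while stack:
        v = stack.pop()
        for w in np.flatnonzero(adj[v] & ~seen):
            seen[w] = True
            stack.append(w)
    return bool(seen.all())

rng = np.random.default_rng(0)
n = 300
r_crit = np.sqrt(np.log(n) / (np.pi * n))   # critical range scaling
prob = {}
for f in (0.5, 1.0, 2.0):
    prob[f] = sum(connected(rng.uniform(size=(n, 2)), f * r_crit)
                  for _ in range(20)) / 20
print(prob)   # rises sharply from ~0 below the threshold to ~1 above it
```

Well below the critical range isolated nodes are almost certain, so the network is essentially never connected; well above it, connectivity holds in nearly every trial.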