4,633 research outputs found
Gaussian Process Decentralized Data Fusion Meets Transfer Learning in Large-Scale Distributed Cooperative Perception
This paper presents novel Gaussian process decentralized data fusion
algorithms exploiting the notion of agent-centric support sets for distributed
cooperative perception of large-scale environmental phenomena. To overcome the
limitations of scale in existing works, our proposed algorithms allow every
mobile sensing agent to choose a different support set and dynamically switch
to another during execution for encapsulating its own data into a local summary
that, perhaps surprisingly, can still be assimilated with the other agents'
local summaries (i.e., based on their current choices of support sets) into a
globally consistent summary to be used for predicting the phenomenon. To
achieve this, we propose a novel transfer learning mechanism for a team of
agents capable of sharing and transferring information encapsulated in a
summary based on a support set to that utilizing a different support set with
some loss that can be theoretically bounded and analyzed. To alleviate the
issue of information loss accumulating over multiple instances of transfer
learning, we propose a new information sharing mechanism to be incorporated
into our algorithms in order to achieve memory-efficient lazy transfer
learning. Empirical evaluation on real-world datasets shows that our algorithms
outperform the state-of-the-art methods.
Comment: 32nd AAAI Conference on Artificial Intelligence (AAAI 2018), Extended
version with proofs, 14 pages
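As a rough illustration of the general idea (not the paper's actual algorithm, and with all kernel choices, support sets, and noise levels assumed for the example), each agent can compress its observations into sufficient statistics on a shared support set, and those local summaries can then be assimilated into a single global summary for prediction, in the style of a subset-of-regressors sparse GP:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def local_summary(support, x, y, noise=0.1):
    # An agent compresses its own data (x, y) into sufficient statistics
    # on the support set: (K_sx K_xs / s^2, K_sx y / s^2).
    K_sx = rbf(support, x)
    return K_sx @ K_sx.T / noise**2, K_sx @ y / noise**2

def fused_predict(support, summaries, x_star):
    # Assimilate all agents' local summaries into one global summary and
    # predict at x_star (subset-of-regressors posterior mean).
    K_ss = rbf(support, support) + 1e-8 * np.eye(len(support))
    A = K_ss + sum(s[0] for s in summaries)
    b = sum(s[1] for s in summaries)
    return rbf(x_star, support) @ np.linalg.solve(A, b)

# Two agents observe disjoint halves of a sine field and fuse summaries.
support = np.linspace(0.0, 6.0, 15)
xs = np.linspace(0.0, 6.0, 60)
ys = np.sin(xs)
summaries = [local_summary(support, xs[:30], ys[:30]),
             local_summary(support, xs[30:], ys[30:])]
mu = fused_predict(support, summaries, np.array([3.0]))
```

Note that the summaries are additive, so fusion is order-independent; the paper's contribution lies in letting agents use *different* support sets and transferring summaries between them, which this sketch does not attempt.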
Robustness of large-scale stochastic matrices to localized perturbations
Upper bounds are derived on the total variation distance between the
invariant distributions of two stochastic matrices differing on a subset W of
rows. Such bounds depend on three parameters: the mixing time and the minimal
expected hitting time on W for the Markov chain associated to one of the
matrices; and the escape time from W for the Markov chain associated to the
other matrix. These results, obtained through coupling techniques, prove
particularly useful in scenarios where W is a small subset of the state space,
even if the difference between the two matrices is not small in any norm.
Several applications to large-scale network problems are discussed, including
robustness of Google's PageRank algorithm, distributed averaging and consensus
algorithms, and interacting particle systems.
Comment: 12 pages, 4 figures
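The quantity being bounded can be computed directly for small chains. The sketch below (with an arbitrary example chain, not one from the paper) builds a lazy random walk on a cycle and a copy perturbed only on the single row W = {0}, then compares their invariant distributions in total variation:

```python
import numpy as np

def stationary(P):
    # Invariant distribution of a row-stochastic matrix: the left
    # eigenvector of P for eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def tv_distance(p, q):
    # Total variation distance between two probability vectors.
    return 0.5 * np.abs(p - q).sum()

# Lazy random walk on a 6-cycle, plus a copy perturbed only on row 0.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
Q = P.copy()
Q[0] = 0.0
Q[0, 0] = 0.9          # perturbed row: state 0 becomes much stickier
Q[0, 1] = 0.1
d = tv_distance(stationary(P), stationary(Q))
```

The paper's bounds predict when d stays small as the chain grows, in terms of mixing, hitting, and escape times for W, even though P and Q differ substantially in norm on that row.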
A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions
In a general Hilbert framework, we consider continuous gradient-like
dynamical systems for constrained multiobjective optimization involving
non-smooth convex objective functions. Our approach follows the line of a
previous work that considered the case of convex differentiable objective
functions. Based on the Yosida regularization of the subdifferential operators
involved in the system, we obtain the existence of strong global trajectories.
We prove a descent property for each objective function, and the convergence of
trajectories to weak Pareto minima. This approach provides a dynamical
endogenous weighting of the objective functions. Applications are given to
cooperative games, inverse problems, and numerical multiobjective optimization.
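The key analytical device, Yosida regularization, is easy to illustrate in the single-objective case (the multiobjective dynamics of the paper are more involved; the function f(x) = |x| and all step sizes here are assumptions for the example). The Yosida approximation of the subdifferential is single-valued and Lipschitz, so the regularized gradient flow is well posed:

```python
import numpy as np

def prox_abs(x, lam):
    # Proximal operator of f(x) = |x| (soft-thresholding).
    return np.sign(x) * max(abs(x) - lam, 0.0)

def yosida(x, lam):
    # Yosida regularization of the subdifferential of |x|: single-valued
    # and Lipschitz, equal to the gradient of the Moreau envelope of f.
    return (x - prox_abs(x, lam)) / lam

# Explicit Euler discretization of the regularized flow x'(t) = -A_lam(x);
# the trajectory descends f and converges to the minimizer x = 0.
x, lam, h = 3.0, 0.5, 0.1
for _ in range(2000):
    x -= h * yosida(x, lam)
```

Here the nonsmooth kink at 0 causes no difficulty: the regularized field switches smoothly from the constant slope sign(x) to the linear pull x/lam near the minimizer.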
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
The goal of decentralized optimization over a network is to optimize a global
objective formed by a sum of local (possibly nonsmooth) convex functions using
only local computation and communication. It arises in various application
domains, including distributed tracking and localization, multi-agent
coordination, estimation in sensor networks, and large-scale optimization in
machine learning. We develop and analyze distributed algorithms based on dual
averaging of subgradients, and we provide sharp bounds on their convergence
rates as a function of the network size and topology. Our method of analysis
allows for a clear separation between the convergence of the optimization
algorithm itself and the effects of communication constraints arising from the
network structure. In particular, we show that the number of iterations
required by our algorithm scales inversely in the spectral gap of the network.
The sharpness of this prediction is confirmed both by theoretical lower bounds
and simulations for various networks. Our approach includes both the cases of
deterministic optimization and communication, as well as problems with
stochastic optimization and/or communication.
Comment: 40 pages, 4 figures
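A minimal sketch of distributed dual averaging, with an assumed toy problem and network: each node holds f_i(x) = (x - a_i)^2, so the global minimizer is the mean of a; nodes average their dual variables over the network, add their local gradient, and project back with step size 1/sqrt(t). (The complete-graph weight matrix, quadratic objectives, and iteration count are all choices for illustration, not from the paper.)

```python
import numpy as np

n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])   # node i holds f_i(x) = (x - a_i)^2
P = np.full((n, n), 1.0 / n)         # doubly stochastic weights (complete graph)
z = np.zeros(n)                      # dual variables (averaged subgradients)
x = np.zeros(n)                      # primal iterates
for t in range(1, 3001):
    g = 2.0 * (x - a)                # local gradient at each node's own iterate
    z = P @ z + g                    # consensus on duals, then add local gradient
    x = -z / np.sqrt(t)              # projection with psi(x) = x^2/2, step 1/sqrt(t)
```

On a sparser network, P would have a smaller spectral gap and, per the abstract's result, the number of iterations needed for this agreement would scale inversely in that gap.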
Show Me the Money: Dynamic Recommendations for Revenue Maximization
Recommender Systems (RS) play a vital role in applications such as e-commerce
and on-demand content streaming. Research on RS has mainly focused on the
customer perspective, i.e., accurate prediction of user preferences and
maximization of user utilities. As a result, most existing techniques are not
explicitly built for revenue maximization, the primary business goal of
enterprises. In this work, we explore and exploit a novel connection between RS
and the profitability of a business. As recommendations can be seen as an
information channel between a business and its customers, it is interesting and
important to investigate how to make strategic dynamic recommendations leading
to maximum possible revenue. To this end, we propose a novel model that takes
into account a variety of factors including prices, valuations, saturation
effects, and competition amongst products. Under this model, we study the
problem of finding revenue-maximizing recommendation strategies over a finite
time horizon. We show that this problem is NP-hard, but approximation
guarantees can be obtained for a slightly relaxed version, by establishing an
elegant connection to matroid theory. Given the prohibitively high complexity
of the approximation algorithm, we also design intelligent heuristics for the
original problem. Finally, we conduct extensive experiments on two real and
synthetic datasets and demonstrate the efficiency, scalability, and
effectiveness of our algorithms, and that they significantly outperform several
intuitive baselines.
Comment: Conference version published in PVLDB 7(14). To be presented at the
VLDB Conference 2015, in Hawaii. This version gives a detailed submodularity
proof
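To convey the flavor of such heuristics (this is an assumed toy model, not the paper's algorithm: prices, baseline purchase probabilities, and the geometric saturation factor are all made up for the example), a greedy strategy can repeatedly recommend the items with the highest expected marginal revenue, discounting items that have already been shown:

```python
import numpy as np

prices = np.array([10.0, 20.0, 5.0])     # item prices
base_prob = np.array([0.5, 0.2, 0.8])    # baseline purchase probabilities
decay = 0.5                              # saturation: each display halves interest
shown = np.zeros(3)                      # display counts per item

def recommend(k=1):
    # Greedy step: pick the k items with the highest expected revenue
    # price * saturated purchase probability, then update saturation state.
    prob = base_prob * decay ** shown
    score = prices * prob
    pick = np.argsort(score)[::-1][:k]
    shown[pick] += 1
    return pick, float(score[pick].sum())

first_pick, first_rev = recommend()      # item 0: 10.0 * 0.5 = 5.0 beats 4.0, 4.0
```

Saturation is what makes the horizon version nontrivial: greedily repeating the same item erodes its value, which is the kind of structure the paper exploits via its matroid connection.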