Distributed Computing with Adaptive Heuristics
We use ideas from distributed computing to study dynamic environments in
which computational nodes, or decision makers, follow adaptive heuristics (Hart
2005), i.e., simple and unsophisticated rules of behavior, e.g., repeatedly
"best replying" to others' actions, and minimizing "regret", that have been
extensively studied in game theory and economics. We explore when convergence
of such simple dynamics to an equilibrium is guaranteed in asynchronous
computational environments, where nodes can act at any time. Our research
agenda, distributed computing with adaptive heuristics, lies on the borderline
of computer science (including distributed computing and learning) and game
theory (including game dynamics and adaptive heuristics). We exhibit a general
non-termination result for a broad class of heuristics with bounded
recall---that is, simple rules of behavior that depend only on recent history
of interaction between nodes. We consider implications of our result across a
wide variety of interesting and timely applications: game theory, circuit
design, social networks, routing and congestion control. We also study the
computational and communication complexity of asynchronous dynamics and present
some basic observations regarding the effects of asynchrony on no-regret
dynamics. We believe that our work opens a new avenue for research in both
distributed computing and game theory.
Comment: 36 pages, four figures. Expands both technical results and discussion
of v1. Revised version will appear in the proceedings of Innovations in
Computer Science 201
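The best-reply dynamics mentioned above can be illustrated with a minimal simulation. The sketch below is not the authors' model: it uses a simple round-robin activation order (one concrete asynchronous-style schedule) on two illustrative bimatrix games, showing that the same update rule reaches a pure equilibrium in a coordination game but cycles forever in matching pennies, which has no pure equilibrium.

```python
# Minimal sketch of best-reply dynamics on a 2x2 bimatrix game.
# Payoff matrices are indexed payoff[own_action][opponent_action];
# the games and the round-robin schedule are illustrative only.

def best_reply(payoff, opponent_action):
    """Index of the action maximizing payoff against the opponent's action."""
    values = [payoff[a][opponent_action] for a in range(len(payoff))]
    return max(range(len(values)), key=values.__getitem__)

def run_dynamics(payoff1, payoff2, a1, a2, steps=20):
    """Players take turns best-replying to the other's current action."""
    history = [(a1, a2)]
    for _ in range(steps):
        a1 = best_reply(payoff1, a2)
        a2 = best_reply(payoff2, a1)
        history.append((a1, a2))
        if history[-1] == history[-2]:
            return history, True   # fixed point: a pure Nash equilibrium
    return history, False          # no convergence within the step budget

# Coordination game: both players want to match; dynamics converge.
coord = [[1, 0], [0, 1]]
hist, converged = run_dynamics(coord, coord, 0, 1)

# Matching pennies: no pure equilibrium; best replies cycle forever.
mp1 = [[1, -1], [-1, 1]]   # the "matcher"
mp2 = [[-1, 1], [1, -1]]   # the "mismatcher"
hist2, converged2 = run_dynamics(mp1, mp2, 0, 0)
```

The cycling in matching pennies is the simplest instance of the non-convergence phenomenon the abstract generalizes to broad classes of bounded-recall heuristics.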
Convergence of adaptive stochastic Galerkin FEM
We propose and analyze novel adaptive algorithms for the numerical solution
of elliptic partial differential equations with parametric uncertainty. Four
different marking strategies are employed for refinement of stochastic Galerkin
finite element approximations. The algorithms are driven by the energy error
reduction estimates derived from two-level a posteriori error indicators for
spatial approximations and hierarchical a posteriori error indicators for
parametric approximations. The focus of this work is on the mathematical
foundation of the adaptive algorithms in the sense of rigorous convergence
analysis. In particular, we prove that the proposed algorithms drive the
underlying energy error estimates to zero.
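The abstract refers to marking strategies driven by a posteriori error indicators. As a concrete point of reference, one standard strategy in adaptive FEM is Dörfler (bulk-chasing) marking; the sketch below is a generic illustration of that idea on made-up indicator values, not the authors' algorithm.

```python
# Illustrative sketch of Doerfler (bulk-chasing) marking: refine the
# smallest set of elements carrying a fixed fraction theta of the total
# squared error. Indicator values below are invented for demonstration.

def doerfler_marking(indicators, theta=0.5):
    """Return indices of a minimal set of elements whose squared error
    indicators sum to at least theta times the total squared error."""
    total = sum(eta**2 for eta in indicators)
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i]**2
        if acc >= theta * total:
            break
    return marked

etas = [0.9, 0.1, 0.5, 0.05, 0.3]     # per-element error indicators
marked = doerfler_marking(etas, theta=0.5)
```

A larger theta marks more elements per step (fewer, more expensive iterations); convergence proofs of the kind described above typically hinge on the marked set capturing enough of the estimated error.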
An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections
We propose a new subgradient method for the minimization of nonsmooth convex
functions over a convex set. To speed up computations we use adaptive
approximate projections only requiring to move within a certain distance of the
exact projections (which decreases in the course of the algorithm). In
particular, the iterates in our method can be infeasible throughout the whole
procedure. Nevertheless, we provide conditions which ensure convergence to an
optimal feasible point under suitable assumptions. One convergence result deals
with step size sequences that are fixed a priori. Two other results handle
dynamic Polyak-type step sizes depending on a lower or upper estimate of the
optimal objective function value, respectively. Additionally, we briefly sketch
two applications: Optimization with convex chance constraints, and finding the
minimum l1-norm solution to an underdetermined linear system, an important
problem in Compressed Sensing.
Comment: 36 pages, 3 figures
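The ingredients described above, a subgradient step, a Polyak-type step size using a known optimal value, and an inexact projection whose tolerance shrinks over time, can be combined in a toy one-dimensional example. Everything below (the objective, the feasible set, the tolerance schedule) is illustrative and not taken from the paper.

```python
# Hedged sketch: subgradient method with a Polyak-type step size and an
# approximate projection onto the feasible set [0, 1], for the toy problem
#   minimize f(x) = |x - 3|  subject to  x in [0, 1].
# The minimizer over [0, 1] is x = 1 with optimal value f* = 2.

def subgradient(x):
    """A subgradient of f(x) = |x - 3|."""
    return 1.0 if x > 3.0 else -1.0   # x never reaches 3 in this run

def approx_project(x, eps):
    """Return a point within eps of the exact projection onto [0, 1];
    iterates are therefore allowed to be slightly infeasible."""
    exact = min(max(x, 0.0), 1.0)
    return exact + eps if x > 1.0 else exact

f = lambda x: abs(x - 3.0)
f_opt = f(1.0)            # assume the optimal value is known (Polyak step)

x = 0.0
for k in range(1, 50):
    g = subgradient(x)
    step = (f(x) - f_opt) / (g * g)            # Polyak-type step size
    x = approx_project(x - step * g, eps=1.0 / k**2)  # shrinking tolerance
```

Note that the Polyak step requires the optimal value f*; the paper's dynamic variants instead work with lower or upper estimates of it, and the shrinking projection tolerance is what lets the cheaper approximate projections still yield a feasible limit point.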