A-posteriori error estimates for the localized reduced basis multi-scale method
We present a localized a-posteriori error estimate for the localized reduced
basis multi-scale (LRBMS) method [Albrecht, Haasdonk, Kaulmann, Ohlberger
(2012): The localized reduced basis multiscale method]. The LRBMS is a
combination of numerical multi-scale methods and model reduction using reduced
basis methods to efficiently reduce the computational complexity of parametric
multi-scale problems with respect to the multi-scale parameter $\varepsilon$
and the online parameter $\mu$ simultaneously. We formulate the LRBMS based on
a generalization of the SWIPDG discretization presented in [Ern, Stephansen,
Vohralik (2010): Guaranteed and robust discontinuous Galerkin a posteriori
error estimates for convection-diffusion-reaction problems] on a coarse
partition of the domain that allows for any suitable discretization on the fine
triangulation inside each coarse grid element. The estimator is based on the
idea of a conforming reconstruction of the discrete diffusive flux that can be
computed using local information only. It is offline/online decomposable and
can thus be efficiently used in the context of model reduction.
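To make the two advertised properties concrete, a generic parameter-separable estimator of the kind used in reduced basis methods (a schematic sketch, not the paper's exact estimator) has the form

$$\|u(\mu) - u_{\mathrm{red}}(\mu)\| \;\le\; \eta(\mu), \qquad \eta(\mu)^2 = \sum_{T \in \mathcal{T}_H} \eta_T(\mu)^2, \qquad \eta_T(\mu)^2 = \sum_{q=1}^{Q} \theta_q(\mu)\, \eta_{T,q}^2,$$

where each local indicator $\eta_T$ depends only on quantities attached to the coarse element $T$ (here, the locally reconstructed diffusive flux), the parameter-independent terms $\eta_{T,q}$ are precomputed offline, and only the scalar coefficients $\theta_q(\mu)$ must be evaluated in the online phase.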
Web Service Retrieval by Structured Models
Much of the information available on the World Wide Web cannot effectively be found with the help of search engines because the information is dynamically generated on a user's request. This applies to online decision support services as well as Deep Web information. We present in this paper a retrieval system that uses a variant of structured modeling to describe such information services, and similarity of models for retrieval. The computational complexity of the similarity problem is discussed, and graph algorithms for retrieval on repositories of service descriptions are introduced. We show how bounds for combinatorial optimization problems can provide filter algorithms in a retrieval context. We report on an evaluation of the retrieval system in a classroom experiment and give computational results on a benchmark library.
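The filter idea in the last sentences can be sketched as follows (a minimal illustration with hypothetical function names, not the system described in the paper): a cheap lower bound on model dissimilarity, e.g. obtained from a relaxation of the underlying combinatorial optimization problem, prunes candidates before the expensive exact similarity computation runs.

```python
def retrieve(query, repository, exact_distance, lower_bound, threshold):
    """Filter-and-refine retrieval over a repository of service models.

    lower_bound(a, b) must never exceed exact_distance(a, b); e.g. a
    relaxation of the combinatorial model-similarity problem.
    """
    hits = []
    for model in repository:
        # Cheap filter: if even the optimistic bound misses the threshold,
        # the exact (expensive) distance cannot qualify either.
        if lower_bound(query, model) > threshold:
            continue
        if exact_distance(query, model) <= threshold:
            hits.append(model)
    return hits
```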
Second-Order Kernel Online Convex Optimization with Adaptive Sketching
Kernel online convex optimization (KOCO) is a framework combining the
expressiveness of non-parametric kernel models with the regret guarantees of
online learning. First-order KOCO methods such as functional gradient descent
require only $\mathcal{O}(t)$ time and space per iteration, and, when the only
information on the losses is their convexity, achieve a minimax optimal
$\mathcal{O}(\sqrt{T})$ regret. Nonetheless, many common losses in kernel
problems, such as squared loss, logistic loss, and squared hinge loss, possess
stronger curvature that can be exploited. In this case, second-order KOCO
methods achieve $\mathcal{O}(\log(\mathrm{Det}(\mathbf{K})))$ regret, which we
show scales as $\mathcal{O}(d_{\mathrm{eff}} \log T)$, where $d_{\mathrm{eff}}$
is the effective dimension of the problem and is usually much smaller than
$\mathcal{O}(\sqrt{T})$. The main drawback of second-order methods is their
much higher $\mathcal{O}(t^2)$ space and time complexity. In this paper, we
introduce kernel online Newton step (KONS), a new second-order KOCO method
that also achieves $\mathcal{O}(d_{\mathrm{eff}} \log T)$ regret. To address
the computational complexity of second-order methods, we introduce a new
matrix sketching algorithm for the kernel matrix $\mathbf{K}_t$, and show
that for a chosen parameter $\gamma \leq 1$ our Sketched-KONS reduces the
space and time complexity by a factor of $\gamma^2$ to
$\mathcal{O}(t^2\gamma^2)$ space and time per iteration, while incurring only
$1/\gamma$ times more regret.
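As a point of reference for the second-order update that KONS kernelizes, here is a minimal online Newton step for squared loss in an explicit $d$-dimensional feature space (our simplified sketch of the classic online Newton step, not the paper's KONS; names and step sizes are illustrative). The Sherman-Morrison rank-one update yields an $\mathcal{O}(d^2)$ per-iteration cost, the same kind of quadratic cost that Sketched-KONS attacks on the kernel side.

```python
import numpy as np

def online_newton_step(X, y, alpha=1.0, eps=1.0):
    """Second-order online update for squared loss: maintain the inverse of a
    regularized curvature matrix via Sherman-Morrison and take Newton-like
    steps. Per-iteration cost is O(d^2) time and space."""
    d = X.shape[1]
    A_inv = np.eye(d) / eps          # inverse of the regularized curvature matrix
    w = np.zeros(d)
    for x_t, y_t in zip(X, y):
        g = (w @ x_t - y_t) * x_t    # gradient of the squared loss at w
        Ag = A_inv @ g
        A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)  # rank-one Sherman-Morrison update
        w = w - alpha * (A_inv @ g)  # step preconditioned by the inverse curvature
    return w

# Usage on toy data:
# rng = np.random.default_rng(0)
# X = rng.normal(size=(100, 5)); y = X @ rng.normal(size=5)
# w = online_newton_step(X, y)
```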
The Complexity of Online Graph Games
Online computation is a model of computation under uncertainty, in which not
all information about a problem instance is known in advance. An online algorithm
receives requests which reveal the instance piecewise and has to respond with
irrevocable decisions. Often, an adversary is assumed that constructs the
instance knowing the deterministic behavior of the algorithm. From a game
theoretical point of view, the adversary and the online algorithm are players
in a two-player game. By applying this view to combinatorial graph problems,
especially on problems where the solution is a subset of the vertices, we
analyze their complexity. For this, we introduce a framework based on gadget
reductions from 3-Satisfiability and extend it to an online setting where the
graph is a priori known by a map. This is done by identifying a set of rules
for the reductions and providing schemes for gadgets. The extension of the
framework to the online setting enables reductions from TQBF. We provide example
reductions to the well-known problems Vertex Cover, Independent Set, and
Dominating Set, and prove that their online versions are PSPACE-complete. Thus,
this paper establishes that the online versions with a map of NP-complete graph
problems form a large class of PSPACE-complete problems.
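The two-player view can be made concrete with a toy exhaustive-search model (our illustration of the game semantics, not the paper's reduction framework): the full graph is known in advance ("a map"), the adversary picks the order in which vertices are revealed, and the algorithm makes an irrevocable include/exclude decision for each revealed vertex of a prospective vertex cover.

```python
from functools import lru_cache

# Toy graph: a path on 4 vertices. Offline, the minimum vertex cover has size 2.
EDGES = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]
VERTICES = frozenset(range(4))

@lru_cache(maxsize=None)
def value(revealed, cover):
    """Smallest cover size the algorithm can guarantee from this game state."""
    if revealed == VERTICES:
        # Terminal position: infeasible play costs infinity.
        return len(cover) if all(e & cover for e in EDGES) else float("inf")
    # Adversary maximizes over which vertex to reveal next; the algorithm
    # minimizes over its irrevocable take-it-or-leave-it decision.
    return max(
        min(value(revealed | {v}, cover | {v}),
            value(revealed | {v}, cover))
        for v in VERTICES - revealed
    )

print(value(frozenset(), frozenset()))  # game value on this toy instance
```

This brute-force minimax is only feasible for toy instances; the point is solely to exhibit the alternating quantifier structure (adversary move, then algorithm move) that connects such games to TQBF-style reductions.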
Randomization can be as helpful as a glimpse of the future in online computation
We provide simple but surprisingly useful direct product theorems for proving
lower bounds on online algorithms with a limited amount of advice about the
future. As a consequence, we are able to translate decades of research on
randomized online algorithms to the advice complexity model. Doing so improves
significantly on the previous best advice complexity lower bounds for many
online problems, or provides the first known lower bounds. For example, if $n$
is the number of requests, we show that:
(1) A paging algorithm needs $\Omega(n)$ bits of advice to achieve a
competitive ratio better than $H_k = \Omega(\log k)$, where $k$ is the cache
size. Previously, it was only known that $\Omega(n)$ bits of advice were
necessary to achieve a constant competitive ratio smaller than $5/4$.
(2) Every $O(n^{1-\varepsilon})$-competitive vertex coloring algorithm must
use $n^{\Omega(1)}$ bits of advice. Previously, it was only known that
$\Omega(\log n)$ bits of advice were necessary to be optimal.
For certain online problems, including the MTS, $k$-server, paging, list
update, and dynamic binary search tree problem, our results imply that
randomization and sublinear advice are equally powerful (if the underlying
metric space or node set is finite). This means that several long-standing open
questions regarding randomized online algorithms can be equivalently stated as
questions regarding online algorithms with sublinear advice. For example, we
show that there exists a deterministic $O(\log k)$-competitive $k$-server
algorithm with advice complexity $o(n)$ if and only if there exists a
randomized $O(\log k)$-competitive $k$-server algorithm without advice.
Technically, our main direct product theorem is obtained by extending an
information theoretical lower bound technique due to Emek, Fraigniaud, Korman,
and Rosén [ICALP'09].
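Schematically (our informal paraphrase of the equivalence claimed above, not the paper's exact theorem), for these problems over a finite metric space or node set,

$$\inf_{\substack{\text{algorithms with}\\ o(n)\ \text{advice bits}}} c(\mathcal{A}) \;=\; \inf_{\substack{\text{randomized algorithms}\\ \text{without advice}}} c(\mathcal{A}),$$

where $c(\mathcal{A})$ is the competitive ratio of algorithm $\mathcal{A}$; in particular, every lower bound against randomized online algorithms transfers to online algorithms with sublinear advice, and vice versa.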