
    A-posteriori error estimates for the localized reduced basis multi-scale method

    Full text link
    We present a localized a-posteriori error estimate for the localized reduced basis multi-scale (LRBMS) method [Albrecht, Haasdonk, Kaulmann, Ohlberger (2012): The localized reduced basis multiscale method]. The LRBMS is a combination of numerical multi-scale methods and model reduction using reduced basis methods to efficiently reduce the computational complexity of parametric multi-scale problems with respect to the multi-scale parameter $\varepsilon$ and the online parameter $\mu$ simultaneously. We formulate the LRBMS based on a generalization of the SWIPDG discretization presented in [Ern, Stephansen, Vohralik (2010): Guaranteed and robust discontinuous Galerkin a posteriori error estimates for convection-diffusion-reaction problems] on a coarse partition of the domain that allows for any suitable discretization on the fine triangulation inside each coarse grid element. The estimator is based on the idea of a conforming reconstruction of the discrete diffusive flux that can be computed using local information only. It is offline/online decomposable and can thus be used efficiently in the context of model reduction.
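    Offline/online decomposability follows the usual reduced basis pattern. As a generic sketch (the symbols $Q$, $\Theta_q$, and $\eta_{qq'}$ are illustrative here, not the paper's notation), an estimator with affine parameter dependence splits as follows:

    ```latex
    % Generic sketch of an offline/online decomposable estimator under an
    % affine parameter dependence (illustrative notation, not the paper's):
    \[
      \eta^2(\mu) \;=\; \sum_{q=1}^{Q} \sum_{q'=1}^{Q}
          \Theta_q(\mu)\,\Theta_{q'}(\mu)\,\eta_{qq'},
    \]
    % The parameter-independent terms \eta_{qq'} are assembled once in the
    % offline phase; evaluating \eta(\mu) for a new parameter \mu online
    % then costs O(Q^2), independent of the fine triangulation.
    ```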

    Web Service Retrieval by Structured Models

    Get PDF
    Much of the information available on the World Wide Web cannot be found effectively with the help of search engines, because the information is dynamically generated on a user's request. This applies to online decision support services as well as to Deep Web information. We present in this paper a retrieval system that uses a variant of structured modeling to describe such information services, and similarity of models for retrieval. The computational complexity of the similarity problem is discussed, and graph algorithms for retrieval on repositories of service descriptions are introduced. We show how bounds for combinatorial optimization problems can provide filter algorithms in a retrieval context. We report on an evaluation of the retrieval system in a classroom experiment and give computational results on a benchmark library.
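    The filter idea generalizes to a standard filter-and-refine loop: a cheap bound on an expensive combinatorial distance prunes candidates before exact matching. A minimal sketch, where `lower_bound_distance` and `exact_distance` are hypothetical placeholders rather than the paper's API:

    ```python
    import heapq

    def retrieve(query, repository, k, lower_bound_distance, exact_distance):
        """Filter-and-refine retrieval over a repository of service models.

        A cheap lower bound on the (expensive) model-similarity distance
        prunes candidates before the exact computation; pruning is lossless
        as long as lower_bound_distance(q, m) <= exact_distance(q, m).
        Both callables are hypothetical placeholders, not the paper's API.
        """
        best = []  # max-heap via negated distances, at most k entries
        for model_id, model in repository.items():
            lb = lower_bound_distance(query, model)
            # Filter step: if even the optimistic bound cannot beat the
            # current k-th best distance, skip the expensive exact match.
            if len(best) == k and lb >= -best[0][0]:
                continue
            d = exact_distance(query, model)
            heapq.heappush(best, (-d, model_id))
            if len(best) > k:
                heapq.heappop(best)
        return sorted((-neg_d, model_id) for neg_d, model_id in best)
    ```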

    Second-Order Kernel Online Convex Optimization with Adaptive Sketching

    Get PDF
    Kernel online convex optimization (KOCO) is a framework combining the expressiveness of non-parametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only $\mathcal{O}(t)$ time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal $\mathcal{O}(\sqrt{T})$ regret. Nonetheless, many common losses in kernel problems, such as the squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve $\mathcal{O}(\log(\text{Det}(\boldsymbol{K})))$ regret, which we show scales as $\mathcal{O}(d_{\text{eff}}\log T)$, where $d_{\text{eff}}$ is the effective dimension of the problem and is usually much smaller than $\mathcal{O}(\sqrt{T})$. The main drawback of second-order methods is their much higher $\mathcal{O}(t^2)$ space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves $\mathcal{O}(d_{\text{eff}}\log T)$ regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix $\boldsymbol{K}_t$, and show that for a chosen parameter $\gamma \leq 1$ our Sketched-KONS reduces the space and time complexity by a factor of $\gamma^2$ to $\mathcal{O}(t^2\gamma^2)$ space and time per iteration, while incurring only $1/\gamma$ times more regret.
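    For context, a minimal first-order baseline (functional online gradient descent on the squared loss with a Gaussian kernel) shows where the $\mathcal{O}(t)$ per-step cost comes from. This is the method KONS improves upon in regret, not KONS itself, and the hyperparameters are illustrative:

    ```python
    import numpy as np

    def rbf(x, z, sigma=1.0):
        """Gaussian (RBF) kernel."""
        return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

    class KernelOGD:
        """First-order KOCO baseline: functional online gradient descent
        on the squared loss. Each step costs O(t) time and space because
        the predictor is a kernel expansion over all past points; this is
        the per-iteration cost that second-order methods trade against
        their O(t^2) cost for the sharper O(d_eff log T) regret.
        """
        def __init__(self, eta=0.1, sigma=1.0):
            self.eta, self.sigma = eta, sigma
            self.X, self.alpha = [], []   # support points and weights

        def predict(self, x):
            return sum(a * rbf(xi, x, self.sigma)
                       for a, xi in zip(self.alpha, self.X))

        def update(self, x, y):
            # Gradient of (1/2)(f(x) - y)^2 w.r.t. f is (f(x) - y) k(x, .),
            # so the descent step appends one new expansion coefficient.
            residual = self.predict(x) - y
            self.X.append(x)
            self.alpha.append(-self.eta * residual)
    ```

    Each update appends one expansion coefficient, so step $t$ touches all $t$ stored points; second-order updates instead maintain (a sketch of) the $t \times t$ kernel matrix $\boldsymbol{K}_t$, hence the $\mathcal{O}(t^2)$ cost that Sketched-KONS attacks.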

    The Complexity of Online Graph Games

    Full text link
    Online computation is a concept to model uncertainty where not all information on a problem instance is known in advance. An online algorithm receives requests which reveal the instance piecewise and has to respond with irrevocable decisions. Often, an adversary is assumed that constructs the instance knowing the deterministic behavior of the algorithm. From a game-theoretical point of view, the adversary and the online algorithm are players in a two-player game. By applying this view to combinatorial graph problems, especially to problems where the solution is a subset of the vertices, we analyze their complexity. For this, we introduce a framework based on gadget reductions from 3-Satisfiability and extend it to an online setting where the graph is a priori known by a map. This is done by identifying a set of rules for the reductions and providing schemes for gadgets. The extension of the framework to the online setting enables reductions from TQBF. We provide example reductions to the well-known problems Vertex Cover, Independent Set, and Dominating Set and prove that they are PSPACE-complete. Thus, this paper establishes that the online versions with a map of NP-complete graph problems form a large class of PSPACE-complete problems.
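    As a minimal illustration of the irrevocable-decision model for subset-of-vertices problems, here is a simple online Vertex Cover rule in a vertex-arrival setting; the arrival model and the rule are assumptions for illustration, not the paper's reduction framework:

    ```python
    def online_vertex_cover(arrivals):
        """Online Vertex Cover in a vertex-arrival model: vertex v arrives
        together with its edges to previously arrived vertices, and the
        algorithm must irrevocably decide whether to put v in the cover.

        Rule (illustrative, not from the paper): take v iff some revealed
        edge (u, v) would otherwise stay uncovered. Every revealed edge
        ends up covered, since the earlier endpoint's decision cannot be
        revised.

        `arrivals` is an iterable of (v, neighbors_among_earlier_vertices).
        """
        cover = set()
        for v, earlier_neighbors in arrivals:
            if any(u not in cover for u in earlier_neighbors):
                cover.add(v)   # irrevocable: v is never removed later
        return cover

    # Example: path a-b-c, revealed as b (edge to a), then c (edge to b).
    print(online_vertex_cover([("a", []), ("b", ["a"]), ("c", ["b"])]))
    # -> {'b'}  (b covers both edges)
    ```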

    Randomization can be as helpful as a glimpse of the future in online computation

    Get PDF
    We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. As a consequence, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, if $n$ is the number of requests, we show that: (1) A paging algorithm needs $\Omega(n)$ bits of advice to achieve a competitive ratio better than $H_k=\Omega(\log k)$, where $k$ is the cache size. Previously, it was only known that $\Omega(n)$ bits of advice were necessary to achieve a constant competitive ratio smaller than $5/4$. (2) Every $O(n^{1-\varepsilon})$-competitive vertex coloring algorithm must use $\Omega(n\log n)$ bits of advice. Previously, it was only known that $\Omega(n\log n)$ bits of advice were necessary to be optimal. For certain online problems, including the MTS, $k$-server, paging, list update, and dynamic binary search tree problems, our results imply that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic $O(\log k)$-competitive $k$-server algorithm with advice complexity $o(n)$ if and only if there exists a randomized $O(\log k)$-competitive $k$-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information-theoretical lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP'09].
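    On the randomized side of this equivalence, the classical marking algorithm for paging provides the $O(\log k)$ guarantee (it is $2H_k$-competitive) that the results above relate to sublinear advice. A minimal sketch for illustration, not taken from the paper:

    ```python
    import random

    def marking_paging(requests, k):
        """Classical randomized marking algorithm for paging (cache size k):
        on a fault, evict a uniformly random unmarked page, starting a new
        phase (unmarking everything) when all cached pages are marked. It
        is 2*H_k-competitive, i.e. O(log k) -- the randomized guarantee
        the abstract relates to sublinear advice. Illustration only.
        """
        cache, marked, faults = set(), set(), 0
        for p in requests:
            if p not in cache:
                faults += 1
                if len(cache) >= k:
                    if not (cache - marked):   # all marked: new phase
                        marked.clear()
                    # evict a uniformly random unmarked page
                    victim = random.choice(sorted(cache - marked))
                    cache.remove(victim)
                cache.add(p)
            marked.add(p)                      # requested pages get marked
        return faults
    ```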