
    Probabilistic Reduced-Order Modeling for Stochastic Partial Differential Equations

    We discuss a Bayesian formulation of coarse-graining (CG) for PDEs whose coefficients (e.g. material parameters) exhibit random, fine-scale variability. The direct solution of such problems requires grids fine enough to resolve this variability, which unavoidably entails the repeated solution of very large systems of algebraic equations. We establish a physically inspired, data-driven coarse-grained model which learns a low-dimensional set of microstructural features that are predictive of the fine-grained (FG) model response. Once learned, those features provide a sharp distribution over the coarse-scale effective coefficients of the PDE that are most suitable for predicting the fine-scale model output. This ultimately allows replacing the computationally expensive FG model with a generative probabilistic model based on evaluating the much cheaper CG model several times. Sparsity-enforcing priors further increase predictive efficiency and reveal the microstructural features that are important in predicting the FG response. Moreover, the model yields probabilistic rather than single-point predictions, which enables the quantification of the unavoidable epistemic uncertainty arising from the information loss that occurs during coarse-graining.
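    A minimal sketch of the surrogate idea above, under illustrative assumptions (a 1-D diffusion problem with a lognormal fine-scale coefficient field, and a lognormal stand-in for the paper's feature-conditioned distribution over effective coefficients; none of this is the authors' implementation): many cheap coarse solves under sampled effective coefficients yield a predictive distribution rather than a point estimate.

```python
import numpy as np

def solve_diffusion(kappa, f=1.0):
    """Solve -(kappa u')' = f on [0,1], u(0)=u(1)=0, by finite differences."""
    n = len(kappa)                      # number of cells
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))        # stiffness matrix on interior nodes
    for i in range(n - 1):
        A[i, i] = (kappa[i] + kappa[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -kappa[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -kappa[i + 1] / h**2
    return np.linalg.solve(A, np.full(n - 1, f))

rng = np.random.default_rng(0)
kappa_fine = np.exp(rng.normal(0.0, 0.5, size=256))   # fine-scale random field

# hypothetical "learned" predictive distribution over the effective coarse
# coefficient: a lognormal centered on the harmonic mean, standing in for
# the paper's feature-conditioned posterior
k_eff = len(kappa_fine) / np.sum(1.0 / kappa_fine)
samples = [solve_diffusion(np.full(8, k))
           for k in np.exp(rng.normal(np.log(k_eff), 0.1, size=200))]
mid = np.array([s[len(s) // 2] for s in samples])     # midpoint solution value

print(f"CG predictive: {mid.mean():.4f} +/- {mid.std():.4f}")
print(f"FG reference : {solve_diffusion(kappa_fine)[127]:.4f}")
```

    In 1-D steady diffusion the harmonic mean is the exact effective coefficient, which is why even this crude stand-in tracks the fine-grained reference; in higher dimensions the learned features would have to carry far more microstructural information.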

    Bipartite Perfect Matching in Pseudo-Deterministic NC

    We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses poly(n) processors, poly(log n) depth, poly(log n) random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also obtain a pseudo-deterministic NC algorithm for constructing a depth-first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC, together with a novel set of weight assignments which in combination enable isolating a unique matching in a graph. We then show how pseudo-deterministic algorithms can reduce the number of random bits used by general randomized algorithms: the main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use this technique to give an RNC algorithm for constructing a DFS tree using only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), and a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art. Furthermore, we prove that resolving the decision question NC = RNC would imply an NC algorithm for finding a bipartite perfect matching and a DFS tree. This is not implied by previous randomized NC search algorithms for bipartite perfect matching, but is implied by the existence of a pseudo-deterministic NC search algorithm.
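    The isolation step at the heart of such algorithms can be illustrated with the classic isolation lemma of Mulmuley, Vazirani and Vazirani (the paper's new weight assignments and their NC implementation are not reproduced here): giving each edge an independent uniform weight in {1, ..., 2m} makes the minimum-weight perfect matching unique with probability at least 1/2, so it can serve as a canonical output. A brute-force toy on K_{4,4}:

```python
import itertools
import random

def min_weight_matchings(n, w):
    """All min-weight perfect matchings of K_{n,n} under edge weights w."""
    best, arg = float("inf"), []
    for perm in itertools.permutations(range(n)):  # right mate of each left vertex
        cost = sum(w[i, perm[i]] for i in range(n))
        if cost < best:
            best, arg = cost, [perm]
        elif cost == best:
            arg.append(perm)
    return arg

random.seed(2)
n = 4
m = n * n                                          # number of edges in K_{4,4}
# isolation lemma: independent uniform weights in {1, ..., 2m} make the
# min-weight perfect matching unique with probability >= 1/2
w = {(i, j): random.randint(1, 2 * m) for i in range(n) for j in range(n)}
matchings = min_weight_matchings(n, w)
print(f"{len(matchings)} minimum-weight matching(s); canonical output: {matchings[0]}")
```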

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15 and then evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.

    Comparing Spatial and Scenario Decomposition for Stochastic Hydrothermal Unit Commitment Problems

    Solving very-large-scale optimization problems frequently requires decomposing them into smaller subproblems that are iteratively solved to produce useful information. One such approach is Lagrangian Relaxation (LR), a general technique that leads to many different decomposition schemes. The LR produces a lower bound on the objective function and useful information for heuristics aimed at constructing feasible primal solutions. In this paper, we compare the main LR strategies used so far for Stochastic Hydrothermal Unit Commitment problems, where uncertainty mainly concerns water availability in reservoirs and demand (weather conditions). The problem is customarily modeled as a two-stage mixed-integer optimization problem. We compare different decomposition strategies (unit and scenario schemes) in terms of the quality of the produced lower bound and running time. The schemes are assessed on various hydrothermal systems with different configurations of power plants, in terms of capacity and number of units.
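    A toy sketch of the scenario scheme (a two-unit, two-scenario instance with made-up data; the relaxed coupling is the nonanticipativity constraint that first-stage commitments agree across scenarios): each subgradient step solves one small subproblem per scenario, and the resulting dual value is a valid lower bound on the two-stage optimum.

```python
import itertools
import numpy as np

cap  = np.array([100.0, 80.0])      # unit capacities (MW)
cvar = np.array([20.0, 35.0])       # variable cost ($/MWh)
cfix = np.array([500.0, 300.0])     # commitment (fixed) cost ($)
demand = {0: 120.0, 1: 60.0}        # scenario demands
prob   = {0: 0.5, 1: 0.5}           # scenario probabilities

def dispatch_cost(u, d):
    """Cheapest feasible dispatch for commitment u, or inf if infeasible."""
    if cap @ u < d:
        return float("inf")
    cost, rem = 0.0, d
    for i in np.argsort(cvar):                     # merit-order loading
        p = min(rem, cap[i] * u[i])
        cost += cvar[i] * p
        rem -= p
    return cost

def scenario_subproblem(s, lam):
    """Minimize prob*(fixed + dispatch) plus the term from relaxing u_0 == u_1."""
    sign = 1.0 if s == 0 else -1.0
    return min(
        (prob[s] * (cfix @ np.array(u) + dispatch_cost(np.array(u), demand[s]))
         + sign * (lam @ np.array(u)), u)
        for u in itertools.product([0, 1], repeat=2))

lam = np.zeros(2)
for it in range(50):                               # subgradient ascent on the dual
    (v0, u0), (v1, u1) = scenario_subproblem(0, lam), scenario_subproblem(1, lam)
    g = np.array(u0) - np.array(u1)                # nonanticipativity violation
    lam += (10.0 / (it + 1)) * g
    lower_bound = v0 + v1                          # dual value: a valid lower bound
print(f"Lagrangian lower bound ~ {lower_bound:.1f}, u0={u0}, u1={u1}")
```

    In the unit scheme one would instead relax the demand-balance constraints and obtain one subproblem per generating unit; the trade-off between the two is exactly what the paper's experiments measure.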

    On k-Column Sparse Packing Programs

    We consider the class of packing integer programs (PIPs) that are column sparse, i.e. there is a specified upper bound k on the number of constraints that each variable appears in. We give an (ek + o(k))-approximation algorithm for k-column sparse PIPs, improving on recent results of k^2 · 2^k and O(k^2). We also show that the integrality gap of our linear programming relaxation is at least 2k − 1; it is known that k-column sparse PIPs are Ω(k/log k)-hard to approximate. We also extend our result (at the loss of a small constant factor) to the more general case of maximizing a submodular objective over k-column sparse packing constraints.
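    One standard way to turn an LP relaxation of a column-sparse PIP into an integral solution is the round-and-alter template (a sketch with simplified constants and a simplistic alteration rule, not necessarily the paper's exact procedure): solve the LP, scale the fractional solution down by roughly 1/(ek), round independently, then repair violated constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# toy k-column-sparse PIP: max c.x s.t. A x <= b, x in {0,1}^n,
# with at most k nonzeros per column of A
n, m, k = 30, 12, 3
A = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=k, replace=False)    # k nonzeros per column
    A[rows, j] = rng.uniform(0.1, 1.0, size=k)
b = np.full(m, 2.0)
c = rng.uniform(0.5, 1.0, size=n)

# 1) LP relaxation (linprog minimizes, so negate the objective)
lp = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n)
x_frac = lp.x

# 2) scale down by ~1/(e k) and round each variable independently
x = (rng.random(n) < x_frac / (np.e * k)).astype(float)

# 3) alteration: drop variables from violated constraints until feasible
while ((viol := A @ x - b) > 1e-9).any():
    i = int(np.argmax(viol))                       # most violated constraint
    j = int(np.argmax(A[i] * x))                   # its heaviest chosen variable
    x[j] = 0.0

print(f"LP bound {c @ x_frac:.2f}, rounded integral value {c @ x:.2f}")
```

    Column sparsity is what makes the analysis work: each rounded variable can trigger alterations in at most k constraints, which is how the loss stays proportional to k rather than to m.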

    Sketching Cuts in Graphs and Hypergraphs

    Sketching and streaming algorithms are at the forefront of current research on cut problems in graphs. In the streaming model, we show that (1 − ε)-approximation for Max-Cut must use n^{1−O(ε)} space; moreover, beating 4/5-approximation requires polynomial space. For the sketching model, we show that r-uniform hypergraphs admit a (1 + ε)-cut-sparsifier (i.e., a weighted subhypergraph that approximately preserves all the cuts) with O(ε^{-2} n (r + log n)) edges. We also take first steps towards sketching general CSPs (Constraint Satisfaction Problems).
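    A toy illustration of the cut-sparsifier notion for ordinary graphs (uniform edge sampling with reweighting on K_10; real sparsifiers sample non-uniformly, e.g. by edge strength or connectivity, to get the concentration behind bounds like the one above): sample each edge with probability p and give survivors weight 1/p, so every cut is preserved in expectation.

```python
import itertools
import random

random.seed(0)
n, p = 10, 0.5
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]   # complete graph K_10

def cut_weight(edge_weights, S):
    """Total weight of edges crossing the cut (S, V \\ S)."""
    return sum(w for (u, v), w in edge_weights.items() if (u in S) != (v in S))

sparse = {e: 1.0 / p for e in edges if random.random() < p}   # sample + reweight
full   = {e: 1.0 for e in edges}

# exhaustively compare all cuts of the sparsifier against the original graph
worst = max(
    abs(cut_weight(sparse, S) / cut_weight(full, S) - 1.0)
    for r in range(1, n // 2 + 1)
    for S in map(set, itertools.combinations(range(n), r)))
print(f"worst relative cut error: {worst:.3f}")
```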

    A Two-Scale Complexity Measure for Deep Learning Models

    We introduce 2sED, a novel capacity measure for statistical models based on the effective dimension. The new quantity provably bounds the generalization error under mild assumptions on the model. Furthermore, simulations on standard data sets and popular model architectures show that 2sED correlates well with the training error. For Markovian models, we show how to efficiently approximate 2sED from below through a layerwise iterative approach, which allows us to tackle deep learning models with a large number of parameters. Simulation results suggest that the approximation is good for different prominent models and data sets.
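    The paper's precise definition of 2sED is not reproduced here; as a hedged illustration of the underlying notion, the sketch below computes a standard eigenvalue-based effective dimension, d_eff(γ) = Σ_i λ_i / (λ_i + γ), of an empirical Fisher matrix for a toy logistic-regression model, evaluated at two resolution scales γ (echoing the "two-scale" idea). The model, data, and choice of γ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny logistic-regression model; empirical Fisher from per-example gradients
d, N = 20, 500
X = rng.normal(size=(N, d))
y = (rng.random(N) < 0.5).astype(float)
theta = rng.normal(scale=0.1, size=d)

p = 1.0 / (1.0 + np.exp(-X @ theta))          # model probabilities
G = (p - y)[:, None] * X                      # per-example log-loss gradients
F = G.T @ G / N                               # empirical Fisher matrix

def effective_dimension(F, gamma):
    """sum_i lambda_i / (lambda_i + gamma): counts parameter directions
    whose curvature exceeds the resolution scale gamma."""
    lam = np.linalg.eigvalsh(F)
    return float(np.sum(lam / (lam + gamma)))

# evaluate at a coarse and a fine resolution scale
for gamma in (1e-1, 1e-3):
    print(f"gamma={gamma:g}: d_eff = {effective_dimension(F, gamma):.2f}")
```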
