
    Hierarchical Time-Dependent Oracles

    We study networks obeying time-dependent min-cost path metrics, and present novel oracles for them which provably achieve two unique features: (i) subquadratic preprocessing time and space, independent of the metric's amount of disconcavity; (ii) sublinear query time, in either the network size or the actual Dijkstra-Rank of the query at hand.
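    The baseline such oracles are measured against is time-dependent Dijkstra, where each edge's travel time is a function of the departure time. A minimal sketch under a FIFO (non-overtaking) assumption; the function names and the toy network are illustrative, not taken from the paper:

    ```python
    import heapq

    def td_dijkstra(graph, source, t0):
        """Earliest-arrival times from `source` when departing at time t0.
        `graph[u]` is a list of (v, travel_fn) pairs; travel_fn(t) is the
        time to traverse the edge when leaving u at time t (assumed FIFO)."""
        arrival = {source: t0}
        pq = [(t0, source)]
        while pq:
            t, u = heapq.heappop(pq)
            if t > arrival.get(u, float("inf")):
                continue  # stale queue entry
            for v, travel in graph[u]:
                tv = t + travel(t)
                if tv < arrival.get(v, float("inf")):
                    arrival[v] = tv
                    heapq.heappush(pq, (tv, v))
        return arrival

    # Toy network: the A->B edge is slower during a "rush hour" window.
    graph = {
        "A": [("B", lambda t: 2.0 if 8 <= t < 10 else 1.0)],
        "B": [("C", lambda t: 1.0)],
        "C": [],
    }
    print(td_dijkstra(graph, "A", 8.0))  # B is reached at 10.0, C at 11.0
    ```

    An oracle in the paper's sense precomputes structure so that queries run sublinearly in the network size, rather than re-running this whole search per query.
    
    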

    Time domain simulations of dynamic river networks

    The problem of simulating a river network is considered. A river network is taken to comprise rivers, dams/lakes, and weirs. We suggest a numerical approach with specific features that enable the correct representation of these assets. For each river, the flow of water is described by the shallow water equations, a system of hyperbolic partial differential equations, and at the junctions of the rivers, suitable coupling conditions, viewed as interior boundary conditions, are used to couple the dynamics. A different model for the dams is also presented. Numerical test cases are presented which show that the model is able to reproduce the expected dynamics of the system. Other aspects of the modelling, such as rainfall, run-off, overflow/flooding, evaporation, absorption/seepage, bed slopes, and bed friction, have not been incorporated in the model due to their specific nature.
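    The core of each river reach is a hyperbolic solver for the 1D shallow water equations. A minimal sketch of one explicit Lax-Friedrichs step (periodic boundaries via `np.roll`; the junction coupling and dam/weir models the abstract describes are omitted, and the scheme choice is an assumption, not the authors' method):

    ```python
    import numpy as np

    def shallow_water_step(h, hu, dx, dt, g=9.81):
        """One Lax-Friedrichs step for the 1D shallow water equations.
        h: water depth per cell, hu: discharge per cell."""
        def flux(h, hu):
            # Physical flux of the conserved variables (h, hu).
            return np.array([hu, hu**2 / h + 0.5 * g * h**2])
        q = np.array([h, hu])
        f = flux(h, hu)
        qn = (0.5 * (np.roll(q, 1, axis=1) + np.roll(q, -1, axis=1))
              - dt / (2 * dx) * (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)))
        return qn[0], qn[1]

    # Sanity check: still water on a flat periodic reach stays still.
    h0, hu0 = np.full(50, 2.0), np.zeros(50)
    h1, hu1 = shallow_water_step(h0, hu0, dx=1.0, dt=0.01)
    ```

    In a full network solver, the periodic boundaries would be replaced by the interior coupling conditions at junctions mentioned above.
    
    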

    Lightweight Probabilistic Deep Networks

    Even though probabilistic treatments of neural networks have a long history, they have not found widespread use in practice. Sampling approaches are often too slow even for simple networks. The size of the inputs and the depth of typical CNN architectures in computer vision only compound this problem. Uncertainty in neural networks has thus been largely ignored in practice, despite the fact that it may provide important information about the reliability of predictions and the inner workings of the network. In this paper, we introduce two lightweight approaches to making supervised learning with probabilistic deep networks practical: First, we suggest probabilistic output layers for classification and regression that require only minimal changes to existing networks. Second, we employ assumed density filtering and show that activation uncertainties can be propagated in a practical fashion through the entire network, again with minor changes. Both probabilistic networks retain the predictive power of the deterministic counterpart, but yield uncertainties that correlate well with the empirical error induced by their predictions. Moreover, the robustness to adversarial examples is significantly increased. Comment: To appear at CVPR 201
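    The first approach, a probabilistic regression output layer, amounts to predicting a mean and a variance per sample and training with the Gaussian negative log-likelihood. A minimal NumPy sketch of that loss (illustrative, not the authors' code; the assumed-density-filtering part is not shown):

    ```python
    import numpy as np

    def gaussian_nll(y, mean, log_var):
        """Gaussian negative log-likelihood for a regression head that
        predicts (mean, log_var) per sample.  Predicting log-variance
        keeps the variance positive without constraints."""
        return 0.5 * np.mean(log_var + (y - mean) ** 2 / np.exp(log_var))

    # An overconfident wrong prediction (tiny variance) costs more than
    # the same wrong prediction with an honest, larger variance.
    overconfident = gaussian_nll(0.0, 1.0, log_var=-4.0)
    calibrated = gaussian_nll(0.0, 1.0, log_var=0.0)
    ```

    This is why the resulting uncertainties can correlate with empirical error: the loss itself rewards variance estimates that match the squared residuals.
    
    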

    Hierarchical Beamforming: Resource Allocation, Fairness and Flow Level Performance

    We consider hierarchical beamforming in wireless networks. For a given population of flows, we propose computationally efficient algorithms for fair rate allocation, including proportional fairness and max-min fairness. We next propose closed-form formulas for flow level performance, for both elastic (with either proportional fairness or max-min fairness) and streaming traffic. We further assess the performance of hierarchical beamforming using numerical experiments. Since the proposed solutions have low complexity compared to conventional beamforming, our work suggests that hierarchical beamforming is a promising candidate for the implementation of beamforming in future cellular networks. Comment: 34 page
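    To fix terminology, the two fairness criteria have simple closed forms in the textbook single-resource case (flows time-sharing one server, flow i achieving rate c_i when served alone). This is only the classical special case, not the paper's hierarchical-beamforming algorithms:

    ```python
    def proportional_fair(capacities):
        """Proportional fairness: maximize sum(log r_i) subject to
        sum(r_i / c_i) <= 1.  The optimum gives each of the n flows an
        equal 1/n time share, so r_i = c_i / n (classical result)."""
        n = len(capacities)
        return [c / n for c in capacities]

    def max_min_fair(capacities):
        """Max-min fairness under the same time-sharing constraint:
        all flows get the common rate r with sum(r / c_i) = 1."""
        r = 1.0 / sum(1.0 / c for c in capacities)
        return [r] * len(capacities)
    ```

    For capacities [4, 2], proportional fairness yields rates [2, 1] (favoring the fast flow), while max-min fairness gives both flows 4/3.
    
    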

    Model Reduction and Neural Networks for Parametric PDEs

    We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically, we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
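    The model-reduction half of such a framework is typically a projection onto a low-dimensional basis computed from snapshot data (POD/PCA); the network then learns the map between the reduced coefficient spaces. A sketch of the encode/decode step only (a common construction, assumed here for illustration; the paper's exact architecture may differ):

    ```python
    import numpy as np

    def pod_basis(snapshots, r):
        """Rank-r POD basis from a snapshot matrix whose columns are
        sampled solution fields (left singular vectors via SVD)."""
        u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return u[:, :r]

    def encode(basis, x):
        return basis.T @ x   # full field -> r coefficients

    def decode(basis, c):
        return basis @ c     # r coefficients -> full field

    # Rank-3 synthetic snapshot data in a 100-dimensional state space:
    rng = np.random.default_rng(0)
    snaps = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 20))
    V = pod_basis(snaps, 3)
    x = snaps[:, 0]
    x_rec = decode(V, encode(V, x))  # exact on data in the snapshot span
    ```

    The discretization-robustness claim corresponds to the reduced dimension r, not the full grid size, controlling the learning problem.
    
    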

    Computation Alignment: Capacity Approximation without Noise Accumulation

    Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by Avestimehr et al. has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity as it incurs a penalty for approximating the wireless channel with complex-valued coefficients by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well-suited to compute-and-forward. Unlike prior constant gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh. Comment: 36 pages, to appear in IEEE Transactions on Information Theor
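    The integer-approximation penalty can be made concrete with the Nazer-Gastpar computation rate (real-valued case): a relay choosing integer coefficient vector a over channel h with power P achieves rate (1/2) log2 of the inverse of ||a||^2 - P(h.a)^2/(1 + P||h||^2). A brute-force sketch over small coefficient vectors (illustrative; the paper's alignment scheme builds on this but is not shown):

    ```python
    import itertools
    import numpy as np

    def cf_rate(h, a, P):
        """Compute-and-forward computation rate for channel h, integer
        coefficients a, and transmit power P (real-valued model)."""
        h, a = np.asarray(h, float), np.asarray(a, float)
        val = a @ a - P * (h @ a) ** 2 / (1.0 + P * (h @ h))
        return max(0.0, 0.5 * np.log2(1.0 / val))

    def best_coeffs(h, P, amax=3):
        """Exhaustively search small nonzero integer vectors for the
        rate-maximizing coefficients (fine for toy dimensions)."""
        cands = itertools.product(range(-amax, amax + 1), repeat=len(h))
        return max((a for a in cands if any(a)), key=lambda a: cf_rate(h, a, P))

    # An (almost) integer channel is well-suited to compute-and-forward:
    a_star = best_coeffs([1.0, 2.0], P=100.0)
    ```

    When h itself is integer-valued, the best a matches h (up to sign) and the penalty shrinks with P; for generic complex-valued h it does not, which is the gap the paper's time-slot matching circumvents.
    
    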