
    Integrated Neural Based System for State Estimation and Confidence Limit Analysis in Water Networks

    In this paper a simple recurrent neural network (NN) is used as the basis for constructing an integrated system capable of finding state estimates, with corresponding confidence limits, for water distribution systems. In the first phase of calculations, a neural linear-equations solver is combined with Newton-Raphson iterations to find a solution to an overdetermined set of nonlinear equations describing the water network. The mathematical model of the water system is derived from measurements and pseudomeasurements containing a certain amount of uncertainty. This uncertainty affects the accuracy to which the state estimates can be calculated. The second phase of calculations, using the same NN, quantifies the effect of measurement uncertainty on the accuracy of the derived state estimates. Rather than a single deterministic state estimate, the set of all feasible states corresponding to a given level of measurement uncertainty is calculated. The set is presented in the form of upper and lower bounds on the individual variables, and hence provides limits on the potential error of each variable. Simulations have been carried out, and results are presented, for a realistic 34-node water distribution network.
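
    The two-phase procedure can be pictured with an ordinary numerical sketch. The Python snippet below is not the authors' neural implementation: it solves the overdetermined measurement equations by Gauss-Newton least squares (phase one) and then brackets each state variable by re-solving under sampled measurement perturbations, a crude sampled stand-in for the feasible-state set described above. The model function `h`, its Jacobian `jac`, the measurement vector `z`, and the uncertainty level `eps` are hypothetical placeholders.

```python
# Illustrative sketch only -- not the paper's neural network method.
import numpy as np

def gauss_newton(h, jac, z, x0, tol=1e-8, max_iter=50):
    """Least-squares state estimate for the overdetermined system h(x) ~= z."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = h(x) - z                                     # measurement residuals
        dx = np.linalg.lstsq(jac(x), -r, rcond=None)[0]  # Gauss-Newton step
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

def confidence_limits(h, jac, z, x0, eps, n_samples=200, seed=0):
    """Bracket each state variable over sampled measurement errors in [-eps, eps]."""
    rng = np.random.default_rng(seed)
    states = np.array([
        gauss_newton(h, jac, z + rng.uniform(-eps, eps, size=z.shape), x0)
        for _ in range(n_samples)
    ])
    return states.min(axis=0), states.max(axis=0)  # lower and upper bounds
```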

    mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location

    This paper develops mfEGRA, a multifidelity active-learning method that uses data-driven, adaptively refined surrogates to locate the failure boundary in reliability analysis. This work addresses the prohibitive cost of reliability analysis via Monte Carlo sampling for expensive-to-evaluate high-fidelity models by using cheaper-to-evaluate approximations of the high-fidelity model. The method builds on Efficient Global Reliability Analysis (EGRA), a surrogate-based method that uses adaptive sampling of a single-fidelity model to refine Gaussian process surrogates around the failure boundary. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources of differing fidelities. The criterion combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the different models and on their evaluation costs relative to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA leads to computational savings of ~46% for an analytic multimodal test problem and 24% for a three-dimensional acoustic horn problem, compared to single-fidelity EGRA. We also show the effect of using a priori drawn Monte Carlo samples in the implementation for the acoustic horn problem, where mfEGRA leads to computational savings of 45% for the three-dimensional case and 48% for a rarer-event four-dimensional case, compared to single-fidelity EGRA.
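
    For orientation, the expected feasibility criterion that mfEGRA inherits from EGRA has a closed form under a Gaussian process posterior. Below is a minimal sketch, assuming the standard EGRA definition with band half-width ε = 2σ(x); the multifidelity one-step-lookahead information-gain stage that mfEGRA adds is not reproduced here.

```python
# Sketch of EGRA's expected feasibility function (EFF); illustrative only.
import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, zbar=0.0):
    """EFF at a point with GP posterior N(mu, sigma^2); zbar is the failure threshold."""
    eps = 2.0 * sigma                      # EGRA's customary band half-width
    t = (zbar - mu) / sigma
    tm = (zbar - eps - mu) / sigma
    tp = (zbar + eps - mu) / sigma
    return ((mu - zbar) * (2 * norm.cdf(t) - norm.cdf(tm) - norm.cdf(tp))
            - sigma * (2 * norm.pdf(t) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))
```

    Points with large EFF lie close to the predicted failure boundary or are highly uncertain, which is why maximizing it concentrates new samples where the boundary is still poorly resolved.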

    PF-OLA: A High-Performance Framework for Parallel On-Line Aggregation

    Online aggregation provides estimates of the final result of a computation during the actual processing. The user can stop the computation as soon as the estimate is accurate enough, typically early in the execution. This allows for interactive exploration of even the largest datasets. In this paper we introduce the first framework for parallel online aggregation in which estimation incurs virtually no overhead on top of the actual execution. We define a generic interface that can express any estimation model while completely abstracting the execution details. We design a novel estimator specifically targeted at parallel online aggregation. When executed by the framework over a massive 8 TB TPC-H instance, the estimator provides accurate confidence bounds early in the execution, even when the cardinality of the final result is seven orders of magnitude smaller than the dataset size, and without incurring overhead.
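
    The core mechanic behind any online aggregation system is a running estimator whose confidence interval the user can watch shrink. Below is a minimal single-threaded sketch of a SUM estimator, assuming tuples are scanned in random order so central-limit-theorem bounds apply; it is illustrative only, not PF-OLA's estimator or interface.

```python
# Illustrative online-aggregation estimator; not the PF-OLA framework.
import math

class OnlineSumEstimator:
    """Running estimate of SUM over a table of known size, scanned in random order."""
    def __init__(self, total_tuples):
        self.N = total_tuples   # size of the full dataset
        self.n = 0              # tuples processed so far
        self.s = 0.0            # sum of processed values
        self.s2 = 0.0           # sum of squared values

    def update(self, value):
        """Feed one tuple's value (use 0.0 for tuples failing the predicate)."""
        self.n += 1
        self.s += value
        self.s2 += value * value

    def estimate(self, z=1.96):
        """Return (estimate, half-width) of an approximate 95% confidence interval."""
        mean = self.s / self.n
        var = max(self.s2 / self.n - mean * mean, 0.0)
        half = z * self.N * math.sqrt(var / self.n)
        return self.N * mean, half
```

    The user polls `estimate()` during the scan and stops once the half-width is small enough; PF-OLA's contribution is making this kind of estimation essentially free at parallel, terabyte scale.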

    Optimal Quantum Measurements of Expectation Values of Observables

    Experimental characterization of a quantum system involves measuring expectation values of observables for a preparable state |psi> of the quantum system. Such expectation values can be measured by repeatedly preparing |psi> and coupling the system to an apparatus. For this method, the precision of the measured value scales as 1/sqrt(N) for N repetitions of the experiment. For the problem of estimating the parameter phi in an evolution exp(-i phi H), it is possible to achieve precision 1/N (the quantum metrology limit) provided that sufficient information about H and its spectrum is available. We consider the more general problem of estimating expectations of operators A with minimal prior knowledge of A. We give explicit algorithms that approach precision 1/N given a bound on the eigenvalues of A or on their tail distribution. These algorithms are particularly useful for simulating quantum systems on quantum computers because they enable efficient measurement of observables and correlation functions. Our algorithms are based on a method for efficiently measuring the complex overlap of |psi> and U|psi>, where U is an implementable unitary operator. We explicitly consider the issue of confidence levels in measuring observables and overlaps and show that, as expected, confidence levels can be improved exponentially with linear overhead. We further show that the algorithms given here can typically be parallelized with minimal increase in resource usage.
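
    The overlap subroutine mentioned above is conventionally realized by a Hadamard test, in which a single ancilla measurement has expectation Re(<psi|U|psi>), or Im(<psi|U|psi>) with an extra phase gate on the ancilla. Below is a dense-vector simulation sketch of that textbook circuit, not the paper's optimized protocol; averaging N shots gives the baseline 1/sqrt(N) precision that the paper's algorithms improve toward 1/N.

```python
# Simulated Hadamard test for <psi|U|psi>; illustrative only.
import numpy as np

def hadamard_test(psi, U, imaginary=False, shots=10_000, rng=None):
    """Estimate Re (or Im) of <psi|U|psi> from `shots` simulated ancilla measurements."""
    rng = rng if rng is not None else np.random.default_rng()
    phase = -1j if imaginary else 1.0          # S^dagger on the ancilla selects Im
    branch0 = (psi + phase * (U @ psi)) / 2.0  # system state paired with ancilla |0>
    p0 = min(max(np.vdot(branch0, branch0).real, 0.0), 1.0)
    counts0 = rng.binomial(shots, p0)          # number of ancilla |0> outcomes
    return 2.0 * counts0 / shots - 1.0         # estimate of P(0) - P(1)
```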

    Gravitational waves: search results, data analysis and parameter estimation

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation comprised three lively sessions of talks by 13 presenters, along with 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate-mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data of the advanced detector network, including methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

    Probabilistic Motion Estimation Based on Temporal Coherence

    We develop a theory for the temporal integration of visual motion, motivated by psychophysical experiments. The theory proposes that input data are temporally grouped and used to predict and estimate the motion flows in the image sequence. This temporal grouping can be considered a generalization of the data-association techniques used by engineers to study motion sequences. Our temporal-grouping theory is expressed in terms of the Bayesian generalization of standard Kalman filtering. To implement the theory we derive a parallel network which shares some properties of cortical networks. Computer simulations of this network demonstrate that our theory qualitatively accounts for psychophysical experiments on motion occlusion and motion outliers.
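
    As background, the standard Kalman filter that this temporal-grouping theory generalizes alternates a predict step through the dynamics with a Bayesian update against each new measurement. A minimal linear-Gaussian sketch follows; the paper's data-association and grouping machinery is not reproduced.

```python
# One cycle of a standard linear-Gaussian Kalman filter; illustrative only.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """Predict with dynamics F (noise Q), then update on measurement z via model H (noise R)."""
    # Predict: propagate the state estimate and its uncertainty forward in time.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh the prediction against the new measurement.
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```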
