    Ergodicity and Accuracy of Optimal Particle Filters for Bayesian Data Assimilation

    Data assimilation refers to the methodology of combining dynamical models and observed data with the objective of improving state estimation. Most data assimilation algorithms are viewed as approximations of the Bayesian posterior (filtering distribution) on the signal given the observations. Some of these approximations are controlled, such as particle filters, which may be refined to produce the true filtering distribution in the large particle number limit, and some are uncontrolled, such as ensemble Kalman filter methods, which do not recover the true filtering distribution in the large ensemble limit. Other data assimilation algorithms, such as cycled 3DVAR methods, may be thought of as controlled estimators of the state in the small observational noise scenario, but are also uncontrolled in general in relation to the true filtering distribution. For particle filters and ensemble Kalman filters it is of practical importance to understand how and why data assimilation methods can be effective when used with a fixed small number of particles, since for many large-scale applications it is not practical to deploy algorithms close to the large particle number asymptotic limit. In this paper, the authors address this question for particle filters and, in particular, study their accuracy (in the small noise limit) and ergodicity (for noisy signal and observation) without appealing to the large particle number limit. The authors first review the accuracy and minorization properties of the true filtering distribution, working in the setting of conditional Gaussianity for the dynamics-observation model. They then show that these properties are inherited by optimal particle filters for any fixed number of particles, and use the minorization to establish ergodicity of the filters. For completeness, they also prove large particle number consistency results for the optimal particle filters by writing the update equations for the underlying distributions as recursions. In addition to the optimal particle filter with standard resampling, they derive all the above results for what they term the Gaussianized optimal particle filter and show that its theoretical properties are favorable when compared to those of the standard optimal particle filter.
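
    To make the update concrete, here is a minimal sketch of one assimilation step of an optimal-proposal particle filter in the conditionally Gaussian setting the abstract describes (signal x' = m(x) + xi with xi ~ N(0, Sigma), observation y = H x' + eta with eta ~ N(0, Gamma)). The names `m`, `Sigma`, `H`, `Gamma` are ours, and the Gaussianized variant analyzed by the authors is not reproduced.

```python
import numpy as np

def opf_step(particles, y, m, Sigma, H, Gamma, rng):
    """One step of the optimal-proposal particle filter for the model
        x' = m(x) + xi,  xi ~ N(0, Sigma);   y = H x' + eta,  eta ~ N(0, Gamma).
    Illustrative sketch only; names and structure are ours, not the paper's."""
    N = particles.shape[0]
    Si = np.linalg.inv(Sigma)
    Gi = np.linalg.inv(Gamma)
    C = np.linalg.inv(Si + H.T @ Gi @ H)          # optimal-proposal covariance
    S_pred_inv = np.linalg.inv(H @ Sigma @ H.T + Gamma)

    logw = np.empty(N)
    proposed = np.empty_like(particles)
    for i, x in enumerate(particles):
        mx = m(x)
        r = y - H @ mx                            # innovation under p(y | x)
        logw[i] = -0.5 * r @ S_pred_inv @ r       # predictive log-likelihood
        mu = C @ (Si @ mx + H.T @ Gi @ y)         # optimal-proposal mean
        proposed[i] = rng.multivariate_normal(mu, C)

    w = np.exp(logw - logw.max())
    w /= w.sum()
    return proposed[rng.choice(N, size=N, p=w)]   # multinomial resampling
```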

    Robust failure detection filters

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. The analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles, and the dynamics of these filters were very hard to analyze. By contrast, the design of detection filters with a number of modes equal to the number of sensors was trivial: such filters can be configured to detect any number of actuator failure events, their dynamics were very easy to analyze, and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.
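
    As background on how such filters signal failures, the sketch below shows a generic observer-based residual generator: a fault in actuator j drives the residual along a known output direction, which a decision rule can match against. The beam model, gains, and output transformation from the report are not reproduced; this is only the residual-generation skeleton, with hypothetical unit-norm `signatures`.

```python
import numpy as np

def residual_generator(A, B, C, L, u_seq, y_seq, x0):
    """Discrete-time Luenberger observer producing residuals r_k = y_k - C xhat_k.
    In a detection filter, the gain L is designed so that a failure of actuator j
    drives r_k along a fixed, known output direction (design step not shown)."""
    xhat = x0.copy()
    residuals = []
    for u, y in zip(u_seq, y_seq):
        r = y - C @ xhat
        residuals.append(r)
        xhat = A @ xhat + B @ u + L @ r     # observer update with output injection
    return np.array(residuals)

def match_fault(r, signatures, tol=0.1):
    """Attribute a residual to the actuator whose unit signature direction it
    aligns with best, if |cos(angle)| exceeds 1 - tol; otherwise report None."""
    r_norm = np.linalg.norm(r)
    if r_norm == 0:
        return None
    scores = [abs(s @ r) / r_norm for s in signatures]
    j = int(np.argmax(scores))
    return j if scores[j] > 1 - tol else None
```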

    A Class of Mean-field LQG Games with Partial Information

    The large-population system consists of a large number of small agents whose individual behavior and mass effect are interrelated via their state-average. The mean-field game provides an efficient way to obtain decentralized strategies for a large-population system when studying its dynamic optimization. Unlike other work on large-population systems, the present paper has the following distinctive features. First, our setting includes a partial information structure for the large-population system, which is natural from the standpoint of real applications. Specifically, two cases of partial information are considered: the partial filtration case (see Sections 2 and 3), where the information available to the agents is the filtration generated by an observable component of the underlying Brownian motion; and the noisy observation case (Section 4), where each individual agent has access to an additive white-noise observation of its own state. It is also new in the filtering model that the sensor function may depend on the state-average. Second, in both cases the limiting state-averages become random, and the filtering equations for the individual states must be formalized to obtain the decentralized strategies. Moreover, the limiting average of the state filters must be analyzed, which is also new. This makes our analysis very different from full-information arguments for large-population systems. Third, the consistency conditions are equivalent to the well-posedness of certain Riccati equations and do not involve the fixed-point analysis used in other mean-field games. The ε-Nash equilibrium properties are also presented. Comment: 19 pages.
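
    As an illustration of the kind of computation the consistency conditions reduce to, here is a minimal sketch that integrates a single generic LQG Riccati ODE backward in time with SciPy. The paper's actual (possibly coupled) equations and coefficients are not reproduced; all symbols here are generic placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_riccati(A, B, Q, R, G, T, n):
    """Integrate the LQG Riccati ODE
        -dP/dt = A'P + PA - P B R^{-1} B' P + Q,   P(T) = G,
    backward in time. Well-posedness of equations of this type is the kind of
    condition the abstract refers to; this sketch treats one generic equation."""
    Rinv = np.linalg.inv(R)

    def rhs(t, p):
        P = p.reshape(n, n)
        dP = -(A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)
        return dP.ravel()

    # solve_ivp integrates backward when the time span is decreasing
    sol = solve_ivp(rhs, (T, 0.0), G.ravel(), dense_output=True)
    return sol  # sol.sol(t).reshape(n, n) gives P(t)
```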

    A numerical study of the alpha model for two-dimensional magnetohydrodynamic turbulent flows

    We explore some consequences of the "alpha model," also called the "Lagrangian-averaged" model, for two-dimensional incompressible magnetohydrodynamic (MHD) turbulence. This model is an extension of the smoothing procedure in fluid dynamics which filters velocity fields locally while leaving their associated vorticities unsmoothed, and has proved useful for high Reynolds number turbulence computations. We consider several known effects (selective decay, dynamic alignment, inverse cascades, and the probability distribution functions of fluctuating turbulent quantities) in magnetofluid turbulence and compare numerical solutions of the primitive MHD equations with their alpha-model counterparts for the same flows, in regimes where the available resolution is adequate to explore both. The hope is to justify the use of the alpha model in regimes that lie outside currently available resolution, as will be the case in particular in three-dimensional geometry or for magnetic Prandtl numbers differing significantly from unity. Using direct numerical simulations with a standard, fully parallelized pseudo-spectral method and periodic boundary conditions in two space dimensions, we focus our investigation on the role that this Lagrangian-averaged modeling of the small scales plays in the large-scale dynamics of MHD turbulence. Several flows are examined, and for all of them one can conclude that the statistical properties of the large-scale spectra are recovered, whereas small-scale detailed phase information (such as the location of structures) is lost. Comment: 22 pages, 20 figures.
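
    For reference, the alpha-model smoothing itself is an inversion of the Helmholtz operator, u_s = (1 - alpha^2 Laplacian)^{-1} u, which on a periodic domain is a pointwise division in Fourier space. A minimal sketch of the filtering step only, not of the full Lagrangian-averaged MHD solver used in the paper:

```python
import numpy as np

def helmholtz_smooth(field, alpha, L=2 * np.pi):
    """Apply the alpha-model (Lagrangian-averaged) smoothing
        u_s = (1 - alpha^2 Laplacian)^{-1} u
    to a 2-D periodic scalar field via FFT: in Fourier space the inverse
    operator is division by (1 + alpha^2 |k|^2)."""
    n = field.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    fhat = np.fft.fft2(field)
    return np.real(np.fft.ifft2(fhat / (1.0 + alpha**2 * k2)))
```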

    The Initial Mass Function of the Orion Nebula Cluster across the H-burning limit

    We present a new census of the Orion Nebula Cluster (ONC) over a large field of view (>30'x30'), significantly increasing the known population of stellar and substellar cluster members with precisely determined properties. We develop and exploit a technique to determine stellar effective temperatures from optical colors, nearly doubling the previously available number of objects with effective temperature determinations in this benchmark cluster. Our technique utilizes colors from deep photometry in the I-band and in two medium-band filters at λ ≈ 753 and 770 nm, which accurately measure the depth of a molecular feature present in the spectra of cool stars. From these colors we can derive effective temperatures with a precision corresponding to better than one-half spectral subtype, and, importantly, this precision is independent of the extinction to the individual stars. Also, because this technique utilizes only photometry redward of 750 nm, the results are only mildly sensitive to optical veiling produced by accretion. Completing our census with previously available data, we place some 1750 sources in the Hertzsprung-Russell diagram and assign masses and ages down to 0.02 solar masses. At faint luminosities, we detect a large population of background sources which is easily separated in our photometry from the bona fide cluster members. The resulting initial mass function of the cluster has good completeness well into the substellar mass range, and we find that it declines steeply with decreasing mass. This suggests a deficiency of newly formed brown dwarfs in the cluster compared to the Galactic disk population. Comment: 16 pages, 18 figures. Accepted for publication in The Astrophysical Journal.
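
    A sketch of the idea behind the photometric technique: the [753]-[770] color indexes the depth of the molecular (TiO) absorption feature and is nearly reddening-free because the two bands are so close in wavelength, so effective temperature can be read off an empirical calibration curve. The calibration arrays below are hypothetical placeholders, not the paper's calibration.

```python
import numpy as np

def teff_from_tio_color(m753, m770, calib_color, calib_teff):
    """Estimate effective temperature from the [753]-[770] medium-band color,
    which measures the depth of a TiO feature in cool stars and is nearly
    extinction-independent. `calib_color`/`calib_teff` stand in for an
    empirical calibration (hypothetical here)."""
    color = np.asarray(m753) - np.asarray(m770)
    # np.interp needs an increasing x-grid; sort the calibration accordingly
    order = np.argsort(calib_color)
    return np.interp(color, np.asarray(calib_color)[order],
                     np.asarray(calib_teff)[order])
```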

    Optimizing weak lensing mass estimates for cluster profile uncertainty

    Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M_200m due to the inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in cluster structure due to scatter in concentration, asphericity, and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement M_ap that minimizes the mass estimate variance <(M_ap - M_200m)^2> in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on M_ap filters optimized for circular NFW-profile clusters in the presence of uncorrelated large-scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. We briefly discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs. Comment: 11 pages, 5 figures; accepted by MNRAS.
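
    The underlying estimator type can be illustrated with a generic minimum-variance unbiased linear (matched) filter: given the expected shear profile per unit mass and a covariance that bundles all the noise sources named above, the optimal weights follow in closed form. This is only a sketch of the idea; the paper's estimator is calibrated on N-body halos and treats profile variation more carefully.

```python
import numpy as np

def matched_filter_mass(shear, template, cov):
    """Minimum-variance unbiased linear mass estimate M_ap = w @ shear, where
    `template` is the expected tangential-shear profile per unit mass in radial
    bins and `cov` collects the noise terms (shape noise, uncorrelated LSS,
    halo-to-halo profile scatter). Generic GLS sketch, not the authors' code."""
    Ci = np.linalg.inv(cov)
    w = Ci @ template / (template @ Ci @ template)   # unbiased: w @ template = 1
    return w @ shear, w
```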

    Low Area and Delay Implementation of Error Correcting and Error Detecting Code Using Reversible Gate

    Digital filters are widely used in signal processing and communication systems. In some cases, the reliability of those systems is critical, and fault-tolerant filter implementations are needed. Over the years, many techniques that exploit the filters’ structure and properties to achieve fault tolerance have been proposed. As technology scales, it enables more complex systems that incorporate many filters. In those complex systems, it is common for some of the filters to operate in parallel, for example, by applying the same filter to different input signals. Recently, a simple technique that exploits the presence of parallel filters to achieve fault tolerance has been presented. In this brief, that idea is generalized to show that parallel filters can be protected using error correction codes (ECCs) in which each filter is the equivalent of a bit in a traditional ECC. This new scheme allows more efficient protection when the number of parallel filters is large. The technique is evaluated using a case study of parallel finite impulse response filters, showing its effectiveness in terms of protection and implementation cost.
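
    A didactic sketch of the scheme: because FIR filtering is linear, a redundant filter applied to a sum of inputs equals the sum of the corresponding outputs, so redundant filters can play the role of parity bits. Below, four data filters are protected by three parity filters in a Hamming-like arrangement; a single faulty data filter is located by the syndrome and its output reconstructed. All names and the fault-injection hook are ours.

```python
import numpy as np

def fir(x, h):
    """Causal FIR filter, output truncated to the input length."""
    return np.convolve(x, h)[: len(x)]

def protected_parallel_fir(xs, h, faulty=None):
    """Run 4 parallel FIR filters protected by 3 parity filters. `faulty`
    optionally corrupts one data filter's output to demonstrate detection
    and correction (sketch of the abstract's idea, not the paper's design)."""
    z = [fir(x, h) for x in xs]                    # 4 data filters
    if faulty is not None:
        z[faulty] = z[faulty] + 1.0                # inject a fault
    # parity filters over input subsets, exploiting linearity of the FIR
    subsets = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]
    p = [fir(xs[a] + xs[b] + xs[c], h) for a, b, c in subsets]
    # syndrome: which parity checks fail
    syn = tuple(int(not np.allclose(p[k], z[a] + z[b] + z[c]))
                for k, (a, b, c) in enumerate(subsets))
    # each data filter belongs to a distinct set of checks, hence a
    # distinct syndrome; weight-1 syndromes indicate a parity filter fault
    locate = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 2, (0, 1, 1): 3}
    bad = locate.get(syn)
    if bad is not None:                            # reconstruct the bad output
        k, (a, b, c) = next((k, s) for k, s in enumerate(subsets) if bad in s)
        others = [i for i in (a, b, c) if i != bad]
        z[bad] = p[k] - z[others[0]] - z[others[1]]
    return z, bad
```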