
    Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity

    The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and that the sum of the negative log universal probability of the model and the negative log probability of the data given the model should be minimized. If we restrict the model class to finite sets, then application of the ideal principle reduces to Kolmogorov's minimal sufficient statistic. In general, we show that data compression is almost always the best strategy, both in hypothesis identification and in prediction. Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory
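    A minimal sketch of the two-part code idea behind the ideal principle, on a toy Bernoulli model class (the parameter grid, the uniform prior standing in for the universal prior, and all names are illustrative assumptions, not from the paper):

```python
import math

# Toy two-part MDL criterion: pick the hypothesis H minimizing
# L(H) + L(D|H), where L(H) = -log2 prior(H) stands in for the
# (uncomputable) universal code length and L(D|H) = -log2 P(D|H)
# is the code length of the data given the model.

def two_part_code_length(data, p, prior_p):
    ones = sum(data)
    zeros = len(data) - ones
    data_bits = -(ones * math.log2(p) + zeros * math.log2(1 - p))
    return -math.log2(prior_p) + data_bits

data = [1, 1, 0, 1, 1, 1, 0, 1]            # toy binary sequence
grid = [i / 10 for i in range(1, 10)]       # candidate Bernoulli parameters
prior = 1 / len(grid)                       # uniform prior over the grid
best = min(grid, key=lambda p: two_part_code_length(data, p, prior))
print(f"selected hypothesis p = {best}")    # shortest two-part description
```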

    On the Asymptotic Efficiency of Approximate Bayesian Computation Estimators

    Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data are summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic of the same dimension as the parameter vector, p, and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution under which the variance of the estimator is of the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model. Comment: Main text shortened and proof revised. To appear in Biometrika
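    A minimal rejection-ABC sketch for a toy normal-mean model, where the sample mean plays the role of a fixed-dimensional summary statistic obeying a central limit theorem (the model, prior, tolerance, and all numerical values are illustrative assumptions, not the paper's algorithm or setup):

```python
import numpy as np

# Toy ABC rejection sampler: draw parameters from the prior, simulate data,
# keep draws whose summary statistic lands within eps of the observed summary,
# and report the mean of the resulting approximate posterior.
rng = np.random.default_rng(0)

n = 200                                  # data size
theta_true = 1.5
observed = rng.normal(theta_true, 1.0, n)
s_obs = observed.mean()                  # fixed-dimensional summary (here p = 1)

def abc_posterior_mean(n_sims=20_000, eps=0.05):
    thetas = rng.normal(0.0, 10.0, n_sims)            # draws from the prior
    sims = rng.normal(thetas[:, None], 1.0, (n_sims, n))
    s_sim = sims.mean(axis=1)                          # summary of each simulated data set
    accepted = thetas[np.abs(s_sim - s_obs) < eps]
    return accepted.mean(), accepted.size

est, n_acc = abc_posterior_mean()
print(f"ABC posterior mean ~ {est:.3f} from {n_acc} accepted draws")
```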

    Reversibility and Adiabatic Computation: Trading Time and Space for Energy

    Future miniaturization and mobilization of computing devices requires energy-parsimonious `adiabatic' computation. This is contingent on logical reversibility of computation. An example is the idea of quantum computations, which are reversible except for the irreversible observation steps. We propose to study quantitatively the exchange of computational resources, like time and space, for irreversibility in computations. Reversible simulations of irreversible computations are memory intensive. Such (polynomial-time) simulations are analysed here in terms of `reversible' pebble games. We show that Bennett's pebbling strategy uses the least additional space for the greatest number of simulated steps. We derive a trade-off for storage space versus irreversible erasure. Next we consider reversible computation itself. An alternative proof is provided for the precise expression of the ultimate irreversibility cost of an otherwise reversible computation without restrictions on time and space use. A time-irreversibility trade-off hierarchy in the exponential-time region is exhibited. Finally, extreme time-irreversibility trade-offs for reversible computations in the thoroughly unrealistic range of computable versus noncomputable time bounds are given. Comment: 30 pages, LaTeX. Lemma 2.3 should be replaced by the slightly better ``There is a winning strategy with n+2 pebbles and m-1 erasures for pebble games G with T_G = m 2^n, for all m ≥ 1'' with appropriate further changes (as pointed out by Wim van Dam). This and further work on reversible simulations as in Section 2 appears in quant-ph/970300
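    A minimal sketch of the recursive structure of Bennett's pebbling strategy on a line of 2^n nodes: to place a pebble 2^k steps ahead of an existing pebble, first pebble the midpoint, then pebble the target from the midpoint, then remove the midpoint pebble by reversing its placement. The move bookkeeping and function names are illustrative assumptions; this is not the paper's analysis, only an illustration of how few pebbles (space) are held at once at the cost of recomputation (time):

```python
# Generate the move sequence of the recursive checkpointing strategy.

def bennett(start, k, moves):
    """Pebble node start + 2**k, given a pebble already on node start."""
    if k == 0:
        moves.append(("pebble", start + 1))
        return
    mid = start + 2 ** (k - 1)
    bennett(start, k - 1, moves)    # pebble the midpoint
    bennett(mid, k - 1, moves)      # pebble the target from the midpoint
    unpebble(start, k - 1, moves)   # reversibly remove the midpoint pebble

def unpebble(start, k, moves):
    """Undo bennett(start, k, ...) by running its moves in reverse."""
    if k == 0:
        moves.append(("unpebble", start + 1))
        return
    mid = start + 2 ** (k - 1)
    bennett(start, k - 1, moves)    # re-create the midpoint pebble
    unpebble(mid, k - 1, moves)     # remove the target pebble
    unpebble(start, k - 1, moves)   # remove the midpoint pebble again

moves = []
bennett(0, 3, moves)                # simulate T = 2^3 = 8 steps
pebbled, peak = {0}, 1              # node 0 starts out pebbled
for op, node in moves:
    if op == "pebble":
        pebbled.add(node)
    else:
        pebbled.discard(node)
    peak = max(peak, len(pebbled))
print(f"{len(moves)} moves, final pebbles {sorted(pebbled)}, peak {peak}")
```

    For n = 3 the trace uses 27 moves but holds only a handful of pebbles at any time, making the space-versus-recomputation trade-off visible.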

    Three-Dimensional MHD Simulation of Caltech Plasma Jet Experiment: First Results

    Magnetic fields are believed to play an essential role in astrophysical jets, with observations suggesting the presence of helical magnetic fields. Here, we present three-dimensional (3D) ideal MHD simulations of the Caltech plasma jet experiment using a magnetic tower scenario as the baseline model. Magnetic fields consist of an initially localized dipole-like poloidal component and a toroidal component that is continuously being injected into the domain. This flux injection mimics the poloidal currents driven by the anode-cathode voltage drop in the experiment. The injected toroidal field stretches the poloidal fields to large distances, while forming a collimated jet along with several other key features. Detailed comparisons between 3D MHD simulations and experimental measurements provide a comprehensive description of the interplay among magnetic force, pressure, and flow effects. In particular, we delineate both the jet structure and the transition process that converts the injected magnetic energy to other forms. With suitably chosen parameters derived from the experiments, the jet in the simulation agrees quantitatively with the experimental jet in terms of magnetic/kinetic/inertial energy, total poloidal current, voltage, jet radius, and jet propagation velocity. Specifically, the jet velocity in the simulation is proportional to the poloidal current divided by the square root of the jet density, in agreement with both the experiment and analytical theory. This work provides a new and quantitative method for relating experiments, numerical simulations, and astrophysical observations, and demonstrates the possibility of using terrestrial laboratory experiments to study astrophysical jets. Comment: Accepted by ApJ. 37 pages, 15 figures, 2 tables
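    A hedged illustration of the velocity scaling quoted above, v_jet ∝ I_pol / sqrt(ρ), written as an Alfvén-like estimate built from the toroidal field of a current channel. The functional form with the channel radius, the order-unity prefactor, and all numerical values are assumptions for illustration only, not parameters or results from the paper or the experiment:

```python
import math

MU0 = 4e-7 * math.pi                     # vacuum permeability [H/m]

def jet_velocity_estimate(i_pol, rho, a):
    """Alfvén-like estimate: v ~ B_phi / sqrt(mu0 rho), B_phi = mu0 I / (2 pi a)."""
    b_phi = MU0 * i_pol / (2 * math.pi * a)
    return b_phi / math.sqrt(MU0 * rho)

# Hypothetical, loosely lab-plasma-like numbers for illustration:
i_pol = 60e3                             # poloidal current [A]
rho = 1e-5                               # mass density [kg/m^3]
a = 0.02                                 # current-channel radius [m]
print(f"v_jet ~ {jet_velocity_estimate(i_pol, rho, a) / 1e3:.0f} km/s")
```

    The point of the sketch is only that, for fixed geometry, doubling the poloidal current doubles the estimated jet velocity, while quadrupling the density halves it, which is the proportionality stated in the abstract.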