    Minimum Description Length Revisited

    This is an up-to-date introduction to and overview of the Minimum Description Length (MDL) Principle, a theory of inductive inference that can be applied to general problems in statistics, machine learning and pattern recognition. While MDL was originally based on data compression ideas, this introduction can be read without any knowledge thereof. It takes into account all major developments since 2007, the last time an extensive overview was written. These include new methods for model selection and averaging and hypothesis testing, as well as the first completely general definition of MDL estimators. Incorporating these developments, MDL can be seen as a powerful extension of both penalized likelihood and Bayesian approaches, in which penalization functions and prior distributions are replaced by more general luckiness functions, average-case methodology is replaced by a more robust worst-case approach, and in which methods classically viewed as highly distinct, such as AIC vs BIC and cross-validation vs Bayes, can, to a large extent, be viewed from a unified perspective.
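
    The penalized-likelihood connection can be made concrete with a deliberately crude two-part-code sketch (Python; the assumptions are mine, not taken from the paper): the code length of the data given a model is the negative Gaussian log-likelihood at the maximum-likelihood fit, and the model cost is approximated by (k/2) log n nats for k parameters, the familiar BIC-style penalty. The data, candidate degrees and penalty form below are purely illustrative.

        import numpy as np

        def two_part_code_length(x, y, degree):
            """Crude two-part MDL score (in nats): data cost at the ML fit
            plus roughly (k/2) log n for the k fitted parameters."""
            n = len(y)
            coeffs = np.polyfit(x, y, degree)
            residuals = y - np.polyval(coeffs, x)
            sigma2 = np.mean(residuals ** 2)          # ML estimate of the noise variance
            data_cost = 0.5 * n * np.log(2 * np.pi * sigma2) + 0.5 * n
            model_cost = 0.5 * (degree + 1) * np.log(n)
            return data_cost + model_cost

        rng = np.random.default_rng(0)
        x = np.linspace(-1, 1, 200)
        y = 1.0 - 2.0 * x + 0.5 * x ** 2 + rng.normal(scale=0.1, size=x.size)

        scores = {d: two_part_code_length(x, y, d) for d in range(6)}
        print(min(scores, key=scores.get))            # degree with the shortest total code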

    The stability of parallel-propagating circularly polarized Alfvén waves revisited

    The parametric instability of parallel-propagating circularly polarized Alfvén waves (pump waves) is revisited. The stability of these waves is determined by the linearized system of magnetohydrodynamic equations with periodic coefficients. A variable substitution that reduces this system of equations to a system with constant coefficients is suggested. The system with constant coefficients is used to derive the dispersion equation that was previously obtained by many authors using different approaches. The dependence of the general stability properties on the dimensionless amplitude of the pump wave, a, and on the ratio of the sound and Alfvén speeds, b, is studied analytically. It is shown that, for any a and b, there are quantities k_1 and k_2 such that a perturbation with the dimensionless wavenumber k is unstable if k_1 < k < k_2. The quantity k_1 is a monotonically increasing function of a and, for any b, tends to a limiting value approximately equal to 1.18 as a -> infinity.

    Dimensionful deformations of Poincaré symmetries for a Quantum Gravity without ideal observers

    Quantum Mechanics is revisited as the appropriate theoretical framework for the description of the outcome of experiments that rely on the use of classical devices. In particular, it is emphasized that the limitations on the measurability of (pairs of conjugate) observables encoded in the formalism of Quantum Mechanics reproduce faithfully the "classical-device limit" of the corresponding limitations encountered in (real or gedanken) experimental setups. It is then argued that devices cannot behave classically in Quantum Gravity, and that this might raise serious problems for the search for a class of experiments described by theories obtained by "applying Quantum Mechanics to Gravity." It is also observed that using heuristic/intuitive arguments based on the absence of classical devices one is led to consider some candidate Quantum-Gravity phenomena involving dimensionful deformations of the Poincaré symmetries. Comment: 7 pages, LaTeX. (This essay received an "honorable mention" from the Gravity Research Foundation, 1998-Ed.)

    The response of self-gravitating protostellar discs to a slow reduction in cooling timescale: the fragmentation boundary revisited

    A number of previous studies of the fragmentation of self-gravitating protostellar discs have modeled radiative cooling with a cooling timescale (t_{cool}) parameterised as a simple multiple (beta_{cool}) of the local dynamical timescale. Such studies have delineated the `fragmentation boundary' in terms of a critical value of beta_{cool} (beta_{crit}), where the disc fragments if beta_{cool} < beta_{crit}. Such an approach, however, begs the question of how in reality a disc could ever be assembled with beta_{cool} < beta_{crit}. Here we adopt the more realistic approach of gradually reducing beta_{cool}, as might correspond to changes in thermal regime due to secular changes in the disc density profile. We find that when beta_{cool} is gradually reduced (on a timescale longer than t_{cool}), the disc is stabilised against fragmentation, compared with models in which beta_{cool} is reduced rapidly. We therefore conclude that a disc's ability to remain in a self-regulated, self-gravitating state (without fragmentation) is partly dependent on its thermal history, as well as its current cooling rate. Nevertheless, a slow reduction in t_{cool} appears only to lower the fragmentation boundary by about a factor of two in t_{cool} and thus only permits maximum alpha values (parameterising the efficiency of angular momentum transfer in the disc) that are about a factor of two higher than determined hitherto. Our results therefore do not undermine the notion of a fundamental upper limit to the heating rate that can be delivered by gravitational instabilities before the disc is subject to fragmentation. An important implication of this work, therefore, is that self-gravitating discs can enter the regime of fragmentation via secular evolution and it is not necessary to invoke rapid (impulsive) events to trigger fragmentation. Comment: accepted for publication in MNRAS.
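
    The cooling law referred to here gives each fluid element an internal-energy decay time t_cool = beta_cool / Omega. A minimal toy sketch of the "slow reduction" protocol (Python; this is not the authors' hydrodynamic code, and all numerical values are illustrative) ramps beta_cool down linearly over a time chosen to be long compared with the instantaneous t_cool, while integrating du/dt = -u / t_cool; in a real disc simulation this cooling term is balanced by heating from gravitational instabilities.

        def evolve_with_cooling(u0, omega, beta_start, beta_end, ramp_time, dt=1e-3):
            """Integrate du/dt = -u * omega / beta while beta is lowered linearly
            from beta_start to beta_end over ramp_time (a 'slow' reduction when
            ramp_time >> beta / omega, the instantaneous cooling time)."""
            u, t, history = u0, 0.0, []
            while t < ramp_time:
                beta = beta_start + (t / ramp_time) * (beta_end - beta_start)
                t_cool = beta / omega
                u += dt * (-u / t_cool)      # explicit Euler step of the cooling law
                history.append((t, beta, u))
                t += dt
            return history

        # Illustrative values only: ramp beta_cool from 10 to 4 over 100 dynamical times.
        history = evolve_with_cooling(u0=1.0, omega=1.0, beta_start=10.0,
                                      beta_end=4.0, ramp_time=100.0)
        print(history[-1])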

    Arithmetic coding revisited

    Over the last decade, arithmetic coding has emerged as an important compression tool. It is now the method of choice for adaptive coding on multisymbol alphabets because of its speed, low storage requirements, and effectiveness of compression. This article describes a new implementation of arithmetic coding that incorporates several improvements over a widely used earlier version by Witten, Neal, and Cleary, which has become a de facto standard. These improvements include fewer multiplicative operations, a greatly extended range of alphabet sizes and symbol probabilities, and the use of low-precision arithmetic, permitting implementation by fast shift/add operations. We also describe a modular structure that separates the coding, modeling, and probability estimation components of a compression system. To motivate the improved coder, we consider the needs of a word-based text compression program. We report a range of experimental results using this and other models. Complete source code is available.
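
    For readers unfamiliar with the mechanism, the toy sketch below (Python, exact rational arithmetic) shows the interval-narrowing idea only: each symbol shrinks the current interval in proportion to its probability, and any number inside the final interval identifies the whole message. A practical coder such as the one described in the article instead keeps low-precision integer bounds and renormalises them with shift/add operations and carry handling, none of which is reproduced here.

        from fractions import Fraction

        def cumulative(probs):
            """Map each symbol to the lower end of its probability subinterval."""
            cum, c = {}, Fraction(0)
            for s, p in probs.items():
                cum[s] = c
                c += p
            return cum

        def encode(message, probs):
            """Narrow [low, low + width) once per symbol; return the final interval."""
            cum = cumulative(probs)
            low, width = Fraction(0), Fraction(1)
            for s in message:
                low += width * cum[s]
                width *= probs[s]
            return low, width                # any x in [low, low + width) encodes the message

        def decode(x, probs, length):
            """Recover `length` symbols from a number x inside the message interval."""
            cum = cumulative(probs)
            out = []
            for _ in range(length):
                for s, p in probs.items():
                    if cum[s] <= x < cum[s] + p:
                        out.append(s)
                        x = (x - cum[s]) / p     # rescale the chosen subinterval back to [0, 1)
                        break
            return ''.join(out)

        probs = {'a': Fraction(3, 4), 'b': Fraction(1, 4)}
        low, width = encode('abba', probs)
        print(decode(low, probs, 4))             # -> abba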

    Alternating subgroups of Coxeter groups

    We study combinatorial properties of the alternating subgroup of a Coxeter group, using a presentation of it due to Bourbaki. Comment: 39 pages, 3 figures.

    MDL Denoising Revisited

    We refine and extend an earlier MDL denoising criterion for wavelet-based denoising. We start by showing that the denoising problem can be reformulated as a clustering problem, where the goal is to obtain separate clusters for informative and non-informative wavelet coefficients, respectively. This suggests two refinements: adding a code-length for the model index, and extending the model in order to account for subband-dependent coefficient distributions. A third refinement is the derivation of soft thresholding inspired by predictive universal coding with weighted mixtures. We propose a practical method incorporating all three refinements, which is shown to achieve good performance and robustness in denoising both artificial and natural signals. Comment: Submitted to IEEE Transactions on Information Theory, June 200
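
    For orientation, the sketch below (Python/NumPy) implements one commonly quoted simplified form of the earlier MDL denoising criterion that this work refines: retain the k largest-magnitude wavelet coefficients, choosing k to minimise a two-part code length built from the retained and discarded energies. The model-index cost, subband-dependent distributions and soft thresholding discussed in the abstract are all omitted, lower-order terms are dropped, and the coefficient vector is assumed to come from an orthonormal wavelet transform (e.g. pywt.wavedec); treat this as an approximate illustration, not the paper's method.

        import numpy as np

        def mdl_threshold(coeffs):
            """Pick k (number of retained coefficients) minimising a simplified
            two-part code length, then zero out everything else."""
            c = np.asarray(coeffs, dtype=float)
            n = c.size
            order = np.argsort(-np.abs(c))        # indices sorted by magnitude, descending
            energy = np.cumsum(c[order] ** 2)     # energy of the k largest coefficients
            total = energy[-1]
            best_k, best_len = 1, np.inf
            for k in range(1, n):
                retained, noise = energy[k - 1], total - energy[k - 1]
                if noise <= 0.0:
                    break                         # remaining coefficients are all zero
                code_len = (0.5 * k * np.log(retained / k)
                            + 0.5 * (n - k) * np.log(noise / (n - k)))
                if code_len < best_len:
                    best_k, best_len = k, code_len
            denoised = np.zeros_like(c)
            keep = order[:best_k]
            denoised[keep] = c[keep]
            return denoised, best_k

        # Toy example: a few large 'informative' coefficients buried in Gaussian noise.
        rng = np.random.default_rng(1)
        c = rng.normal(scale=0.2, size=256)
        c[:8] += np.array([5.0, -4.0, 3.0, -3.0, 2.0, -2.0, 1.5, -1.5])
        print(mdl_threshold(c)[1])                # number of coefficients retained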

    The born again (VLTP) scenario revisited: The mass of the remnants and implications for V4334 Sgr

    We present 1-D numerical simulations of the very late thermal pulse (VLTP) scenario for a wide range of remnant masses. We show that by taking into account the different possible remnant masses, the observed evolution of V4334 Sgr (a.k.a. Sakurai's Object) can be reproduced within the standard 1D-MLT stellar evolutionary models without the inclusion of any ad-hoc reduced mixing efficiency. Our simulations hint at a picture consistent with present observations of V4334 Sgr. From energetics, and within the standard MLT approach, we show that low-mass remnants (M ≲ 0.6 M_sun) are expected to behave markedly differently from higher-mass remnants (M ≳ 0.6 M_sun), in the sense that the latter are not expected to expand significantly as a result of the violent H-burning that takes place during the VLTP. We also assess the discrepancy in the born-again times obtained by different authors by comparing the energy that can be liberated by H-burning during the VLTP event. Comment: Submitted to MNRAS. It includes an appendix regarding the treatment of reduced convective motions within the Mixing Length Theory.