
    Kolmogorov's Structure Functions and Model Selection

    In 1974 Kolmogorov proposed a non-probabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The "structure function" of the given data expresses the relation between the complexity-level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best-fitting model in the class, irrespective of whether the "true" model is in the model class considered or not. In this setting, this happens with certainty, rather than with high probability as in the classical case. We precisely quantify the goodness of fit of an individual model with respect to individual data. We show that, within the obvious constraints, every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the "algorithmic minimal sufficient statistic." (Comment: 25 pages LaTeX, 5 figures. In part in Proc. 47th IEEE FOCS; this final version, with more explanations and cosmetic modifications, to appear in IEEE Trans. Inform. Theory.)
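    For reference, the structure function sketched above admits a compact standard formulation (a reconstruction consistent with the abstract, where $K(S)$ denotes the Kolmogorov complexity of the finite set $S$ and $x$ the data string):

        h_x(\alpha) = \min_S \{ \log |S| : x \in S,\ K(S) \le \alpha \}

    The best-fitting model at complexity level $\alpha$ is then a witness $S$ of this minimum.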

    Reliability and maintainability assessment factors for reliable fault-tolerant systems

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Described are the numerous factors that potentially have a degrading effect on system reliability, and the ways in which these factors, peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

    Phenomenology of hydromagnetic turbulence in a uniformly expanding medium

    A simple phenomenology is developed for the decay and transport of turbulence in a constant-speed, uniformly expanding medium. The fluctuations are assumed to be locally incompressible, and of either the hydrodynamic or the non-Alfvénic magnetohydrodynamic (MHD) type. In order to represent local effects of nonlinearities, a simple model of the Kármán-Dryden type for locally homogeneous turbulent decay is adopted. A detailed discussion is given of the parameters of this familiar one-point hydrodynamic closure, which has recently been shown to be applicable to non-Alfvénic MHD as well. The effects of the large-scale flow and expansion are incorporated using a two-scale approach, in which assumptions of particular turbulence symmetries provide simplifications. The derived model is tractable and provides a basis for understanding turbulence in the outer heliosphere, as well as in other astrophysical applications.
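    For orientation, a decay model of the Kármán-Dryden type is usually written as two coupled one-point equations for the turbulence amplitude $Z$ and a similarity length scale $\lambda$ (a standard form with adjustable constants $\alpha$ and $\beta$; the paper's exact parameterization may differ):

        \frac{dZ^2}{dt} = -\alpha \frac{Z^3}{\lambda}, \qquad \frac{d\lambda}{dt} = \beta Z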

    Non-Boolean probabilities and quantum measurement

    A non-Boolean extension of the classical probability model is proposed. The non-Boolean probabilities reproduce typical quantum phenomena. The proposed model is more general and more abstract, but easier to interpret, than the quantum mechanical Hilbert space formalism, and it exhibits a particular phenomenon (state-independent conditional probabilities) which may provide new opportunities for an understanding of the quantum measurement process. Examples of the proposed model are provided, using Jordan operator algebras. (Comment: 12 pages; the original publication is available at http://www.iop.or)
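    To make the contrast with Boolean (classical) probability concrete, here is a minimal numerical sketch in the Hilbert-space formalism that the paper generalizes; the state, the projectors, and the Lüders sequential rule used here are illustrative assumptions, not the paper's Jordan-algebra construction:

        import numpy as np

        # Sequential "measurements" with non-commuting projectors violate the
        # Boolean identity P(A then B) = P(B then A) that classical events satisfy.

        # Projectors onto |0> and onto |+> = (|0> + |1>)/sqrt(2)
        P_a = np.array([[1.0, 0.0], [0.0, 0.0]])
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        P_b = np.outer(plus, plus)

        rho = np.array([[0.25, 0.0], [0.0, 0.75]])  # an arbitrary mixed state

        def seq_prob(P1, P2, rho):
            """Probability of outcome P1 followed by P2 (Lüders rule)."""
            return float(np.trace(P2 @ P1 @ rho @ P1))

        print(seq_prob(P_a, P_b, rho))  # P(A then B) = 0.125
        print(seq_prob(P_b, P_a, rho))  # P(B then A) = 0.25 -- order matters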

    Interpretations of Probability


    CARE 3, Phase 1, Volume 1

    A computer program to aid in assessing the reliability of fault-tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual-mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault-detector characteristics, and of the subsequent isolation and recovery delay statistics.
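    As a rough illustration of the kind of expression involved (not the actual CARE formulation), the reliability of an m-of-n redundant configuration with exponentially distributed unit failures and an imperfect-coverage factor might be sketched as follows; the function names and the way coverage discounts tolerated failures are assumptions for the example:

        from math import comb, exp

        def unit_reliability(lam, t):
            """Exponential (constant failure rate) unit reliability."""
            return exp(-lam * t)

        def m_of_n_reliability(m, n, lam, t, coverage=1.0):
            """Reliability of an m-of-n redundant configuration.

            `coverage` crudely discounts each tolerated failure by the
            probability that it is detected and isolated in time -- a
            stand-in for the detailed coverage model in the abstract.
            """
            r = unit_reliability(lam, t)
            total = 0.0
            for k in range(m, n + 1):
                tolerated = n - k
                total += comb(n, k) * r**k * (1 - r)**(n - k) * coverage**tolerated
            return total

        # Triple modular redundancy (2-of-3), lambda = 1e-4 / hr, 10-hour mission
        print(m_of_n_reliability(2, 3, 1e-4, 10.0, coverage=0.99))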

    Analysis of relative dispersion of two particles by Lagrangian stochastic models and DNS methods

    Comparisons of the Q1D model against the known Lagrangian stochastic well-mixed quadratic-form models and the moments approximation models are presented. In the case of turbulence at modestly large Reynolds numbers (Re_λ ≃ 240), the Q1D model is compared with DNS data. While in qualitative agreement with the DNS data, the Q1D model predicts a higher rate of separation. The realizability of the Q1D model, extracted from the transport equation with a quadratic form of the conditional acceleration, is shown.
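    For readers unfamiliar with the setting, a bare-bones Lagrangian stochastic sketch of two-particle separation follows; this is a generic one-dimensional Langevin-type toy with made-up parameters, not the Q1D or quadratic-form models compared in the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        def pair_dispersion(n_pairs=5_000, dt=1e-3, n_steps=2_000,
                            T_L=1.0, sigma_u=1.0, r0=0.01):
            """Crude 1D Langevin model for the separation r of a particle pair.

            The relative velocity u relaxes on a Lagrangian time scale T_L
            and is forced by white noise, so that u stays stationary with
            variance sigma_u**2; the separation r is advected by u.
            """
            r = np.full(n_pairs, r0)
            u = rng.normal(0.0, sigma_u, n_pairs)
            for _ in range(n_steps):
                u += (-u / T_L) * dt + sigma_u * np.sqrt(2 * dt / T_L) * rng.normal(size=n_pairs)
                r += u * dt
            return np.mean(r**2)

        print(pair_dispersion())  # mean-square separation at t = n_steps * dt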

    A study of nycthemeral gene expression in the light of evolution

    Circadian clocks are now an important part of the understanding of biological systems. They are ubiquitous, found in a wide range of biological processes, from molecular systems to behavior, and are found almost everywhere in nature: in animals, plants, bacteria, and fungi. This thesis focuses on biological systems that respond to factors oscillating on a 24-hour time scale. The detection of genes expressed with a periodicity of 24 hours remains a complicated aspect of analytical work. We show that most detection methods are efficient only for strong signals and that, outside of these genes, the algorithms seem to detect rhythmic genes in a rather random way. We have also tried to understand why genes have periodic variations in the amount of RNA or protein they encode. Indeed, 20% to 50% of cyclically accumulated (i.e., nycthemeral) proteins are translated from non-oscillating mRNAs, and conversely, many mRNAs oscillate while the proteins they encode do not. Why is that? My results suggest that the nycthemeral variation of proteins concerns, on average, highly expressed proteins, which are on average costlier for the cell to produce (in terms of energy and molecular material) than proteins produced in a non-rhythmic way. Moreover, these rhythmic proteins would be even more expensive to produce if the cell had to constantly maintain a sufficiently high effective level of them to ensure their function. The costs of protein production are large enough to be under natural selection, whereas the costs of mRNA production are not. So why do cells periodically produce some mRNAs? My results suggest that periodic oscillation in mRNA quantity concerns genes that have, on average, weaker cell-to-cell variability (noise) than genes with constant mRNA levels. Since the direction of causality is unclear, it is possible that the rhythmicity of mRNAs optimizes expression precision for noise-sensitive functions over a window of time, repeatedly, every 24 hours. Finally, mRNA rhythmicity concerns genes that have undergone strong purifying selection. This strong purifying selection does not seem to concern genes with periodic protein levels, although there is insufficient data to go further in formulating an evolutionary explanation. Overall, I suggest the hypothesis that rhythmicity of gene expression provides an adaptive advantage only to species living in highly changing environments (over 24 hours). In such environments, i.e., for a large part of marine and terrestrial ecosystems, it is possible that the rhythmicity of gene expression allowed the preservation of complex and costly new properties that would otherwise have been eliminated. The evolutionary trade-offs take into account the advantages provided by the function, its expression costs, and the precision required, but perhaps also the variability of expression leading to phenotypic diversity that improves adaptability in a fluctuating environment.
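    As an illustration of the detection problem discussed above, a common baseline is harmonic (cosinor) regression against a 24-hour period; this sketch is a generic example of that approach, not one of the specific algorithms evaluated in the thesis:

        import numpy as np
        from scipy import stats

        def cosinor_pvalue(times_h, expression, period_h=24.0):
            """F-test for a 24-hour harmonic component in a gene's time course.

            Fits expression ~ mesor + a*cos(wt) + b*sin(wt) by least squares
            and tests the joint significance of (a, b) against a flat model.
            """
            w = 2 * np.pi / period_h
            X = np.column_stack([np.ones_like(times_h),
                                 np.cos(w * times_h),
                                 np.sin(w * times_h)])
            beta = np.linalg.lstsq(X, expression, rcond=None)[0]
            rss_full = np.sum((expression - X @ beta) ** 2)
            rss_null = np.sum((expression - expression.mean()) ** 2)
            df1, df2 = 2, len(times_h) - 3
            F = ((rss_null - rss_full) / df1) / (rss_full / df2)
            return 1.0 - stats.f.cdf(F, df1, df2)

        # Example: samples every 4 h over 48 h, weak rhythm plus noise
        t = np.arange(0, 48, 4.0)
        y = 10 + 0.5 * np.cos(2 * np.pi * t / 24) + np.random.default_rng(1).normal(0, 0.5, t.size)
        print(cosinor_pvalue(t, y))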

    A Theory of Networks for Approximation and Learning

    Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, as solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions, and to several neural network algorithms, such as Kanerva's associative memory, backpropagation, and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
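    To make the construction concrete, here is a minimal sketch of a Gaussian radial basis function network of the kind described; the fixed, evenly spaced centers and the single shared width are simplifying assumptions for the example, not the paper's full GRBF scheme with synthesized centers:

        import numpy as np

        def fit_rbf(X, y, centers, width, reg=1e-6):
            """Least-squares fit of f(x) = sum_i c_i * exp(-||x - t_i||^2 / (2 width^2)).

            `reg` adds the small ridge penalty that regularization theory
            motivates; the centers t_i play the role of the prototypes.
            """
            G = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                       / (2 * width**2))
            return np.linalg.solve(G.T @ G + reg * np.eye(len(centers)), G.T @ y)

        def predict_rbf(Xq, centers, width, c):
            G = np.exp(-((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                       / (2 * width**2))
            return G @ c

        # 1D toy problem: learn sin(x) from 30 noisy examples with 10 centers
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 2 * np.pi, (30, 1))
        y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 30)
        centers = np.linspace(0, 2 * np.pi, 10)[:, None]
        c = fit_rbf(X, y, centers, width=0.8)
        print(predict_rbf(np.array([[np.pi / 2]]), centers, 0.8, c))  # ~1.0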