180 research outputs found

    The Quantity of Quantum Information and Its Metaphysics

    The quantum information introduced by quantum mechanics is equivalent to the generalization of classical information from finite to infinite series or collections. The quantity of information is the quantity of choices, measured in units of elementary choice. The qubit can be interpreted as the corresponding generalization of the bit: a choice among a continuum of alternatives. The axiom of choice is necessary for quantum information. After measurement, the coherent state is transformed into a well-ordered series of results in time. The quantity of quantum information is the ordinal corresponding to the infinite series in question. Number and being (by the mediation of time), the natural and the artificial, turn out to be no more than different hypostases of a single common essence. This implies a kind of neo-Pythagorean ontology that relates mathematics, physics, and technics immediately, by an explicit mathematical structure.
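
    As a hedged illustration of the claim that a qubit is a choice among a continuum of alternatives (standard quantum notation, not drawn from the paper itself), a pure qubit state can be written as

        \lvert \psi \rangle = \cos\!\left(\tfrac{\theta}{2}\right)\lvert 0 \rangle + e^{i\varphi}\,\sin\!\left(\tfrac{\theta}{2}\right)\lvert 1 \rangle,
        \qquad 0 \le \theta \le \pi,\quad 0 \le \varphi < 2\pi,

    so that fixing the "choice" requires a point (\theta, \varphi) on a continuum (the Bloch sphere) rather than one of two discrete values, which is the sense in which the bit is generalized.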

    Global and local Complexity in weakly chaotic dynamical systems

    In a topological dynamical system the complexity of an orbit is a measure of the amount of information (algorithmic information content) that is necessary to describe the orbit. This indicator is invariant up to topological conjugation. We consider this indicator of the local complexity of the dynamics and provide different examples of its behavior, showing how it can be useful for characterizing various kinds of weakly chaotic dynamics. We also provide criteria for finding systems with nontrivial orbit complexity (systems where the description of the whole orbit requires an infinite amount of information). We further consider a global indicator of the complexity of the system. This global indicator generalizes the topological entropy, taking into account systems where the number of essentially different orbits increases less than exponentially. We then prove that if the system is constructive (roughly speaking, if the map can be defined up to any given accuracy using a finite amount of information), the orbit complexity is everywhere less than or equal to the generalized topological entropy. Conversely, there are compact non-constructive examples where the inequality is reversed, suggesting that this notion arises naturally in this kind of complexity question.
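
    The orbit complexity described in the abstract (the algorithmic information needed to describe an orbit) is uncomputable in general; the sketch below is only a rough, hedged illustration that substitutes lossless compression for algorithmic information and uses the logistic map as a free-standing example, neither of which is taken from the paper.

        import zlib

        def symbolic_orbit(x0, steps):
            """Symbolic orbit of the logistic map x -> 4x(1-x) w.r.t. the partition [0, 1/2), [1/2, 1]."""
            x, symbols = x0, []
            for _ in range(steps):
                symbols.append('0' if x < 0.5 else '1')
                x = 4.0 * x * (1.0 - x)
            return ''.join(symbols)

        def bits_per_symbol(orbit):
            """Compressed size in bits divided by orbit length: a rough complexity rate."""
            return 8.0 * len(zlib.compress(orbit.encode(), 9)) / len(orbit)

        if __name__ == "__main__":
            print(bits_per_symbol(symbolic_orbit(0.1234, 20000)))  # chaotic seed: close to 1 bit/symbol
            print(bits_per_symbol(symbolic_orbit(0.75, 20000)))    # fixed point of the map: near 0 bits/symbol

    A typical chaotic seed yields a symbolic orbit that stays near one bit per symbol after compression, while the orbit of the fixed point compresses to almost nothing, mirroring the contrast between nontrivial and trivial orbit complexity.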

    The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy

    The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions: under some restriction it yields a Gibbs measure giving the probability that a system will be in a certain state relative to the other elements of the distribution. Because classical entropy-based Maxent collapses cases, confounding all distinct degrees of randomness and pseudo-randomness, here we take into consideration the generative mechanism of the systems in the ensemble in order to separate objects whose entropy is maximal under some restriction but which may be generated recursively from those that are actually algorithmically random, offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computability and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss practical implications of evaluating network randomness. Our analysis provides the insight that the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further study of the origin and consequences of these asymmetries, of reprogrammability, and of computation.
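
    As a hedged toy illustration of the distinction the refinement rests on (this is not the MARPA algorithm or the paper's causal calculus), the sketch below compares two binary sequences with the same, maximal symbol entropy: the Thue-Morse sequence, generated by a tiny recursive program, and a pseudo-random sequence. Compressed length is used here only as a crude, computable stand-in for algorithmic complexity.

        import math
        import random
        import zlib

        def thue_morse(n):
            """First n symbols of the Thue-Morse sequence: t(k) = parity of the binary digits of k."""
            return ''.join(str(bin(k).count('1') % 2) for k in range(n))

        def random_bits(n, seed=0):
            rng = random.Random(seed)
            return ''.join(rng.choice('01') for _ in range(n))

        def symbol_entropy(s):
            """Empirical Shannon entropy (bits/symbol) of the 0/1 frequencies."""
            p = s.count('1') / len(s)
            return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

        def compressed_bits_per_symbol(s):
            return 8.0 * len(zlib.compress(s.encode(), 9)) / len(s)

        if __name__ == "__main__":
            n = 1 << 15
            for name, s in [("thue-morse", thue_morse(n)), ("pseudo-random", random_bits(n))]:
                print(name, round(symbol_entropy(s), 3), round(compressed_bits_per_symbol(s), 3))

    Classical entropy-based Maxent treats the two sequences alike, since their symbol statistics match; the algorithmic refinement is intended to separate the recursively generated case from the algorithmically random one.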

    A decomposition method for global evaluation of Shannon entropy and local estimations of algorithmic complexity

    We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM under different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multi-dimensional objects other than strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on objects of low algorithmic randomness that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical expectation of low algorithmic randomness. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages (Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell), as well as an online algorithmic complexity calculator. Funder: Swedish Research Council (Vetenskapsrådet).
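
    A hedged sketch of the decomposition strategy described above: split a string into fixed-size blocks, look up a complexity estimate for each distinct block, and combine the estimates with a logarithmic term for repetitions. The ctm_lookup table below is a placeholder invented for illustration; actual CTM values come from the precomputed tables shipped with the released implementations and the online calculator.

        import math
        from collections import Counter

        def bdm(s, block_size, ctm_lookup):
            """Block Decomposition Method estimate: sum over distinct blocks of CTM(block) + log2(multiplicity).
            Assumes len(s) is a multiple of block_size; boundary conditions are treated in the paper."""
            blocks = [s[i:i + block_size] for i in range(0, len(s), block_size)]
            return sum(ctm_lookup[block] + math.log2(count) for block, count in Counter(blocks).items())

        if __name__ == "__main__":
            # Placeholder CTM values for 2-bit blocks, for illustration only.
            ctm_lookup = {'00': 2.0, '01': 3.0, '10': 3.0, '11': 2.0}
            print(bdm('0101010101', 2, ctm_lookup))  # one distinct block repeated: low estimate
            print(bdm('0011100100', 2, ctm_lookup))  # several distinct blocks: higher estimate

    Repetitions of a block add only a logarithmic contribution, which is what lets the estimate fall back towards Shannon-entropy-like behaviour when the blocks themselves carry no further exploitable structure.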

    Homo deceptus: How language creates its own reality

    Get PDF
    Homo deceptus is a book that brings together new ideas on language, consciousness and physics into a comprehensive theory that unifies science and philosophy in a different kind of Theory of Everything. The subject of how we are to make sense of the world is addressed in a structured and ordered manner, which starts with the recognition that scientific truths are constructed within a linguistic framework. The author argues that an epistemic foundation of natural language must be understood before laying claim to any notion of reality. This foundation begins with Ludwig Wittgenstein’s Tractatus Logico-Philosophicus and the relationship of language to formal logic. Ultimately, we arrive at an answer to the question of why people believe the things they do; this is, in effect, a modification of Alfred Tarski’s semantic theory of truth. The second major issue addressed is the ‘dreaded’ Hard Problem of Consciousness, as first stated by David Chalmers in 1995. The solution is found in the unification of consciousness, information theory and notions of physicalism. The physical world is shown to be an isomorphic representation of phenomenological conscious experience. New concepts in understanding how language operates help to explain why this relationship has been so difficult to appreciate. The inclusion of concepts from information theory shows how a digital mechanics resolves heretofore conflicting theories in physics, cognitive science and linguistics. Scientific orthodoxy is supported, but viewed in a different light. Mainstream science is not challenged, but its findings are interpreted in a manner that unifies consciousness without contradiction. Digital mechanics and formal systems of logic play central roles in combining language, consciousness and the physical world into a unified theory in which all can be understood within a single consistent framework.