    Algorithmic randomness and stochastic selection function

    We prove algorithmic-randomness versions of two classical theorems on subsequences of normal numbers. The first is the Kamae-Weiss theorem (Kamae 1973), which characterizes the selection functions that preserve normality. The second is the theorem of Steinhaus (1922), which characterizes normality in terms of subsequences. Van Lambalgen (1987) conjectured an algorithmic analogue of the Kamae-Weiss theorem, stated in terms of algorithmic randomness and complexity. In this paper we consider two types of algorithmically random sequences: Martin-Löf (ML) random sequences and sequences of maximal complexity rate. We then prove algorithmic-randomness versions of the corresponding classical theorems.
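    As a toy illustration of the objects involved (a sketch, not taken from the paper), the code below applies a past-dependent selection rule to a sequence of fair coin flips, used here only as a stand-in for an algorithmically random sequence, and checks that the frequency of ones is preserved in the selected subsequence. All names are illustrative.

        # Sketch: a selection rule decides, from the bits seen so far, whether
        # to select the NEXT bit. A normality-preserving rule should leave the
        # block frequencies of the selected subsequence unchanged.
        import random

        def select_subsequence(bits, rule):
            """Apply a past-dependent selection rule: rule(prefix) -> bool."""
            out, prefix = [], []
            for b in bits:
                if rule(prefix):      # decision depends only on the past
                    out.append(b)
                prefix.append(b)
            return out

        # Example rule: select the next bit whenever the previous bit was 1.
        rule = lambda prefix: bool(prefix) and prefix[-1] == 1

        # Stand-in for a random sequence: i.i.d. fair coin flips.
        bits = [random.randint(0, 1) for _ in range(100_000)]
        sub = select_subsequence(bits, rule)
        print(sum(bits) / len(bits))  # ~0.5
        print(sum(sub) / len(sub))    # ~0.5: frequency of ones preserved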

    On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes: A New Perspective

    The problem of exact maximum-likelihood (ML) decoding of general linear codes is well known to be NP-hard. In this paper, we show that exact ML decoding of a class of asymptotically good low-density parity-check (LDPC) codes, namely expander codes, over binary symmetric channels (BSCs) is possible with average-case polynomial complexity. This offers a new way of looking at the complexity of exact ML decoding for communication systems in which the randomness of the channel plays a fundamental role. More precisely, for any bit-flipping probability p in a nontrivial range, there exists a rate region of non-zero support and a family of asymptotically good codes that achieve an error probability decaying exponentially in the block length n while admitting exact ML decoding in average-case polynomial time. As p approaches zero, this rate region approaches the Shannon capacity. Similar results extend to AWGN channels, suggesting that it may be feasible to eliminate the error-floor phenomenon associated with belief-propagation decoding of LDPC codes in the high-SNR regime. The derivations are based on a hierarchy of ML certificate decoding algorithms that adapt to the channel realization. In this process, we propose a new, efficient O(n^2) ML certificate algorithm based on max-flow. Moreover, exact ML decoding of the broader class of codes constructed from LDPC codes with regular left degree, of which the considered expander codes are a special case, remains NP-hard, giving an interesting contrast between worst-case and average-case complexity.
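    To pin down the problem being solved (a minimal sketch under toy assumptions, not the paper's algorithm): over a BSC with crossover probability p < 1/2, exact ML decoding is equivalent to finding the codeword nearest to the received word in Hamming distance. The brute-force search below is exponential in the code dimension; the paper's point is that for expander codes this can instead be done in average-case polynomial time. The tiny parity-check matrix is invented for illustration.

        # Exact ML decoding over a BSC (p < 1/2) = minimum-Hamming-distance
        # decoding. Brute force over all codewords, exponential in general;
        # shown only to make the decoding problem itself concrete.
        from itertools import product

        H = [[1, 1, 0, 1, 0],   # toy parity-check matrix, one check per row
             [0, 1, 1, 0, 1]]

        def is_codeword(c):
            return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

        def ml_decode_bsc(received):
            """Return the codeword closest to `received` in Hamming distance."""
            codewords = [c for c in product([0, 1], repeat=len(received))
                         if is_codeword(c)]
            return min(codewords,
                       key=lambda c: sum(a != b for a, b in zip(c, received)))

        print(ml_decode_bsc([1, 0, 1, 1, 0]))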

    Generative theatre of totality

    Generative art can be used to create complex multisensory and multimedia experiences within predetermined aesthetic parameters, characteristic of the performing arts and remarkably well suited to Moholy-Nagy's vision of a Theatre of Totality. In generative artworks the artist usually takes on the role of an experience-framework designer, and the system evolves freely within that framework and its defined aesthetic boundaries. Most generative art is concentrated in the visual arts, music and literature; there appears to be little relevant work exploring its cross-medium potential, and most generative art outcomes are abstract visual or audio works. The goal of this article is to propose a model for the creation of generative performances within the scope of the Theatre of Totality, derived from stochastic Lindenmayer systems, with mapping techniques proposed to address the seven variables identified by Moholy-Nagy: light, space, plane, form, motion, sound and man ("man" is replaced in this article with "human", except where quoting the author), with all their inherent complexities.
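    A minimal sketch of the kind of model proposed (my reading of the abstract, with invented symbols and event names): a stochastic Lindenmayer system rewrites a string of symbols, each of which is then mapped to one of the stage variables; only light, sound and motion are shown here.

        # Stochastic L-system: each symbol rewrites to one of several
        # alternatives chosen with the given probabilities; the resulting
        # string is then mapped onto (hypothetical) performance events.
        import random

        RULES = {
            "L": [(0.5, "L S"), (0.5, "L M L")],   # light
            "S": [(0.7, "S"),   (0.3, "S L")],     # sound
            "M": [(1.0, "M S")],                   # motion
        }

        def rewrite(axiom, steps):
            s = axiom
            for _ in range(steps):
                out = []
                for sym in s.split():
                    r, acc = random.random(), 0.0
                    for p, rhs in RULES.get(sym, [(1.0, sym)]):
                        acc += p
                        if r < acc:
                            out.append(rhs)
                            break
                    else:
                        out.append(sym)  # keep symbol if no rule fired
                s = " ".join(out)
            return s

        EVENTS = {"L": "light cue", "S": "sound cue", "M": "performer movement"}
        score = rewrite("L", steps=4)
        print([EVENTS[sym] for sym in score.split()])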

    Foundations of Quantum Gravity: The Role of Principles Grounded in Empirical Reality

    When attempting to assess the strengths and weaknesses of various principles in their potential role of guiding the formulation of a theory of quantum gravity, it is crucial to distinguish between principles that are strongly supported by empirical data, directly or indirectly, and principles that instead rely (merely) on theoretical arguments for their justification. These remarks are illustrated in terms of the current standard models of cosmology and particle physics, as well as their respective underlying theories, viz. general relativity and quantum (field) theory. It is argued that, if history is to be of any guidance, the best chance of obtaining the key structural features of a putative quantum gravity theory is to deduce them, in some form, from the appropriate empirical principles (analogous to the manner in which, say, the idea that gravitation is a curved-spacetime phenomenon is arguably implied by the equivalence principle). It is subsequently argued that the appropriate empirical principles for quantum gravity should at least include (i) quantum nonlocality, (ii) irreducible indeterminacy, (iii) the thermodynamic arrow of time, and (iv) homogeneity and isotropy of the observable universe on the largest scales. In each case it is explained, where appropriate, how the principle in question could be implemented mathematically in a theory of quantum gravity, why it is considered to be of fundamental significance, and why contemporary accounts of it are insufficient.