
    A generalized characterization of algorithmic probability

    An a priori semimeasure (also known as "algorithmic probability" or "the Solomonoff prior" in the context of inductive inference) is defined as the transformation, by a given universal monotone Turing machine, of the uniform measure on the infinite strings. It is shown in this paper that the class of a priori semimeasures can equivalently be defined as the class of transformations, by all compatible universal monotone Turing machines, of any continuous computable measure in place of the uniform measure. Some consideration is given to possible implications for the prevalent association of algorithmic probability with certain foundational statistical principles.
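
    For context, the standard definition the abstract starts from can be written out as follows (a sketch in our own notation, where λ denotes the uniform measure on infinite binary sequences; not taken verbatim from the paper):

```latex
% A priori semimeasure induced by a universal monotone machine U:
% the probability that U, fed unbiased coin flips, outputs a string
% extending x (notation ours, following the standard definition).
M_U(x) \;=\; \lambda\bigl(\{\omega \in 2^{\omega} : x \preceq U(\omega)\}\bigr)
% The paper's generalization: for any continuous computable measure \mu,
% \{M_U : U \text{ universal}\} \;=\; \{\mu \circ V^{-1} : V \text{ compatible universal}\}.
```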

    On Approaching the Ultimate Limits of Photon-Efficient and Bandwidth-Efficient Optical Communication

    It is well known that ideal free-space optical communication at the quantum limit can have unbounded photon information efficiency (PIE), measured in bits per photon. High PIE comes at the price of low dimensional information efficiency (DIE), measured in bits per spatio-temporal-polarization mode. If only temporal modes are used, then DIE translates directly to bandwidth efficiency. In this paper, the DIE vs. PIE tradeoffs for known modulations and receiver structures are compared to the ultimate quantum limit, and analytic approximations are found in the limit of high PIE. This analysis shows that known structures fall short of the maximum attainable DIE by a factor that increases linearly with PIE for high PIE. The capacity of the Dolinar receiver is derived for binary coherent-state modulations and computed for the case of on-off keying (OOK). The DIE vs. PIE tradeoff for this case is improved only slightly compared to OOK with photon counting. An adaptive rule is derived for an additive local oscillator that maximizes the mutual information between a receiver and a transmitter that selects from a set of coherent states. For binary phase-shift keying (BPSK), this is shown to be equivalent to the operation of the Dolinar receiver. The Dolinar receiver is extended to make adaptive measurements on a coded sequence of coherent-state symbols, using information from previous measurements to adjust the a priori probabilities of the next symbols. The adaptive Dolinar receiver does not improve the DIE vs. PIE tradeoff compared to independent transmission and Dolinar reception of each symbol.
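
    As a point of reference for the quantum limit the paper benchmarks against: the ultimate DIE at mean photon number n̄ per mode is the standard Holevo capacity g(n̄) = (n̄+1)log₂(n̄+1) − n̄ log₂ n̄, and PIE = DIE/n̄. A minimal sketch tracing this frontier (illustrative only; not code from the paper):

```python
import numpy as np

def holevo_die(nbar):
    """Ultimate DIE (bits/mode) of the ideal channel at mean photon
    number nbar per mode: g(n) = (n+1)*log2(n+1) - n*log2(n)."""
    return (nbar + 1) * np.log2(nbar + 1) - nbar * np.log2(nbar)

# Sweeping nbar downward shows PIE growing without bound while DIE
# falls -- the tradeoff the paper quantifies.
for nbar in [1.0, 0.1, 0.01, 0.001]:
    die = holevo_die(nbar)
    pie = die / nbar  # bits per photon
    print(f"nbar={nbar:g}: PIE={pie:.2f} b/photon, DIE={die:.5f} b/mode")
```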

    Rate-Distortion Analysis of Multiview Coding in a DIBR Framework

    Depth image based rendering techniques for multiview applications have recently been introduced for efficient view generation at arbitrary camera positions. Rate control at the encoder must therefore consider both texture and depth data. However, because depth and texture images have different structures and play different roles in the rendered views, distributing the available bit budget between them requires careful analysis. Information loss due to texture coding affects the value of pixels in synthesized views, while errors in depth information lead to shifts in objects or unexpected patterns at their boundaries. In this paper, we address the problem of efficient bit allocation between texture and depth data of multiview video sequences. We adopt a rate-distortion framework based on a simplified model of depth and texture images that preserves their main features. Unlike most recent solutions, our method avoids rendering at encoding time for distortion estimation, so the encoding complexity is not increased. In addition, our model is independent of the underlying inpainting method used at the decoder. Experiments confirm our theoretical results and the efficiency of our rate allocation strategy.
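
    To make the allocation problem concrete, here is a toy sweep over texture/depth rate splits under simple exponential rate-distortion models; the coefficients and the depth-error weight are hypothetical placeholders, not the paper's model:

```python
import numpy as np

# Toy R-D models: D_i(R_i) = a_i * 2**(-2 * R_i). Coefficients are
# hypothetical; depth errors are weighted more heavily to mimic their
# geometric effect on rendered views.
a_tex, a_depth, w_depth = 1.0, 0.4, 2.0

def rendered_distortion(r_tex, r_depth):
    return a_tex * 2.0**(-2 * r_tex) + w_depth * a_depth * 2.0**(-2 * r_depth)

R_total = 4.0  # total budget in bits/pixel (illustrative)
splits = np.linspace(0.0, R_total, 401)
costs = [rendered_distortion(r, R_total - r) for r in splits]
r_tex_best = splits[int(np.argmin(costs))]
print(f"texture: {r_tex_best:.2f} bpp, depth: {R_total - r_tex_best:.2f} bpp")
```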

    Auto-encoders: reconstruction versus compression

    We discuss the similarities and differences between training an auto-encoder to minimize the reconstruction error, and training the same auto-encoder to compress the data via a generative model. Minimizing a codelength for the data using an auto-encoder is equivalent to minimizing the reconstruction error plus some correcting terms, which can be interpreted as either a denoising or a contractive property of the decoding function. These terms are related, but not identical, to those used in denoising or contractive auto-encoders [Vincent et al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully determines an optimal noise level for the denoising criterion.
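
    The contrast between the two objectives can be seen on a tiny tied-weight linear model (a numpy sketch with invented data and weights; the paper's analysis is far more general):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))            # toy data (hypothetical)
W = rng.normal(scale=0.1, size=(8, 4))   # encoder weights; decoder = W.T

def reconstruct(x):
    return (x @ W) @ W.T                 # tied-weight linear auto-encoder

# Plain reconstruction objective.
recon_mse = np.mean((X - reconstruct(X)) ** 2)

# Denoising variant: corrupt the input, reconstruct the clean target.
# sigma is normally a free hyperparameter; the abstract's point is that
# the codelength view determines an optimal noise level instead.
sigma = 0.1
X_noisy = X + sigma * rng.normal(size=X.shape)
denoise_mse = np.mean((X - reconstruct(X_noisy)) ** 2)

print(f"reconstruction MSE: {recon_mse:.4f}, denoising MSE: {denoise_mse:.4f}")
```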

    Online Reinforcement Learning for Dynamic Multimedia Systems

    In our previous work, we proposed a systematic cross-layer framework for dynamic multimedia systems, which allows each layer to make autonomous and foresighted decisions that maximize the system's long-term performance, while meeting the application's real-time delay constraints. The proposed solution solved the cross-layer optimization offline, under the assumption that the multimedia system's probabilistic dynamics were known a priori. In practice, however, these dynamics are unknown a priori and therefore must be learned online. In this paper, we address this problem by allowing the multimedia system layers to learn, through repeated interactions with each other, to autonomously optimize the system's long-term performance at run-time. We propose two reinforcement learning algorithms for optimizing the system under different design constraints: the first algorithm solves the cross-layer optimization in a centralized manner, and the second solves it in a decentralized manner. We analyze both algorithms in terms of their required computation, memory, and inter-layer communication overheads. After noting that the proposed reinforcement learning algorithms learn too slowly, we introduce a complementary accelerated learning algorithm that exploits partial knowledge about the system's dynamics in order to dramatically improve the system's performance. In our experiments, we demonstrate that decentralized learning can perform as well as centralized learning, while enabling the layers to act autonomously. Additionally, we show that existing application-independent reinforcement learning algorithms, and existing myopic learning algorithms deployed in multimedia systems, perform significantly worse than our proposed application-aware and foresighted learning methods.
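
    For readers unfamiliar with the underlying machinery, the basic online update such methods build on is the tabular Q-learning rule; a generic sketch with a stubbed environment follows (the paper's centralized and decentralized cross-layer algorithms, and its accelerated variant, are more structured than this):

```python
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # step size, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    """Hypothetical environment stub: random next state, toy reward."""
    return int(rng.integers(n_states)), -abs(a - s % n_actions)

s = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # One-step temporal-difference update toward long-term performance.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```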