
    On the Minimum Achievable Age of Information for General Service-Time Distributions

    There is growing interest in analysing the freshness of data in networked systems. Age of Information (AoI) has emerged as a popular metric for quantifying this freshness at a given destination, and there has been significant research effort toward optimizing this metric in communication and networking systems under different settings. In contrast to previous works, we are interested in a fundamental question: what is the minimum achievable AoI in any single-server-single-source queuing system for a given service-time distribution? To address this question, we study a problem of optimizing AoI under service preemptions. Our main result is a characterization of the minimum achievable average peak AoI (PAoI). We obtain this result by showing that a fixed-threshold policy is optimal within the set of all randomized-threshold causal policies. We use this characterization to provide necessary and sufficient conditions on the service-time distributions under which preemptions are beneficial.
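    As a toy illustration of why preemption can help for some service-time distributions, the following sketch (an assumed generate-at-will model with preempt-and-restart at a fixed threshold tau, not the paper's exact setup or analysis) estimates average peak AoI by Monte Carlo. With heavy-tailed service times, aborting long attempts tends to lower the average PAoI; with light-tailed service times it tends not to.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def avg_paoi(service_samples, tau=np.inf):
        """Monte-Carlo average peak AoI for a single source with
        generate-at-will sampling and preempt-and-restart at threshold tau.
        An attempt whose service time exceeds tau is aborted (costing tau
        units of time) and a fresh packet is generated; a delivered packet's
        system time equals its own completed service time."""
        peaks = []
        prev_age = None   # system time of the previously delivered packet
        cycle = 0.0       # time elapsed since the previous delivery
        for s in service_samples:
            if s > tau:            # preempt: tau time wasted, packet discarded
                cycle += tau
                continue
            cycle += s             # delivery completes
            if prev_age is not None:
                # peak AoI just before this delivery:
                # previous packet's system time + inter-delivery time
                peaks.append(prev_age + cycle)
            prev_age = s
            cycle = 0.0
        return float(np.mean(peaks))

    # Heavy-tailed (log-normal) service times: preemption should help here.
    samples = rng.lognormal(mean=0.0, sigma=1.5, size=200_000)
    print("no preemption:", avg_paoi(samples))           # roughly 2 * E[S]
    print("tau = 2.0    :", avg_paoi(samples, tau=2.0))
    ```

    Running both cases side by side makes the abstract's point concrete: whether a finite threshold beats no preemption depends entirely on the tail of the service-time distribution.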

    Optimizing Age-of-Information in a Multi-class Queueing System

    We consider the age-of-information in a multi-class M/G/1 queueing system, where each class generates packets containing status information. Age of information is a relatively new metric that measures the amount of time that elapses between status updates, thus accounting for both the queueing delay and the time between packet generations. This gives rise to a tradeoff between the frequency of status updates and the queueing delay. In this paper, we study this tradeoff in a system with heterogeneous users modeled as a multi-class M/G/1 queue. To this end, we derive the exact peak age-of-information (PAoI) profile of the system, which measures the "freshness" of the status information. We then seek to optimize the age of information by formulating the problem as a quasiconvex optimization, and obtain structural properties of the optimal solution.
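    For intuition about what a PAoI profile captures, the single-class FCFS M/G/1 special case has a well-known closed form: E[PAoI] = 1/lambda + E[W] + E[S], with the Pollaczek-Khinchine mean wait E[W] = lambda E[S^2] / (2(1 - rho)). The sketch below evaluates that standard single-class expression; it is not the multi-class profile derived in the paper.

    ```python
    def mg1_peak_aoi(lam, es, es2):
        """Average peak AoI of a single-class FCFS M/G/1 queue.

        lam : Poisson arrival rate
        es  : mean service time E[S]
        es2 : second moment of service time E[S^2]
        """
        rho = lam * es                       # server utilization
        assert rho < 1.0, "queue must be stable (rho < 1)"
        ew = lam * es2 / (2.0 * (1.0 - rho)) # Pollaczek-Khinchine mean wait
        return 1.0 / lam + ew + es           # inter-arrival + wait + service

    # M/M/1 example with mu = 1: E[S] = 1, E[S^2] = 2.
    print(mg1_peak_aoi(lam=0.5, es=1.0, es2=2.0))  # 4.0
    ```

    The three terms expose the tradeoff the abstract describes: raising lambda shrinks the 1/lambda term (more frequent updates) but inflates the waiting term through rho.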

    Effects of Rate Adaption on the Throughput of Random Ad Hoc Networks

    The capacity of wireless ad hoc networks was studied in the excellent treatise by Gupta and Kumar [1] under the assumption of a fixed transmission rate. By contrast, in this treatise we investigate the achievable throughput improvement offered by rate adaptation in the context of random ad hoc networks, which were studied in conjunction with a fixed transmission rate in [1]. Our analysis shows that rate adaptation has the potential of improving the achievable throughput compared to fixed-rate transmission, since rate adaptation mitigates the effects of link-quality fluctuations. However, even perfect rate control fails to change the scaling law of the per-node throughput given in [1], regardless of whether shadow fading is present. This result is confirmed in the context of specific adaptive-modulation-aided design examples.

    Cooperative Downlink Multicell Preprocessing Relying on Reduced-Rate Back-Haul Data Exchange

    Different-complexity multicell preprocessing (MCP) schemes employing distributed signal-to-interference-leakage-plus-noise ratio (SILNR) precoding techniques are proposed, which require reduced back-haul data exchange in comparison with the conventional MCP structure. Our results demonstrate that the proposed structures are capable of increasing the throughput achievable in the cell-edge area while offering different geographic rate-profile distributions, as well as meeting different delay requirements.

    Age-Optimal Updates of Multiple Information Flows

    In this paper, we study an age-of-information minimization problem in which multiple flows of update packets are sent over multiple servers to their destinations. Two online scheduling policies are proposed. When the packet generation and arrival times are synchronized across the flows, the proposed policies are shown to be (near-)optimal, in a stochastic ordering sense, for minimizing any time-dependent, symmetric, and non-decreasing penalty function of the ages of the flows over time.

    Unforgeable Noise-Tolerant Quantum Tokens

    The realization of devices which harness the laws of quantum mechanics represents an exciting challenge at the interface of modern technology and fundamental science. A paragon of the power of such quantum primitives is the concept of "quantum money". A dishonest holder of a quantum banknote will invariably fail in any forging attempt; indeed, under assumptions of ideal measurements and decoherence-free memories, such security is guaranteed by the no-cloning theorem. In any practical situation, however, noise, decoherence and operational imperfections abound. Thus, the development of secure "quantum money"-type primitives capable of tolerating realistic infidelities is of both practical and fundamental importance. Here, we propose a novel class of such protocols and demonstrate their tolerance to noise; moreover, we prove their rigorous security by determining tight fidelity thresholds. Our proposed protocols require only the ability to prepare, store and measure single-qubit quantum memories, making their experimental realization accessible with current technologies.
    Comment: 18 pages, 5 figures

    Privacy Against Statistical Inference

    We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
    Comment: Allerton 2012, 8 pages
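    Leakage metrics of the kind introduced above are, for the self-information cost, mutual-information quantities. As a hedged illustration (a toy joint distribution, not the paper's exact definitions or normalization), the sketch below computes I(S;Y) in bits between a private attribute S and a released output Y.

    ```python
    import numpy as np

    def mutual_information_bits(p_joint):
        """I(S;Y) in bits for a joint pmf given as a 2-D array
        (rows index the secret S, columns the released output Y)."""
        p = np.asarray(p_joint, dtype=float)
        ps = p.sum(axis=1, keepdims=True)   # marginal of S, shape (|S|, 1)
        py = p.sum(axis=0, keepdims=True)   # marginal of Y, shape (1, |Y|)
        mask = p > 0                        # skip zero-probability cells
        return float((p[mask] * np.log2(p[mask] / (ps @ py)[mask])).sum())

    # Binary secret released through a noisy (BSC-like) mapping: some leakage.
    noisy = np.array([[0.45, 0.05],   # S = 0
                      [0.05, 0.45]])  # S = 1
    print(mutual_information_bits(noisy))   # positive: Y reveals part of S

    # Independent release: zero leakage.
    print(mutual_information_bits(np.outer([0.5, 0.5], [0.5, 0.5])))  # 0.0
    ```

    A privacy-preserving mapping design in this spirit would then minimize such a leakage quantity subject to a distortion (utility) constraint, which is what gives the problem its rate-distortion flavor.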