
    Enhancing quantum entropy in vacuum-based quantum random number generator

    Information-theoretically provable true random numbers, which cannot be correlated with or controlled by an attacker, can be generated from quantum measurements of the vacuum state followed by universal-hashing randomness extraction. The quantum entropy in the measurements determines the quality and security of the random number generator, and it directly sets the ratio of true randomness extractable from the raw data, i.e., the generation rate of quantum random numbers. In this work, accounting for the effects of classical noise, we explore how best to enhance quantum entropy in a vacuum-based quantum random number generator in the scenario of an optimal dynamical analog-to-digital converter (ADC) range. We derive the influence on the quantum entropy of classical noise excursions, which may be intrinsic to the system or deliberately induced by an eavesdropper. We propose raising the local oscillator intensity, rather than the electrical gain, to amplify the quadrature fluctuations of the vacuum state without amplifying classical noise. Abundant quantum entropy then remains extractable from the raw data even when the classical noise excursion is large. Experimentally, a true-randomness extraction ratio of 85.3% is achieved by moderate enhancement of the local oscillator power despite pronounced classical noise excursions in the raw data.
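    A minimal sketch of the worst-case entropy estimate this abstract alludes to: digitize a Gaussian vacuum-noise quadrature with an ADC and take the conditional min-entropy against an attacker who can shift the signal by a bounded classical excursion. The function name, parameter values, and this specific worst-case model are illustrative assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.stats import norm

def conditional_min_entropy(sigma_q, excursion, adc_range, bits=8):
    # Hypothetical model: Gaussian quantum noise (std sigma_q) plus a
    # classical offset the attacker may set anywhere in [-excursion, excursion].
    edges = np.linspace(-adc_range, adc_range, 2**bits + 1)
    worst = 0.0
    for shift in np.linspace(-excursion, excursion, 201):
        p = np.diff(norm.cdf(edges, loc=shift, scale=sigma_q))
        p[0] += norm.cdf(edges[0], loc=shift, scale=sigma_q)   # lower ADC rail
        p[-1] += norm.sf(edges[-1], loc=shift, scale=sigma_q)  # upper ADC rail
        worst = max(worst, p.max())  # most likely bin, worst-case offset
    return -np.log2(worst)

# Raising LO power grows sigma_q while the classical excursion stays fixed,
# so the extractable fraction H_min / bits improves:
for sigma_q in (0.5, 1.0, 2.0):
    h = conditional_min_entropy(sigma_q, excursion=0.3, adc_range=4.0)
    print(f"sigma_q = {sigma_q}: H_min = {h:.2f} of 8 bits")
```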

    Balanced Allocations and Double Hashing

    Double hashing has recently found more common usage in schemes that use multiple hash functions. In double hashing, for an item $x$, one generates two hash values $f(x)$ and $g(x)$, and then uses the combinations $(f(x) + k g(x)) \bmod n$ for $k = 0, 1, 2, \ldots$ to generate multiple hash values from the initial two. We first perform an empirical study showing that, surprisingly, the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of $d$ choices, as well as in several related variants. We then provide theoretical results that explain the behavior of double hashing in this context.
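    A small simulation in the spirit of the paper's empirical study, assuming a prime table size so the $d$ double-hashing choices are distinct; Python's random module stands in for the hash functions, and all names are illustrative.

```python
import random

def max_load(n_items, n_bins, d, double_hashing, seed=0):
    """Place each item in the least loaded of d bins; return the max load."""
    rng = random.Random(seed)
    load = [0] * n_bins
    for _ in range(n_items):
        if double_hashing:
            f = rng.randrange(n_bins)      # stand-in for f(x)
            g = rng.randrange(1, n_bins)   # stand-in for g(x), nonzero
            choices = [(f + k * g) % n_bins for k in range(d)]
        else:
            choices = [rng.randrange(n_bins) for _ in range(d)]
        target = min(choices, key=lambda b: load[b])
        load[target] += 1
    return max(load)

n = 99991  # prime, so the d double-hashing choices never collide
for scheme in (True, False):
    label = "double hashing" if scheme else "fully random "
    print(label, max_load(n, n, d=3, double_hashing=scheme))
```

    With $n$ items in $n$ bins, both schemes should typically report the same maximum load, consistent with the negligible difference the paper observes.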

    How the Experts Algorithm Can Help Solve LPs Online

    We consider the problem of solving packing/covering LPs online, when the columns of the constraint matrix are presented in random order. This problem has received much attention, and the main focus is to determine how large the right-hand sides of the LPs have to be (compared to the entries on the left-hand side of the constraints) to allow $(1+\epsilon)$-approximations online. It is known that the right-hand sides have to be $\Omega(\epsilon^{-2} \log m)$ times the left-hand sides, where $m$ is the number of constraints. In this paper we give a primal-dual algorithm that achieves this bound for mixed packing/covering LPs. Our algorithm constructs dual solutions using a regret-minimizing online learning algorithm in a black-box fashion, and uses them to construct primal solutions. The adversarial guarantee that holds for the constructed duals helps us take care of most of the correlations that arise in the algorithm; the remaining correlations are handled via martingale concentration and maximal inequalities. These ideas lead to conceptually simple and modular algorithms, which we hope will be useful in other contexts.
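    A toy sketch of the black-box pattern the abstract describes: Hedge (a standard regret-minimizing experts algorithm) keeps one weight per packing constraint, the normalized weights act as dual prices, and an arriving column is accepted when its reward beats its priced cost. The acceptance rule, the hand-tuned price level, and the instance are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def hedge_online_packing(columns, rewards, B, eta=0.1, theta=1.6):
    """Accept columns online subject to usage <= B per constraint.
    theta is a hand-tuned price level for this toy instance."""
    m = columns[0].shape[0]
    w = np.ones(m)                    # Hedge weights, one expert per row
    used, value = np.zeros(m), 0.0
    for a, r in zip(columns, rewards):
        p = w / w.sum()               # dual prices (distribution over rows)
        if r >= theta * (p @ a) and np.all(used + a <= B):
            used += a
            value += r
            w *= np.exp(eta * a)      # raise prices of loaded constraints
    return value, used

rng = np.random.default_rng(1)
cols = list(rng.uniform(0, 1, size=(500, 4)))   # columns in random order
rews = list(rng.uniform(0, 1, size=500))
val, used = hedge_online_packing(cols, rews, B=50.0)
print(f"value {val:.1f}, constraint usage {np.round(used, 1)} (budget 50)")
```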

    Gossip vs. Markov Chains, and Randomness-Efficient Rumor Spreading

    We study gossip algorithms for the rumor spreading problem, which asks one node to deliver a rumor to all nodes in an unknown network. We present the first protocol for any expander graph $G$ with $n$ nodes such that the protocol informs every node in $O(\log n)$ rounds with high probability and uses $\tilde{O}(\log n)$ random bits in total. The runtime of our protocol is tight, and the randomness requirement of $\tilde{O}(\log n)$ random bits almost matches the lower bound of $\Omega(\log n)$ random bits for dense graphs. We further show that, for many graph families, a polylogarithmic number of random bits in total suffices to spread the rumor in $O(\mathrm{poly}\log n)$ rounds. Together, these results give an almost complete understanding of the randomness requirement of this fundamental gossip process. Our analysis relies on unexpectedly tight connections among gossip processes, Markov chains, and branching programs. First, we establish a connection between rumor spreading processes and Markov chains, which is used to approximate the rumor spreading time by the mixing time of Markov chains. Second, we show a reduction from rumor spreading processes to branching programs, and this reduction provides a general framework to derandomize gossip processes. In addition to designing rumor spreading protocols, these novel techniques may have applications in studying parallel and multiple random walks, and the randomness complexity of distributed algorithms.
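    For reference, a sketch of the fully random PUSH-PULL process that the paper derandomizes: in each round every node contacts a uniformly random neighbour; informed nodes push the rumor and uninformed nodes pull it. This baseline spends fresh random bits on every contact; the paper's contribution is achieving comparable round complexity with only $\tilde{O}(\log n)$ bits in total. The graph construction and names below are illustrative.

```python
import random

def push_pull_rounds(adj, source, seed=0):
    """Rounds until PUSH-PULL informs every node of the graph `adj`."""
    rng = random.Random(seed)
    informed, n, rounds = {source}, len(adj), 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for u, nbrs in adj.items():
            v = rng.choice(nbrs)          # fresh random bits per contact
            if u in informed or v in informed:
                newly.update((u, v))      # push or pull along the edge
        informed |= newly
    return rounds

# Stand-in for an expander: each node links to 3 random others.
rng, n = random.Random(1), 1024
adj = {u: set() for u in range(n)}
for u in range(n):
    for v in rng.sample([w for w in range(n) if w != u], 3):
        adj[u].add(v); adj[v].add(u)
adj = {u: sorted(vs) for u, vs in adj.items()}
print(push_pull_rounds(adj, source=0), "rounds for n =", n)
```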

    Decentralized Adaptive Helper Selection in Multi-channel P2P Streaming Systems

    In Peer-to-Peer (P2P) multichannel live streaming, helper peers with surplus bandwidth act as micro-servers that compensate for server deficiencies in balancing resources across channel overlays. With the deployment of a helper level between the server and the peers, optimizing the user/helper topology becomes challenging, since the well-known reciprocity-based choking algorithms cannot be applied: video streaming flows only one way, from helpers to users. Because peers behave selfishly and no central authority governs them, helper selection requires coordination. In this paper, we design a distributed online helper selection mechanism that adapts to the supply and demand patterns of the various video channels. Our answer to strategic peers' exploitation of the helpers' shared resources is to guarantee convergence to correlated equilibria (CE) among the helper selection strategies. Online convergence to the set of CE is achieved through a regret-tracking algorithm, which tracks the equilibrium in the presence of stochastic dynamics of the helpers' bandwidth. The resulting CE can then guide the choice of cooperation policies. Simulation results demonstrate that our algorithm achieves good convergence, balanced load distribution on helpers, and sustainable streaming rates for peers.
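    A minimal sketch of the equilibrium machinery this abstract invokes: Hart and Mas-Colell's conditional regret matching, whose empirical joint play converges to the set of correlated equilibria when every player runs it. The paper's regret-tracking variant additionally tracks the equilibrium under time-varying helper bandwidth; the payoff matrix, inertia constant, and single-peer view below are illustrative assumptions.

```python
import numpy as np

def regret_matching(payoff, T=5000, seed=0):
    """One peer picking among k helpers via conditional regret matching.
    payoff[j, t]: utility of helper j in time slot t (e.g. bandwidth)."""
    rng = np.random.default_rng(seed)
    k, horizon = payoff.shape
    mu = 2.0 * k                 # inertia constant, larger than any regret
    R = np.zeros((k, k))         # R[i, j]: summed regret of i versus j
    counts = np.zeros(k)
    a = int(rng.integers(k))
    for t in range(T):
        u = payoff[:, t % horizon]
        R[a] += u - u[a]         # would switching away from a have paid?
        p = np.maximum(R[a], 0.0) / (mu * (t + 1))
        p[a] = 0.0
        p[a] = 1.0 - p.sum()     # inertia: stay put with leftover probability
        a = int(rng.choice(k, p=p))
        counts[a] += 1
    return counts / T

# Toy instance: 3 helpers with fluctuating spare bandwidth over 100 slots.
payoffs = np.random.default_rng(1).uniform(0, 1, size=(3, 100))
print("empirical selection frequencies:", regret_matching(payoffs))
```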

    Harnessing Flexible and Reliable Demand Response Under Customer Uncertainties

    Demand response (DR) is a cost-effective and environmentally friendly approach to mitigating the uncertainties of renewable energy integration by exploiting the flexibility of customers' demands. However, existing DR programs suffer either from low participation, due to strict commitment requirements, or from unreliability, in voluntary programs. In addition, capacity planning for energy storage/reserves is traditionally done separately from DR program design, which incurs inefficiencies. Moreover, customers often face high uncertainty in their own costs of providing demand response, an issue not well studied in the literature. This paper first models joint capacity planning and demand response program design as a stochastic optimization problem that incorporates the uncertainties from renewable energy generation, customer power demands, and the customers' costs of providing DR. We propose online DR control policies based on the optimal structure of the offline solution. A distributed algorithm is then developed to implement the control policies without efficiency loss. We further enhance the policy design by allowing flexibility in the commitment level. We perform numerical simulations based on real-world traces. The results demonstrate that the proposed algorithms achieve near-optimal social costs and significant social cost savings compared to baseline methods.
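    To make the joint design concrete, here is a toy two-stage model in the spirit of the abstract: reserve capacity is bought up front, and an online threshold rule decides in each slot whether to call customer DR, whose cost is uncertain, or absorb the penalty for unmet demand. All quantities, prices, and the threshold structure are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def online_dr_policy(shortfalls, dr_costs, reserve, p_reserve, p_unmet):
    """Total cost of a threshold DR policy given a fixed reserve purchase."""
    total = reserve * p_reserve                # capacity cost paid up front
    for s, c in zip(shortfalls, dr_costs):
        residual = max(s - reserve, 0.0)       # reserve absorbs the shortfall first
        if c < p_unmet:
            total += residual * c              # call DR: cheaper than the penalty
        else:
            total += residual * p_unmet        # let the residual demand go unmet
    return total

rng = np.random.default_rng(0)
short = rng.uniform(0, 10, 1000)   # per-slot renewable shortfall (MW)
costs = rng.uniform(2, 15, 1000)   # per-slot customer DR cost ($/MW)
for R in (0.0, 3.0, 6.0):
    c = online_dr_policy(short, costs, R, p_reserve=50.0, p_unmet=12.0)
    print(f"reserve {R} MW -> total cost {c:.0f}")
```

    Sweeping the reserve level this way illustrates why planning capacity jointly with the DR rule matters: the best reserve depends on the distribution of DR costs, which the separate designs the abstract criticizes would ignore.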