
    A Hypercontractive Inequality for Matrix-Valued Functions with Applications to Quantum Computing and LDCs

    The Bonami-Beckner hypercontractive inequality is a powerful tool in Fourier analysis of real-valued functions on the Boolean cube. In this paper we present a version of this inequality for matrix-valued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also present a number of applications. First, we analyze maps that encode $n$ classical bits into $m$ qubits, in such a way that each set of $k$ bits can be recovered with some probability by an appropriate measurement on the quantum encoding; we show that if $m < 0.7n$, then the success probability is exponentially small in $k$. This result may be viewed as a direct product version of Nayak's quantum random access code bound. It in turn implies strong direct product theorems for the one-way quantum communication complexity of Disjointness and other problems. Second, we prove that error-correcting codes that are locally decodable with 2 queries require length exponential in the length of the encoded string. This gives what is arguably the first "non-quantum" proof of a result originally derived by Kerenidis and de Wolf using quantum information theory, and answers a question by Trevisan.

    Comment: This is the full version of a paper that will appear in the proceedings of the IEEE FOCS 08 conference.
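
    For orientation, the scalar Bonami-Beckner inequality that the paper generalizes can be stated as follows; this is the standard textbook formulation rather than the paper's own statement (roughly speaking, the matrix-valued version replaces real-valued $f$ by matrix-valued $f$ and absolute values by Schatten norms):

```latex
% Scalar Bonami-Beckner hypercontractivity (textbook form, not the paper's notation).
% For f : \{-1,1\}^n \to \mathbb{R} with Fourier expansion f = \sum_S \hat{f}(S)\,\chi_S,
% define the noise operator (T_\rho f)(x) = \sum_S \rho^{|S|} \hat{f}(S)\,\chi_S(x).
% Then for all 1 \le p \le q and 0 \le \rho \le \sqrt{(p-1)/(q-1)}:
\[
  \| T_\rho f \|_q \;\le\; \| f \|_p .
\]
```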

    Bayesian Design of Tandem Networks for Distributed Detection With Multi-bit Sensor Decisions

    We consider the problem of decentralized hypothesis testing under communication constraints in a topology where several peripheral nodes are arranged in tandem. Each node receives an observation and transmits a message to its successor, and the last node then decides which hypothesis is true. We assume that the observations at different nodes are, conditioned on the true hypothesis, independent, and that the channel between any two successive nodes is error-free but rate-constrained. We propose a cyclic numerical algorithm for the design of the nodes, using a person-by-person methodology with the minimum expected error probability as the design criterion, where the number of communicated messages is not necessarily equal to the number of hypotheses. The number of peripheral nodes in the proposed method is in principle arbitrary, and the information rate constraints are satisfied by quantizing the input of each node. The performance of the proposed method under different information rate constraints, in a binary hypothesis test, is compared to the optimum rate-one solution due to Swaszek and to a method proposed by Cover, and it is shown numerically that increasing the channel rate can significantly enhance the performance of the tandem network. Simulation results for $M$-ary hypothesis tests likewise show that increasing the channel rates significantly improves the performance of the tandem network.
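
    To make the rate effect concrete, below is a small Monte-Carlo sketch in Python of a two-node tandem test of N(1,1) versus N(0,1); the uniform LLR quantizer, its [-3, 3] range, and all other constants are illustrative assumptions, not the authors' person-by-person design:

```python
import numpy as np

rng = np.random.default_rng(0)
MU = 1.0            # mean under H1; observations are N(0,1) under H0 (illustrative)
N_TRIALS = 200_000

def llr(x):
    # Log-likelihood ratio of N(MU,1) vs N(0,1): log p1(x)/p0(x) = MU*x - MU^2/2
    return MU * x - MU**2 / 2

def tandem_error(bits):
    """Monte-Carlo error probability of a two-node tandem with a `bits`-bit link."""
    # Uniform quantizer of the LLR over [-3, 3]; an ad-hoc choice, not optimized.
    bounds = np.linspace(-3.0, 3.0, 2**bits + 1)
    edges, centers = bounds[1:-1], (bounds[:-1] + bounds[1:]) / 2
    errors = 0
    for h in (0, 1):                               # equal priors
        x1 = rng.normal(MU * h, 1.0, N_TRIALS)     # node 1 observation
        x2 = rng.normal(MU * h, 1.0, N_TRIALS)     # node 2 observation
        # Node 1 transmits its quantization-cell index; node 2 reconstructs the
        # cell midpoint, adds its own LLR, and decides H1 if the sum is positive.
        msg = np.digitize(llr(x1), edges)
        decide_h1 = centers[msg] + llr(x2) > 0
        errors += np.count_nonzero(decide_h1 != bool(h))
    return errors / (2 * N_TRIALS)

for b in (1, 2, 3, 4):
    print(f"rate {b} bit(s): P(error) ~= {tandem_error(b):.4f}")
```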

    New Negentropy Optimization Schemes for Blind Signal Extraction of Complex Valued Sources

    Blind signal extraction, an active topic in communication signal processing, aims to retrieve the sources through the optimization of contrast functions. Many contrasts based on higher-order statistics, such as kurtosis, are sensitive to outliers. Thus, to achieve robust results, nonlinear functions are used as contrasts to approximate the negentropy criterion, a classical measure of non-Gaussianity. However, existing methods generally have a high computational cost, which leads us to address the efficient optimization of the contrast function. More precisely, we design a novel “reference-based” contrast function based on negentropy approximations, and then propose a new family of algorithms (Alg.1 and Alg.2) to maximize it. Simulations confirm the convergence of our method to a separating solution, which we also analyze theoretically. We further validate the theoretical complexity analysis showing that Alg.2 has a much lower computational cost than Alg.1 and than existing optimization methods based on the negentropy criterion. Finally, experiments on the separation of single-sideband signals illustrate that our method has good prospects in real-world applications.
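
    For background, the classic one-unit fixed-point update for a negentropy-approximation contrast (FastICA with G(u) = log cosh u) is sketched below in Python; this is the generic textbook scheme on synthetic real-valued mixtures, not the paper's reference-based Alg.1/Alg.2:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic mixtures: 3 non-Gaussian sources, random mixing (illustrative data).
n = 10_000
S = np.vstack([np.sign(rng.standard_normal(n)) * rng.standard_normal(n)**2,
               rng.uniform(-1, 1, n),
               np.sin(np.linspace(0, 400, n))])
X = rng.standard_normal((3, 3)) @ S

# Whiten: zero mean, identity covariance (required by the fixed-point update).
X = X - X.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(np.cov(X))
Z = eigvec @ np.diag(eigval**-0.5) @ eigvec.T @ X

# One-unit fixed-point iteration maximizing E[G(w^T z)] with G = log cosh,
# i.e. g = tanh and g' = 1 - tanh^2:  w <- E[z g(w^T z)] - E[g'(w^T z)] w.
w = rng.standard_normal(3)
w /= np.linalg.norm(w)
for _ in range(200):
    wz = w @ Z
    w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz)**2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10   # fixed point up to sign
    w = w_new
    if converged:
        break

y = w @ Z  # one extracted component, recovered up to scale and sign
print("excess kurtosis of extracted signal:", (y**4).mean() / (y**2).mean()**2 - 3)
```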

    VirtFogSim: A parallel toolbox for dynamic energy-delay performance testing and optimization of 5G Mobile-Fog-Cloud virtualized platforms

    It is expected that the pervasive deployment of multi-tier 5G-supported Mobile-Fog-Cloud computing platforms will constitute an effective means to support the real-time execution of future Internet applications by resource- and energy-limited mobile devices. Increasing interest in this emerging networking-computing technology demands the optimization and performance evaluation of several parts of the underlying infrastructures. However, field trials are challenging due to their operational costs, and in any case, the obtained results could be difficult to repeat and customize. These emerging Mobile-Fog-Cloud ecosystems indeed still lack customizable software tools for the performance simulation of their computing-networking building blocks. Motivated by these considerations, in this contribution, we present VirtFogSim, a MATLAB-supported software toolbox that allows the dynamic joint optimization and tracking of the energy and delay performance of Mobile-Fog-Cloud systems for the execution of applications described by general Directed Application Graphs (DAGs). In a nutshell, the main distinctive features of the proposed VirtFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the placement of the application tasks and the allocation of the needed computing-networking resources under hard constraints on acceptable overall execution times; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall system; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operational environments, such as those typically featuring mobile applications; (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering; and (v) its MATLAB code is optimized for running atop multi-core parallel execution platforms. To check both the actual optimization and scalability capabilities of the VirtFogSim toolbox, a number of experimental setups featuring different use cases and operational environments are simulated, and their performances are compared.
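
    As a rough sketch of the kind of energy-delay task-placement problem such a toolbox optimizes (not VirtFogSim's actual model or algorithm; every cost figure below is invented), a brute-force search over mobile/fog/cloud placements of a three-task chain DAG under a hard deadline might look like:

```python
import itertools

# Toy energy-delay placement for a 3-task chain DAG (A -> B -> C).
# All numbers are invented for illustration; this is not VirtFogSim's model.
TIERS = ("mobile", "fog", "cloud")
EXEC_TIME   = {"mobile": 4.0, "fog": 2.0, "cloud": 1.0}   # seconds per task
EXEC_ENERGY = {"mobile": 3.0, "fog": 1.0, "cloud": 0.5}   # joules per task
LINK_DELAY  = {("mobile", "fog"): 0.5, ("fog", "cloud"): 0.8,
               ("mobile", "cloud"): 1.5}                   # seconds per crossed link

TASKS = ("A", "B", "C")          # chain: A feeds B, B feeds C
DEADLINE = 8.0                   # hard constraint on total execution time (s)

def hop_delay(u, v):
    if u == v:
        return 0.0
    return LINK_DELAY[(u, v)] if (u, v) in LINK_DELAY else LINK_DELAY[(v, u)]

best = None
for placement in itertools.product(TIERS, repeat=len(TASKS)):
    # Total time: upload from the mobile device, execution, inter-task
    # transfers along the chain, and return of the result to the device.
    time = hop_delay("mobile", placement[0]) + hop_delay(placement[-1], "mobile")
    time += sum(EXEC_TIME[t] for t in placement)
    time += sum(hop_delay(a, b) for a, b in zip(placement, placement[1:]))
    energy = sum(EXEC_ENERGY[t] for t in placement)
    if time <= DEADLINE and (best is None or energy < best[0]):
        best = (energy, time, placement)

if best is None:
    raise SystemExit("no placement meets the deadline")
energy, time, placement = best
print(f"min-energy feasible placement: {dict(zip(TASKS, placement))}, "
      f"energy={energy} J, time={time} s")
```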

    Unbounded-error One-way Classical and Quantum Communication Complexity

    This paper studies the gap between the quantum one-way communication complexity $Q(f)$ and its classical counterpart $C(f)$ under the {\em unbounded-error} setting, i.e., where it suffices that the success probability is strictly greater than 1/2. It is proved that for {\em any} (total or partial) Boolean function $f$, $Q(f) = \lceil C(f)/2 \rceil$, i.e., the former is always exactly half as large as the latter. The result has an application to obtaining an (again exact) bound for the existence of an $(m,n,p)$-QRAC, which is the $n$-qubit random access coding that can recover any one of $m$ original bits with success probability $\geq p$. We can prove that an $(m,n,>1/2)$-QRAC exists if and only if $m \leq 2^{2n} - 1$. Previously, only the construction of QRAC using one qubit, the existence of an $(O(n),n,>1/2)$-RAC, and the non-existence of a $(2^{2n},n,>1/2)$-QRAC were known.

    Comment: 9 pages. To appear in Proc. ICALP 200
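
    As a quick sanity check, evaluating the existence condition stated in the abstract for small $n$:

```latex
% The abstract asserts: an (m,n,>1/2)-QRAC exists iff m <= 2^{2n} - 1.
\[
  n = 1:\; m \le 2^{2} - 1 = 3, \qquad
  n = 2:\; m \le 2^{4} - 1 = 15, \qquad
  n = 3:\; m \le 2^{6} - 1 = 63.
\]
% So, e.g., a (3,1,>1/2)-QRAC exists, while a (4,1,>1/2)-QRAC does not.
```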

    Approximate F_2-Sketching of Valuation Functions

    We study the problem of constructing a linear sketch of minimum dimension that allows approximation of a given real-valued function $f : \mathbb{F}_2^n \to \mathbb{R}$ with small expected squared error. We develop a general theory of linear sketching for such functions, through which we analyze the sketch dimension for the most commonly studied types of valuation functions: additive, budget-additive, coverage, $\alpha$-Lipschitz submodular, and matroid rank functions. This gives a characterization of how many bits of information have to be stored about the input $x$ so that one can compute $f$ under additive updates to its coordinates. Our results are tight in most cases, and we also give extensions to the distributional version of the problem, where the input $x \in \mathbb{F}_2^n$ is generated uniformly at random. Using known connections with dynamic streaming algorithms, both the upper and lower bounds on dimension obtained in our work extend to the space complexity of algorithms evaluating $f(x)$ under long sequences of additive updates to the input $x$ presented as a stream. Similar results hold for simultaneous communication in a distributed setting.
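
    To make "linear sketching over $\mathbb{F}_2$ under additive updates" concrete, here is a minimal Python sketch (our illustration, not the paper's construction): fix a random matrix $A$ over $\mathbb{F}_2$, store $s = Ax$ instead of $x$, and fold each coordinate flip into $s$ by XORing in the corresponding column of $A$.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 32, 8                     # input length and sketch dimension (illustrative)

A = rng.integers(0, 2, size=(k, n), dtype=np.uint8)  # random linear map over F_2
x = np.zeros(n, dtype=np.uint8)                      # input, initially all-zero
s = (A @ x) % 2                                      # stored sketch s = A x (mod 2)

def flip(i):
    """Additive update x_i <- x_i + 1 (mod 2), mirrored on the sketch in O(k)."""
    global s
    x[i] ^= 1
    s = (s + A[:, i]) % 2        # linearity: A(x + e_i) = A x + A e_i over F_2

# Stream of coordinate flips; the sketch tracks A x without rereading x.
for i in [3, 17, 3, 25, 9]:
    flip(i)

assert np.array_equal(s, (A @ x) % 2)   # sketch agrees with direct computation
print("sketch after updates:", s)
```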