
    High-Rate Vector Quantization for the Neyman-Pearson Detection of Correlated Processes

    This paper investigates the effect of quantization on the performance of the Neyman-Pearson test. It is assumed that a sensing unit observes samples of a correlated stationary ergodic multivariate process. Each sample is passed through an N-point quantizer and transmitted to a decision device, which performs a binary hypothesis test. For any false alarm level, it is shown that the miss probability of the Neyman-Pearson test converges to zero exponentially as the number of samples tends to infinity, provided that the observed process satisfies certain mixing conditions. The main contribution of this paper is a compact closed-form expression for the error exponent in the high-rate regime, i.e., when the number N of quantization levels tends to infinity, generalizing previous results of Gupta and Hero to the case of non-independent observations. If d denotes the dimension of one sample, it is proved that the error exponent converges at rate N^{2/d} to the one obtained in the absence of quantization. As an application, relevant high-rate quantization strategies that lead to a large error exponent are determined. Numerical results indicate that the proposed quantization rule can yield better detection performance than existing ones. Comment: 47 pages, 7 figures, 1 table. To appear in the IEEE Transactions on Information Theory.
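    For orientation, the sketch below runs a quantized Neyman-Pearson likelihood-ratio test on scalar (d = 1) i.i.d. Gaussian samples, H0: N(0,1) vs. H1: N(1,1). The paper's setting (correlated multivariate processes and optimized quantizers) is richer; every name and parameter value here is an illustrative assumption, not the paper's construction.

```python
# Hedged sketch: Neyman-Pearson test on samples passed through an N-point
# quantizer. The per-cell log-likelihood ratio is exact for the quantized
# observation; the threshold is set empirically to meet the false alarm level.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 8                                                  # quantization levels
edges = np.concatenate(([-np.inf], np.linspace(-3, 3, N - 1), [np.inf]))
p0 = np.diff(norm.cdf(edges, loc=0.0))                 # cell probabilities under H0
p1 = np.diff(norm.cdf(edges, loc=1.0))                 # cell probabilities under H1
llr = np.log(p1 / p0)                                  # LLR of one quantized sample

def statistic(x):
    """Sum of per-sample log-likelihood ratios after N-point quantization."""
    return llr[np.digitize(x, edges[1:-1])].sum()

n, trials, alpha = 20, 5000, 0.05
s0 = np.array([statistic(rng.normal(0.0, 1.0, n)) for _ in range(trials)])
tau = np.quantile(s0, 1.0 - alpha)                     # threshold for false alarm level alpha
s1 = np.array([statistic(rng.normal(1.0, 1.0, n)) for _ in range(trials)])
print("empirical miss probability:", (s1 <= tau).mean())
```

    Increasing n drives the empirical miss probability to zero, consistent with the exponential decay the paper establishes; increasing N moves the achievable error exponent toward the unquantized one.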

    Revisiting light stringy states in view of the 750 GeV diphoton excess

    We investigate light massive string states that appear at brane intersections. They replicate the massless spectrum in a richer fashion and may be parametrically lighter than standard Regge excitations. We identify the first few physical states and determine their BRST-invariant vertex operators. In the supersymmetric case we reconstruct the supermultiplet structure. We then compute some simple interactions, such as the decay rate of a massive scalar or vector into two massless fermions. Finally, we suggest an alternative interpretation of the 750 GeV diphoton excess at the LHC in terms of a light massive string state, a replica of the Standard Model Higgs. Comment: 29 pages, 5 eps figures.
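    For scale only, the textbook field-theory expression for the kind of width mentioned above: a scalar of mass M with a Yukawa coupling to massless fermions. This is the standard formula, not the paper's stringy amplitude, whose couplings and normalizations differ.

```latex
% Tree-level two-body width for a scalar \phi of mass M coupled as
% g\,\phi\,\bar\psi\psi to massless fermions: two-body phase space gives
% 1/(8\pi) and \sum_{\rm spins}|\mathcal{M}|^2 = 2 g^2 M^2 in the massless limit.
\Gamma(\phi \to \psi\bar\psi)
  = \frac{1}{2M}\cdot\frac{1}{8\pi}\sum_{\rm spins}|\mathcal{M}|^{2}
  = \frac{g^{2} M}{8\pi}
```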

    Distributed on-line multidimensional scaling for self-localization in wireless sensor networks

    The present work considers the localization problem in wireless sensor networks formed by fixed nodes. Each node seeks to estimate its own position based on noisy measurements of the relative distance to other nodes. In a centralized batch mode, positions can be retrieved (up to a rigid transformation) by applying Principal Component Analysis (PCA) on a so-called similarity matrix built from the relative distances. In this paper, we propose a distributed on-line algorithm allowing each node to estimate its own position based on limited exchange of information in the network. Our framework encompasses the case of sporadic measurements and random link failures. We prove the consistency of our algorithm in the case of fixed sensors. Finally, we provide numerical and experimental results from both simulated and real data. The experiments on real data are conducted on a wireless sensor network testbed. Comment: 32 pages, 5 figures, 1 table.
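    A minimal sketch of the centralized batch baseline mentioned above: classical MDS/PCA on the double-centered similarity matrix built from squared pairwise distances recovers positions up to a rigid transformation. The paper's contribution is the distributed on-line version; the function and variable names here are assumptions made for illustration.

```python
# Hedged sketch: classical MDS on a noisy distance matrix. The recovered
# coordinates match the true ones only up to rotation/translation/reflection.
import numpy as np

def classical_mds(D, dim=2):
    """Recover `dim`-D coordinates (up to a rigid transformation) from distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    S = -0.5 * J @ (D ** 2) @ J               # double-centered similarity matrix
    w, V = np.linalg.eigh(S)                  # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:dim]           # keep the `dim` largest eigenpairs
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(20, 2))      # true sensor positions
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
D += rng.normal(0.0, 0.05, D.shape)           # noisy relative-distance measurements
D = (D + D.T) / 2.0                           # symmetrize the measurements
np.fill_diagonal(D, 0.0)
X_hat = classical_mds(D)                      # estimates, up to a rigid transformation
```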

    A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization

    Based on the idea of randomized coordinate descent of α-averaged operators, a randomized primal-dual optimization algorithm is introduced, in which a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by Vũ and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve a distributed optimization problem asynchronously. The agents of a network, each having a separate cost function containing a differentiable term, seek to reach consensus on a minimizer of the aggregate objective. The method yields an algorithm where, at each iteration, a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates. Comment: 10 pages.
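    The sketch below illustrates only the asynchronous activation pattern described above, using a much simpler scheme than the paper's primal-dual method: each agent holds a local estimate and a private quadratic cost, and at every iteration a random subset of agents wakes up, averages with its neighbors, and takes a local gradient step. All names, the ring topology, and the constants are assumptions.

```python
# Hedged sketch: randomized wake-up pattern on a ring of agents, each with
# private cost f_i(x) = (x - a_i)^2 / 2. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(2)
n_agents, iters, step = 10, 3000, 0.05
a = rng.normal(0.0, 3.0, n_agents)                     # private targets
x = np.zeros(n_agents)                                 # local estimates
nbrs = {i: ((i - 1) % n_agents, (i + 1) % n_agents)    # ring network
        for i in range(n_agents)}

for _ in range(iters):
    awake = np.where(rng.random(n_agents) < 0.3)[0]    # random subset wakes up
    x_next = x.copy()
    for i in awake:
        avg = (x[i] + x[nbrs[i][0]] + x[nbrs[i][1]]) / 3.0
        x_next[i] = avg - step * (avg - a[i])          # consensus + local gradient step
    x = x_next                                         # agents not awake stay idle

# With a small constant step, the estimates settle near the aggregate
# minimizer mean(a), up to a bias of order `step`.
print("spread:", np.ptp(x), "estimate:", x.mean(), "target:", a.mean())
```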