
    Secure Multiterminal Source Coding with Side Information at the Eavesdropper

    The problem of secure multiterminal source coding with side information at the eavesdropper is investigated. This scenario consists of a main encoder (referred to as Alice) that wishes to compress a single source while simultaneously satisfying the desired requirements on the distortion level at a legitimate receiver (referred to as Bob) and on the equivocation rate (average uncertainty) at an eavesdropper (referred to as Eve). The presence of a (public) rate-limited link between Alice and Bob is further assumed. In this setting, Eve perfectly observes the information bits sent by Alice to Bob and also has access to a correlated source that can be used as side information. A second encoder (referred to as Charlie) helps Bob estimate Alice's source by sending a compressed version of its own correlated observation over a (private) rate-limited link, which is observed only by Bob. The problem at hand can thus be seen as a unification of the Berger-Tung and secure source coding setups. Inner and outer bounds on the so-called rates-distortion-equivocation region are derived. The inner region turns out to be tight in two cases: (i) uncoded side information at Bob and (ii) lossless reconstruction of both sources at Bob (secure distributed lossless compression). Application examples to secure lossy source coding of Gaussian and binary sources in the presence of Gaussian and binary/ternary (respectively) side information are also considered. Optimal coding schemes are characterized for some cases of interest in which the statistical differences between the side information at the decoders and the presence of a non-zero distortion at Bob can be fully exploited to guarantee secrecy.
    Comment: 26 pages, 16 figures, 2 tables
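The equivocation rate in the abstract above is the conditional entropy of the source given everything the eavesdropper sees. As a minimal toy illustration (not the paper's multiterminal setting), the sketch below computes H(X|E) for a uniform binary source X observed by Eve through a binary symmetric channel with an assumed crossover probability p; all names and the value of p are illustrative.

```python
from math import log2

def h(p):
    """Binary entropy function h(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def equivocation(joint):
    """H(X|E) in bits from a joint pmf given as {(x, e): probability}."""
    # Marginal of Eve's observation E.
    pe = {}
    for (x, e), pr in joint.items():
        pe[e] = pe.get(e, 0.0) + pr
    # H(X|E) = -sum p(x,e) log p(x|e).
    H = 0.0
    for (x, e), pr in joint.items():
        if pr > 0:
            H -= pr * log2(pr / pe[e])
    return H

# Toy model: X ~ Bernoulli(1/2), Eve sees E = X xor N with N ~ Bernoulli(p).
p = 0.11  # assumed crossover probability, for illustration only
joint = {(x, x ^ n): 0.5 * (p if n else 1 - p) for x in (0, 1) for n in (0, 1)}
```

For this symmetric model the equivocation reduces to h(p): noisier side information at Eve means more residual uncertainty, which is exactly the quantity the schemes above try to keep large.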

    High-Rate Vector Quantization for the Neyman-Pearson Detection of Correlated Processes

    This paper investigates the effect of quantization on the performance of the Neyman-Pearson test. It is assumed that a sensing unit observes samples of a correlated stationary ergodic multivariate process. Each sample is passed through an N-point quantizer and transmitted to a decision device which performs a binary hypothesis test. For any false-alarm level, it is shown that the miss probability of the Neyman-Pearson test converges to zero exponentially as the number of samples tends to infinity, provided that the observed process satisfies certain mixing conditions. The main contribution of this paper is a compact closed-form expression for the error exponent in the high-rate regime, i.e., when the number N of quantization levels tends to infinity, generalizing previous results of Gupta and Hero to the case of non-independent observations. If d denotes the dimension of one sample, it is proved that the error exponent converges at rate N^{2/d} to the one obtained in the absence of quantization. As an application, relevant high-rate quantization strategies that lead to a large error exponent are determined. Numerical results indicate that the proposed quantization rule can yield better detection performance than existing ones.
    Comment: 47 pages, 7 figures, 1 table. To appear in the IEEE Transactions on Information Theory
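The quantize-then-test pipeline described above can be sketched in a few lines. The toy below uses i.i.d. scalar Gaussian observations and a uniform N-point quantizer, then runs a likelihood-ratio (Neyman-Pearson) test on the per-cell log-likelihood ratios; the i.i.d. Gaussian model, the interval [lo, hi], and every function name are illustrative assumptions, not the paper's correlated setting or its optimized quantization rule.

```python
import math
import random

def quantize(x, lo, hi, N):
    """Uniform N-point quantizer on [lo, hi]; returns the cell index."""
    if x <= lo:
        return 0
    if x >= hi:
        return N - 1
    return int((x - lo) / (hi - lo) * N)

def cell_llr(k, lo, hi, N, pdf0, pdf1, grid=200):
    """Log-likelihood ratio log(P1(cell k) / P0(cell k)) via midpoint integration."""
    a = lo + k * (hi - lo) / N
    b = a + (hi - lo) / N
    w = (b - a) / grid
    xs = [a + (b - a) * (i + 0.5) / grid for i in range(grid)]
    p0 = sum(pdf0(x) for x in xs) * w
    p1 = sum(pdf1(x) for x in xs) * w
    return math.log(p1 / p0)

def np_test(samples, threshold, lo=-4.0, hi=5.0, N=8):
    """Accept H1 iff the summed quantized log-likelihood ratio exceeds threshold."""
    pdf0 = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)          # H0: N(0, 1)
    pdf1 = lambda x: math.exp(-(x - 1) ** 2 / 2) / math.sqrt(2 * math.pi)   # H1: N(1, 1)
    table = [cell_llr(k, lo, hi, N, pdf0, pdf1) for k in range(N)]
    llr = sum(table[quantize(x, lo, hi, N)] for x in samples)
    return llr > threshold
```

Even with a coarse N = 8 quantizer the summed LLR separates the hypotheses reliably as the sample count grows, which is the exponential decay of the miss probability the abstract refers to; the paper's point is to quantify how fast the exponent approaches the unquantized one as N grows.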

    Bose-Glass behaviour in Bi_{2}Sr_{2}Ca_{1-x}Y_{x}Cu_{2}O_{8} crystals with columnar defects: experimental evidence for variable-range hopping

    We report on vortex transport in Bi_{2}Sr_{2}Ca_{1-x}Y_{x}Cu_{2}O_{8} crystals irradiated at different doses of heavy ions. We show evidence of a flux-creep resistivity typical of a variable-range vortex-hopping mechanism, as predicted by Nelson and Vinokur.
    Comment: 5 pages, LaTeX2e (uses elsart.cls), 1 Encapsulated PostScript figure

    Asymptotically fast polynomial matrix algorithms for multivariable systems

    We present the asymptotically fastest known algorithms for some basic problems on univariate polynomial matrices: rank, nullspace, determinant, generic inverse, and reduced form. We show that they can essentially be reduced to two computer-algebra techniques, minimal basis computation and matrix fraction expansion/reconstruction, together with polynomial matrix multiplication. These reductions imply that all of these problems can be solved in about the same amount of time as polynomial matrix multiplication.

    Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix

    We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n x n matrix of degree d over a field K, we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n,d) = softO(n^omega d) operations, with omega the exponent of matrix multiplication over K, then the algorithm uses softO(MM(n,d)) operations in K. (The softO notation indicates some missing logarithmic factors.) The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small-degree vectors in the nullspace seen as a K[x]-module.
    Comment: Research Report LIP RR2005-03, January 2005
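For intuition about what "rank of a polynomial matrix" means, a much simpler randomized check than the lifting-based algorithm above is to evaluate the matrix at a random point: the rank over K(x) equals the rank of the evaluated scalar matrix unless the point happens to hit a root of a maximal nonzero minor. The sketch below (Monte Carlo, not the paper's Las Vegas-certified method) represents each entry as a coefficient list, lowest degree first; all names are illustrative.

```python
import random
from fractions import Fraction

def poly_eval(coeffs, x):
    """Horner evaluation of a polynomial given as [c0, c1, ...] (c0 + c1*x + ...)."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def rank_at(M, x):
    """Rank of the scalar matrix M(x) by exact Gaussian elimination over Q."""
    A = [[poly_eval(entry, x) for entry in row] for row in M]
    m, n = len(A), len(A[0]) if A else 0
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, m) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][col] / A[r][col]
            for j in range(col, n):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

def poly_matrix_rank(M, trials=3, bound=10**6):
    """Monte Carlo rank over K(x): evaluation rank is a lower bound, tight w.h.p."""
    return max(rank_at(M, Fraction(random.randrange(1, bound))) for _ in range(trials))
```

For example, [[x, x^2], [x^2, x^3]] has rank 1 since the second row is x times the first; any evaluation away from the bad points reveals this. The paper's contribution is doing rank and nullspace *with certification* in essentially matrix-multiplication time, which this naive check does not provide.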

    Secure Lossy Source Coding with Side Information at the Decoders

    This paper investigates the problem of secure lossy source coding in the presence of an eavesdropper, with arbitrary correlated side information at the legitimate decoder (referred to as Bob) and the eavesdropper (referred to as Eve). This scenario consists of an encoder that wishes to compress a source so as to satisfy the desired requirements on: (i) the distortion level at Bob and (ii) the equivocation rate at Eve. It is assumed that the decoders have access to correlated sources as side information. This problem can thus be seen as a generalization of the well-known Wyner-Ziv problem that takes the security requirements into account. A complete characterization of the rate-distortion-equivocation region for the case of arbitrary correlated side information at the decoders is derived. Several special cases of interest and an application example to secure lossy source coding of binary sources in the presence of binary and ternary side information are also considered. It is shown that the statistical differences between the side information at the decoders and the presence of non-zero distortion at the legitimate decoder can be useful in terms of secrecy. Applications of these results arise in a variety of distributed sensor network scenarios.
    Comment: 7 pages, 5 figures, 1 table, to be presented at Allerton 201

    Computing the Kalman form

    We present two algorithms for computing the Kalman form of a linear control system. The first is based on the technique developed by Keller-Gehrig for the computation of the characteristic polynomial; its cost is a logarithmic number of matrix multiplications. To our knowledge, this improves the best previously known algebraic complexity by an order of magnitude. We then also present a cubic algorithm proven to be more efficient in practice.
    Comment: 10 pages
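The Kalman form separates a system (A, B) into controllable and uncontrollable parts, and the dimension of the controllable part is the rank of the controllability matrix [B, AB, ..., A^{n-1}B]. The sketch below illustrates that cubic-style baseline computation (it is not the Keller-Gehrig-based fast algorithm of the abstract, and the names are illustrative), using exact rational arithmetic.

```python
from fractions import Fraction

def mat_mul(A, B):
    """Plain matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B], blocks concatenated columnwise."""
    n = len(A)
    blocks, cur = [], B
    for _ in range(n):
        blocks.append(cur)
        cur = mat_mul(A, cur)
    return [sum((blk[i] for blk in blocks), []) for i in range(n)]

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    A = [[Fraction(e) for e in row] for row in M]
    m, n = len(A), len(A[0]) if A else 0
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, m) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][col] / A[r][col]
            for j in range(col, n):
                A[i][j] -= f * A[r][j]
        r += 1
    return r
```

For instance, A = [[1, 1], [0, 1]] with B = [[1], [0]] gives a rank-1 controllability matrix (the second state is unreachable), so the Kalman form has a 1-dimensional controllable block; the abstract's first algorithm reaches the same decomposition with only a logarithmic number of matrix multiplications.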

    Polygons vs. clumps of discs: a numerical study of the influence of grain shape on the mechanical behaviour of granular materials

    We performed a series of numerical vertical compression tests on assemblies of 2D granular material using a Discrete Element code and studied the results with regard to grain shape. The samples consist of 5,000 grains made from either three overlapping discs (clumps, i.e., grains with concavities) or six-edged polygons (convex grains). The two grain types have similar external envelopes, controlled by a geometrical parameter α. In this paper, the numerical procedure is briefly presented, followed by a description of the granular model used. Observations and a mechanical analysis of dense and loose granular assemblies under isotropic loading are given. The mechanical response of our numerical granular samples is then studied in the framework of the classical vertical compression test with constant lateral stress (biaxial test). The comparison of the macroscopic responses of dense and loose samples with various grain shapes shows that when α is viewed as a concavity parameter, it is a relevant variable for increasing the mechanical performance of dense samples. When α is viewed as an envelope deviation from perfect sphericity, it can control the mechanical performance at large strains. Finally, we present some remarks concerning the kinematics of the deformed samples: while some polygon samples subjected to vertical compression present large damage zones (whatever the polygon shape), dense samples made of clumps always exhibit thin reflecting shear bands. This paper was written as part of a CEGEO research project (www.granuloscience.com).
    Comment: This version of the paper doesn't include figures. Visit the journal web site to download the final version of the paper with the figures