
    Lossy Source Coding with Reconstruction Privacy

    We consider the problem of lossy source coding with side information under a privacy constraint: the reconstruction sequence at the decoder should be kept secret, to a certain extent, from another terminal such as an eavesdropper, the sender, or a helper. We are interested in how the reconstruction privacy constraint at a particular terminal affects the rate-distortion tradeoff. In this work, we allow the decoder to use a random mapping, and give inner and outer bounds on the rate-distortion-equivocation region for the cases where the side information is available non-causally and causally at the decoder. In the special case where each reconstruction symbol depends only on the source description and the current side information symbol, the complete rate-distortion-equivocation region is provided. A binary example is given, illustrating a new tradeoff induced by the privacy constraint and a gain from the use of a stochastic decoder.
    Comment: 22 pages, added proofs, to be presented at ISIT 201
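
    To fix ideas, a rate-distortion-equivocation region collects the triples of rate, distortion, and equivocation that are simultaneously achievable. A schematic single-letter form of such a region, with Wyner-Ziv-style side information $Y$ at the decoder and an auxiliary variable $U$, might read as follows; this is only an illustrative sketch under assumed notation, not the paper's actual inner or outer bound:

    \[
    \mathcal{R} = \bigl\{ (R, D, \Delta) :\ R \ge I(X;U \mid Y),\quad
    \mathbb{E}\,d\bigl(X, \hat{X}\bigr) \le D,\quad
    H(\hat{X} \mid E) \ge \Delta \bigr\},
    \]

    where $E$ denotes the observation of the terminal from which the reconstruction $\hat{X}$ must be kept partially secret.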

    Stabilization of Linear Systems Over Gaussian Networks

    The problem of remotely stabilizing a noisy linear time-invariant plant over a Gaussian relay network is addressed. The network comprises a sensor node, a group of relay nodes, and a remote controller. The sensor and the relay nodes operate subject to an average transmit power constraint, and they can cooperate to communicate the observations of the plant's state to the remote controller. The communication links between all nodes are modeled as Gaussian channels. Necessary as well as sufficient conditions for mean-square stabilization over various network topologies are derived. The sufficient conditions are in general obtained using delay-free linear policies, and the necessary conditions are obtained using information-theoretic tools. Settings where linear policies are optimal, asymptotically optimal (in certain parameters of the system), and suboptimal are identified. For the case of noisy multi-dimensional sources controlled over scalar channels, it is shown that linear time-varying policies achieve the minimum capacity requirement, meeting the fundamental lower bound. For the case of noiseless sources and parallel channels, non-linear policies that meet the lower bound are identified.
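
    The information-theoretic necessary conditions in this line of work typically require the network's capacity to exceed the intrinsic entropy rate of the plant's unstable dynamics. A minimal sketch of that classical data-rate bound (my own illustration, not code from the paper; the function name is hypothetical):

    import numpy as np

    def min_stabilization_rate(A: np.ndarray) -> float:
        # Classical data-rate lower bound: mean-square stabilization requires
        # a rate (bits/sample) exceeding the sum of log2|lambda| over the
        # unstable eigenvalues of the plant matrix A.
        eigvals = np.linalg.eigvals(A)
        return float(sum(np.log2(abs(lam)) for lam in eigvals if abs(lam) >= 1.0))

    # Example: unstable modes 2 and 1.5 need more than log2(2) + log2(1.5) ~ 1.585 bits/sample.
    A = np.diag([2.0, 1.5, 0.5])
    print(min_stabilization_rate(A))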

    Private Variable-Length Coding with Non-zero Leakage

    A private compression design problem is studied, where an encoder observes useful data $Y$, wishes to compress it using a variable-length code, and communicates it through an unsecured channel. Since $Y$ is correlated with private data $X$, the encoder uses a private compression mechanism to design the encoded message $\mathcal{C}$ and sends it over the channel. An adversary is assumed to have access to the output of the encoder, i.e., $\mathcal{C}$, and tries to estimate $X$. Furthermore, it is assumed that both encoder and decoder have access to a shared secret key $W$. In this work, we generalize the perfect privacy (secrecy) assumption and allow a non-zero leakage between the private data $X$ and the encoded message $\mathcal{C}$. The design goal is to encode the message $\mathcal{C}$ with the minimum possible average length while satisfying non-perfect privacy constraints. We find upper and lower bounds on the average length of the encoded message under different privacy metrics and study them in special cases. For the achievability we use a two-part coding construction and extended versions of the Functional Representation Lemma. Lastly, an example shows that the bounds can be asymptotically tight.
    Comment: arXiv admin note: text overlap with arXiv:2306.1318
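
    Since the privacy constraints here are stated via leakage between $X$ and $\mathcal{C}$, a common metric is the mutual information $I(X;\mathcal{C})$. A small self-contained sketch of how such leakage can be evaluated for a toy joint distribution (illustrative only; the distribution below is made up):

    import numpy as np

    def mutual_information_bits(p_joint: np.ndarray) -> float:
        # I(X;C) in bits, computed from a joint pmf p_joint[x, c].
        px = p_joint.sum(axis=1, keepdims=True)
        pc = p_joint.sum(axis=0, keepdims=True)
        mask = p_joint > 0
        return float((p_joint[mask] * np.log2(p_joint[mask] / (px @ pc)[mask])).sum())

    # A perfectly private code would give I(X;C) = 0; this toy code leaks ~0.278 bits.
    p = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
    print(mutual_information_bits(p))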

    Multi-User Privacy Mechanism Design with Non-zero Leakage

    A privacy mechanism design problem is studied through the lens of information theory. In this work, an agent observes useful data $Y=(Y_1,\dots,Y_N)$ that is correlated with private data $X=(X_1,\dots,X_N)$, which is assumed to be also accessible by the agent. Here, we consider $K$ users, where user $i$ demands a sub-vector of $Y$, denoted by $C_i$. The agent wishes to disclose $C_i$ to user $i$. Since $C_i$ is correlated with $X$, it cannot be disclosed directly. A privacy mechanism is designed to generate disclosed data $U$ which maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint in terms of mutual information. In a similar work it was assumed that $X_i$ is a deterministic function of $Y_i$; in this work, we let $X_i$ and $Y_i$ be arbitrarily correlated. First, an upper bound on the privacy-utility trade-off is obtained by using a specific transformation together with the Functional Representation Lemma and the Strong Functional Representation Lemma; we then show that the upper bound can be decomposed into $N$ parallel problems. Next, lower bounds on the privacy-utility trade-off are derived using the Functional Representation Lemma and the Strong Functional Representation Lemma. The upper bound is tight within a constant, and the lower bounds assert that the disclosed data is independent of all $\{X_j\}_{j=1}^N$ except one, to which the maximum allowed leakage is allocated. Finally, the obtained bounds are studied in special cases.
    Comment: arXiv admin note: text overlap with arXiv:2205.04881, arXiv:2201.0873
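
    In symbols, the design problem described above can be sketched as the constrained optimization below, where the weights $w_i$ and the leakage budget $\epsilon$ are assumed notation rather than the paper's own:

    \[
    \max_{P_{U \mid X, Y}} \ \sum_{i=1}^{K} w_i \, I(U; C_i)
    \quad \text{subject to} \quad I(U; X) \le \epsilon .
    \]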

    A Design Framework for Strongly $\chi^2$-Private Data Disclosure

    In this paper, we study a stochastic disclosure control problem using information-theoretic methods. The useful data to be disclosed depend on private data that should be protected. Thus, we design a privacy mechanism to produce new data which maximizes the disclosed information about the useful data under a strong $\chi^2$-privacy criterion. For sufficiently small leakage, the privacy mechanism design problem can be studied geometrically in the space of probability distributions by a local approximation of the mutual information. By using methods from Euclidean information geometry, the original highly challenging optimization problem can be reduced to the problem of finding the principal right-singular vector of a matrix, which characterizes the optimal privacy mechanism. In two extensions, we first consider a scenario where an adversary receives a noisy version of the user's message, and we then look for a mechanism which produces $U$ based on observing $X$, maximizing the mutual information between $U$ and $Y$ while satisfying the privacy criterion on $U$ and $Z$ under the Markov chain $(Z,Y)-X-U$.
    Comment: 16 pages, 2 figure
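
    The reduction described above ends in a standard linear-algebra step: extracting the principal right-singular vector of a matrix. A generic sketch of that step (the matrix W below is a placeholder, not the paper's construction):

    import numpy as np

    def principal_right_singular_vector(W: np.ndarray) -> np.ndarray:
        # Rows of vt are the right-singular vectors of W, ordered by
        # decreasing singular value; the first row is the principal one.
        _, _, vt = np.linalg.svd(W)
        return vt[0]

    W = np.array([[1.0, 0.2],
                  [0.3, 0.8]])
    print(principal_right_singular_vector(W))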

    New Privacy Mechanism Design With Direct Access to the Private Data

    The design of a statistical signal processing privacy problem is studied where the private data is assumed to be observable. In this work, an agent observes useful data $Y$, which is correlated with private data $X$, and wants to disclose the useful information to a user. A statistical privacy mechanism is employed to generate data $U$ based on $(X,Y)$ that maximizes the revealed information about $Y$ while satisfying a privacy criterion. To this end, we use extended versions of the Functional Representation Lemma and the Strong Functional Representation Lemma and combine them with a simple observation which we call the separation technique. New lower bounds on the privacy-utility trade-off are derived, and we show that they can improve upon previous bounds. We study the obtained bounds in different scenarios and compare them with previous results.
    Comment: arXiv admin note: substantial text overlap with arXiv:2201.08738, arXiv:2212.1247
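
    For reference, the Functional Representation Lemma invoked above states, in its standard form, that for any pair of random variables $(X, Y)$ there exists a random variable $U$ independent of $X$ and a deterministic function $f$ such that

    \[
    Y = f(X, U), \qquad U \perp X .
    \]

    The strong version (due to Li and El Gamal) additionally guarantees, roughly, that $H(Y \mid U) \le I(X;Y) + \log\bigl(I(X;Y) + 1\bigr) + 4$.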

    Inferential Privacy: From Impossibility to Database Privacy

    We investigate the possibility of guaranteeing inferential privacy for mechanisms that release useful information about some data containing sensitive information, denoted by $X$. We describe a general model of utility and privacy in which utility is achieved by disclosing the value of low-entropy features of $X$, while privacy is maintained by keeping high-entropy features of $X$ secret. Adopting this model, we prove that meaningful inferential privacy guarantees can be obtained, even though this is commonly considered impossible due to the well-known result of Dwork and Naor. Then, we specifically discuss a privacy measure called pointwise maximal leakage (PML), whose guarantees are of the inferential type. We use PML to show that differential privacy admits an inferential formulation: it describes the information leaking about a single entry in a database assuming that every other entry is known, and considering the worst-case distribution on the data. Moreover, we define inferential instance privacy (IIP) as a bound on the (unconditional) information leaking about a single entry in the database under the worst-case distribution, and show that it is equivalent to free-lunch privacy. Overall, our approach to privacy unifies, formalizes, and explains many existing ideas, e.g., why the informed-adversary assumption may lead to underestimating the information leaking about each entry in the database. Furthermore, insights obtained from our results suggest general methods for improving privacy analyses; for example, we argue that smaller privacy parameters can be obtained by excluding low-entropy prior distributions from protection.
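
    For context, the standard definition of $\epsilon$-differential privacy that the inferential reformulation above targets: a mechanism $M$ is $\epsilon$-differentially private if, for all pairs of neighboring databases $D, D'$ and all measurable output sets $S$,

    \[
    \Pr[M(D) \in S] \ \le\ e^{\epsilon} \, \Pr[M(D') \in S] .
    \]

    Free-lunch privacy, mentioned above, imposes the same bound for all pairs $D, D'$, not just neighboring ones.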