
    On the Anonymization of Differentially Private Location Obfuscation

    Obfuscation techniques in location-based services (LBSs) have been shown to be useful for hiding the concrete locations of service users, but they do not necessarily provide anonymity. We quantify the anonymity of location data obfuscated by the planar Laplacian mechanism and by the optimal geo-indistinguishable mechanism of Bordenabe et al. We empirically show that the latter provides stronger anonymity than the former, in the sense that more users in the database satisfy k-anonymity. To formalize and analyze such approximate anonymity we introduce the notion of asymptotic anonymity. We then show that location data obfuscated by the optimal geo-indistinguishable mechanism can be anonymized by removing a smaller number of users from the database. Furthermore, we demonstrate that the optimal geo-indistinguishable mechanism has better utility both for users and for data analysts. Comment: ISITA'18 conference paper
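
    For concreteness, here is a minimal sketch of the planar Laplace mechanism that the abstract compares against; the function name, parameters, and use of SciPy are our illustration, not the paper's code:

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon, rng=None):
    """Obfuscate a 2-D location with planar Laplace noise.

    Sketch of the standard geo-indistinguishability mechanism: draw a
    uniform direction, then a radius via inverse-CDF sampling, which
    involves the -1 branch of the Lambert W function.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi)   # direction, uniform on the circle
    p = rng.uniform(0.0, 1.0)               # uniform draw for the radial CDF
    r = -(np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0) / epsilon
    return x + r * np.cos(theta), y + r * np.sin(theta)
```

    Smaller epsilon yields noisier, more private outputs; the paper's question is how much k-anonymity such obfuscated locations retain.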

    A combinatorial approach to jumping particles

    In this paper we consider a model of particles jumping on a row of cells, known in physics as the one-dimensional totally asymmetric exclusion process (TASEP). More precisely, we deal with the TASEP with open or periodic boundary conditions and with two or three types of particles. From the point of view of combinatorics, a remarkable feature of this Markov chain is that it involves Catalan numbers in several entries of its stationary distribution. We give a combinatorial interpretation and a simple proof of these observations. In doing so we reveal a second row of cells, which is used by particles to travel backward. As a byproduct we also obtain an interpretation of the occurrence of the Brownian excursion in the description of the density of particles on a long row of cells. Comment: 24 figures
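
    As a quick illustration of the chain itself (not of the paper's combinatorial construction), the sketch below simulates the single-species TASEP with open boundaries; the entry and exit rates alpha and beta, and all names, are ours:

```python
import numpy as np

def tasep_update(cells, alpha, beta, rng):
    """One random update of the open-boundary TASEP.

    cells is a 0/1 array (1 = particle). Index 0 of the chosen move is
    entry on the left, index len(cells) is exit on the right; otherwise
    a particle jumps one cell to the right if the target cell is empty.
    """
    n = len(cells)
    i = rng.integers(0, n + 1)
    if i == 0:
        if cells[0] == 0 and rng.random() < alpha:
            cells[0] = 1                  # particle enters on the left
    elif i == n:
        if cells[-1] == 1 and rng.random() < beta:
            cells[-1] = 0                 # particle exits on the right
    elif cells[i - 1] == 1 and cells[i] == 0:
        cells[i - 1], cells[i] = 0, 1     # bulk jump to the right

rng = np.random.default_rng(0)
cells = np.zeros(50, dtype=int)
for _ in range(200_000):
    tasep_update(cells, alpha=0.5, beta=0.5, rng=rng)
print(cells.mean())  # empirical particle density after many updates
```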

    The importance of better models in stochastic optimization

    Standard stochastic optimization methods are brittle: they are sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives. To address these challenges, we investigate models for stochastic minimization and learning problems that exhibit better robustness to problem families and algorithmic parameters. With appropriately accurate models, which we call the aProx family, stochastic methods can be made stable, provably convergent, and asymptotically optimal; even modeling that the objective is nonnegative is sufficient for this stability. We extend these results beyond convexity to weakly convex objectives, which include compositions of convex losses with smooth functions common in modern machine learning applications. We highlight the importance of robustness and accurate modeling with a careful experimental evaluation of convergence time and algorithm sensitivity.
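
    The claim that "even modeling that the objective is nonnegative is sufficient" corresponds to a truncated linear model; a minimal sketch of that update, with our own variable names, is:

```python
import numpy as np

def truncated_step(x, loss, grad, alpha):
    """One stochastic step using the truncated linear model.

    The model max(loss + <grad, y - x>, 0) is valid whenever the loss is
    nonnegative; its proximal step is an SGD step whose effective stepsize
    is clipped so the modeled loss cannot go below zero.
    """
    g2 = float(np.dot(grad, grad))
    if g2 == 0.0:
        return x                           # this sample is already minimized
    return x - min(alpha, loss / g2) * grad
```

    Compared with plain SGD, the clipped stepsize is what buys the robustness to stepsize choice that the abstract highlights.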

    Mean Estimation from Adaptive One-bit Measurements

    We consider the problem of estimating the mean of a normal distribution under the following constraint: the estimator can access only a single bit from each sample from this distribution. We study the squared error risk of this estimation as a function of the number of samples and one-bit measurements n. We consider an adaptive estimation setting where the single bit sent at step n is a function of both the new sample and the previously acquired n−1 bits. For this setting, we show that no estimator can attain an asymptotic mean squared error smaller than π/(2n) + O(n^{-2}) times the variance. In other words, the one-bit restriction increases the number of samples required for a prescribed estimation accuracy by a factor of at least π/2 compared to the unrestricted case. In addition, we provide an explicit estimator that attains this asymptotic error, showing that, rather surprisingly, only π/2 times more samples are required to attain estimation performance equivalent to the unrestricted case.
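
    A natural adaptive scheme attaining this rate thresholds each sample at the current estimate and applies a Robbins-Monro update. The sketch below assumes the variance is known and uses our own naming, so treat it as an illustration rather than the paper's exact estimator:

```python
import numpy as np

def one_bit_mean(samples, sigma):
    """Estimate the mean of N(mu, sigma^2) from one adaptive bit per sample."""
    mu = 0.0
    gain = sigma * np.sqrt(np.pi / 2.0)   # source of the pi/2 factor above
    for n, x in enumerate(samples, start=1):
        bit = 1.0 if x >= mu else -1.0    # the single bit sent at step n
        mu += gain * bit / n              # stochastic-approximation update
    return mu

rng = np.random.default_rng(1)
xs = rng.normal(loc=2.0, scale=1.0, size=100_000)
print(one_bit_mean(xs, sigma=1.0))  # close to the true mean 2.0
```

    With this gain, standard stochastic-approximation theory gives asymptotic mean squared error π·sigma²/(2n), matching the lower bound quoted in the abstract.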