15 research outputs found

    Deploying and Evaluating Pufferfish Privacy for Smart Meter Data (Technical Report)

    Information hiding ensures privacy by transforming personalized data so that certain sensitive information can no longer be inferred. One state-of-the-art information-hiding approach is the Pufferfish framework. It lets users specify their privacy requirements as so-called discriminative pairs of secrets, and it perturbs data so that an adversary does not learn about the probability distribution of such pairs. However, deploying the framework on complex data such as time series requires application-specific work, including a general definition of how secrets are represented in the data. Another issue is that the tradeoff between Pufferfish privacy and the utility of the data is largely unexplored in quantitative terms. In this study, we quantify this tradeoff for smart meter data, which contains fine-grained time series of power consumption from private households. Disseminating such data in an uncontrolled way puts privacy at risk. We investigate how time series of energy-consumption data must be transformed to facilitate specifying secrets that Pufferfish can use. We ensure the generality of our study by looking at different information-extraction approaches, such as re-identification and non-intrusive appliance load monitoring, in combination with a comprehensive set of secrets. Additionally, we provide quantitative utility results for a real-world application, the so-called local energy market.
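
The idea of perturbing a time series so that a discriminative pair of secrets stays hidden can be illustrated with a minimal sketch. This is not the paper's actual Pufferfish mechanism; the consumption values, the secret pair, and the sensitivity bound are all hypothetical, and simple Laplace noise stands in for the framework's perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smart-meter time series: kWh per 15-minute interval.
consumption = np.array([0.12, 0.10, 0.85, 0.90, 0.15, 0.11])

def perturb(series, sensitivity, epsilon):
    """Laplace perturbation: the scale sensitivity/epsilon bounds how much
    an adversary can shift belief between a discriminative pair of secrets."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=series.shape)
    return series + noise

# Suppose the pair of secrets is "dishwasher ran" vs. "dishwasher did not run",
# and the two hypotheses differ by at most 1.0 kWh in any single interval.
private_series = perturb(consumption, sensitivity=1.0, epsilon=0.5)
```

A smaller epsilon widens the noise and hides the secret more strongly, at the cost of the utility the study quantifies.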


    Privacy-Enhancing Methods for Time Series and their Impact on Electronic Markets

    The amount of collected time series data containing personal information has increased in recent years; e.g., smart meters store time series of power-consumption data. Using such data for the benefit of society requires methods to protect the privacy of individuals, and those methods need to modify the data. In this thesis, we contribute a provable privacy method for time series and introduce an application-specific measure in the smart grid domain to evaluate its impact on data quality.

    Studying Utility Metrics for Differentially Private Low-Voltage Grid Monitoring


    Swellfish Privacy: Exploiting Time-Dependent Relevance for Continuous Differential Privacy: Technical Report

    Today, continuous publishing of differentially private query results is the de facto standard. The challenge is to add enough noise to satisfy a given privacy level while adding as little noise as possible to keep data utility high. In this context, we observe that the privacy goals of individuals vary significantly over time. For instance, one might aim to hide being on vacation only during school holidays. This observation, named time-dependent relevance, implies two effects which, properly exploited, allow tuning data utility: time-variant sensitivity (TEAS) and a time-variant number of affected query results (TINAR). As today's DP frameworks, by design, cannot exploit these effects, we propose Swellfish privacy. There, with policy collections, individuals can specify combinations of time-dependent privacy goals. Query results are then Swellfish-private if the streams are indistinguishable with respect to such a collection. We propose two tools for designing Swellfish-private mechanisms, namely temporal sensitivity and a composition theorem, each allowing one of the effects to be exploited. In a realistic case study, we show empirically that exploiting both effects improves data utility by one to three orders of magnitude compared to state-of-the-art w-event DP mechanisms. Finally, we generalize the case study by showing how to estimate the strength of the effects for arbitrary use cases.
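
The core intuition of time-dependent relevance can be sketched in a few lines. This is a toy illustration, not the Swellfish mechanism itself: the stream, the "holiday" policy window, and the noise parameters are all made up, and plain Laplace noise stands in for the paper's temporal-sensitivity machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily query results over two weeks (e.g., household load sums).
stream = np.full(14, 5.0)

# Time-dependent relevance: protection is only wanted during a hypothetical
# holiday window (days 5..9); outside it, no perturbation is required.
relevant = np.zeros(14, dtype=bool)
relevant[5:10] = True

def publish(stream, relevant, sensitivity=1.0, epsilon=1.0):
    """Add Laplace noise only where the policy marks the interval relevant;
    irrelevant intervals contribute zero temporal sensitivity."""
    out = stream.copy()
    scale = sensitivity / epsilon
    out[relevant] += rng.laplace(0.0, scale, size=int(relevant.sum()))
    return out

released = publish(stream, relevant)
```

Because only five of fourteen intervals receive noise, the released stream is far closer to the truth than a w-event mechanism that must perturb every interval, which is the source of the utility gains the abstract reports.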

    Evaluating Privacy-Friendly Mobility Analytics on Aggregate Location Data

    Information about people's movements and the locations they visit enables a wide range of mobility-analytics applications, e.g., real-time traffic maps or urban planning, aiming to improve quality of life in modern smart cities. Alas, the availability of users' fine-grained location data reveals sensitive information about them, such as home and work places, lifestyle, and political or religious inclinations. In an attempt to mitigate this, aggregation is often employed as a strategy that allows analytics and machine-learning tasks while protecting the privacy of individual users' location traces. In this thesis, we perform an end-to-end evaluation of crowdsourced privacy-friendly location aggregation, aiming to understand its usefulness for analytics as well as its privacy implications for users who contribute their data. First, we present a time-series methodology which, along with privacy-friendly crowdsourcing of aggregate locations, supports mobility analytics such as traffic forecasting and mobility anomaly detection. Next, we design quantification frameworks and methodologies that let us reason about the privacy loss stemming from the collection or release of aggregate location information against knowledgeable adversaries that aim to infer users' profiles, locations, or membership. We then use these frameworks to evaluate defenses ranging from generalization and hiding to differential privacy, which can be employed to prevent inferences on aggregate location statistics, in terms of privacy protection as well as utility loss for analytics tasks. Our results highlight that, while location aggregation is useful for mobility analytics, it is a weak privacy-protection mechanism in this setting, and that additional defenses can only protect privacy if some statistical utility is sacrificed. Overall, the tools presented in this thesis can be used by providers who wish to assess the quality of privacy protection before data release, and our results have several implications for current location-data practices and applications.
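
The pipeline of aggregating location traces and then adding a differentially private defense can be sketched minimally. The traces, region identifiers, and the per-region sensitivity of 1 (each user counted at most once per region, ignoring cross-region correlations) are simplifying assumptions for illustration, not the thesis's actual frameworks.

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical user traces: region ids visited during one epoch.
traces = {
    "u1": ["r1", "r2"],
    "u2": ["r2", "r3"],
    "u3": ["r2"],
}

def aggregate(traces):
    """Crowdsourced aggregation: per-region visit counts across all users,
    counting each user at most once per region."""
    counts = Counter()
    for regions in traces.values():
        counts.update(set(regions))
    return counts

def dp_release(counts, epsilon=1.0):
    """Laplace mechanism on the counts; under the simplifying assumption that
    removing one user changes each regional count by at most 1."""
    return {r: c + rng.laplace(0.0, 1.0 / epsilon) for r, c in counts.items()}

agg = aggregate(traces)   # r1 -> 1, r2 -> 3, r3 -> 1
noisy = dp_release(agg)
```

Even without noise, small aggregate counts like these leak membership information, which is exactly the kind of inference the thesis's quantification frameworks measure.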

    Science-based restoration monitoring of coastal habitats, Volume Two: Tools for monitoring coastal habitats

    Healthy coastal habitats are not only important ecologically; they also support healthy coastal communities and improve the quality of people’s lives. Despite their many benefits and values, coastal habitats have been systematically modified, degraded, and destroyed throughout the United States and its protectorates, beginning with European colonization in the 1600s (Dahl 1990). As a result, many coastal habitats around the United States are in desperate need of restoration. The monitoring of restoration projects, the focus of this document, is necessary to ensure that restoration efforts are successful, to further the science, and to increase the efficiency of future restoration efforts.

    Trustworthy, Adaptive Query Processing in Dynamic Sensor Networks to Support Assistive Systems

    Assistance systems in smart environments use a wide variety of sensors to collect large amounts of data in order to infer the intentions and future activities of their users. In most cases, more information is gathered than is necessary to fulfill the assistance system's task. The goal of this dissertation is the design and implementation of privacy-enhancing algorithms for passing sensitive sensor and context information to the analysis tools of assistance systems. To this end, the users' privacy requirements are transformed into integrity constraints of the database systems that store and evaluate the collected information. Based on the information needs of the assistance system and the privacy requirements of the user, the collected data is condensed as close to the sensor as possible through selection, reduction, compression, or aggregation by the assistance system's privacy component. If not all information can be processed locally, parts of the analysis are offloaded to other compute nodes involved in processing the data. The concept was implemented within the PArADISE framework (Privacy-AwaRe Assistive Distributed Information System Environment) and tested on an example scenario, among others in cooperation with the DFG research training group 1424 (MuSAMA, Multimodal Smart Appliances for Mobile Application).
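
The idea of condensing data as close to the sensor as possible can be sketched briefly. This is not the PArADISE implementation: the readings, the `condense` helper, and the choice of aggregates are hypothetical stand-ins for the selection/reduction/aggregation step the abstract describes.

```python
# Hypothetical room-temperature samples collected at a sensor node.
readings = [21.1, 21.3, 22.0, 21.8]

def condense(samples, needed="mean"):
    """Reduce raw samples to the single value the downstream analysis requires,
    so only that aggregate (not the raw trace) leaves the node."""
    if needed == "mean":
        return sum(samples) / len(samples)
    if needed == "max":
        return max(samples)
    raise ValueError(f"unsupported aggregate: {needed}")

forwarded = condense(readings)  # mean, approximately 21.55
```

Only `forwarded` is transmitted onward; the raw per-sample trace, which could reveal user activity, never leaves the node.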