
Privacy Vulnerabilities of Dataset Anonymization Techniques

Vast amounts of information of all types are collected daily about people by governments, corporations, and individuals. The information is collected when users register for or use online applications, receive health-related services, use their mobile phones, utilize search engines, or perform common daily activities. As a result, there is an enormous quantity of privately owned records that describe individuals' finances, interests, activities, and demographics. These records often include sensitive data and may violate the privacy of the users if published. The common approach to safeguarding user information, or data in general, is to limit access to the storage (usually a database) by using an authentication and authorization protocol; this way, only users with legitimate permissions can access the user data. In many cases, though, the publication of user data for statistical analysis and research can be extremely beneficial for both academic and commercial uses, such as statistical research and recommendation systems. To maintain user privacy when such a publication occurs, many databases employ anonymization techniques, either on the query results or on the data itself. In this paper we examine variants of two such techniques, "data perturbation" and "query-set-size control", and discuss their vulnerabilities. Data perturbation changes the values of records in the dataset while maintaining a level of accuracy in the resulting query answers. We focus on a relatively new data perturbation method called NeNDS and show a possible partial-knowledge attack on its privacy. Query-set-size control allows a query result to be published only if at least a minimum number, k, of records satisfies the query parameters. We show that some query types relying on this method may still be used to extract hidden information, and prove that others maintain privacy even when multiple queries are used.
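For illustration, here is a minimal sketch of the data perturbation idea in Python. This is a generic additive-noise scheme, not the paper's NeNDS method, and the salary values are made up: it only shows how individual record values can be distorted while an aggregate query stays approximately accurate.

```python
import random

def perturb(values, sigma=1.0, seed=None):
    """Additive-noise perturbation: shift each record by zero-mean
    Gaussian noise so individual values are distorted while aggregate
    statistics (e.g., the mean) remain approximately accurate."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in values]

salaries = [52_000, 48_500, 61_200, 57_800, 49_900]
published = perturb(salaries, sigma=2_000, seed=42)
print(sum(published) / len(published))  # close to the true mean, 53,880
```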
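Query-set-size control can be sketched just as briefly, together with the classic differencing ("tracker") attack that motivates the concern about multi-query leakage. The record fields, names, and the choice k = 5 below are illustrative assumptions, not details from the paper.

```python
def query_sum(records, predicate, k=5):
    """Query-set-size control: answer an aggregate query only when at
    least k records match the predicate; otherwise refuse to answer."""
    matches = [r for r in records if predicate(r)]
    if len(matches) < k:
        return None  # refused: query set smaller than k
    return sum(r["salary"] for r in matches)

# Classic differencing ("tracker") attack: both queries are individually
# permitted (query sets of size >= k), yet subtracting the answers
# isolates a single record's hidden value.
people = [{"name": f"p{i}", "salary": 50_000 + i * 1_000} for i in range(10)]
total = query_sum(people, lambda r: True)                        # 10 records
all_but_target = query_sum(people, lambda r: r["name"] != "p3")  # 9 records
print(total - all_but_target)  # reveals p3's salary: 53,000
```

The attack succeeds because the threshold constrains each query set's size but not the overlap between query sets, which is the kind of multi-query leakage the abstract refers to.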