
    A Prototype For Learning Privacy-Preserving Data Publishing

    Get PDF
    Various organizations, government agencies, companies, and individuals collect data that can later be processed with data mining methods to extract new knowledge. The processor need not be the party that collected the data, and the trustworthiness of the data processor is often unknown, so it is important to ensure that private personal data can no longer be identified from the published data. To prevent individuals from being identified, privacy-preserving methods must be applied before the data are released to processors. This master's thesis describes different threats to privacy and methods to mitigate these threats, compares those methods with one another, and gives an overview of different ways to anonymize data. The second output of the thesis is educational software that allows students to learn and practice privacy-preserving methods. The final part of the thesis validates the designed software.

    Technical Privacy Metrics: a Systematic Survey

    Get PDF
    The file attached to this record is the author's final peer-reviewed version. The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over eighty privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.
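    As an illustration of the entropy-based family of metrics such surveys cover, the sketch below computes the Shannon entropy of an adversary's probability distribution over candidate users; the function name and the example distributions are illustrative assumptions, not taken from the survey.

    ```python
    import math

    def entropy_anonymity(probabilities):
        """Shannon entropy (in bits) of the adversary's probability
        distribution over candidate users; higher means more adversary
        uncertainty, i.e. more privacy."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Uniform uncertainty over 8 candidates: entropy = 3 bits,
    # equivalent to an anonymity set of size 2**3 = 8.
    uniform = [1 / 8] * 8
    print(entropy_anonymity(uniform))  # 3.0

    # A skewed distribution leaks information: entropy drops below 3 bits.
    skewed = [0.65] + [0.05] * 7
    print(entropy_anonymity(skewed))
    ```

    This is one metric among many; the survey's point is precisely that a single number like this captures only one aspect of privacy.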

    Negative emotions boost users' activity at BBC Forum

    Full text link
    We present an empirical study of user activity in online BBC discussion forums, measured by the number of posts written by individual debaters and the average sentiment of these posts. Nearly 2.5 million posts from over 18 thousand users were investigated. Scale-free distributions were observed for activity in individual discussion threads as well as for overall activity. The number of unique users in a thread, normalized by the thread length, decays with thread length, suggesting that thread life is sustained by mutual discussions rather than by independent comments. Automatic sentiment analysis shows that most posts contain negative emotions and that the most active users in individual threads express predominantly negative sentiments. It follows that the average emotion of longer threads is more negative and that threads can be sustained by negative comments. An agent-based computer simulation model has been used to reproduce several essential characteristics of the analyzed system. The model stresses the role of discussions between users, especially emotionally laden quarrels between supporters of opposite opinions, and reproduces many observed statistics of the forum. Comment: 29 pages, 6 figures

    Towards trajectory anonymization: a generalization-based approach

    Get PDF
    Trajectory datasets are becoming popular due to the massive usage of GPS and location-based services. In this paper, we address privacy issues regarding the identification of individuals in static trajectory datasets. We first adapt the notion of k-anonymity to trajectories and propose a novel generalization-based approach for anonymizing them. We further show that releasing anonymized trajectories may still have some privacy leaks. Therefore, we propose a randomization-based reconstruction algorithm for releasing anonymized trajectory data and also present how the underlying techniques can be adapted to other anonymity standards. The experimental results on real and synthetic trajectory datasets show the effectiveness of the proposed techniques.
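    The generalization idea behind trajectory k-anonymity can be sketched as follows: snap each trajectory's points to grid cells and coarsen the grid until every generalized trajectory is shared by at least k originals. This is a minimal toy version of spatial generalization, not the paper's algorithm; all names and the sample trajectories are assumptions.

    ```python
    from collections import Counter

    def generalize(traj, cell):
        """Snap each (x, y) point of a trajectory to a grid cell of the given size."""
        return tuple((int(x // cell), int(y // cell)) for x, y in traj)

    def k_anonymize_trajectories(trajs, k, cells=(1, 2, 4, 8)):
        """Coarsen the grid until every generalized trajectory is shared
        by at least k originals (a simple global-recoding strategy)."""
        for cell in cells:
            gen = [generalize(t, cell) for t in trajs]
            counts = Counter(gen)
            if all(counts[g] >= k for g in gen):
                return cell, gen
        return None, None

    trajs = [
        [(0.1, 0.2), (1.1, 1.3)],
        [(0.4, 0.9), (1.8, 1.1)],
        [(3.2, 0.1), (2.9, 1.0)],
        [(3.9, 0.8), (2.2, 1.9)],
    ]
    cell, gen = k_anonymize_trajectories(trajs, k=2)
    print(cell, gen)  # cell size 1 already groups the trajectories in pairs
    ```

    The paper's contribution goes further (and adds a reconstruction step against residual leaks), but the privacy/utility trade-off is already visible here: larger cells give more anonymity and less precision.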

    Local and global recoding methods for anonymizing set-valued data

    Get PDF
    In this paper, we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of supermarket transactions that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the knowledge of the adversary. We define a new version of the k-anonymity guarantee, k^m-anonymity, to limit the effects of the data dimensionality, and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm that finds the optimal solution, however, at a high cost that makes it inapplicable for large, realistic problems. Then, we propose a greedy heuristic, which performs generalizations in an Apriori, level-wise fashion. The heuristic scales much better and in most of the cases finds a solution close to the optimal. Finally, we investigate the application of techniques that partition the database and perform anonymization locally, aiming at the reduction of the memory consumption and further scalability. A thorough experimental evaluation with real datasets shows that a vertical partitioning approach achieves excellent results in practice. © 2010 Springer-Verlag.
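    The k^m guarantee can be stated operationally: any combination of up to m items that an adversary may know must match at least k records. A minimal checker (not the paper's anonymization algorithm; the sample transactions are made up) might look like:

    ```python
    from collections import Counter
    from itertools import combinations

    def is_km_anonymous(db, k, m):
        """A set-valued database is k^m-anonymous if every combination of
        up to m items that occurs in some record occurs in >= k records."""
        for size in range(1, m + 1):
            combo_support = Counter()
            for record in db:
                for combo in combinations(sorted(record), size):
                    combo_support[combo] += 1
            if any(c < k for c in combo_support.values()):
                return False
        return True

    db = [{"bread", "milk"}, {"bread", "milk"}, {"bread", "beer", "milk"}]
    print(is_km_anonymous(db, k=2, m=1))  # False: "beer" appears in one record
    print(is_km_anonymous([{"bread", "milk"}] * 2, k=2, m=2))  # True
    ```

    The paper's generalization-based algorithms then repair violations by replacing rare items with a common generalized value rather than suppressing them.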

    Exploring Cyberbullying and Other Toxic Behavior in Team Competition Online Games

    Full text link
    In this work we explore cyberbullying and other toxic behavior in team-competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players, along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing a large-scale, empirically based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counteract toxic behavior. Comment: CHI'1

    A Novel Approach Of Privacy Preserving Data With Anonymizing Tree Structure

    Get PDF
    Data anonymization techniques have been proposed in order to allow processing of personal data without compromising users' privacy, as the data management community faces a big challenge in protecting individuals' personal information from attackers who try to disclose it. Data anonymization is a form of information sanitization whose intent is privacy protection: personally identifiable information is removed or encoded so that the people the data describe remain anonymous. We present the k(m,n)-anonymity privacy guarantee, which addresses adversary background knowledge of both values and structure, enforced by an improved, automatic greedy algorithm. A tree database D is k(m,n)-anonymous if any attacker who has background knowledge of m node labels and n structural (ancestor-descendant) relations between them cannot use this knowledge to identify fewer than k records in D. A tree dataset D can be transformed into a dataset D' that complies with k(m,n)-anonymity by a series of transformations. The key idea is to replace rare values with a common generalized value and to remove ancestor-descendant relations when they might lead to privacy breaches.
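    To make the guarantee concrete, the sketch below checks a heavily simplified version of k(m,n)-anonymity: each tree is reduced to a set of node labels plus a set of ancestor-descendant label pairs, and the adversary's knowledge is brute-forced. This representation, the function names, and the example trees are all assumptions for illustration, not the paper's encoding or algorithm.

    ```python
    from itertools import combinations

    def matches(tree, labels, relations):
        """tree = (label_set, ancestor_descendant_pair_set)."""
        return labels <= tree[0] and relations <= tree[1]

    def is_kmn_anonymous(trees, k, m, n):
        """Simplified check: every combination of m node labels plus n
        ancestor-descendant pairs drawn from some tree must match >= k trees."""
        for node_labels, ad_pairs in trees:
            for labels in combinations(sorted(node_labels), m):
                for rels in combinations(sorted(ad_pairs), min(n, len(ad_pairs))):
                    hits = sum(matches(t, set(labels), set(rels)) for t in trees)
                    if hits < k:
                        return False
        return True

    trees = [
        ({"a", "b"}, {("a", "b")}),
        ({"a", "b"}, {("a", "b")}),
        ({"a", "c"}, {("a", "c")}),  # rare label "c" singles this tree out
    ]
    print(is_kmn_anonymous(trees, k=2, m=1, n=1))  # False
    print(is_kmn_anonymous(trees[:2], k=2, m=1, n=1))  # True
    ```

    The paper's transformations would repair the violation by generalizing the rare label "c" to a common value or cutting the offending ancestor-descendant relation.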

    Privacy Preservation by Disassociation

    Full text link
    In this work, we focus on protection against identity disclosure in the publication of sparse multidimensional data. Existing multidimensional anonymization techniques (a) protect the privacy of users either by altering the set of quasi-identifiers of the original data (e.g., by generalization or suppression) or by adding noise (e.g., using differential privacy), and/or (b) assume a clear distinction between sensitive and non-sensitive information and sever the possible linkage. In many real-world applications the above techniques are not applicable. For instance, consider web search query logs. Suppressing or generalizing anonymization methods would remove the most valuable information in the dataset: the original query terms. Additionally, web search query logs contain millions of query terms which cannot be categorized as sensitive or non-sensitive, since a term may be sensitive for one user and non-sensitive for another. Motivated by this observation, we propose an anonymization technique termed disassociation that preserves the original terms but hides the fact that two or more different terms appear in the same record. We protect the users' privacy by disassociating record terms that participate in identifying combinations. This way the adversary cannot associate, with high probability, a record with a rare combination of terms. To the best of our knowledge, our proposal is the first to employ such a technique to provide protection against identity disclosure. We propose an anonymization algorithm based on our approach and evaluate its performance on real and synthetic datasets, comparing it against other state-of-the-art methods based on generalization and differential privacy. Comment: VLDB201
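    The core idea of disassociation, keeping every term but breaking rare co-occurrences apart, can be illustrated with a toy version: split any record containing a rare term pair into singleton chunks so the identifying combination no longer appears together. This is a simplification for illustration (the paper's algorithm forms clusters and shared/record chunks); the threshold, names, and sample records are assumptions.

    ```python
    from collections import Counter
    from itertools import combinations

    def disassociate(records, k, pair_size=2):
        """Toy disassociation: keep each record's terms, but publish any
        record whose term pairs are rare (supported by < k records) as
        disassociated singleton chunks."""
        pair_support = Counter()
        for rec in records:
            for pair in combinations(sorted(rec), pair_size):
                pair_support[pair] += 1
        out = []
        for rec in records:
            rare = any(pair_support[p] < k
                       for p in combinations(sorted(rec), pair_size))
            # A record with a rare pair is split so the pair never co-occurs.
            out.append([[t] for t in sorted(rec)] if rare else [sorted(rec)])
        return out

    records = [{"flu", "aspirin"}, {"flu", "aspirin"}, {"flu", "rare-drug"}]
    print(disassociate(records, k=2))
    ```

    Note that, unlike generalization or suppression, every original term survives in the output; only the linkage between "flu" and "rare-drug" is hidden.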

    A SURVEY ON PRIVACY PRESERVING TECHNIQUES FOR SOCIAL NETWORK DATA

    Get PDF
    In the current era, online social networks like Facebook and Twitter play a very important role in everyone's life. Social network data concerning any individual or organization can be published online at any time, with the attendant risk of leaking someone's personal information. Preserving the privacy of individuals, organizations, and companies is therefore needed before data are published online; research in this area has been carried out for many years and is still ongoing. Various existing techniques provide solutions for preserving the privacy of tabular data, called relational data, as well as of social network data represented as graphs. Techniques designed for tabular data cannot be applied directly to structured, complex graph data, which consists of vertices representing individuals and edges representing some kind of connection or relationship between the nodes. Techniques such as k-anonymity, l-diversity, and t-closeness exist to provide privacy for nodes, and techniques such as edge perturbation and edge randomization provide privacy for edges in social graphs. Development of new techniques that integrate existing ones, such as k-anonymity, edge perturbation, edge randomization, and l-diversity, to provide stronger privacy for relational and social network data in the best possible manner is still ongoing.
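    The k-anonymity notion mentioned for relational data has a simple operational form: every combination of quasi-identifier values must be shared by at least k rows. A minimal check (the column names and generalized values are made-up examples) is:

    ```python
    from collections import Counter

    def is_k_anonymous(rows, quasi_identifiers, k):
        """Every combination of quasi-identifier values must be shared
        by at least k rows of the table."""
        groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
        return all(c >= k for c in groups.values())

    rows = [
        {"age": "30-40", "zip": "476**", "disease": "flu"},
        {"age": "30-40", "zip": "476**", "disease": "cancer"},
        {"age": "20-30", "zip": "479**", "disease": "flu"},
    ]
    print(is_k_anonymous(rows, ["age", "zip"], k=2))  # False: last group has 1 row
    ```

    For graph data this check does not transfer directly, since the adversary's background knowledge may be structural (degrees, neighborhoods) rather than attribute values, which is why the survey covers edge perturbation and randomization separately.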