3 research outputs found

    LAYERED APPROACH FOR PERSONALIZED SEARCH ENGINE LOGS PRIVACY PRESERVING

    ABSTRACT In this paper we examine the problem of preserving privacy when publishing search engine logs. Search engines play a vital role in navigating the enormity of the Web. Privacy-preserving data publishing (PPDP) provides techniques and tools for publishing useful information while preserving data privacy. Recently, PPDP has received significant attention in the research community, and several approaches have been proposed for different data publishing scenarios. In this paper we study privacy preservation for the publication of search engine query logs. We consider the setting in which, even after eliminating all personal characteristics of the searcher that could serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Our experimental results show that the query log can be appropriately anonymized against this particular attack while retaining a significant volume of useful data. We also study why published search logs are not secure and how to secure them using data mining algorithms and methods such as generalization and suppression over quasi-identifiers.
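
    As a rough illustration of the techniques the abstract names (generalization, suppression, quasi-identifiers), the following sketch k-anonymizes toy log records. The field names (age, zip_code, query), the generalization rules, and the choice of k are assumptions made for illustration only; they are not the paper's actual algorithm.

    # Minimal sketch (not the paper's method): k-anonymize log records by
    # generalizing quasi-identifiers and suppressing equivalence classes
    # that remain smaller than k.
    from collections import defaultdict

    def generalize(record):
        """Coarsen quasi-identifiers: bucket age into decades, truncate the ZIP code."""
        decade = record["age"] // 10 * 10
        return {
            "age": f"{decade}-{decade + 9}",
            "zip_code": record["zip_code"][:3] + "**",
            "query": record["query"],  # retained payload, not treated as a quasi-identifier here
        }

    def k_anonymize(records, k=3):
        """Group records by generalized quasi-identifiers; suppress groups with fewer than k members."""
        groups = defaultdict(list)
        for rec in records:
            gen = generalize(rec)
            groups[(gen["age"], gen["zip_code"])].append(gen)
        return [rec for group in groups.values() if len(group) >= k for rec in group]

    log = [
        {"age": 34, "zip_code": "90210", "query": "flu symptoms"},
        {"age": 37, "zip_code": "90213", "query": "tax deadline"},
        {"age": 31, "zip_code": "90215", "query": "train schedule"},
        {"age": 62, "zip_code": "10001", "query": "rare condition"},  # suppressed: its class has only one member
    ]
    print(k_anonymize(log, k=3))

    With k=3, the first three records fall into one equivalence class ("30-39", "902**") and are released in generalized form, while the fourth is suppressed because no other record shares its quasi-identifier values.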

    Privacy-Preserving Updates to Anonymous and Confidential Databases

    Suppose Alice owns a k-anonymous database and needs to determine whether her database, when inserted with a tuple owned by Bob, is still k-anonymous. Also, suppose that access to the database is strictly controlled, because, for example, the data are used for certain experiments that need to be kept confidential. Clearly, allowing Alice to directly read the contents of the tuple breaks the privacy of Bob (e.g., a patient's medical record); on the other hand, the confidentiality of the database managed by Alice is violated once Bob has access to its contents. Thus, the problem is to check whether the database inserted with the tuple is still k-anonymous, without letting Alice and Bob know the contents of the tuple and the database, respectively. In this paper, we propose two protocols solving this problem on suppression-based and generalization-based k-anonymous and confidential databases. The protocols rely on well-known cryptographic assumptions, and we provide theoretical analyses to prove their soundness and experimental results to illustrate their efficiency.
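
    To make the decision problem concrete, the sketch below evaluates only the underlying plaintext predicate: does inserting a new tuple keep every equivalence class over the quasi-identifiers at size at least k? The paper's contribution is running this check without revealing the tuple or the database, via cryptographic protocols; that machinery is omitted here, and the attribute names and the "*"-suppression convention are assumptions for illustration.

    # Minimal sketch of the non-private predicate only (not the paper's protocols):
    # check whether Alice's database, extended with Bob's tuple, stays k-anonymous.
    from collections import Counter

    def equivalence_key(row, quasi_identifiers):
        """Project a row onto its quasi-identifier attributes."""
        return tuple(row[attr] for attr in quasi_identifiers)

    def still_k_anonymous(database, new_tuple, quasi_identifiers, k):
        """Return True if every equivalence class over the quasi-identifiers
        has at least k rows after inserting the new tuple."""
        counts = Counter(equivalence_key(row, quasi_identifiers) for row in database)
        counts[equivalence_key(new_tuple, quasi_identifiers)] += 1
        return all(count >= k for count in counts.values())

    # Usage: a 2-anonymous toy database with suppressed ZIP digits.
    qi = ("age_range", "zip_prefix")
    db = [
        {"age_range": "30-39", "zip_prefix": "902**", "diagnosis": "flu"},
        {"age_range": "30-39", "zip_prefix": "902**", "diagnosis": "cold"},
    ]
    bob = {"age_range": "30-39", "zip_prefix": "902**", "diagnosis": "asthma"}
    print(still_k_anonymous(db, bob, qi, k=2))  # True: Bob's tuple joins an existing class

    If Bob's tuple matched no existing class, it would form a singleton class and the check would return False, which is exactly the condition the paper's protocols decide without either party seeing the other's data.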