
    Interestingness measure on privacy preserved data with horizontal partitioning

    Association rule mining is the process of finding frequent item sets based on an interestingness measure. A major challenge arises when performing association mining on data where privacy preservation is emphasized, since the actual transaction data provides the evidence needed to calculate the parameters that define the association rules. In this paper, a solution is proposed to find one such parameter, the support count of item sets, on non-transparent data, i.e. without the transaction data being disclosed. Privacy preservation is ensured by transferring x anonymous records for every transaction record. Each anonymous set of an actual transaction record carries highly generalized values. The clients process the anonymous set of every transaction record to arrive at highly abstract values, and these generalized values are used for support calculation. The more anonymous records there are, the more the privacy of the data is amplified. Experimental results show that privacy is ensured with a larger number of formatted transactions.
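
    The support-counting idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual protocol: each real transaction is assumed to be published as a group of anonymous records, the client derives the generalized (shared) items of each group, and counts support against those generalized views.

```python
# Hypothetical sketch: support counting over anonymized transaction groups.
# Each real transaction is published as a group of anonymous records; the
# generalized (common) items of a group stand in for the hidden transaction.

def generalized_items(group):
    """Items shared by every anonymous record in a group (the 'abstract' view)."""
    records = iter(group)
    common = set(next(records))
    for rec in records:
        common &= set(rec)
    return common

def support_count(groups, itemset):
    """Count groups whose generalized view still contains the itemset."""
    target = set(itemset)
    return sum(1 for g in groups if target <= generalized_items(g))

# Toy data: each inner list is one published group of anonymous records.
groups = [
    [["bread", "milk", "eggs"], ["bread", "milk", "jam"]],   # common: bread, milk
    [["bread", "tea"], ["bread", "tea", "milk"]],            # common: bread, tea
    [["milk", "eggs"], ["milk", "eggs", "bread"]],           # common: milk, eggs
]

print(support_count(groups, ["bread"]))          # 2
print(support_count(groups, ["bread", "milk"]))  # 1
```

    The server never reveals which record in a group is real; the client only ever sees the generalized view, which is what makes the computed support approximate but privacy-preserving.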

    Advancing Data Privacy: A Novel K-Anonymity Algorithm with Dissimilarity Tree-Based Clustering and Minimal Information Loss

    Anonymization serves as a crucial privacy-protection technique employed across various technology domains, including cloud storage, machine learning, data mining and big data, to safeguard sensitive information from unauthorized third-party access. As the significance and volume of data grow exponentially, comprehensive data protection against all threats is of utmost importance. The main objective of this paper is to provide a brief summary of techniques for data anonymization and differential privacy. We propose a new k-anonymity method, which deviates from conventional k-anonymity approaches, to address privacy-protection concerns. Our paper presents a new algorithm designed to achieve k-anonymity through more efficient clustering. Most clustering algorithms require substantial computation to process the data; however, by identifying initial centers that align with the data structure, a superior cluster arrangement can be obtained. Our study presents a Dissimilarity Tree-based strategy for selecting optimal starting centroids and generating more accurate clusters with reduced computing time and a lower Normalised Certainty Penalty (NCP). A graphical performance analysis shows that, compared to other methods, this one reduces the overall information lost in the anonymized dataset by around 20% on average. In addition, the method is capable of properly handling both numerical and categorical attributes.
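
    The NCP information-loss measure the abstract optimizes can be sketched for numeric attributes. This is a minimal sketch under the usual definition (not code from the paper): for a cluster C and attribute a, NCP_a(C) is the cluster's value range divided by the attribute's domain range, weighted by cluster size; the data and domain ranges below are invented.

```python
# Hypothetical sketch: Normalised Certainty Penalty (NCP) for numeric
# attributes, the information-loss measure a k-anonymity clustering
# minimizes. NCP_a(C) = (max_a(C) - min_a(C)) / range_a(domain).

def ncp(clusters, domain_ranges):
    """Average per-record NCP over all clusters (lower = less info loss)."""
    total, n = 0.0, 0
    for cluster in clusters:
        for a, (lo, hi) in enumerate(domain_ranges):
            col = [rec[a] for rec in cluster]
            total += len(cluster) * (max(col) - min(col)) / (hi - lo)
        n += len(cluster)
    return total / n

# Two toy clusters over attributes (age, income), satisfying k = 2.
clusters = [
    [(25, 30000), (27, 32000)],
    [(60, 80000), (64, 90000)],
]
domain_ranges = [(25, 64), (30000, 90000)]

print(round(ncp(clusters, domain_ranges), 4))  # 0.1769
```

    Tight clusters (records with similar attribute values) generalize to narrow ranges and thus score a low NCP, which is why choosing good initial centroids, as the Dissimilarity Tree strategy aims to, directly reduces information loss.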

    Reduce Threats in Competitive Intelligence System: A Generic Information Fusion Access Control Model

    INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)

    Cryptographic approaches are the traditional and preferred methodologies used to preserve the privacy of data released for analysis. Privacy Preserving Data Mining (PPDM) is a new trend for deriving knowledge when the data is held by multiple parties. The PPDM deployments that currently exist involve cryptographic key exchange and key computation achieved through a trusted server or a third party. The key-computation overheads, key compromise in the presence of dishonest parties, and shared data integrity are the key challenges. This research work discusses provisioning data privacy using commutative RSA algorithms, eliminating the overheads of the secure key distribution, storage and key-update mechanisms generally used to secure the data for analysis. Decision tree algorithms are used to analyse the data provided by the various parties involved. We have considered the C5.0 data mining algorithm due to its efficiency over currently prevalent algorithms like C4.5 and ID3. The major emphasis of this paper is to provide a platform for secure communication that preserves the privacy of the vertically partitioned data held by the parties involved, under the semi-honest trust model. The proposed Key Distribution-Less Privacy Preserving Data Mining () model is compared with other protocols, such as Secure Lock and Access Control Polynomial, to prove its efficiency in terms of the computational overheads observed in preserving privacy. The experimental evaluations prove that the proposed model reduces the computational overheads by about 95.96% compared to the Secure Lock model and is similar to the Access Control Polynomial model.
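
    The commutative-RSA property the protocol relies on can be shown in a few lines: when parties share a modulus, applying their encryption exponents in either order yields the same ciphertext, since m^(e_a·e_b) mod n is symmetric in the exponents. The key values below are toy numbers for illustration only, not secure parameters, and this sketch is not the paper's full protocol.

```python
# Hypothetical sketch of the commutative-RSA property:
# E_a(E_b(m)) == E_b(E_a(m)) under a shared modulus n, because
# m^(e_a * e_b) mod n does not depend on the order of the exponents.
# Toy primes and exponents below are NOT cryptographically secure.

p, q = 61, 53
n = p * q                     # shared modulus
phi = (p - 1) * (q - 1)

e_a, e_b = 17, 19             # each party's encryption exponent (toy choices)
d_a = pow(e_a, -1, phi)       # corresponding decryption exponents
d_b = pow(e_b, -1, phi)

m = 1234                      # plaintext, m < n
c_ab = pow(pow(m, e_a, n), e_b, n)   # party A encrypts first, then B
c_ba = pow(pow(m, e_b, n), e_a, n)   # party B encrypts first, then A
assert c_ab == c_ba                   # encryption order does not matter

# Decryption likewise succeeds in either order.
assert pow(pow(c_ab, d_a, n), d_b, n) == m
print("commutative:", c_ab == c_ba)
```

    This order-independence is what lets multiple parties layer their encryptions on shared values without ever exchanging or jointly computing keys, which is the overhead the proposed model eliminates.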

    Effects of Information Filters: A Phenomenon on the Web

    In the Internet era, information processing for personalization and relevance has been one of the key topics of research and development, ranging from the design of applications like search engines, web crawlers and learning engines to reverse image search, audio-based search, auto-complete, etc. Information retrieval plays a vital role in most of these applications. The part of information retrieval that deals with personalization and rendering is often referred to as information filtering. The emphasis of this paper is to empirically analyze commonly seen information filters and to assess their correctness and effects. The measure of correctness is given not in terms of the percentage of correct results but through a rational analysis using a non-mathematical argument. Filters employed by Google’s search engine are used to analyze the effects of filtering on the web. A plausible

    A novel privacy paradigm for improving serial data privacy

    Protecting the privacy of individuals is of utmost concern in today’s society, as inscribed in and governed by prevailing privacy laws such as the GDPR. In serial data, pieces of data are continuously released, and their combined effect may result in a privacy breach across the whole serial publication. Protecting serial data is crucial for preserving it from adversaries. Previous approaches provide privacy for relational data and serial data, but many loopholes exist when dealing with multiple sensitive values. We address these problems by introducing a novel privacy approach that limits the risk of privacy disclosure in republication and gives better privacy with much lower perturbation rates. Existing techniques provide a strong privacy guarantee against attacks on data privacy; in serial publication, however, the chance of attack remains due to the continuous addition and deletion of data. In serial data, proper countermeasures against attacks such as correlation attacks have not been taken, so serial publication is still at risk. Moreover, protecting privacy is a significant task because sensitive values can be critically absent when dealing with multiple sensitive values; due to this absence, signatures change in every release, which opens the door to attacks. In this paper, we introduce a novel approach to counter the composition attack and the transitive composition attack, and we show that the proposed approach outperforms existing state-of-the-art techniques. Our paper establishes this result with a systematic examination of the republication dilemma. Finally, we evaluate our work on benchmark datasets, and the results show the efficacy of the proposed technique.
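
    The composition attack the abstract defends against can be sketched concretely. This is an illustrative toy, not the paper's method: an adversary who knows a target's quasi-identifier group intersects the candidate sensitive-value sets published for that group across serial releases, and a singleton intersection is a breach. All group labels and values below are invented.

```python
# Hypothetical sketch of a composition attack on serial publication:
# intersecting a QI group's published sensitive-value sets across
# releases narrows down an individual's true value. Toy data only.

def composition_attack(releases, qi_group):
    """Intersect candidate values across releases; a singleton is a breach."""
    candidates = None
    for release in releases:
        vals = set(release[qi_group])
        candidates = vals if candidates is None else candidates & vals
    return candidates

release_1 = {"age 30-40, zip 47k": {"flu", "hiv", "ulcer"}}
release_2 = {"age 30-40, zip 47k": {"hiv", "asthma", "ulcer"}}
release_3 = {"age 30-40, zip 47k": {"hiv", "flu", "cancer"}}

leaked = composition_attack([release_1, release_2, release_3],
                            "age 30-40, zip 47k")
print(leaked)   # {'hiv'} -- the three releases compose to a single value
```

    Each release is individually private (three candidate values per group), yet the series leaks; a serial-publication scheme must therefore keep each group's signature of sensitive values stable across releases, which is the failure mode the abstract attributes to critically absent sensitive values.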

    A comparative analysis of good enterprise data management practices: insights from literature and artificial intelligence perspectives for business efficiency and effectiveness

    Abstract. This thesis presents a comparative analysis of enterprise data management practices based on literature and artificial intelligence (AI) perspectives, focusing on their impact on data quality, business efficiency, and effectiveness. It employs a systematic research methodology comprising a literature review, an AI-based examination of current practices using ChatGPT, and a comparative analysis of the findings. The study highlights the importance of robust data governance, high data quality, data integration, and security, alongside the transformative potential of AI. Its limitations revolve around the primarily qualitative nature of the study and potential restrictions on the generalizability of the findings. Nevertheless, the thesis offers valuable insights and recommendations for enterprises to optimize their data management strategies, underscoring the potential of AI to enhance traditional practices. The research contributes to scientific discourse in information systems, data science, and business management.