
    Privacy Protection Performance of De-identified Face Images with and without Background

    Li Meng, 'Privacy Protection Performance of De-identified Face Images with and without Background', paper presented at the 39th International Information and Communication Technology (ICT) Convention, Grand Hotel Adriatic Congress Centre and Admiral Hotel, Opatija, Croatia, May 30 - June 3, 2016. This paper presents an approach to blending a de-identified face region with its original background, for the purpose of completing the process of face de-identification. The re-identification risk of the de-identified FERET face images has been evaluated for the k-Diff-furthest face de-identification method, using several face recognition benchmark methods including PCA, LBP, HOG and LPQ. The experimental results show that k-Diff-furthest face de-identification delivers high privacy protection within the face region, while blending the de-identified face region with its original background may significantly increase the re-identification risk, indicating that de-identification must also be applied to image areas beyond the face region.
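    As a rough illustration of the blending step described above, the sketch below composites a de-identified face region back onto its original background using a binary face mask. The array names (`original`, `deidentified_face`, `face_mask`) and the simple alpha-style compositing are assumptions made for illustration; they are not the paper's actual blending pipeline.

```python
import numpy as np

def blend_with_background(original, deidentified_face, face_mask):
    """Composite a de-identified face region onto the original background.

    original          : H x W x 3 uint8 image (supplies the background)
    deidentified_face : H x W x 3 uint8 image (supplies the face region)
    face_mask         : H x W float mask in [0, 1], 1 inside the face region
    """
    mask = face_mask[..., np.newaxis]                 # broadcast over channels
    blended = (mask * deidentified_face.astype(np.float32)
               + (1.0 - mask) * original.astype(np.float32))
    return blended.clip(0, 255).astype(np.uint8)
```

    Re-identification risk would then be estimated by running a face recognizer (e.g. PCA-, LBP-, HOG- or LPQ-based) over the blended images, as in the evaluation described above.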

    Exact Inference Techniques for the Analysis of Bayesian Attack Graphs

    Attack graphs are a powerful tool for security risk assessment: they analyse network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable for modelling attack graphs and performing static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms for exact inference in Bayesian attack graphs, enabling static and dynamic network risk assessments. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches. Comment: 14 pages, 15 figures.
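    For readers who want to experiment with this style of analysis, here is a minimal sketch of exact inference on a toy Bayesian attack graph using the pgmpy library's variable elimination. The three-node graph, its conditional probability tables, and the choice of pgmpy are illustrative assumptions, not the techniques or benchmarks evaluated in the paper.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy attack graph: compromising the web server or the VPN can lead to
# compromise of an internal database host.
model = BayesianNetwork([("WebServer", "Database"), ("VPN", "Database")])

cpd_web = TabularCPD("WebServer", 2, [[0.7], [0.3]])   # P(compromised) = 0.3
cpd_vpn = TabularCPD("VPN", 2, [[0.9], [0.1]])         # P(compromised) = 0.1
cpd_db = TabularCPD(
    "Database", 2,
    # columns: (Web=0,VPN=0) (Web=0,VPN=1) (Web=1,VPN=0) (Web=1,VPN=1)
    [[0.99, 0.40, 0.20, 0.05],   # P(Database safe | parents)
     [0.01, 0.60, 0.80, 0.95]],  # P(Database compromised | parents)
    evidence=["WebServer", "VPN"], evidence_card=[2, 2],
)
model.add_cpds(cpd_web, cpd_vpn, cpd_db)

infer = VariableElimination(model)

# Static analysis: unconditional probability of database compromise.
print(infer.query(["Database"]))

# Dynamic analysis: re-assess after observing a web-server compromise.
print(infer.query(["Database"], evidence={"WebServer": 1}))
```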

    Injecting Uncertainty in Graphs for Identity Obfuscation

    Data collected nowadays by social-networking applications create fascinating opportunities for building novel services, as well as expanding our understanding of social structures and their dynamics. Unfortunately, publishing social-network graphs is considered an ill-advised practice due to privacy concerns. To alleviate this problem, several anonymization methods have been proposed, aiming at reducing the risk of a privacy breach on the published data while still allowing the data to be analysed and relevant conclusions to be drawn. In this paper we introduce a new anonymization approach that is based on injecting uncertainty into social graphs and publishing the resulting uncertain graphs. While existing approaches obfuscate graph data by adding or removing edges entirely, we propose a finer-grained perturbation that adds or removes edges partially: this way we can achieve the same desired level of obfuscation with smaller changes in the data, thus maintaining higher utility. Our experiments on real-world networks confirm that, at the same level of identity obfuscation, our method provides higher usefulness than existing randomized methods that publish standard graphs. Comment: VLDB201
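    To make the idea of "partial" edge perturbation concrete, the following is a minimal sketch that turns a deterministic graph into an uncertain one by assigning each potential edge an existence probability: real edges are slightly discounted rather than deleted, and a few non-edges receive small probabilities rather than being added outright. The parameter names and the uniform noise model are illustrative assumptions, not the paper's actual obfuscation mechanism.

```python
import random
import networkx as nx

def inject_uncertainty(g, noise=0.1, fake_edges_per_node=1, seed=0):
    """Return {(u, v): probability} describing an uncertain version of g.

    Existing edges keep a high (but < 1) existence probability, and a few
    randomly chosen non-edges receive small probabilities, so no edge is
    added or removed entirely.
    """
    rng = random.Random(seed)
    nodes = list(g.nodes())
    probs = {}

    for u, v in g.edges():                      # partially "remove" real edges
        probs[(u, v)] = 1.0 - rng.uniform(0, noise)

    for _ in range(fake_edges_per_node * g.number_of_nodes()):
        u, v = rng.sample(nodes, 2)             # partially "add" fake edges
        if not g.has_edge(u, v) and (u, v) not in probs and (v, u) not in probs:
            probs[(u, v)] = rng.uniform(0, noise)

    return probs

uncertain = inject_uncertainty(nx.karate_club_graph())
```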

    Privacy-preserving mechanism for social network data publishing

    Privacy is receiving growing concern from various parties, especially consumers, due to the increasing ease with which personal data can be collected and distributed. This research focuses on preserving privacy in social network data publishing. The study explores data anonymization mechanisms in order to improve the privacy protection of social network users. We identify a new type of privacy breach and propose an effective mechanism for privacy protection.

    Modelling and Analysis Using GROOVE

    In this paper we present case studies that describe how the graph transformation tool GROOVE has been used to model problems from a wide variety of domains. These case studies highlight the wide applicability of GROOVE in particular, and of graph transformation in general. They also give concrete templates for using GROOVE in practice. Furthermore, we use the case studies to analyse the main strengths and weaknesses of GROOVE.

    How to Hide One's Relationships from Link Prediction Algorithms

    Our private connections can be exposed by link prediction algorithms. To date, this threat has only been addressed from the perspective of a central authority, completely neglecting the possibility that members of the social network can themselves mitigate such threats. We fill this gap by studying how an individual can rewire her own network neighborhood to hide her sensitive relationships. We prove that the optimization problem faced by such an individual is NP-complete, meaning that any attempt to identify an optimal way to hide one's relationships is futile. Based on this, we shift our attention towards developing effective, albeit not optimal, heuristics that are readily applicable by users of existing social media platforms to conceal any connections they deem sensitive. Our empirical evaluation reveals that it is more beneficial to focus on "unfriending" carefully-chosen individuals rather than befriending new ones. In fact, by avoiding communication with just 5 individuals, it is possible for one to hide some of her relationships in a massive, real-life telecommunication network, consisting of 829,725 phone calls between 248,763 individuals. Our analysis also shows that link prediction algorithms are more susceptible to manipulation in smaller and denser networks. Evaluating the error vs. attack tolerance of link prediction algorithms reveals that rewiring connections randomly may end up exposing one's sensitive relationships, highlighting the importance of the strategic aspect. In an age where personal relationships continue to leave digital traces, our results empower the general public to proactively protect their private relationships. M.W. was supported by the Polish National Science Centre grant 2015/17/N/ST6/03686. T.P.M. was supported by the Polish National Science Centre grants 2016/23/B/ST6/03599 and 2014/13/B/ST6/01807 (for this and the previous versions of this article, respectively). Y.V. and K.Z. were supported by ARO MURI (grant #W911NF1810208). Y.V. was also supported by the U.S. National Science Foundation (CAREER award IIS-1905558 and grant IIS-1526860). E.M. acknowledges funding by Ministerio de Economía y Competitividad (Spain) through grant FIS2016-78904-C3-3-P.
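    As an illustration of the kind of "unfriending" heuristic described above, the sketch below greedily removes the edges incident to a sensitive pair of nodes that most reduce a common-neighbors link-prediction score between them. This is a generic greedy strategy written with networkx for illustration; it is not the paper's specific algorithm, score function, or evaluation setup.

```python
import networkx as nx

def common_neighbors_score(g, u, v):
    """Simple link-prediction score: number of common neighbors of u and v."""
    return len(list(nx.common_neighbors(g, u, v)))

def greedy_unfriend(g, u, v, budget=5):
    """Greedily drop up to `budget` edges incident to u or v so that the
    common-neighbors score of the sensitive pair (u, v) drops the most."""
    g = g.copy()
    removed = []
    for _ in range(budget):
        candidates = ([(u, w) for w in list(g.neighbors(u)) if w != v]
                      + [(v, w) for w in list(g.neighbors(v)) if w != u])
        if not candidates:
            break

        def score_after(edge):
            h = g.copy()
            h.remove_edge(*edge)
            return common_neighbors_score(h, u, v)

        best = min(candidates, key=score_after)
        if score_after(best) >= common_neighbors_score(g, u, v):
            break                      # no removal lowers the score further
        g.remove_edge(*best)
        removed.append(best)
    return removed

g = nx.karate_club_graph()
print(greedy_unfriend(g, 0, 33, budget=5))
```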

    Self-learning Anomaly Detection in Industrial Production
