Privacy-Friendly Collaboration for Cyber Threat Mitigation
Sharing of security data across organizational boundaries has often been
advocated as a promising way to enhance cyber threat mitigation. However,
collaborative security faces a number of important challenges, including
privacy, trust, and liability concerns with the potential disclosure of
sensitive data. In this paper, we focus on data sharing for predictive
blacklisting, i.e., forecasting attack sources based on past attack
information. We propose a novel privacy-enhanced data sharing approach in which
organizations estimate collaboration benefits without disclosing their
datasets, organize into coalitions of allied organizations, and securely share
data within these coalitions. We study how different partner selection
strategies affect prediction accuracy by experimenting on a real-world dataset
of 2 billion IP addresses and observe up to a 105% prediction improvement.
Comment: This paper has been withdrawn as it has been superseded by arXiv:1502.0533
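The partner-selection step above can be illustrated with a toy sketch. The organization names and blacklists below are hypothetical, and the overlap is computed directly on plaintext sets for clarity; the paper's actual approach estimates this benefit without disclosing the datasets.

```python
# Toy per-organization blacklists of past attacker IPs (hypothetical data).
logs = {
    "orgA": {"1.2.3.4", "5.6.7.8", "9.9.9.9"},
    "orgB": {"1.2.3.4", "9.9.9.9", "8.8.8.8"},
    "orgC": {"7.7.7.7"},
}

def jaccard(a, b):
    """Overlap between two sets of past attack sources."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_partners(org):
    """Rank the other organizations by estimated collaboration benefit:
    the more attack sources two organizations share, the more useful
    their combined data is for predictive blacklisting."""
    scores = [(peer, jaccard(logs[org], logs[peer]))
              for peer in logs if peer != org]
    return sorted(scores, key=lambda t: t[1], reverse=True)

print(rank_partners("orgA"))  # orgB overlaps most with orgA
```

A coalition would then be formed from the top-ranked peers, with the actual overlap estimation done under a privacy-preserving protocol rather than on raw sets as here.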
Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications
Social Virtual Reality based Learning Environments (VRLEs) such as vSocial
render instructional content in a three-dimensional immersive computer
experience for training youth with learning impediments. There are limited
prior works that explored attack vulnerability in VR technology, and hence
there is a need for systematic frameworks to quantify risks corresponding to
security, privacy, and safety (SPS) threats. The SPS threats can adversely
impact the educational user experience and hinder delivery of VRLE content. In
this paper, we propose a novel risk assessment framework that utilizes attack
trees to calculate a risk score for varied VRLE threats with rate and duration
of threats as inputs. We compare the impact of a well-constructed attack tree
with an ad hoc attack tree to study the trade-offs between overheads in managing
attack trees, and the cost of risk mitigation when vulnerabilities are
identified. We use a vSocial VRLE testbed in a case study to showcase the
effectiveness of our framework and demonstrate how a suitable attack tree
formalism can result in a safer, more privacy-preserving, and more secure VRLE
system.
Comment: To appear in the CCNC 2019 Conference
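One plausible way to formalize an attack-tree risk score with rate and duration as inputs is sketched below. The tree structure, gate semantics, and numbers are illustrative assumptions, not the paper's exact scoring model: leaves score `min(rate * duration, 1)`, AND gates require all children, and OR gates require at least one.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "LEAF"      # "AND", "OR", or "LEAF"
    rate: float = 0.0       # threat occurrences per unit time (leaf only)
    duration: float = 0.0   # time window the threat persists (leaf only)
    children: List["Node"] = field(default_factory=list)

def risk(node: Node) -> float:
    """Propagate leaf risk (rate x duration, capped at 1) up the tree."""
    if node.gate == "LEAF":
        return min(node.rate * node.duration, 1.0)
    child_risks = [risk(c) for c in node.children]
    if node.gate == "AND":          # attack needs every sub-step
        p = 1.0
        for r in child_risks:
            p *= r
        return p
    p = 1.0                         # OR: at least one sub-step succeeds
    for r in child_risks:
        p *= (1.0 - r)
    return 1.0 - p

# Hypothetical SPS threat tree for a VRLE session.
tree = Node("disrupt VRLE session", "OR", children=[
    Node("packet flood", rate=0.2, duration=2.0),
    Node("hijack avatar", "AND", children=[
        Node("steal credentials", rate=0.1, duration=1.0),
        Node("join session", rate=0.5, duration=1.0),
    ]),
])
print(round(risk(tree), 3))
```

A well-constructed tree refines leaves into meaningful sub-steps, which is exactly where the management-overhead versus mitigation-cost trade-off discussed above arises.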
Preserve data-while-sharing: An Efficient Technique for Privacy Preserving in OSNs
Online Social Networks (OSNs) have become one of the major platforms for social interaction, such as building relationships, sharing personal experiences, and providing other services. The rapid growth of social networks has attracted various groups, such as the scientific community and business enterprises, to use these huge social network data for their own purposes. Disseminating extensive datasets from online social networks for diverse trend analyses gives rise to privacy concerns, owing to the personal information disclosed on these platforms. Widely used OSNs have implemented privacy control features to empower users to regulate access to their personal information. However, even when OSN owners allow users to set customizable privacy controls, attackers can still uncover users' private information by exploiting relationships between public and private information together with background knowledge; this is termed an inference attack. To defend against such inference attacks, this work aims to completely anonymize user identity.
This work designs an optimization algorithm that strikes a balance between self-disclosure utility and privacy, and proposes two privacy-preserving algorithms to defend against inference attacks. The first, a Privacy-Preserving Algorithm (PPA), achieves high utility by allowing users to share their data with the utmost privacy. The second, a Multi-dimensional Knapsack based Relation Disclosure Algorithm (mdKP-RDA), addresses social relation disclosure with low computational complexity. The proposed work is evaluated on datasets taken from actual social networks. According to the experimental results, the proposed methods outperform current methods.
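The multi-dimensional knapsack framing can be sketched with a generic greedy heuristic. The relation names, utilities, costs, and the greedy ratio rule below are all illustrative assumptions; the abstract does not specify mdKP-RDA's internals, only that it trades disclosure utility against multi-dimensional privacy costs.

```python
# Each candidate relation to disclose carries a self-disclosure utility and
# a privacy cost along several dimensions (e.g., identity, location).
relations = [
    {"name": "coworker", "utility": 5, "cost": (2, 1)},
    {"name": "family",   "utility": 9, "cost": (4, 3)},
    {"name": "gym",      "utility": 3, "cost": (1, 1)},
]
budget = [5, 3]  # privacy budget per dimension

def greedy_disclose(relations, budget):
    """Greedy knapsack heuristic: disclose relations with the best
    utility-to-cost ratio while every privacy budget still holds."""
    remaining = list(budget)
    chosen = []
    ranked = sorted(relations,
                    key=lambda r: r["utility"] / (sum(r["cost"]) or 1),
                    reverse=True)
    for r in ranked:
        if all(c <= b for c, b in zip(r["cost"], remaining)):
            remaining = [b - c for c, b in zip(r["cost"], remaining)]
            chosen.append(r["name"])
    return chosen

print(greedy_disclose(relations, budget))
```

Exact multi-dimensional knapsack is NP-hard, so a low-complexity heuristic of this general shape is a natural fit for the stated goal of low computational cost.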
 
Crypto'Graph: Leveraging Privacy-Preserving Distributed Link Prediction for Robust Graph Learning
Graphs are a widely used data structure for collecting and analyzing
relational data. However, when the graph structure is distributed across
several parties, its analysis is particularly challenging. In particular, due
to the sensitivity of the data, each party might want to keep its partial
knowledge of the graph private, while still being willing to collaborate with the
other parties for tasks of mutual benefit, such as data curation or the removal
of poisoned data. To address this challenge, we propose Crypto'Graph, an
efficient protocol for privacy-preserving link prediction on distributed
graphs. More precisely, it allows parties partially sharing a graph with
distributed links to infer the likelihood of formation of new links in the
future. Through the use of cryptographic primitives, Crypto'Graph computes
the likelihood of these new links on the joint network without revealing the
structure of each party's private individual graph; the parties know only the
number of nodes, since they share the same nodes but not the same links.
Crypto'Graph improves on previous works by enabling the
computation of a certain number of similarity metrics without any additional
cost. The use of Crypto'Graph is illustrated for defense against graph
poisoning attacks, in which it is possible to identify potential adversarial
links without compromising the privacy of the graphs of individual parties. The
effectiveness of Crypto'Graph in mitigating graph poisoning attacks and
achieving high prediction accuracy on a graph neural network node
classification task is demonstrated through extensive experimentation on a
real-world dataset.
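The kind of similarity metric Crypto'Graph evaluates privately can be shown in plaintext on a toy joint graph. The graph below is hypothetical and the computation is done in the clear purely for intuition; the protocol itself would compute such scores cryptographically, without any party seeing the full edge set.

```python
from itertools import combinations

# Toy joint graph as adjacency sets; in Crypto'Graph each party would hold
# only some of these edges.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def predict_links(adj):
    """Score every non-adjacent node pair by its common-neighbour count;
    a high score suggests a likely future link."""
    pairs = [(u, v) for u, v in combinations(sorted(adj), 2)
             if v not in adj[u]]
    return sorted(((len(adj[u] & adj[v]), u, v) for u, v in pairs),
                  reverse=True)
```

The same scores support the poisoning defense described above: an existing edge whose endpoints share unusually few neighbours is a candidate adversarial link to remove before training a graph neural network.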