A Graphical Adversarial Risk Analysis Model for Oil and Gas Drilling Cybersecurity
Oil and gas drilling is based, increasingly, on operational technology, whose
cybersecurity is complicated by several challenges. We propose a graphical
model for cybersecurity risk assessment based on Adversarial Risk Analysis to
face those challenges. We also provide an example of the model in the context
of an offshore drilling rig. The proposed model provides a more formal and
comprehensive analysis of risks, still using the standard business language
based on decisions, risks, and value.
Comment: In Proceedings GraMSec 2014, arXiv:1404.163
On the anonymity risk of time-varying user profiles.
Websites and applications use personalisation services to profile their users, collect their patterns and activities, and eventually use this data to provide tailored suggestions. User preferences and social interactions are therefore aggregated and analysed. Every time a user publishes a new post or creates a link with another entity, either another user or some online resource, new information is added to the user profile. Exposing private data not only reveals information about single users' preferences, increasing their privacy risk, but can expose more about their network than single actors intended. This mechanism is self-evident in social networks, where users receive suggestions based on their friends' activities. We propose an information-theoretic approach to measure the differential update of the anonymity risk of time-varying user profiles. This expresses how privacy is affected when new content is posted and how much third-party services get to know about the users when a new activity is shared. We use actual Facebook data to show how our model can be applied to a real-world scenario.
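The differential-update idea in the abstract above can be sketched with a simple Shannon-entropy model (a hypothetical illustration, not the paper's actual estimator): treat a user profile as a histogram over content categories, take the entropy of that histogram as a proxy for anonymity, and report the change when a new post is added.

```python
from collections import Counter
from math import log2

def entropy(profile: Counter) -> float:
    """Shannon entropy (bits) of the profile's category histogram."""
    total = sum(profile.values())
    return -sum((n / total) * log2(n / total) for n in profile.values())

def differential_update(profile: Counter, new_category: str) -> float:
    """Entropy change when one new post in `new_category` is added.
    A negative value means the profile became more predictable,
    i.e. the anonymity risk increased."""
    before = entropy(profile)
    profile[new_category] += 1
    return entropy(profile) - before

# Toy profile: posts spread over three topics (invented data).
profile = Counter({"sports": 5, "music": 3, "politics": 2})
delta = differential_update(profile, "sports")  # reinforces the dominant topic
```

Here a post in the already-dominant category makes the histogram more skewed, so `delta` is negative: each new activity quantifiably narrows the user's anonymity set.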
An analysis of security issues in building automation systems
The purpose of Building Automation Systems (BAS) is to centralise the management of a wide range of building services through integrated protocols and communication media. Through IP-based communication and encapsulated protocols, BAS are increasingly connected to corporate networks and remotely accessed for management, both for convenience and in emergencies. These protocols, however, were not designed with security as a primary requirement, so the majority of systems operate with sub-standard or non-existent security implementations, relying on security through obscurity. Research has been undertaken into addressing the shortfalls of security implementations in BAS; however, defining the threats against BAS, and detecting those threats, remain particularly lacking. This paper presents an overview of the current security measures in BAS, outlining key issues and methods that can be improved to protect cyber-physical systems against the increasing threat of cyber terrorism and hacktivism. Future research aims to further evaluate and improve the detection systems used in BAS by first defining the threats and then applying and evaluating machine learning algorithms for traffic classification and IDS profiling capable of operating on resource-constrained BAS.
Optimal Active Social Network De-anonymization Using Information Thresholds
In this paper, de-anonymizing internet users by actively querying their group
memberships in social networks is considered. In this problem, an anonymous
victim visits the attacker's website, and the attacker uses the victim's
browser history to query her social media activity for the purpose of
de-anonymization using the minimum number of queries. A stochastic model of the
problem is considered where the attacker has partial prior knowledge of the
group membership graph and receives noisy responses to its real-time queries.
The victim's identity is assumed to be chosen randomly based on a given
distribution which models the users' risk of visiting the malicious website. A
de-anonymization algorithm is proposed which operates based on information
thresholds and its performance both in the finite and asymptotically large
social network regimes is analyzed. Furthermore, a converse result is provided
which proves the optimality of the proposed attack strategy.
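The information-threshold idea described above can be illustrated with a toy sketch (simplified assumptions, not the paper's algorithm): maintain a posterior over candidate victims, update it with noisy yes/no answers to group-membership queries, and stop as soon as one candidate's posterior crosses a confidence threshold.

```python
import random

def deanonymize(memberships, prior, answer, eps=0.1, threshold=0.95):
    """Sequentially query group memberships until one candidate's
    posterior exceeds `threshold`.
    memberships: dict user -> set of group ids (attacker's prior knowledge)
    prior:       dict user -> prior probability of being the victim
    answer:      callable group -> noisy bool (truth flipped w.p. eps)
    Returns (identified_user, number_of_queries_used)."""
    posterior = dict(prior)
    groups = sorted({g for gs in memberships.values() for g in gs})
    for q, g in enumerate(groups, start=1):
        resp = answer(g)
        for u in posterior:
            in_g = g in memberships[u]
            # Likelihood of the noisy response, given u is the victim.
            posterior[u] *= (1 - eps) if (resp == in_g) else eps
        total = sum(posterior.values())
        posterior = {u: p / total for u, p in posterior.items()}
        best = max(posterior, key=posterior.get)
        if posterior[best] >= threshold:
            return best, q
    return max(posterior, key=posterior.get), len(groups)

# Toy instance (invented data): three candidates, victim is "bob".
random.seed(0)
memberships = {"alice": {1, 2}, "bob": {2, 3}, "carol": {1, 3}}
victim = "bob"
noisy = lambda g: (g in memberships[victim]) ^ (random.random() < 0.1)
who, used = deanonymize(memberships, {u: 1 / 3 for u in memberships}, noisy)
```

The stopping rule mirrors the abstract's information threshold: querying halts as soon as the accumulated evidence identifies a candidate with sufficient confidence, which is what keeps the number of queries minimal.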
Adversarial Removal of Demographic Attributes from Text Data
Recent advances in Representation Learning and Adversarial Training seem to
succeed in removing unwanted features from the learned representation. We show
that demographic information of authors is encoded in -- and can be recovered
from -- the intermediate representations learned by text-based neural
classifiers. The implication is that decisions of classifiers trained on
textual data are not agnostic to -- and likely condition on -- demographic
attributes. When attempting to remove such demographic information using
adversarial training, we find that while the adversarial component achieves
chance-level development-set accuracy during training, a post-hoc classifier,
trained on the encoded sentences from the first part, still manages to reach
substantially higher classification accuracies on the same data. This behavior
is consistent across several tasks, demographic properties and datasets. We
explore several techniques to improve the effectiveness of the adversarial
component. Our main conclusion is a cautionary one: do not rely on
adversarial training to achieve representations invariant to sensitive features.
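The post-hoc probing protocol described above can be illustrated with a toy sketch (hypothetical data and a from-scratch nearest-centroid probe; the paper uses neural encoders): even when encodings look attribute-neutral to the adversary used during training, a freshly trained classifier fit on the frozen encodings may still recover the attribute.

```python
import random

random.seed(1)

def make_encodings(n, leak=0.8):
    """Hypothetical frozen encoder outputs: 2-d vectors in which the
    protected attribute leaks as a mean shift in the first coordinate."""
    data = []
    for _ in range(n):
        attr = random.randint(0, 1)
        x = random.gauss(leak * attr, 0.5)  # residual attribute signal
        y = random.gauss(0.0, 0.5)          # attribute-independent coordinate
        data.append(((x, y), attr))
    return data

def centroid_probe(train):
    """Post-hoc probe: nearest-centroid classifier fit on frozen encodings."""
    cents = {}
    for a in (0, 1):
        pts = [v for v, attr in train if attr == a]
        cents[a] = tuple(sum(c) / len(pts) for c in zip(*pts))
    def predict(v):
        d = {a: sum((vi - ci) ** 2 for vi, ci in zip(v, c))
             for a, c in cents.items()}
        return min(d, key=d.get)
    return predict

data = make_encodings(400)
train, test = data[:200], data[200:]
probe = centroid_probe(train)
acc = sum(probe(v) == a for v, a in test) / len(test)  # well above chance
```

This is the paper's cautionary point in miniature: the residual signal may be too weak for the training-time adversary to exploit, yet a probe trained afterwards on the same encodings recovers the demographic attribute with accuracy far above the 0.5 chance level.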
Towards real-time profiling of human attackers and bot detection
Characterising the person behind a cyber attack can be highly useful. At a practical security and forensic level, it can help profile adversaries during and after an attack, and at a theoretical level it can allow us to build improved threat models. This is, however, a challenging problem, as relevant data cannot easily be found: they are not often released publicly and may be the result of criminal investigation. Moreover, the identity of an attacker is rarely revealed in an attack. Here, we attempt a rather unusual approach. We attempt to classify the adversary as a type of human user, arguing that if it does not fit any realistic profile of a human user, then it is probably a bot. Hence, we are working towards a system that is both a human attacker profiler and an anomaly-based bot detector. For this, we first need to build a technical system that collects relevant data in real time. As no such information exists, we experimented with several different measurable input data and human profile characteristics, evaluating the usefulness of the former in determining the latter. We then present a case-based reasoning approach that classifies an attacker based on the values of these metrics. For this, we use experimental data that we previously collected, the result of a set of cyber-attack scenarios carried out by 87 users. As a practical application, we have developed an automated profiling tool demonstrating the potential real-time use of the proposed system in a quasi-realistic setting. We discuss this approach's applicability to an adversary that has already gained access to a target system. The profile identified should tell us the characteristics of the adversary if it is human. If no profile can be identified, we argue that this is a good indication it is a bot.
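The case-based reasoning step can be sketched as follows (a minimal illustration with invented metrics and profiles, not the authors' system): compare an observed session's metric vector against stored human cases; if no case is sufficiently close, flag the session as a likely bot.

```python
import math

# Hypothetical case base: metric vectors from past human attack sessions
# (commands per minute, command error rate, inter-command delay in seconds).
CASE_BASE = {
    "novice":        (1.2, 0.30, 8.0),
    "experienced":   (3.5, 0.10, 2.5),
    "administrator": (2.8, 0.05, 1.5),
}

def classify(session, max_distance=2.0):
    """Return the nearest human profile, or 'bot' if the session is
    too far from every stored case (anomaly-based bot detection)."""
    label, d = min(((p, math.dist(session, c)) for p, c in CASE_BASE.items()),
                   key=lambda t: t[1])
    return label if d <= max_distance else "bot"

# A session resembling a novice human vs. an inhumanly fast, error-free one.
human_like = classify((1.0, 0.25, 7.5))
bot_like = classify((50.0, 0.0, 0.01))
```

The single distance threshold captures the paper's dual use: within it, the nearest case yields a human profile; beyond it, the session matches no realistic human behaviour and is treated as a bot.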