Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization
Logs are one of the most fundamental resources for any security professional.
It is widely recognized by government and industry that sharing logs for the
purpose of security research is both beneficial and desirable. However, such
sharing is not happening, or at least not to the degree or magnitude desired.
Organizations are reluctant to share logs because of the risk of exposing
sensitive information to potential attackers. We believe this reluctance
remains high because current anonymization techniques are weak and
one-size-fits-all, or better put, one size tries to fit all. We must develop
standards and make anonymization available at varying levels, striking a
balance between privacy and utility. Organizations have different needs and
trust other organizations to different degrees. They must be able to map
multiple anonymization levels, each with defined risks, to the trust levels
they share with (would-be) receivers. Not until there are industry standards
for multiple levels of anonymization will we be able to move forward and
achieve the goal of widespread log sharing for security researchers.
Comment: 17 pages, 1 figure
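To make the idea of varying anonymization levels concrete, here is a minimal sketch of what mapping anonymization strength to receiver trust could look like. The level names, transforms, and log fields are illustrative assumptions, not the authors' scheme.

```python
# Hypothetical sketch: stronger anonymization for less-trusted receivers,
# weaker for trusted partners. Level names and transforms are illustrative.
import hashlib
import ipaddress

def truncate_ip(ip: str) -> str:
    """Keep only the /24 prefix, zeroing the host octet."""
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    return str(net.network_address)

def hash_ip(ip: str, salt: str = "site-secret") -> str:
    """Replace the address with a salted one-way hash."""
    return hashlib.sha256((salt + ip).encode()).hexdigest()[:12]

LEVELS = {
    "high-trust":   lambda rec: rec,  # raw log, maximum utility
    "medium-trust": lambda rec: {**rec, "src_ip": truncate_ip(rec["src_ip"])},
    "low-trust":    lambda rec: {**rec, "src_ip": hash_ip(rec["src_ip"]),
                                 "user": "REDACTED"},
}

def anonymize(record: dict, trust: str) -> dict:
    return LEVELS[trust](record)

record = {"src_ip": "192.0.2.57", "user": "alice", "event": "login-fail"}
for trust in LEVELS:
    print(trust, anonymize(record, trust))
```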
Understanding the Detection of View Fraud in Video Content Portals
While substantial effort has been devoted to understanding fraudulent
activity in traditional online advertising (search and banner), more recent
forms such as video ads have received little attention. For advertisers,
understanding and identifying fraudulent activity (i.e., fake views) in video
ads is complicated, as they rely exclusively on the detection mechanisms
deployed by the video hosting portals. Independent tools able to monitor and
audit the fidelity of these systems are missing today, yet needed by both
industry and regulators.
In this paper we present a first set of tools to serve this purpose. Using
our tools, we evaluate the performance of the audit systems of five major
online video portals. Our results reveal that YouTube's detection system
significantly outperforms all the others. Despite this, a systematic evaluation
indicates that it may still be susceptible to simple attacks. Furthermore, we
find that YouTube penalizes its videos' public and monetized view counters
differently, being more aggressive with the former. This means that views
identified as fake and discounted from the public view counter are still
monetized. We speculate that even though YouTube puts considerable effort into
compensating users after an attack is discovered, this practice places the
burden of the risk on the advertisers, who pay to get their ads displayed.
Comment: To appear in WWW 2016, Montréal, Québec, Canada. Please cite the
conference version of this paper.
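The core auditing idea, comparing how a known number of controlled views is credited by the public and monetized counters, can be sketched in a few lines. The arithmetic below uses made-up numbers and is an illustration of the measurement logic, not the paper's tooling or measured values.

```python
# Sketch of the audit arithmetic: inject a known number of controlled views,
# then compare how many of them each counter credits. All numbers are made up.
def audit_counters(baseline_public: int, baseline_monetized: int,
                   injected_views: int,
                   final_public: int, final_monetized: int) -> dict:
    credited_public = final_public - baseline_public
    credited_monetized = final_monetized - baseline_monetized
    return {
        "discounted_public": injected_views - credited_public,
        "discounted_monetized": injected_views - credited_monetized,
        # Views dropped from the public counter but still monetized:
        "monetized_but_flagged": credited_monetized - credited_public,
    }

# Example: 100 injected views, 60 credited publicly, 90 monetized ->
# 30 views were flagged on the public counter yet still billed.
print(audit_counters(1000, 1000, 100, 1060, 1090))
```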
Comprehensive Security Framework for Global Threats Analysis
Cybercriminal activities are changing and becoming more and more professional. With the growth of financial flows through the Internet and the Information System (IS), new kinds of threats arise, involving complex scenarios spread across multiple IS components. IS information modeling and behavioral analysis are becoming new solutions to normalize IS information and counter these new threats. This paper presents a framework which details the principal and necessary steps for monitoring an IS. We present the architecture of the framework, i.e., an ontology of activities carried out within an IS to model security information, together with user behavioral analysis. The results of the experiments performed on real data show that the modeling is effective, reducing the number of events by 91%. User behavioral analysis on the uniformly modeled data is also effective, detecting more than 80% of legitimate actions of attack scenarios.
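The event-reduction step rests on mapping heterogeneous raw events onto a uniform model and collapsing repeats. Below is a toy sketch of that normalization idea; the mapping table and field names are stand-ins, not the paper's activity ontology.

```python
# Toy sketch: normalize source-specific events into one uniform activity
# model and collapse repeats. The ontology mapping here is illustrative.
from collections import Counter

ONTOLOGY = {
    "sshd_failed_password": "AUTH_FAILURE",
    "win_event_4625":       "AUTH_FAILURE",
    "apache_401":           "AUTH_FAILURE",
    "sshd_accepted":        "AUTH_SUCCESS",
}

def normalize(raw_events):
    """Map raw events to (activity, actor, target) tuples and count repeats."""
    return Counter(
        (ONTOLOGY.get(e["name"], "UNKNOWN"), e["src"], e["dst"])
        for e in raw_events
    )

raw = [
    {"name": "sshd_failed_password", "src": "10.0.0.5", "dst": "srv1"},
    {"name": "win_event_4625",       "src": "10.0.0.5", "dst": "srv1"},
    {"name": "sshd_failed_password", "src": "10.0.0.5", "dst": "srv1"},
]
model = normalize(raw)
print(model)  # 3 raw events collapse into 1 modeled activity
print(f"reduction: {1 - len(model) / len(raw):.0%}")
```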
Scan Correlation -- Revealing distributed scan campaigns
Public networks are exposed to port scans from the Internet. Attackers search
for vulnerable services they can exploit. In large scan campaigns, attackers
often utilize different machines to perform distributed scans, which impedes
their detection and might also camouflage the actual goal of the scanning
campaign. In this paper, we present a correlation algorithm to detect scans,
identify potential relations among them, and reassemble them into larger
campaigns. We evaluate our approach on real-world Internet traffic, and our
results indicate that it can summarize and characterize standalone and
distributed scan campaigns based on their tools and intentions.
Comment: Accepted for publication at DISSECT '2
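One plausible correlation step is to treat scan sources whose targeted port sets overlap strongly as parts of the same distributed campaign. The sketch below uses Jaccard similarity with a fixed threshold as an illustration of this grouping idea; it is an assumption, not the paper's exact algorithm.

```python
# Sketch: greedily merge scanners whose port fingerprints overlap strongly
# (Jaccard similarity) into campaigns. Threshold and features are assumptions.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def correlate(scans: dict, threshold: float = 0.7) -> list:
    """Group scan sources with similar port fingerprints into campaigns."""
    campaigns = []
    for src, ports in scans.items():
        for camp in campaigns:
            if jaccard(ports, camp["ports"]) >= threshold:
                camp["sources"].add(src)
                camp["ports"] |= ports
                break
        else:
            campaigns.append({"sources": {src}, "ports": set(ports)})
    return campaigns

scans = {
    "198.51.100.1": {22, 23, 2323},        # Telnet/SSH sweep
    "198.51.100.2": {22, 23, 2323, 8080},  # same campaign, extra port
    "203.0.113.9":  {443, 8443},           # unrelated scanner
}
for c in correlate(scans):
    print(sorted(c["sources"]), "->", sorted(c["ports"]))
```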
An Evolutionary Approach for Learning Attack Specifications in Network Graphs
This paper presents an evolutionary algorithm that learns attack scenarios, called attack specifications, from a network graph. This learning process aims to find attack specifications that minimise cost and maximise the value that an attacker gets from a successful attack. The attack specifications that the algorithm learns are represented using an approach based on Hoare's CSP (Communicating Sequential Processes). This new approach is able to represent several elements found in attacks, for example synchronisation. These attack specifications can be used by network administrators to find vulnerable scenarios, composed from the basic constructs Sequence, Parallel and Choice, that lead to valuable assets in the network.
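A toy version of the evolutionary idea can be sketched as a genetic search over expression trees built from the Sequence, Parallel and Choice constructs, with fitness rewarding value gained minus cost. The step table, fitness function, and mutation operator below are simplified illustrations, not the paper's CSP-based encoding.

```python
# Toy sketch: evolve trees over Seq / Par / Choice of basic attack steps,
# maximizing (value - cost). Steps, fitness, and operators are illustrative.
import random

STEPS = {"scan": (1, 0), "phish": (3, 5), "exploit": (4, 8), "exfil": (2, 10)}
OPS = ("Seq", "Par", "Choice")

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(list(STEPS))
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def cost_value(node):
    """Return (cost, value) of a specification tree."""
    if isinstance(node, str):
        return STEPS[node]
    op, left, right = node
    cl, vl = cost_value(left)
    cr, vr = cost_value(right)
    if op == "Choice":            # attacker takes the better branch
        return (cl, vl) if vl - cl >= vr - cr else (cr, vr)
    return (cl + cr, vl + vr)     # Seq and Par both execute both branches

def fitness(tree):
    c, v = cost_value(tree)
    return v - c

def mutate(tree):
    return random_tree() if random.random() < 0.3 else tree

def evolve(pop_size=30, generations=50):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, "fitness:", fitness(best))
```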
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, covering inter alia
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple data stream environment is explored, and a range of
techniques is reviewed, covering model-based approaches, `programmed' AI and
machine learning paradigms. It is found that, in general, correlation is best
achieved via rule-based approaches, but that these suffer from a number of
drawbacks, such as the difficulty of developing and maintaining an appropriate
knowledge base, and the lack of ability to generalise from known misuses to new,
unseen misuses. Two distinct approaches are evident. One attempts to encode
knowledge of known misuses, typically within rules, and uses this to screen
events. This approach cannot generally detect misuses for which it has not been
programmed, i.e. it is prone to issuing false negatives. The other attempts to
`learn' the features of event patterns that constitute normal behaviour and, by
observing patterns that do not match expected behaviour, detect when a misuse has
occurred. This approach is prone to issuing false positives, i.e. inferring misuse
from innocent patterns of behaviour that the system was not trained to recognise.
Contemporary approaches are seen to favour hybridisation, often combining
detection or localisation mechanisms for both abnormal and normal behaviour, the
former to capture known cases of misuse, the latter to capture unknown cases. In
some systems, these mechanisms even work together, updating each other to
increase detection rates and lower false positive rates. It is concluded that
hybridisation offers the most promising future direction, but that a rule- or
state-based component is likely to remain, being the most natural approach to the
correlation of complex events. The challenge, then, is to mitigate the weaknesses
of canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
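The hybrid arrangement the report favours pairs a rule (signature) component for known misuse with a learned model of normal behaviour for unknown misuse. Here is a minimal sketch of that combination; the rules, features, and threshold are illustrative assumptions, not the report's systems.

```python
# Minimal sketch of the hybrid idea: a signature component catches known
# misuse; an anomaly component flags departures from learned normal behaviour.
from statistics import mean, stdev

KNOWN_MISUSE_RULES = [
    lambda e: e["failed_logins"] > 20,   # brute-force signature
    lambda e: e["bytes_out"] > 10**9,    # bulk-exfiltration signature
]

def train_normal(history):
    """Learn a per-feature mean/stdev profile from benign events."""
    keys = history[0].keys()
    return {k: (mean(h[k] for h in history),
                stdev(h[k] for h in history)) for k in keys}

def anomaly_score(event, profile):
    """Max absolute z-score across features."""
    return max(abs(event[k] - m) / s if s else 0.0
               for k, (m, s) in profile.items())

def classify(event, profile, z_threshold=4.0):
    if any(rule(event) for rule in KNOWN_MISUSE_RULES):
        return "known-misuse"   # rule component fired (known case)
    if anomaly_score(event, profile) > z_threshold:
        return "anomalous"      # learned component fired (unknown case)
    return "normal"

benign = [{"failed_logins": i % 3, "bytes_out": 50_000 + 1_000 * i}
          for i in range(100)]
profile = train_normal(benign)
print(classify({"failed_logins": 25, "bytes_out": 60_000}, profile))
print(classify({"failed_logins": 1,  "bytes_out": 5_000_000}, profile))
```

A natural extension, which the report notes some systems already pursue, is to feed confirmed detections from each component back into the other, adding new rules for learned anomalies and retraining the normal profile around rule-confirmed benign traffic.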