High-resolution optical and SAR image fusion for building database updating
This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. For both steps, relevant features are extracted from the images in the neighborhood of the considered object. The object removal/inclusion decision is based on a score obtained by the fusion of features in the framework of Dempster-Shafer evidence theory.
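The fusion step above rests on Dempster's rule of combination. The following sketch is purely illustrative (the frame of discernment, mass values, and decision threshold are assumptions, not the paper's actual feature models): two mass functions, e.g. one from an optical cue and one from a SAR cue, are combined over the frame {building, not-building}, and the fused belief in "building" acts as the removal/inclusion score.

```python
# Hypothetical sketch of Dempster-Shafer fusion for a building-presence score.
# Frame of discernment: "B" = building, "N" = not building, "BN" = ignorance.
# Mass assignments and the decision threshold are illustrative only.

def combine(m1, m2):
    """Dempster's rule of combination over the frame {B, N, BN}."""
    sets = {"B": {"B"}, "N": {"N"}, "BN": {"B", "N"}}
    raw = {"B": 0.0, "N": 0.0, "BN": 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = sets[a] & sets[b]
            if not inter:
                conflict += ma * mb          # mass assigned to the empty set
            else:
                key = "B" if inter == {"B"} else "N" if inter == {"N"} else "BN"
                raw[key] += ma * mb
    k = 1.0 - conflict                       # normalization over non-conflicting mass
    return {s: v / k for s, v in raw.items()}

# Masses from two independent cues (values are made up for illustration).
m_opt = {"B": 0.6, "N": 0.1, "BN": 0.3}     # optical feature
m_sar = {"B": 0.5, "N": 0.2, "BN": 0.3}     # SAR feature

fused = combine(m_opt, m_sar)
score = fused["B"]                           # belief committed exactly to "building"
keep_in_db = score > 0.5                     # illustrative decision threshold
```

Because both cues lean toward "building" here, the combined belief exceeds either input mass, which is the behavior that makes evidence fusion attractive for the verification step.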
Detecting Targeted Attacks Using Shadow Honeypots
We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a "shadow honeypot" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular ("production") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system, transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
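The request flow described above can be sketched as simple control logic. This is a minimal toy model, not the paper's instrumentation: the anomaly detector, the "attack" signature, and the application state are all stand-in assumptions (a real deployment instruments the protected binary itself and shares state between the production and shadow instances).

```python
import copy

# Toy control-flow sketch of the shadow honeypot idea: anomalous requests are
# replayed against an instrumented shadow copy; attacks are caught and their
# state changes discarded; false positives are committed and served normally.

class ShadowHoneypot:
    def __init__(self, is_anomalous, state=None):
        self.is_anomalous = is_anomalous   # pluggable anomaly detector (assumed)
        self.state = state or {}           # stand-in for application state
        self.attack_filter = set()         # confirmed attacks, filtered early

    def handle(self, request):
        if request in self.attack_filter:
            return "dropped"               # previously confirmed attack instance
        if not self.is_anomalous(request):
            return self._process(self.state, request)   # fast production path
        # Anomalous: replay against a shadow sharing the production state.
        shadow_state = copy.deepcopy(self.state)
        try:
            result = self._process(shadow_state, request, instrumented=True)
        except MemoryError:                # stand-in for a detected memory attack
            self.attack_filter.add(request)
            return "dropped"               # shadow's state changes are discarded
        self.state = shadow_state          # false positive: commit state, serve reply
        return result

    def _process(self, state, request, instrumented=False):
        if instrumented and request.startswith("\x90"):  # toy "exploit" signature
            raise MemoryError("attack caught in shadow")
        state[request] = state.get(request, 0) + 1       # toy state mutation
        return "ok"
```

Note how a benign-but-anomalous request still succeeds: the shadow validates it and its state changes are committed, which is what makes aggressive (high false-positive) anomaly detectors tolerable in this design.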
Adaptive Traffic Fingerprinting for Darknet Threat Intelligence
Darknet technology such as Tor has been used by various threat actors for
organising illegal activities and data exfiltration. As such, there is a case
for organisations to block such traffic, or to try and identify when it is used
and for what purposes. However, anonymity in cyberspace has always been a
domain of conflicting interests. While it gives nefarious actors the power to
disguise their illegal activities, it is also a cornerstone of freedom of
speech and privacy. We present a proof of concept for a
novel algorithm that could form the fundamental pillar of a darknet-capable
Cyber Threat Intelligence platform. The solution can reduce anonymity of users
of Tor, and considers the existing visibility of network traffic before
optionally initiating targeted or widespread BGP interception. In combination
with server HTTP response manipulation, the algorithm attempts to reduce the
candidate data set by eliminating client-side traffic that is least likely to be
responsible for the server-side connections of interest. Our test results show that
MITM-manipulated server responses lead to the expected changes being received by
the Tor client. Using simulation data generated by Shadow, we show that the
detection scheme is effective, with a false-positive rate of 0.001, while
sensitivity in detecting non-targets was 0.016 ± 0.127. Our algorithm could
assist collaborating organisations willing to share their threat intelligence or
to cooperate during investigations.
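The candidate-elimination idea can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the paper's algorithm or statistic: suppose the server-side MITM pads selected HTTP responses according to a known pattern, and clients whose observed per-window traffic volume does not reflect that pattern are pruned from the candidate set.

```python
import statistics

# Hypothetical sketch of candidate-set reduction via response manipulation.
# The padding pattern, the separation score, and the threshold are illustrative
# stand-ins, not the statistic used in the paper.

def prune_candidates(pattern, observations, threshold=0.9):
    """pattern: list of 0/1 padding decisions, one per response window.
    observations: {client_id: per-window byte counts seen at that client}.
    Keeps clients whose counts track the injected padding pattern."""
    survivors = []
    for client, counts in observations.items():
        padded = [c for bit, c in zip(pattern, counts) if bit]
        plain = [c for bit, c in zip(pattern, counts) if not bit]
        if not padded or not plain:
            continue
        # Crude separation score: padded windows should carry more bytes.
        score = statistics.mean(padded) - statistics.mean(plain)
        if score > threshold * statistics.pstdev(counts):
            survivors.append(client)
    return survivors
```

A client whose traffic volume rises and falls with the padding schedule stays in the candidate set; uncorrelated clients are eliminated, shrinking the set toward the connections of interest.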
Efficient Evaluation of the Number of False Alarm Criterion
This paper proposes a method for efficiently computing the significance of a
parametric pattern inside a binary image. On the one hand, a-contrario
strategies avoid the user involvement for tuning detection thresholds, and
allow one to account fairly for different pattern sizes. On the other hand,
a-contrario criteria become intractable when the pattern complexity in terms of
parametrization increases. In this work, we introduce a strategy which relies
on the use of a cumulative space of reduced dimensionality, derived from the
coupling of a classic (Hough) cumulative space with an integral histogram
trick. This space allows us to store partial computations which are required by
the a-contrario criterion, and to evaluate the significance with a lower
computational cost than by following a straightforward approach. The method is
illustrated on synthetic examples on patterns with various parametrizations up
to five dimensions. In order to demonstrate how to apply this generic concept
in a real scenario, we consider a difficult crack detection task in still
images, which has been addressed in the literature with various local and
global detection strategies. We model cracks as bounded segments, detected by
the proposed a-contrario criterion, which allows us to introduce additional
spatial constraints based on their relative alignment. On this application, the
proposed strategy yields state-of-the-art results, underlining its potential
for handling complex pattern detection tasks.
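To make the a-contrario criterion concrete, here is an illustrative sketch for the simplest case of axis-aligned rectangular patterns, scored with a summed-area table so each candidate is evaluated in O(1) after a single pass over the image. The pattern family, test count, and thresholds are assumptions for illustration; the paper's cumulative Hough space generalizes this idea to richer parametrizations.

```python
from math import comb

# Illustrative a-contrario Number of False Alarms (NFA) test on a binary image.
# NFA = n_tests * P(Bin(n, p) >= k), where n is the pattern area, k the number
# of 1-pixels inside it, and p the background density; small NFA => significant.

def integral_image(img):
    """Summed-area table: one pass, then any rectangle count is O(1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def count(ii, y0, x0, y1, x1):
    """Number of 1-pixels in rows y0..y1-1, cols x0..x1-1."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

def nfa(ii, y0, x0, y1, x1, p, n_tests):
    """Binomial-tail NFA of a rectangular pattern under background density p."""
    n = (y1 - y0) * (x1 - x0)
    k = count(ii, y0, x0, y1, x1)
    tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    return n_tests * tail
```

A dense block of 1-pixels yields NFA far below 1 (significant), while an empty region yields NFA equal to the number of tests (insignificant), which is exactly the no-user-threshold behavior the abstract attributes to a-contrario detection.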
Shadows of CPR black holes and tests of the Kerr metric
We study the shadow of the Cardoso-Pani-Rico (CPR) black hole for different
values of the black hole spin $a_*$, the deformation parameters $\epsilon^t_3$
and $\epsilon^r_3$, and the viewing angle $i$. We find that the main impact of
the deformation parameter $\epsilon^t_3$ is a change in the size of the
shadow, while the deformation parameter $\epsilon^r_3$ affects the shape of its
boundary. In general, it is impossible to test the Kerr metric, because the
shadow of a Kerr black hole can be reproduced quite well by a black hole with
non-vanishing $\epsilon^t_3$ or $\epsilon^r_3$. Deviations from the Kerr
geometry could be constrained in the presence of high-quality data and in the
favorable case of a black hole with high values of $a_*$ and $i$. However, the
shadows of some black holes with non-vanishing $\epsilon^r_3$ present peculiar
features, and the possible detection of these shadows could unambiguously
distinguish these objects from the standard Kerr black holes of general
relativity.
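For context, the Kerr shadow boundary against which the deformed cases are compared can be written in the standard Bardeen parametrization from the general-relativity literature (a sketch, with $M$ the mass, $a$ the spin, and $i$ the viewing angle; the CPR deformations modify these relations):

```latex
% Photon-orbit constants of motion at Boyer-Lindquist radius r (Kerr metric):
\xi(r)  = \frac{M(r^2 - a^2) - r\,(r^2 - 2Mr + a^2)}{a\,(r - M)}, \qquad
\eta(r) = \frac{r^3 \left[ 4Ma^2 - r\,(r - 3M)^2 \right]}{a^2 (r - M)^2}.
% Apparent (celestial) coordinates of the shadow boundary for an observer
% at viewing angle i:
\alpha = -\frac{\xi}{\sin i}, \qquad
\beta  = \pm\sqrt{\eta + a^2 \cos^2 i - \xi^2 \cot^2 i}.
```

Sweeping $r$ over the spherical photon orbits traces the closed shadow curve $(\alpha, \beta)$; for $a \to 0$ this reduces to the circular Schwarzschild shadow of radius $3\sqrt{3}\,M$.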