Enhanced Position Verification for VANETs using Subjective Logic
The integrity of messages in vehicular ad-hoc networks (VANETs) has been extensively
studied by the research community, resulting in the IEEE 1609.2 standard, which
provides typical integrity guarantees. However, the correctness of message
contents remains one of the main challenges in building dependable and secure
VANETs. One important use case is the validity of position
information contained in messages: position verification mechanisms have been
proposed in the literature to provide this functionality. A more general
approach to validate such information is by applying misbehavior detection
mechanisms. In this paper, we consider misbehavior detection by enhancing two
position verification mechanisms and fusing their results in a generalized
framework using subjective logic. We conduct extensive simulations using VEINS
to study the impact of traffic density, attacker type, and attacker fraction
on our mechanisms. The results show that the
proposed framework can validate position information as effectively as existing
approaches in the literature, without tailoring the framework specifically for
this use case.
Comment: 7 pages, 18 figures; corrected version of a paper submitted to the 2016
IEEE 84th Vehicular Technology Conference (VTC2016-Fall): revised the way an
opinion is created with eART, and re-did the experiments (uploaded here as a
correction in agreement with the TPC Chairs).
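The cumulative fusion operator of subjective logic, which a framework like the one above can use to combine the opinions produced by two position-verification mechanisms, can be sketched as follows. This is a minimal illustration of Jøsang's binomial cumulative fusion rule; the `Opinion` class, its field names, and the dogmatic-case handling are assumptions for illustration, not the paper's implementation:

```python
# Sketch of subjective-logic cumulative fusion of two binomial opinions.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float        # evidence that the reported position is valid
    disbelief: float     # evidence that it is false
    uncertainty: float   # lack of evidence; belief + disbelief + uncertainty == 1
    base_rate: float = 0.5  # prior probability of validity

def cumulative_fuse(a: Opinion, b: Opinion) -> Opinion:
    """Cumulative belief fusion of two independent opinions (Josang)."""
    denom = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    if denom == 0:  # both opinions dogmatic; simplified handling for the sketch
        return Opinion((a.belief + b.belief) / 2,
                       (a.disbelief + b.disbelief) / 2, 0.0, a.base_rate)
    belief = (a.belief * b.uncertainty + b.belief * a.uncertainty) / denom
    disbelief = (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / denom
    uncertainty = (a.uncertainty * b.uncertainty) / denom
    return Opinion(belief, disbelief, uncertainty, a.base_rate)

# Two mechanisms rate the same position beacon; fusing reduces uncertainty.
fused = cumulative_fuse(Opinion(0.7, 0.1, 0.2), Opinion(0.5, 0.2, 0.3))
print(round(fused.uncertainty, 3))  # → 0.136, below either input's uncertainty
```

Because the fused uncertainty is the product of the inputs divided by a near-sum denominator, agreement between independent detectors drives uncertainty down, which is what makes the operator attractive for combining heterogeneous misbehavior detectors.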
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, has been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep learning
algorithms up to more recent work aimed at understanding the security properties
of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 2018.
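The test-time perturbations the abstract refers to can be illustrated generically with a one-step sign-gradient (FGSM-style) attack on a toy logistic-regression classifier. This is a minimal sketch of the adversarial-examples idea, not code from the surveyed paper; the weights, input, and budget are assumed values:

```python
# FGSM-style adversarial perturbation against a toy logistic regression.
import numpy as np

w = np.array([1.5, -2.0])  # "trained" weights (assumed, for illustration)
b = 0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # P(class = 1)

x = np.array([2.0, 1.0])   # clean input, classified as class 1
p = predict(x)
# Gradient of the loss -log p w.r.t. x for true label y = 1 is -(1 - p) * w.
grad = -(1 - p) * w
eps = 0.9                  # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad)  # one step in the sign-gradient direction

print(predict(x) > 0.5, predict(x_adv) < 0.5)  # → True True
```

The attack moves each coordinate by the full budget in whichever direction increases the loss, which is why even a small, visually negligible `eps` can flip the prediction of a high-dimensional classifier.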
Creation of backdoors in quantum communications via laser damage
Practical quantum communication (QC) protocols are assumed to be secure
provided the implemented devices are properly characterized and all known side
channels are closed. We show that this is not always true. We demonstrate a
laser-damage attack capable of modifying device behaviour on demand. We test it
on two practical QC systems for key distribution and coin-tossing, and show
that newly created deviations lead to side channels. This reveals that laser
damage is a potential security risk to existing QC systems, and necessitates
their testing to guarantee security.
Comment: Changed the title to match the journal version. 9 pages, 5 figures.
Concise Security Bounds for Practical Decoy-State Quantum Key Distribution
Due to its ability to tolerate high channel loss, decoy-state quantum key
distribution (QKD) has been one of the main focuses within the QKD community.
Notably, several experimental groups have demonstrated that it is secure and
feasible under real-world conditions. Crucially, however, the security and
feasibility claims made by most of these experiments were obtained under the
assumption that the eavesdropper is restricted to particular types of attacks
or that finite-key effects are neglected. Unfortunately, such assumptions
cannot be guaranteed in practice. In this work, we provide concise and
tight finite-key security bounds for practical decoy-state QKD that are valid
against general attacks.
Comment: 5+3 pages and 2 figures.
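The parameter-estimation step that finite-key bounds must tighten can be sketched with the standard asymptotic decoy-state estimate: a lower bound on the single-photon yield from the measured gains of a signal and a weak decoy state (vacuum + weak decoy analysis in the style of Ma et al.). The channel model and all numbers below are illustrative assumptions; the paper's contribution is adding rigorous finite-key deviation terms to estimates like this one:

```python
# Asymptotic decoy-state lower bound on the single-photon yield Y_1.
import math

def y1_lower_bound(Q_mu, Q_nu, Y0, mu, nu):
    """Vacuum + weak-decoy lower bound on Y_1.
    Q_mu, Q_nu: measured gains of the signal (mu) and decoy (nu) intensities;
    Y0: vacuum (dark-count) yield. Requires 0 < nu < mu."""
    assert 0 < nu < mu
    return (mu / (mu * nu - nu**2)) * (
        Q_nu * math.exp(nu)
        - Q_mu * math.exp(mu) * nu**2 / mu**2
        - (mu**2 - nu**2) / mu**2 * Y0
    )

# Toy channel: transmittance eta, dark-count yield Y0; for a Poissonian
# source the gains follow Q_k = Y0 + 1 - exp(-eta * k).
eta, Y0 = 0.1, 1e-5
mu, nu = 0.5, 0.1
Q_mu = Y0 + 1 - math.exp(-eta * mu)
Q_nu = Y0 + 1 - math.exp(-eta * nu)
y1 = y1_lower_bound(Q_mu, Q_nu, Y0, mu, nu)

print(y1 <= Y0 + eta)  # bound never exceeds the true single-photon yield → True
```

In a finite-key analysis the gains `Q_mu`, `Q_nu` are replaced by confidence-interval estimates, so the bound loosens as the block size shrinks, which is exactly the effect the abstract says earlier experiments neglected.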
Randomized ancillary qubit overcomes detector-control and intercept-resend hacking of quantum key distribution
Practical implementations of quantum key distribution (QKD) have been shown
to be subject to various detector side-channel attacks that compromise the
promised unconditional security. Most notable is a general class of attacks
that uses faked-state photons, as in detector-control and, more
broadly, intercept-resend attacks. In this paper, we present a simple
scheme to overcome this class of attacks: A legitimate user, Bob, uses a
polarization randomizer at his gateway to distort an ancillary polarization of
a phase-encoded photon in a bidirectional QKD configuration. Passing through
the randomizer once on the way to his partner, Alice, and again in the opposite
direction, the polarization qubit of the genuine photon is immune to
randomization. However, the polarization state of a photon from an intruder,
Eve, to Bob is randomized and hence directed to a detector in a different path,
whereupon it triggers an alert. We demonstrate theoretically and experimentally
that, using commercial off-the-shelf detectors, it can be made impossible for
Eve to avoid triggering the alert, no matter what faked state of light she
uses.
Comment: Quantum encryption, bidirectional quantum key distribution, detector
control, intercept-and-resend attacks, faked-state photons.
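The round-trip cancellation that protects the genuine photon can be sketched with a toy Jones-matrix model: a reciprocal polarization rotation applied once on the outbound pass and undone on the return pass leaves Bob's photon unchanged, while a photon injected by Eve traverses the randomizer only once and arrives rotated. The rotation model and angle choice are illustrative assumptions, not the experimental setup:

```python
# Toy Jones-matrix model of the polarization-randomizer idea.
import numpy as np

def rot(theta):
    """Polarization rotation by angle theta (Jones matrix)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

H = np.array([1.0, 0.0])  # horizontal polarization state
theta = np.random.default_rng(1).uniform(0, 2 * np.pi)  # Bob's secret setting

# Legitimate photon: Bob -> Alice -> Bob; the reversed pass undoes the rotation.
legit = rot(-theta) @ (rot(theta) @ H)
# Eve's faked-state photon: single pass through the randomizer only.
eve = rot(theta) @ H

# The genuine photon returns unchanged and reaches the expected detector;
# Eve's photon leaks into the orthogonal (alert) mode with probability sin^2(theta).
alert_prob = float(eve[1] ** 2)
print(np.allclose(legit, H), 0.0 <= alert_prob <= 1.0)
```

Since Bob re-randomizes `theta` for every photon, Eve cannot predict the setting, so averaged over runs her faked states land in the alert mode with non-negligible probability, which is the effect the abstract reports as unavoidable.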