ATTACK2VEC: Leveraging Temporal Word Embeddings to Understand the Evolution of Cyberattacks
Despite the fact that cyberattacks are constantly growing in complexity, the
research community still lacks effective tools to easily monitor and understand
them. In particular, there is a need for techniques able not only to track
how prominently certain malicious actions, such as the exploitation of
specific vulnerabilities, are used in the wild, but also (and more
importantly) how these malicious actions figure as attack steps in more
complex cyberattacks. In this paper we present ATTACK2VEC, a system that uses
temporal word embeddings to model how attack steps are exploited in the wild,
and track how they evolve. We test ATTACK2VEC on a dataset of billions of
security events collected from the customers of a commercial Intrusion
Prevention System over a period of two years, and show that our approach is
effective in monitoring the emergence of new attack strategies in the wild and
in flagging which attack steps are often used together by attackers (e.g.,
vulnerabilities that are frequently exploited together). ATTACK2VEC provides a
useful tool for researchers and practitioners to better understand cyberattacks
and their evolution, and use this knowledge to improve situational awareness
and develop proactive defenses.
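The core idea of tracking how an attack step's context shifts over time can be sketched with a toy, count-based stand-in for the temporal embeddings the abstract describes. This is not ATTACK2VEC's actual pipeline; the event names, window size, and two hand-made monthly slices are illustrative assumptions:

```python
from collections import Counter
import math

def context_vectors(sequences, vocab, window=2):
    """Co-occurrence context vector for each event in one time slice."""
    vecs = {e: Counter() for e in vocab}
    for seq in sequences:
        for i, ev in enumerate(seq):
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if j != i:
                    vecs[ev][seq[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Two hypothetical monthly slices of observed security-event sequences:
# in January "cve_A" precedes a shell, in February it precedes a miner.
slice_jan = [["scan", "cve_A", "shell"], ["scan", "cve_A", "shell"]]
slice_feb = [["scan", "cve_A", "miner"], ["scan", "cve_A", "miner"]]
vocab = {"scan", "cve_A", "shell", "miner"}

jan = context_vectors(slice_jan, vocab)
feb = context_vectors(slice_feb, vocab)

# A large drift value signals that cve_A's usage context changed,
# i.e. the exploit is now part of a different attack strategy.
drift = 1.0 - cosine(jan["cve_A"], feb["cve_A"])
```

In a realistic setting the count vectors would be replaced by learned embeddings trained per time slice, but the drift signal, distance between an event's representations in consecutive slices, is the same.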
Electric-Field-Induced Resonant Spin Polarization in a Two-Dimensional Electron Gas
The electric response of the spin polarization in a two-dimensional electron
gas with structural inversion asymmetry subjected to a magnetic field was
studied by means of linear and non-linear theory and numerical simulation
including the disorder effect. Kubo linear response theory shows that a
resonant electric response of the spin polarization occurs when the Fermi
surface lies near the crossing of two Landau levels, which is induced by the
competition between the spin-orbit coupling and the Zeeman splitting. The
scaling behavior was investigated with a simplified two-level model by a
non-linear method: the resonant peak value is inversely proportional to the
electric field at low temperatures and to the temperature at finite electric
fields. Finally, numerical simulation shows that the impurity potential opens
an energy gap near the resonant point and gradually suppresses the effect as
the disorder strength increases. This resonant effect may provide an
efficient way to control spin polarization with an external electric field.
Comment: 6 pages, 5 figures
Deep Active Learning for Named Entity Recognition
Deep learning has yielded state-of-the-art performance on many natural
language processing tasks including named entity recognition (NER). However,
this typically requires large amounts of labeled data. In this work, we
demonstrate that the amount of labeled training data can be drastically reduced
when deep learning is combined with active learning. While active learning is
sample-efficient, it can be computationally expensive since it requires
iterative retraining. To speed this up, we introduce a lightweight architecture
for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and
word encoders and a long short term memory (LSTM) tag decoder. The model
achieves nearly state-of-the-art performance on standard datasets for the task
while being computationally much more efficient than best performing models. We
carry out incremental active learning, during the training process, and are
able to nearly match state-of-the-art performance with just 25% of the
original training data.
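The active-learning loop the abstract describes can be sketched generically: train on the current labeled set, query the unlabeled pool item the model is least confident about, have an oracle label it, and retrain. The frequency-count "model" below is a deliberately tiny stand-in for the CNN-CNN-LSTM tagger, and the token pool is hypothetical:

```python
def train(labeled):
    """Toy 'model': per-token label frequencies (stand-in for a real tagger)."""
    counts = {}
    for tok, label in labeled:
        counts.setdefault(tok, {}).setdefault(label, 0)
        counts[tok][label] += 1
    def predict_proba(tok):
        c = counts.get(tok)
        if not c:                       # unseen token: maximally uncertain
            return {"O": 0.5, "ENT": 0.5}
        total = sum(c.values())
        return {lab: n / total for lab, n in c.items()}
    return predict_proba

def least_confidence(proba):
    """Uncertainty score: 1 minus the probability of the top prediction."""
    return 1.0 - max(proba.values())

# Hypothetical unlabeled pool; the second element is the (hidden) gold label
# that the human oracle supplies when a token is queried.
pool = [("London", "ENT"), ("the", "O"), ("Paris", "ENT"), ("runs", "O")]
labeled = [("Berlin", "ENT"), ("a", "O")]

for _ in range(2):                      # two active-learning rounds
    model = train(labeled)
    # query the pool item the model is least confident about
    tok, gold = max(pool, key=lambda ex: least_confidence(model(ex[0])))
    pool.remove((tok, gold))
    labeled.append((tok, gold))         # oracle provides the label
```

The incremental-retraining speedup in the paper comes from the lightweight architecture; the query strategy itself is the standard least-confidence heuristic shown here.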
A Fenchel-Moreau-Rockafellar type theorem on the Kantorovich-Wasserstein space with Applications in Partially Observable Markov Decision Processes
By using the fact that the space of all probability measures with finite
support can be somehow completed in two different fashions, one generating the
Arens-Eells space and another generating the Kantorovich-Wasserstein
(Wasserstein-1) space, and by exploiting the duality relationship between the
Arens-Eells space with the space of Lipschitz functions, we provide a dual
representation of Fenchel-Moreau-Rockafellar type for proper convex functionals
on Wasserstein-1. We retrieve dual transportation inequalities as a corollary
and provide examples where the theorem can be used to easily prove dual
expressions such as the celebrated Donsker-Varadhan variational formula.
Finally, our result allows us to write convex functions as the supremum over
all linear functions generated by the roots of their conjugate dual, which we
apply to the field of partially observable Markov decision processes (POMDPs)
to approximate the value function of a given POMDP by iterating level sets.
This extends the method used in Smallwood (1973) for finite state spaces to
the case where the state space is a Polish metric space.
Comment: 20 pages
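For reference, the Donsker-Varadhan variational formula mentioned above is commonly stated as follows: for a probability measure $\mu$ and a bounded measurable function $f$,

```latex
\log \int e^{f}\, d\mu
  \;=\;
  \sup_{\nu \ll \mu}
  \left\{ \int f \, d\nu \;-\; D_{\mathrm{KL}}(\nu \,\|\, \mu) \right\},
```

where the supremum runs over probability measures $\nu$ absolutely continuous with respect to $\mu$ and $D_{\mathrm{KL}}$ denotes relative entropy. It is exactly the kind of dual expression, a convex functional written as a supremum of affine functionals, that the Fenchel-Moreau-Rockafellar representation recovers.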
Risk-sensitive Reinforcement Learning
We derive a family of risk-sensitive reinforcement learning methods for
agents, who face sequential decision-making tasks in uncertain environments. By
applying a utility function to the temporal difference (TD) error, nonlinear
transformations are effectively applied not only to the received rewards but
also to the true transition probabilities of the underlying Markov decision
process. When appropriate utility functions are chosen, the agents' behaviors
express key features of human behavior as predicted by prospect theory
(Kahneman and Tversky, 1979), for example different risk-preferences for gains
and losses as well as the shape of subjective probability curves. We derive a
risk-sensitive Q-learning algorithm, which is necessary for modeling human
behavior when transition probabilities are unknown, and prove its convergence.
As a proof of principle for the applicability of the new framework we apply it
to quantify human behavior in a sequential investment task. We find that the
risk-sensitive variant provides a significantly better fit to the behavioral
data and that it leads to an interpretation of the subjects' responses that
is indeed consistent with prospect theory. The analysis of simultaneously
measured fMRI signals shows a significant correlation of the risk-sensitive
TD error with the BOLD signal change in the ventral striatum. In addition, we
find a significant correlation of the risk-sensitive Q-values with neural
activity in the striatum, cingulate cortex, and insula, which is not present
if standard Q-values are used.
Comment: 27 pages, 7 figures
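The central mechanism, applying a utility function to the TD error rather than to the reward, can be sketched in a few lines. The prospect-theory-style utility parameters (curvature 0.88, loss aversion 2.25, the values estimated by Kahneman and Tversky) and the tiny two-state task are illustrative assumptions, not the paper's experimental setup:

```python
import random

def utility(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style utility: concave for gains, steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def risk_sensitive_q_update(Q, s, a, r, s_next, actions, lr=0.1, gamma=0.95):
    """One Q-learning step with the utility applied to the TD error."""
    td_error = r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)]
    Q[(s, a)] += lr * utility(td_error)

# Tiny two-state investment toy with hypothetical rewards: investing pays
# off in the "high" state and loses money in the "low" state.
actions = ["invest", "hold"]
Q = {(s, a): 0.0 for s in ["low", "high"] for a in actions}

random.seed(0)
for _ in range(200):
    s = random.choice(["low", "high"])
    a = random.choice(actions)
    r = 1.0 if (a, s) == ("invest", "high") else -0.5 if a == "invest" else 0.0
    s_next = random.choice(["low", "high"])
    risk_sensitive_q_update(Q, s, a, r, s_next, actions)
```

Because the loss branch of the utility is steeper, negative TD errors are amplified relative to positive ones, which is how the resulting Q-values come to encode risk preferences rather than plain expected value.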
