Game Theory Meets Network Security: A Tutorial at ACM CCS
The increasingly pervasive connectivity of today's information systems brings
up new challenges to security. Traditional security has come a long way
toward protecting well-defined goals such as confidentiality, integrity,
availability, and authenticity. However, as attacks grow more sophisticated
and systems more complex, protection using traditional methods can become
cost-prohibitive. A new perspective and a new theoretical
foundation are needed to understand security from a strategic and
decision-making perspective. Game theory provides a natural framework to
capture the adversarial and defensive interactions between an attacker and a
defender. It provides a quantitative assessment of security, prediction of
security outcomes, and a mechanism design tool that can enable
security-by-design and reverse the attacker's advantage. This tutorial
provides an overview of game-theoretic methodologies, including games of
incomplete information, dynamic games, and mechanism design, to offer a
modern theoretical underpinning for a science of cybersecurity. The tutorial
will also discuss open problems and research challenges that the CCS
community can address, with the objective of building a multidisciplinary
bridge between cybersecurity, economics, and game and decision theory.
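The attacker–defender interaction described above is often modeled as a zero-sum matrix game. A minimal sketch, with entirely hypothetical payoffs (the assets, actions, and loss values below are illustrative assumptions, not from the tutorial), showing the defender's pure-strategy maximin choice:

```python
# Two-player zero-sum security game (hypothetical payoffs).
# Rows: defender actions; columns: attacker actions.
# Each entry is the defender's loss (equivalently, the attacker's gain).
loss = [
    [0, 4],  # defender patches server: attack A blocked, attack B costs 4
    [5, 1],  # defender monitors network: attack A costs 5, attack B costs 1
]

def maximin_defense(loss):
    """Defender's pure-strategy security level: assume the attacker
    best-responds (max loss per row), then pick the row that
    minimizes this worst-case loss."""
    worst_case = [max(row) for row in loss]
    best_row = min(range(len(loss)), key=lambda i: worst_case[i])
    return best_row, worst_case[best_row]

row, value = maximin_defense(loss)
print(row, value)  # → 0 4: patching caps the worst-case loss at 4
```

This quantifies security in the sense the abstract describes: the game's value bounds the damage a rational attacker can inflict, and comparing rows shows which defense is worth deploying.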
Trial without Error: Towards Safe Reinforcement Learning via Human Intervention
AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe.
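The intervention scheme above can be sketched as a wrapper around the agent's action: a "blocker" trained to imitate the human's decisions vetoes predicted catastrophes and substitutes a safe action. This is a minimal illustration with assumed names and a stand-in rule-based blocker, not the paper's implementation:

```python
# Hypothetical sketch of human-in-the-loop blocking for RL.
# In the actual scheme, `blocker` would be a supervised model trained
# on the human overseer's intervention decisions; here it is a stub.

CATASTROPHIC_ACTIONS = {3}  # assumption: action 3 is catastrophic

def blocker(state, action):
    """Stand-in for a learned classifier imitating the human:
    returns True if the proposed action should be blocked."""
    return action in CATASTROPHIC_ACTIONS

def safe_step(state, proposed_action, fallback_action=0):
    """Filter the agent's choice before execution: if the blocker
    fires, substitute a safe fallback and apply a penalty so the
    agent learns to avoid the blocked action."""
    if blocker(state, proposed_action):
        return fallback_action, -1.0  # blocked: override and penalize
    return proposed_action, 0.0       # allowed: execute as proposed

action, penalty = safe_step(state=None, proposed_action=3)
print(action, penalty)  # → 0 -1.0 (catastrophic action overridden)
```

The design choice this illustrates is that the environment never executes a blocked action, so exploration proceeds without catastrophes as long as the blocker generalizes; the failure modes the abstract reports (complex catastrophe classes, adversarial examples) correspond to the blocker misclassifying.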