Effect of spin-orbit interaction on the critical temperature of an ideal Bose gas
We consider Bose-Einstein condensation of an ideal Bose gas with an equal
mixture of `Rashba' and `Dresselhaus' spin-orbit interactions and study its
effect on the critical temperature.
In a uniform Bose gas, a `cusp' and a sharp drop in the critical temperature
occur due to the change in the density of states at a critical Raman coupling
where the degeneracy of the ground states is lifted. The relative drop in the
critical temperature depends on the diluteness of the gas as well as on the
spin-orbit coupling strength. In the presence of a harmonic trap, the cusp in
the critical temperature is smoothed out and a minimum appears. Both the drop in
the critical temperature and the lifting of the `quasi-degeneracy' of the ground
states exhibit crossover phenomena that are controlled by the trap frequency. By
considering a `Dicke'-like model, we extend our calculation to bosons with large
spin and observe a similar minimum in the critical temperature near the
critical Raman frequency, which becomes deeper for larger spin. Finally, in the
limit of infinite spin, the critical temperature vanishes at the critical
frequency, which is a manifestation of a Dicke-type quantum phase transition.
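For orientation, a standard single-particle Hamiltonian used for an equal Rashba-Dresselhaus mixture with Raman coupling \Omega reads as follows; the conventions below are assumed for illustration and need not match the paper's:

\[
H \;=\; \frac{(p_x - \hbar k_r \sigma_z)^2}{2m} \;+\; \frac{p_y^2 + p_z^2}{2m} \;+\; \frac{\Omega}{2}\,\sigma_x,
\qquad E_r \equiv \frac{\hbar^2 k_r^2}{2m}.
\]

In this convention the lower dispersion branch has two degenerate minima for \Omega < 4E_r that merge into a single minimum at \Omega_c = 4E_r, which is the point where the ground-state degeneracy referred to above is lifted.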
Audit Games with Multiple Defender Resources
Modern organizations (e.g., hospitals, social networks, government agencies)
rely heavily on auditing to detect and punish insiders who inappropriately access
and disclose confidential information. Recent work on audit games models the
strategic interaction between an auditor with a single audit resource and
auditees as a Stackelberg game, augmenting associated well-studied security
games with a configurable punishment parameter. We significantly generalize
this audit game model to account for multiple audit resources where each
resource is restricted to audit a subset of all potential violations, thus
enabling application to practical auditing scenarios. We provide an FPTAS that
computes an approximately optimal solution to the resulting non-convex
optimization problem. The main technical novelty is in the design and
correctness proof of an optimization transformation that enables the
construction of this FPTAS. In addition, we experimentally demonstrate that
this transformation significantly speeds up computation of solutions for a
class of audit games and security games.
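To make the Stackelberg structure concrete (an auditor committing to a randomized assignment of restricted audit resources together with a punishment level, against a best-responding auditee), here is a small Python sketch of a toy instance solved by crude random search over mixed assignments and a grid over the punishment level. The payoff forms, constants, and the search procedure are illustrative assumptions only; they are not the paper's model, and the search is not its FPTAS.

import itertools
import random

# Toy Stackelberg audit game (illustrative only; payoff forms are assumed,
# not the paper's exact model). Each audit resource audits one violation
# type from its feasible subset; the defender mixes over deterministic
# assignments and also commits to a punishment level p.

VIOLATIONS = ["A", "B", "C"]
FEASIBLE = [{"A", "B"}, {"B", "C"}]        # resource i may audit these types
GAIN = {"A": 5.0, "B": 3.0, "C": 4.0}      # auditee's gain if not caught
LOSS = {"A": 6.0, "B": 2.0, "C": 5.0}      # defender's loss if violation uncaught
PUNISH_COST = 0.5                          # defender's per-unit cost of punishment p
P_MAX = 10.0

def coverage(mixed):
    """Audit probability per violation type under a mixed assignment.
    `mixed` maps a pure assignment (tuple of audited types, one per
    resource) to its probability."""
    cov = {v: 0.0 for v in VIOLATIONS}
    for assign, prob in mixed.items():
        for v in set(assign):
            cov[v] += prob
    return cov

def auditee_best_response(cov, p):
    """Auditee commits the violation maximizing expected gain, or abstains."""
    best_v, best_u = None, 0.0
    for v in VIOLATIONS:
        u = (1 - cov[v]) * GAIN[v] - cov[v] * p
        if u > best_u:
            best_v, best_u = v, u
    return best_v

def defender_utility(cov, p, violated):
    """Defender pays a deterrence cost for punishment and a loss if the
    chosen violation goes unaudited."""
    base = -PUNISH_COST * p
    if violated is None:
        return base
    return base - (1 - cov[violated]) * LOSS[violated]

# Crude search: sample mixed assignments, grid-search the punishment level.
pure = list(itertools.product(*[sorted(s) for s in FEASIBLE]))
random.seed(0)
best = (float("-inf"), None, None)
for _ in range(2000):
    weights = [random.random() for _ in pure]
    total = sum(weights)
    mixed = {a: w / total for a, w in zip(pure, weights)}
    cov = coverage(mixed)
    for p in [i * P_MAX / 20 for i in range(21)]:
        v = auditee_best_response(cov, p)
        u = defender_utility(cov, p, v)
        if u > best[0]:
            best = (u, mixed, p)

print("best defender utility:", round(best[0], 3), "punishment:", best[2])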
A learning and masking approach to secure learning
Deep Neural Networks (DNNs) have been shown to be vulnerable against
adversarial examples, which are data points cleverly constructed to fool the
classifier. Such attacks can be devastating in practice, especially as DNNs are
being applied to ever increasing critical tasks like image recognition in
autonomous driving. In this paper, we introduce a new perspective on the
problem. We do so by first defining robustness of a classifier to adversarial
exploitation. Next, we show that the problem of adversarial example generation
can be posed as a learning problem. We also categorize attacks in the literature
into high and low perturbation attacks; well-known attacks like the fast-gradient
sign method (FGSM) and our attack produce higher perturbation adversarial examples
while the more potent but computationally inefficient Carlini-Wagner (CW)
attack is low perturbation. Next, we show that the dual approach to the attack
learning problem can be used as a defensive technique that is effective against
high perturbation attacks. Finally, we show that a classifier masking method
achieved by adding noise to a neural network's logit output protects
against low distortion attacks such as the CW attack. We also show that both
our learning and masking defenses can work simultaneously to protect against
multiple attacks. We demonstrate the efficacy of our techniques by
experimenting with the MNIST and CIFAR-10 datasets.
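A minimal PyTorch sketch of the two ingredients named above: an FGSM-style high-perturbation attack and a masking-style defense that injects noise into the logit output. The function and class names, the noise distribution, and the noise scale are assumptions for illustration, not the paper's exact constructions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Fast-gradient sign method: one signed gradient step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input coordinate by epsilon in the loss-increasing direction.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

class NoisyLogitWrapper(torch.nn.Module):
    """Masking-style defense sketch: perturb the logits with Gaussian noise so
    that score- and gradient-based attackers see a noisier decision surface."""
    def __init__(self, model, sigma=1.0):
        super().__init__()
        self.model = model
        self.sigma = sigma

    def forward(self, x):
        logits = self.model(x)
        return logits + self.sigma * torch.randn_like(logits)

For example, defended = NoisyLogitWrapper(trained_model, sigma=0.5) can then be evaluated against fgsm_attack(defended, x, y, 0.3).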
Security Games with Information Leakage: Modeling and Computation
Most models of Stackelberg security games assume that the attacker only knows
the defender's mixed strategy, but is not able to observe (even partially) the
instantiated pure strategy. Such partial observation of the deployed pure
strategy -- an issue we refer to as information leakage -- is a significant
concern in practical applications. While previous research on patrolling games
has considered the attacker's real-time surveillance, our setting, and therefore our
models and techniques, are fundamentally different. More specifically, after
describing the information leakage model, we start with an LP formulation to
compute the defender's optimal strategy in the presence of leakage. Perhaps
surprisingly, we show that a key subproblem to solve this LP (more precisely,
the defender oracle) is NP-hard even for the simplest of security game models.
We then approach the problem from three possible directions: efficient
algorithms for restricted cases, approximation algorithms, and heuristic
algorithms for sampling that improve upon the status quo. Our experiments
confirm the necessity of handling information leakage and the advantage of our
algorithms.
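For context, a hedged sketch of one standard compact formulation for security games without leakage, which a leakage-aware LP like the one described above builds on. The notation here is assumed for illustration: c_t is the marginal coverage of target t, m the number of resources (each covering a single target), and U^c/U^u the covered/uncovered payoffs of the defender (d) and attacker (a). For each candidate attacked target t^*, the defender solves

\[
\begin{aligned}
\max_{\mathbf{c}}\;& c_{t^*}\, U_d^{c}(t^*) + (1 - c_{t^*})\, U_d^{u}(t^*)\\
\text{s.t.}\;& c_{t^*}\, U_a^{c}(t^*) + (1 - c_{t^*})\, U_a^{u}(t^*) \;\ge\; c_{t}\, U_a^{c}(t) + (1 - c_{t})\, U_a^{u}(t) \quad \forall t,\\
& \sum_t c_t \le m, \qquad 0 \le c_t \le 1,
\end{aligned}
\]

and keeps the best solution over all t^*.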
Learning adversary behavior in security games: A PAC model perspective
Recent applications of Stackelberg Security Games (SSG), from wildlife crime
to urban crime, have employed machine learning tools to learn and predict
adversary behavior using available data about defender-adversary interactions.
Given these recent developments, this paper commits to an approach of directly
learning the response function of the adversary. Using the PAC model, this
paper lays a firm theoretical foundation for learning in SSGs (e.g.,
theoretically answering questions about the number of samples required to learn
adversary behavior) and provides utility guarantees when the learned adversary
model is used to plan the defender's strategy. The paper also aims to answer
practical questions such as how much more data is needed to improve an
adversary model's accuracy. Additionally, we explain a recently observed
phenomenon that prediction accuracy of learned adversary behavior is not enough
to discover the utility-maximizing defender strategy. We provide four main
contributions: (1) a PAC model of learning adversary response functions in
SSGs; (2) PAC-model analysis of the learning of key, existing bounded
rationality models in SSGs; (3) an entirely new approach to adversary modeling
based on a non-parametric class of response functions with PAC-model analysis;
and (4) identification of conditions under which computing the best defender
strategy against the learned adversary behavior is indeed the optimal strategy.
Finally, we conduct experiments with real-world data from a national park in
Uganda, showing the benefit of our new adversary modeling approach and
verifying our PAC-model predictions.
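For orientation, two standard forms that this kind of analysis typically builds on, given here as hedged sketches rather than as the paper's own definitions. A SUQR-style bounded-rationality response function over targets i, with coverage x_i, attacker reward R_i, penalty P_i, and learned weights w_1, w_2, w_3:

\[
q_i(\mathbf{x};\mathbf{w}) \;=\; \frac{\exp\!\big(w_1 x_i + w_2 R_i + w_3 P_i\big)}{\sum_j \exp\!\big(w_1 x_j + w_2 R_j + w_3 P_j\big)}.
\]

And the textbook PAC sample-complexity bound for a finite hypothesis class \mathcal{H} in the realizable setting: with

\[
m \;\ge\; \frac{1}{\epsilon}\Big(\ln|\mathcal{H}| + \ln\tfrac{1}{\delta}\Big)
\]

samples, empirical risk minimization returns a hypothesis of error at most \epsilon with probability at least 1-\delta. The paper's actual bounds for SSG response-function classes are not reproduced here.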