Causal Strategic Classification: A Tale of Two Shifts
When users can benefit from certain predictive outcomes, they may be prone to
act to achieve those outcomes, e.g., by strategically modifying their features.
The goal in strategic classification is therefore to train predictive models
that are robust to such behavior. However, the conventional framework assumes
that changing features does not change actual outcomes, which depicts users as
"gaming" the system. Here we remove this assumption, and study learning in a
causal strategic setting where true outcomes do change. Focusing on accuracy as
our primary objective, we show how strategic behavior and causal effects
underlie two complementing forms of distribution shift. We characterize these
shifts, and propose a learning algorithm that balances these two forces over
time and permits end-to-end training. Experiments on synthetic and
semi-synthetic data demonstrate the utility of our approach.
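As a rough illustration of the two shifts, consider a toy linear setting (all names, thresholds, and costs below are illustrative, not from the paper): users pay a quadratic cost to move their features across a published decision boundary, and because one feature is causal for the true outcome, the feature-distribution shift induces an outcome shift as well.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x1 is causal for the true outcome, x2 is not.
n = 1000
X = rng.normal(size=(n, 2))
true_w = np.array([1.0, 0.0])           # outcome depends only on x1
y = (X @ true_w > 0).astype(int)

# A published linear classifier that (mistakenly) also weighs x2.
w, b = np.array([0.5, 0.5]), 0.0

def best_response(X, w, b, cost=0.5):
    """Users cross the boundary when the quadratic movement cost
    is smaller than the (unit) gain from a positive prediction."""
    scores = X @ w - b
    wn = w / np.linalg.norm(w)
    dist = np.maximum(-scores, 0) / np.linalg.norm(w)
    move = (scores < 0) & (cost * dist**2 < 1.0)
    Xr = X.copy()
    Xr[move] += np.outer(dist[move] + 1e-3, wn)   # just past the boundary
    return Xr

X_shift = best_response(X, w, b)               # shift 1: features move
y_shift = (X_shift @ true_w > 0).astype(int)   # shift 2: the causal feature
                                               # moved, so true outcomes move too
print("fraction who moved:", (X_shift != X).any(axis=1).mean())
print("fraction whose true outcome changed:", (y_shift != y).mean())
```

In this sketch, a purely "gaming" model would treat `y` as fixed after the best response; the causal setting makes `y_shift` differ from `y`, which is the second form of distribution shift the abstract refers to.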
The importance of N2 leptogenesis
We argue that fast interactions of the lightest singlet neutrino N_1 would
project part of a preexisting lepton asymmetry onto a direction that is
protected from N_1 washout effects, thus preventing it from being erased. In
particular, we consider an asymmetry generated in N_2 decays, assuming that
N_1 interactions are fast enough to bring N_1 into full thermal
equilibrium. If N_2 decays occur at T ≳ 10^9 GeV, that is, before the
muon Yukawa interactions enter into thermal equilibrium, then generically part
of the asymmetry survives. In this case some of the constraints implied by the
standard N_1 leptogenesis scenario hold only if …. For T ≲ 10^9
GeV, the asymmetry is generally erased, unless special alignment/orthogonality
conditions in flavor space are realized.
Comment: 5 pages. A few clarifications added, conclusions unchanged. Version
published in Phys. Rev. Lett. (Title changed in journal.)
Adaptive KalmanNet: Data-Driven Kalman Filter with Fast Adaptation
Combining the classical Kalman filter (KF) with a deep neural network (DNN)
enables tracking in partially known state space (SS) models. A major limitation
of current DNN-aided designs stems from the need to train them to filter data
originating from a specific distribution and underlying SS model. Consequently,
changes in the model parameters may require lengthy retraining. While the KF
adapts through parameter tuning, the black-box nature of DNNs makes identifying
tunable components difficult. Hence, we propose Adaptive KalmanNet (AKNet), a
DNN-aided KF that can adapt to changes in the SS model without retraining.
Inspired by recent advances in large language model fine-tuning paradigms,
AKNet uses a compact hypernetwork to generate context-dependent modulation
weights. Numerical evaluation shows that AKNet provides consistent state
estimation performance across a continuous range of noise distributions, even
when trained using data from limited noise settings.
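The hypernetwork idea can be sketched in a few lines (a loose illustration under assumed details, not the actual AKNet architecture): a compact hypernetwork maps a context signal, such as an estimated noise level, to multiplicative modulation weights that rescale a frozen main network's hidden features, so adaptation changes only the modulation, not the trained weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: 1-d context, 8 hidden units, 2-d state.
D_CTX, D_HID = 1, 8

# Frozen "main" network (stands in for the trained gain network).
W1 = rng.normal(size=(2, D_HID))
W2 = rng.normal(size=(D_HID, 2))

# Compact hypernetwork: context -> per-unit modulation weights.
H = rng.normal(size=(D_CTX, D_HID)) * 0.1

def modulated_forward(x, context):
    gamma = 1.0 + np.tanh(context @ H)   # context-dependent scaling
    h = np.tanh(x @ W1) * gamma          # modulate the frozen features
    return h @ W2

x = rng.normal(size=(1, 2))
low_noise = modulated_forward(x, np.array([[0.1]]))
high_noise = modulated_forward(x, np.array([[5.0]]))
print(low_noise.shape, high_noise.shape)
```

The same frozen `W1`/`W2` thus produce different filtering behavior for different contexts, which is the mechanism that lets such a design adapt across noise distributions without retraining the main network.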