
    Causal Strategic Classification: A Tale of Two Shifts

    When users can benefit from certain predictive outcomes, they may be prone to act to achieve those outcomes, e.g., by strategically modifying their features. The goal in strategic classification is therefore to train predictive models that are robust to such behavior. However, the conventional framework assumes that changing features does not change actual outcomes, which depicts users as "gaming" the system. Here we remove this assumption and study learning in a causal strategic setting where true outcomes do change. Focusing on accuracy as our primary objective, we show how strategic behavior and causal effects underlie two complementary forms of distribution shift. We characterize these shifts, and propose a learning algorithm that balances these two forces over time and permits end-to-end training. Experiments on synthetic and semi-synthetic data demonstrate the utility of our approach.
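    The two shifts described above can be illustrated with a toy sketch (this is not the paper's algorithm; the linear classifier, quadratic-cost best response, and causal outcome model below are all illustrative assumptions): users near the decision boundary modify their features, which shifts the feature distribution, and because features here causally determine the true label, the outcome distribution shifts as well.

```python
import numpy as np

# Toy sketch of causal strategic classification (illustrative, not the
# paper's method): users best-respond to a linear classifier, and their
# feature changes causally change the true outcome.

rng = np.random.default_rng(0)

def strategic_response(X, w, b, cost=2.0):
    """Best response to a linear classifier: move along w just far
    enough to cross the boundary, but only when the required move
    (signed distance) is positive and smaller than the cost budget."""
    scores = X @ w + b
    wn = np.linalg.norm(w)
    dist = -scores / wn                      # signed distance to boundary
    move = (dist > 0) & (dist < cost)        # only worthwhile moves
    X_new = X.copy()
    X_new[move] += np.outer(dist[move] + 1e-3, w / wn)
    return X_new

w_true = np.array([1.0, 0.5])                # causal outcome model
X = rng.normal(size=(500, 2))
w, b = np.array([1.0, 0.5]), -0.5            # deployed classifier

X_shift = strategic_response(X, w, b)        # covariate shift
y_before = (X @ w_true > 0.0).astype(int)
y_after = (X_shift @ w_true > 0.0).astype(int)  # outcomes genuinely change
```

In the non-causal ("gaming") framework, `y_after` would be forced to equal `y_before`; removing that assumption is exactly what makes the second, outcome-level shift appear.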

    The importance of N2 leptogenesis

    We argue that fast interactions of the lightest singlet neutrino N_1 would project part of a preexisting lepton asymmetry L_p onto a direction that is protected from N_1 washout effects, thus preventing it from being erased. In particular, we consider an asymmetry generated in N_2 decays, assuming that N_1 interactions are fast enough to bring N_1 into full thermal equilibrium. If N_1 decays occur at T ≳ 10^9 GeV, that is, before the muon Yukawa interactions enter into thermal equilibrium, then generically part of L_p survives. In this case some of the constraints implied by the standard N_1 leptogenesis scenario hold only if L_p ≈ 0. For T ≲ 10^9 GeV, L_p is generally erased, unless special alignment/orthogonality conditions in flavor space are realized. Comment: 5 pages. A few clarifications added, conclusions unchanged. Version published in Phys. Rev. Lett. (Title changed in journal.)
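    The projection argument can be written schematically. Treating the lepton asymmetry as a vector in flavor space, with \hat{\ell}_1 a unit vector along the flavor combination washed out by N_1 (the notation here is illustrative, not taken from the abstract), the part that survives fast N_1 interactions is the orthogonal component:

```latex
\[
  L_p^{\perp} \;=\; L_p - \big(\hat{\ell}_1 \cdot L_p\big)\,\hat{\ell}_1 ,
\]
% The parallel component (\hat{\ell}_1 \cdot L_p)\,\hat{\ell}_1 is erased
% by N_1 washout, while L_p^{\perp} is protected -- provided it remains a
% distinct flavor direction, i.e., before muon Yukawa interactions
% equilibrate and rotate the relevant flavor basis.
```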

    Adaptive KalmanNet: Data-Driven Kalman Filter with Fast Adaptation

    Combining the classical Kalman filter (KF) with a deep neural network (DNN) enables tracking in partially known state space (SS) models. A major limitation of current DNN-aided designs stems from the need to train them to filter data originating from a specific distribution and underlying SS model. Consequently, changes in the model parameters may require lengthy retraining. While the KF adapts through parameter tuning, the black-box nature of DNNs makes identifying tunable components difficult. Hence, we propose Adaptive KalmanNet (AKNet), a DNN-aided KF that can adapt to changes in the SS model without retraining. Inspired by recent advances in large language model fine-tuning paradigms, AKNet uses a compact hypernetwork to generate context-dependent modulation weights. Numerical evaluation shows that AKNet provides consistent state estimation performance across a continuous range of noise distributions, even when trained using data from limited noise settings.
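    The core idea of context-dependent modulation can be sketched in miniature (this is a toy illustration, not AKNet's architecture: the scalar random-walk model, the innovation-power context feature, and the one-parameter `hypernet` function below are all assumptions): a small auxiliary function observes a context statistic and rescales the Kalman gain, so the main filter adapts to a mismatched noise level without any retraining.

```python
import numpy as np

# Toy sketch of hypernetwork-style gain modulation (not AKNet itself):
# a scalar Kalman filter with an assumed measurement noise r_assumed is
# run on data whose true noise is larger; a tiny "hypernetwork" maps
# recent innovation power to a gain-scaling factor.

def kalman_step(x, P, z, q, r_assumed, gain_scale=1.0):
    # predict (random-walk state model)
    x_pred, P_pred = x, P + q
    # update with a modulated Kalman gain
    K = gain_scale * P_pred / (P_pred + r_assumed)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

def hypernet(context):
    # hypothetical one-parameter "hypernetwork": shrink the gain when
    # the observed innovation power exceeds the assumed noise level
    return 1.0 / (1.0 + 0.5 * max(context - 1.0, 0.0))

rng = np.random.default_rng(1)
true_r, assumed_r = 4.0, 1.0      # model mismatch the filter must absorb
x_true, x, P = 0.0, 0.0, 1.0
innovations = []
for _ in range(200):
    x_true += rng.normal(scale=0.1)                 # process noise q = 0.01
    z = x_true + rng.normal(scale=np.sqrt(true_r))  # noisy measurement
    context = np.mean([i**2 for i in innovations[-20:]]) if innovations else 1.0
    scale = hypernet(context / assumed_r)           # context-dependent weight
    innovations.append(z - x)
    x, P = kalman_step(x, P, z, q=0.01, r_assumed=assumed_r, gain_scale=scale)
```

The design point this mimics is that only the small modulation function depends on context; the filter equations themselves stay fixed, which is why no retraining of the main model is needed when the noise distribution drifts.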