Combining the classical Kalman filter (KF) with a deep neural network (DNN)
enables tracking in partially known state space (SS) models. A major limitation
of current DNN-aided designs stems from the need to train them to filter data
originating from a specific distribution and underlying SS model. Consequently,
changes in the model parameters may require lengthy retraining. While the KF
adapts through parameter tuning, the black-box nature of DNNs makes identifying
tunable components difficult. Hence, we propose Adaptive KalmanNet (AKNet), a
DNN-aided KF that can adapt to changes in the SS model without retraining.
Inspired by recent advances in fine-tuning paradigms for large language models,
AKNet uses a compact hypernetwork to generate context-dependent modulation
weights.
weights. Numerical evaluation shows that AKNet provides consistent state
estimation performance across a continuous range of noise distributions, even
when trained using data from limited noise settings
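The core mechanism can be sketched as follows: a small hypernetwork maps a scalar noise context to multiplicative weights that rescale the frozen layers of the main filtering DNN, so adaptation changes only the modulation, never the trained weights. This is a minimal NumPy illustration; the dimensions, the single-scalar context, and the row-wise scaling scheme are all simplifying assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4-dim state, 16 hidden units in the gain DNN.
STATE_DIM, HIDDEN = 4, 16

# Frozen weights of the main DNN (trained once, never retrained).
W1 = rng.normal(size=(HIDDEN, STATE_DIM))
W2 = rng.normal(size=(STATE_DIM, HIDDEN))

# Compact hypernetwork: maps a scalar noise context (e.g. a log noise ratio)
# to per-row multiplicative modulation weights for both layers.
Wh = rng.normal(size=(HIDDEN + STATE_DIM, 1)) * 0.1
bh = np.ones(HIDDEN + STATE_DIM)  # bias of 1 -> identity modulation at context 0

def modulation(context):
    """Context-dependent modulation weights, one gain per weight-matrix row."""
    return Wh @ np.atleast_1d(context) + bh

def gain_dnn(x, context):
    """Main DNN with its frozen rows rescaled by the hypernetwork output."""
    m = modulation(context)
    h = np.tanh((m[:HIDDEN, None] * W1) @ x)
    return (m[HIDDEN:, None] * W2) @ h

x = rng.normal(size=STATE_DIM)
# At context 0 the modulation is all ones, recovering the original network.
assert np.allclose(gain_dnn(x, 0.0), W2 @ np.tanh(W1 @ x))
```

Because only the tiny hypernetwork depends on the noise context, shifting the context at test time adapts the filter without touching, or retraining, the large frozen network.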