Fall accidents are a critical issue in aging societies. Recently, many researchers have developed deep learning-based pre-impact fall detection systems to support wearable fall protection systems and prevent severe injuries. However, most works employ only simple neural network models rather than more complex ones, owing to the limited resources of mobile devices and strict latency requirements. In this work, we propose a novel pre-impact fall detection system based on CNN-ViT knowledge distillation, namely PreFallKD, to strike a balance between detection performance and computational complexity. PreFallKD transfers detection knowledge from a pre-trained teacher model (a vision transformer) to a student model (a lightweight convolutional neural network). Additionally, we apply data augmentation techniques to mitigate data imbalance. We conduct experiments on the public KFall dataset and compare PreFallKD with other state-of-the-art models. The results show that PreFallKD boosts the student model during the testing phase, achieving a reliable F1-score of 92.66% and a lead time of 551.3 ms.
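
To illustrate the general teacher-student distillation idea referenced above, the sketch below shows a standard soft-target distillation objective (a KL-divergence term on temperature-softened logits combined with cross-entropy on the ground-truth labels). The `teacher`/`student` modules, temperature `T`, and weight `alpha` are illustrative placeholders, not the paper's actual configuration or loss.

```python
# Minimal sketch of teacher-student knowledge distillation (not the paper's exact setup).
# Assumes `teacher` is a pre-trained ViT-style classifier and `student` is a lightweight CNN,
# both mapping an IMU window to fall / non-fall logits; T and alpha are placeholder values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def train_step(student, teacher, batch, optimizer):
    x, labels = batch                   # x: sensor window, labels: fall / ADL class
    with torch.no_grad():
        teacher_logits = teacher(x)     # teacher stays frozen during distillation
    student_logits = student(x)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```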