Recent achievements in language models have showcased their extraordinary
capabilities in bridging visual information with semantic language
understanding. This leads us to a novel question: can language models connect
textual semantics with IoT sensory signals to perform recognition tasks, e.g.,
Human Activity Recognition (HAR)? If so, an intelligent HAR system with
human-like cognition can be built, capable of adapting to new environments and
unseen categories. This paper explores its feasibility with an innovative
approach, IoT-sEnsors-language alignmEnt pre-Training (TENT), which jointly
aligns textual embeddings with IoT sensor signals, including camera video,
LiDAR, and mmWave. Through IoT-language contrastive learning, we derive a
unified semantic feature space that aligns multi-modal features with language
embeddings, so that each IoT input corresponds to the specific words that
describe it. To enhance the connection between textual categories and their
IoT data, we propose supplementary descriptions and learnable prompts that
bring more semantic information into the joint feature space. TENT can not only
recognize actions that have been seen but also ``guess'' unseen actions via
the closest textual words in the feature space. We demonstrate that TENT achieves
state-of-the-art performance on zero-shot HAR tasks using different modalities,
outperforming the best vision-language models by over 12%.
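
As a rough, hypothetical sketch of the core idea rather than the paper's actual implementation, a CLIP-style symmetric contrastive alignment between IoT sensor embeddings and text embeddings, followed by zero-shot recognition via the nearest class-description embedding, could be expressed as follows; the encoders, dimensions, and class prompts below are placeholders.

```python
# Minimal sketch, assuming CLIP-style contrastive alignment between IoT sensor
# embeddings and text embeddings; encoder outputs are simulated with random tensors.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(sensor_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (sensor, text) embeddings."""
    sensor_emb = F.normalize(sensor_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = sensor_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(sensor_emb.size(0))         # positive pairs on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


def zero_shot_predict(sensor_emb, class_text_emb):
    """Assign each sensor sample to the class whose text embedding is closest."""
    sensor_emb = F.normalize(sensor_emb, dim=-1)
    class_text_emb = F.normalize(class_text_emb, dim=-1)
    return (sensor_emb @ class_text_emb.t()).argmax(dim=-1)


# Toy usage: random features stand in for sensor- and text-encoder outputs.
batch, dim, num_classes = 8, 512, 5
loss = contrastive_alignment_loss(torch.randn(batch, dim), torch.randn(batch, dim))
preds = zero_shot_predict(torch.randn(batch, dim), torch.randn(num_classes, dim))
```

In this kind of setup, unseen categories are handled by encoding their textual descriptions (or learnable prompts) into the shared feature space and selecting the closest one, which is the mechanism by which the abstract's ``guess'' over unseen actions would operate.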