Despite an exciting new wave of multimodal machine learning models, current
approaches still struggle to interpret the complex contextual relationships
between the different modalities present in videos. Going beyond existing
methods that emphasize simple activities or objects, we propose a new
model-agnostic approach for generating detailed textual descriptions that
capture multimodal video information. Our method leverages the extensive
knowledge learnt by large language models, such as GPT-3.5 or Llama2, to reason
about textual descriptions of the visual and aural modalities, obtained from
BLIP-2, Whisper and ImageBind. Without needing additional finetuning of
video-text models or datasets, we demonstrate that available LLMs have the
ability to use these multimodal textual descriptions as proxies for "sight"
or "hearing" and perform zero-shot multimodal classification of videos
in-context. Our evaluations on popular action recognition benchmarks, such as
UCF-101 or Kinetics, show these context-rich descriptions can be successfully
used in video understanding tasks. This method points towards a promising new
research direction in multimodal classification, demonstrating how an interplay
between textual, visual and auditory machine learning models can enable more
holistic video understanding.

Comment: Accepted at "What is Next in Multimodal Foundation Models?" (MMFM)
workshop at ICCV 202
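The in-context setup the abstract describes, where an LLM receives per-modality textual descriptions and classifies the video zero-shot, could be sketched as simple prompt assembly. All names, inputs, and prompt wording below are hypothetical illustrations, not the paper's actual prompts:

```python
def build_prompt(caption, transcript, labels):
    """Assemble a zero-shot classification prompt from per-modality text.

    `caption` stands in for a BLIP-2 visual description and `transcript`
    for a Whisper speech transcript; `labels` are candidate action classes
    (e.g. from UCF-101). This wording is an assumed example, not the
    paper's prompt template.
    """
    label_list = ", ".join(labels)
    return (
        "You are given textual descriptions of a video.\n"
        f"Visual description: {caption}\n"
        f"Audio transcript: {transcript}\n"
        f"Classify the video as one of: {label_list}.\n"
        "Answer with the label only."
    )

# Example usage with toy inputs standing in for model outputs.
prompt = build_prompt(
    caption="a person swings a tennis racket on an outdoor court",
    transcript="great shot!",
    labels=["TennisSwing", "Basketball", "Typing"],
)
print(prompt)
```

The resulting string would then be sent to an LLM such as GPT-3.5 or Llama2; because only text crosses the model boundary, the approach stays model-agnostic and needs no video-text finetuning, as the abstract notes.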