Large language models (LLMs), with their remarkable conversational
capabilities, have demonstrated impressive performance across various
applications and have emerged as formidable AI assistants. This naturally
raises an intuitive question: Can we harness the power of LLMs to build
multimodal AI assistants for visual applications? Recently, several multimodal
models have been developed for this purpose. They typically pre-train an
adaptation module to align the semantics of the vision encoder and language
model, followed by fine-tuning on instruction-following data. However, despite
the success of this pipeline in image and language understanding, its
effectiveness in joint video and language understanding has not been widely
explored. In this paper, we aim to develop a novel multimodal foundation model
capable of comprehending video, image, and language within a general framework.
To achieve this goal, we introduce Valley, a Video Assistant with Large
Language model Enhanced abilitY. Valley consists of an LLM, a temporal
modeling module, a visual encoder, and a simple projection module designed to
bridge the visual and textual modalities. To empower Valley with video comprehension and
instruction-following capabilities, we construct a video instruction dataset
and adopt a two-stage tuning procedure to train it. Specifically, we employ
ChatGPT to facilitate the construction of task-oriented conversation data
encompassing various tasks, including multi-shot captions, long video
descriptions, action recognition, causal relationship inference, etc.
Subsequently, we adopt a pre-training-then-instruction-tuning pipeline to align
visual and textual modalities and improve the instruction-following capability
of Valley. Qualitative experiments demonstrate that Valley has the potential to
function as a highly effective video assistant that simplifies complex video
understanding scenarios.
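
The architecture outlined above (a visual encoder, a temporal modeling module, a projection module, and an LLM) can be pictured with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the paper's implementation: the module choices, toy dimensions, and the simple scheme of prepending projected visual tokens to the text embeddings are placeholders for the real components.

```python
import torch
import torch.nn as nn

class ValleySketch(nn.Module):
    """Toy sketch of a Valley-style video-language assistant.

    Assumes per-frame features are precomputed by a frozen visual encoder;
    dimensions and sub-modules are illustrative, not the actual Valley design.
    """

    def __init__(self, vis_dim=768, llm_dim=512, vocab_size=1000):
        super().__init__()
        # Temporal modeling over the frame axis (one transformer layer here).
        self.temporal = nn.TransformerEncoderLayer(
            d_model=vis_dim, nhead=8, batch_first=True
        )
        # Projection that bridges visual features into the LLM token space.
        self.projector = nn.Linear(vis_dim, llm_dim)
        # LLM stand-in: token embedding + a small transformer + LM head.
        self.tok_emb = nn.Embedding(vocab_size, llm_dim)
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(llm_dim, vocab_size)

    def forward(self, frame_feats, text_ids):
        # frame_feats: (batch, num_frames, vis_dim) from a frozen vision encoder
        # text_ids:    (batch, seq_len) tokenized instruction text
        fused = self.temporal(frame_feats)              # temporal fusion across frames
        visual_tokens = self.projector(fused)           # (batch, num_frames, llm_dim)
        text_tokens = self.tok_emb(text_ids)            # (batch, seq_len, llm_dim)
        # Prepend projected visual tokens to the text sequence and decode.
        hidden = self.llm(torch.cat([visual_tokens, text_tokens], dim=1))
        return self.lm_head(hidden)                     # next-token logits


model = ValleySketch()
logits = model(torch.randn(1, 8, 768), torch.randint(0, 1000, (1, 16)))
print(logits.shape)  # torch.Size([1, 24, 1000])
```

In this toy setup, only the projector (and optionally the temporal layer and LLM) would be trained, mirroring the two-stage idea of first aligning visual features with the language model and then instruction-tuning on conversation data.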