In this paper, we present an unsupervised learning framework for analyzing
activities and interactions in surveillance videos. In our framework, three
levels of video events are connected by a Hierarchical Dirichlet Process (HDP)
model: low-level visual features, simple atomic activities, and multi-agent
interactions. Atomic activities are represented as distributions of low-level
features, while complex interactions are represented as distributions of
atomic activities. The learning process is unsupervised. Given a training
video sequence, low-level visual features are extracted based on optical flow
and then clustered into different atomic activities, and video clips are
clustered into different interactions. The HDP model automatically decides the
number of clusters, i.e., the categories of atomic activities and interactions. Based on
the learned atomic activities and interactions, a training dataset is generated
to train a Gaussian Process (GP) classifier. The trained GP models then operate
on newly captured video to classify interactions and detect abnormal events in
real time. Furthermore, the temporal dependencies between video events, learned
by an HDP-Hidden Markov Model (HDP-HMM), are effectively integrated into the GP
classifier to enhance classification accuracy on newly captured videos. Our
framework couples the benefits of the generative model (HDP) with those of the
discriminative model (GP). We provide detailed experiments showing that our
framework achieves favorable real-time performance on video event
classification in a crowded traffic scene.
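The claim that the HDP determines the number of clusters automatically rests on the clustering behavior of the underlying Dirichlet process. As a minimal, self-contained illustration (not the paper's implementation; all names here are hypothetical), the Chinese Restaurant Process view of a Dirichlet process shows how the number of clusters emerges from the data size and the concentration parameter rather than being fixed in advance:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Simulate a Chinese Restaurant Process: item i joins an existing
    cluster with probability proportional to that cluster's current size,
    or opens a new cluster with probability proportional to alpha."""
    rng = random.Random(seed)
    counts = []  # counts[k] = number of items assigned to cluster k
    for i in range(n):
        # total unnormalized weight: i items seated so far, plus alpha
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1  # join existing cluster k
                break
        else:
            counts.append(1)  # open a new cluster
    return counts

clusters = crp_partition(1000, alpha=2.0)
# the number of clusters grows roughly as alpha * log(n),
# so it adapts to the data instead of being specified up front
print(len(clusters), sum(clusters))
```

In the HDP the same mechanism operates at two levels (shared topics across documents, here atomic activities shared across video clips), which is what lets the model infer both the number of atomic activities and the number of interaction categories.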