3 research outputs found

    Snatch theft detection in unconstrained surveillance videos using action attribute modelling

    In a city with hundreds of cameras and thousands of daily interactions among people, manually identifying crimes such as chain and purse snatching is a tedious and challenging task. Snatch thefts are complex actions composed of attributes such as walking and running, which are affected by actor and view variations. To capture the variation in these attributes across diverse scenarios, we propose to model snatch thefts using a Gaussian mixture model (GMM) with a large number of mixtures, known as a universal attribute model (UAM). However, the number of snatch thefts typically recorded in surveillance videos is not sufficient to train the parameters of the UAM. Hence, we use large human action datasets such as UCF101 and HMDB51 to train the UAM, as many of the actions in these datasets share attributes with snatch thefts. A super-vector representation for each snatch theft clip is then obtained by maximum a posteriori (MAP) adaptation of the universal attribute model. However, super-vectors are high-dimensional and contain many redundant attributes that do not contribute to snatch thefts, so we propose to use factor analysis to obtain a low-dimensional representation, called an action-vector, that retains only the relevant attributes. For evaluation, we introduce a video dataset called Snatch 1.0, created from many hours of surveillance footage obtained from different traffic cameras in the city of Hyderabad, India. We show that snatch thefts can be identified better with action-vectors than with existing state-of-the-art feature representations.
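
    To make the pipeline concrete, here is a minimal sketch in Python using scikit-learn: a GMM stands in for the UAM, its means are MAP-adapted to each clip to form a super-vector, and factor analysis compresses the super-vectors into low-dimensional action-vectors. The feature dimensionality, mixture count, relevance factor, and randomly generated features are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import FactorAnalysis

# 1. Train the universal attribute model (UAM): a GMM fitted on frame-level
#    features pooled from large action datasets (e.g. UCF101/HMDB51).
#    Random features stand in for real descriptors here.
rng = np.random.default_rng(0)
background_feats = rng.standard_normal((5000, 64))
uam = GaussianMixture(n_components=32, covariance_type='diag', random_state=0)
uam.fit(background_feats)

def map_adapted_supervector(clip_feats, uam, relevance=16.0):
    """MAP-adapt the UAM means to one clip and stack them into a super-vector."""
    resp = uam.predict_proba(clip_feats)          # (T, K) posteriors per frame
    n_k = resp.sum(axis=0)                        # soft counts per mixture
    f_k = resp.T @ clip_feats                     # (K, D) first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]    # relevance-based adaptation weights
    ex = np.divide(f_k, n_k[:, None],
                   out=np.zeros_like(f_k), where=n_k[:, None] > 0)
    adapted_means = alpha * ex + (1.0 - alpha) * uam.means_
    return adapted_means.ravel()                  # K*D-dimensional super-vector

# 2. Build super-vectors for a set of clips, then compress them with
#    factor analysis into low-dimensional "action-vectors".
supervectors = np.stack([
    map_adapted_supervector(rng.standard_normal((120, 64)), uam)
    for _ in range(40)
])
fa = FactorAnalysis(n_components=10, random_state=0)
action_vectors = fa.fit_transform(supervectors)   # (40, 10)
print(action_vectors.shape)
```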

    Proposing an Analysis System to Monitoring Weightlifting Based on Training (Snatch and Clean and Jerk)

    Analysis systems are very important for individual athletes in weightlifting, where assessment of a lifter's technique and strength is central to performance. This paper proposes an analytical method for weightlifters based on frame-by-frame video. The system identifies the seven major positions of both the snatch and the clean and jerk in video recordings of training sessions. Hu moments are computed for pairs of frames in the video, and the Euclidean distance between their Hu moment values and lifting moment values is used to track the lift through the snatch and clean and jerk movements. The results show that the proposed system delivers efficient and accurate movement analysis for monitoring weightlifters during training.
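
    A minimal sketch of the frame-comparison step described above, using OpenCV: compute the seven Hu moment invariants for consecutive grayscale frames and score their difference with the Euclidean distance. The video file name, the log-scaling of the Hu moments, and the distance threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def hu_vector(gray_frame):
    """Return the 7 Hu moment invariants of a grayscale frame, log-scaled
    so that values of very different magnitudes become comparable."""
    hu = cv2.HuMoments(cv2.moments(gray_frame)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

cap = cv2.VideoCapture('snatch_training.mp4')   # hypothetical training video
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # Euclidean distance between the Hu moment vectors of two frames.
    d = np.linalg.norm(
        hu_vector(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        - hu_vector(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    )
    if d > 2.0:   # assumed threshold marking a transition between positions
        print('possible position change, distance =', d)
    prev = frame
cap.release()
```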

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of the fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, meanwhile, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: the first proposes a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures; the second presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; the last provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, owing to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets well known in the state of the art, showing remarkable results compared to current literature methods.
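
    A minimal sketch of a two-branch stacked LSTM for skeleton-based action recognition, loosely following the second module's description, written in Python with PyTorch. The choice of branch inputs (raw joint positions vs. frame-to-frame motion), the layer sizes, and the late-fusion scheme are assumptions for illustration, not the thesis architecture.

```python
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    def __init__(self, n_joints=18, hidden=128, n_classes=10):
        super().__init__()
        in_dim = n_joints * 2                      # (x, y) per 2D joint
        # Each branch is a 2-layer ("stacked") LSTM over the sequence.
        self.pose_branch = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.motion_branch = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, skel):                       # skel: (B, T, n_joints*2)
        motion = skel[:, 1:] - skel[:, :-1]        # frame-to-frame differences
        _, (h_pose, _) = self.pose_branch(skel)
        _, (h_motion, _) = self.motion_branch(motion)
        # Concatenate the last hidden state of each branch, then classify.
        fused = torch.cat([h_pose[-1], h_motion[-1]], dim=1)
        return self.classifier(fused)

model = TwoBranchLSTM()
clip = torch.randn(4, 30, 36)                      # 4 clips, 30 frames, 18 joints
print(model(clip).shape)                           # torch.Size([4, 10])
```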