14,935 research outputs found

    Violent demonstrations

    An automatic human shape-motion analysis method based on a fusion architecture is proposed for human action recognition in videos. Robust shape-motion features are extracted from the detection and tracking of human points. The features are combined within the Transferable Belief Model (TBM) framework for action recognition. The TBM-based modelling and fusion process takes into account the imprecision, uncertainty and conflict inherent to the features. Action recognition is performed by a multilevel analysis, and the resulting action sequencing is exploited to extract feedback information that improves the tracking results. The system is tested on real videos of athletics meetings to recognize four types of jumps: high jump, pole vault, triple jump and long jump.
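    To make the fusion step concrete, here is a minimal sketch of the unnormalized conjunctive combination rule used in the TBM, applied to two toy mass functions defined over the four jump types; the function name and the mass values are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """Unnormalized conjunctive rule of the TBM.

    m1, m2: dicts mapping frozenset focal elements to mass values.
    Mass assigned to the empty set is kept and measures conflict.
    """
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b  # may be the empty set, i.e. conflict
        combined[inter] = combined.get(inter, 0.0) + ma * mb
    return combined

# Toy example over a frame of jump types (illustrative values only)
frame = frozenset({"high", "pole", "triple", "long"})
m_shape  = {frozenset({"high", "pole"}): 0.6, frame: 0.4}   # belief from shape features
m_motion = {frozenset({"triple"}): 0.7, frame: 0.3}         # belief from motion features
fused = conjunctive_combination(m_shape, m_motion)
print(fused[frozenset()])  # 0.42: mass on the empty set, i.e. conflict between the two sources
```

    Mass left on the empty set after combination quantifies the conflict that the TBM-based fusion process explicitly keeps track of.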

    Human Shape-Motion Analysis in Athletics Videos for Coarse-to-Fine Action/Activity Recognition Using Transferable Belief Model

    We present an automatic human shape-motion analysis method based on a fusion architecture for human action and activity recognition in athletics videos. Robust shape and motion features are extracted from human detection and tracking. The features are combined within the Transferable Belief Model (TBM) framework for two levels of recognition. The TBM-based modelling of the fusion process takes into account the imprecision, uncertainty and conflict inherent to the features. First, in a coarse step, actions are roughly recognized. Then, in a fine step, an action sequence recognition method is used to discriminate activities. Beliefs on actions are smoothed by a Temporal Credal Filter, and action sequences, i.e. activities, are recognized using a TBM-based state machine called the belief scheduler. The belief scheduler is also exploited to extract feedback information that improves the tracking results. The system is tested on real videos of athletics meetings to recognize four types of actions (running, jumping, falling and standing) and four types of activities (high jump, pole vault, triple jump and long jump). Results on actions, activities and feedback demonstrate the relevance of the proposed features as well as the efficiency of the proposed TBM-based recognition approach.

    Belief Scheduler based on model failure detection in the TBM framework. Application to human activity recognition.

    A tool called the Belief Scheduler is proposed for state sequence recognition in the Transferable Belief Model (TBM) framework. This tool smooths noisy temporal belief functions using a Temporal Evidential Filter (TEF). The Belief Scheduler smooths beliefs on states, separates the states (assumed to be true or false) and synchronizes them in order to infer the sequence. A criterion is also provided to assess the appropriateness between observed belief functions and a given sequence model. This criterion is based on the conflict information that appears explicitly in the TBM when combining observed belief functions with predictions. The Belief Scheduler is part of a generic architecture developed for on-line and automatic human action and activity recognition in videos of athletics taken with a moving camera. In experiments, the system is assessed on a database of 69 real athletics video sequences. The goal is to automatically recognize running, jumping, falling and standing-up actions as well as high jump, pole vault, triple jump and long jump activities of an athlete. A comparison with Hidden Markov Models for video classification is also provided.
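    As a rough illustration of the conflict-based criterion mentioned above, the sketch below combines a predicted belief function with an observed one and flags a failure of the sequence model when too much mass falls on the empty set; the threshold, the function names and the mass values are assumptions made for illustration, and this is not the paper's Temporal Evidential Filter.

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """Unnormalized conjunctive rule; mass left on the empty set measures conflict."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + ma * mb
    return out

def sequence_model_fails(prediction, observation, threshold=0.5):
    """Flag a model failure when prediction and observation are too conflicting
    (the 0.5 threshold is a hypothetical value, not taken from the paper)."""
    fused = conjunctive_combination(prediction, observation)
    conflict = fused.get(frozenset(), 0.0)
    return conflict > threshold, conflict

# Toy example: the sequence model predicts "jumping" while the observation supports "falling"
states = frozenset({"running", "jumping", "falling", "standing"})
pred = {frozenset({"jumping"}): 0.8, frozenset({"jumping", "falling"}): 0.2}
obs  = {frozenset({"falling"}): 0.9, states: 0.1}
failed, conflict = sequence_model_fails(pred, obs)
print(failed, conflict)  # True, 0.72: the assumed state sequence no longer fits the observations
```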

    Vision-based human action recognition using machine learning techniques

    The focus of this thesis is on automatic recognition of human actions in videos. Human action recognition is defined as the automatic understanding of which actions a human performs in a video. This is a difficult problem due to many challenges including, but not limited to, variations in human shape and motion, occlusion, cluttered backgrounds, moving cameras, illumination conditions, and viewpoint variations. To start with, the most popular and prominent state-of-the-art techniques are reviewed, evaluated, compared, and presented. Based on this literature review, the techniques are categorized into handcrafted feature-based and deep learning-based approaches. The proposed action recognition framework builds on both categories, embedding novel algorithms for action recognition in the handcrafted and deep learning domains. First, a new method based on the handcrafted approach is presented. This method addresses one of the major challenges, viewpoint variations, by introducing a novel feature descriptor for multiview human action recognition. The descriptor employs region-based features extracted from the human silhouette. The proposed approach is simple and achieves state-of-the-art results without compromising the efficiency of the recognition process, which shows its suitability for real-time applications. Second, two innovative methods based on the deep learning approach are presented to go beyond the limitations of handcrafted features. The first method uses transfer learning, taking a pre-trained deep learning model as the source architecture for human action recognition. It is experimentally confirmed that a deep Convolutional Neural Network model already trained on a large-scale annotated dataset is transferable to the action recognition task with a limited training dataset; the comparative analysis also confirms its superior accuracy over handcrafted feature-based methods on the same datasets. The second method is based on an unsupervised deep learning approach, employing Deep Belief Networks (DBNs) with restricted Boltzmann machines for action recognition in unconstrained videos. This method automatically extracts a suitable feature representation, without any prior knowledge, using an unsupervised deep learning model. Its effectiveness is confirmed by high recognition results on the challenging UCF Sports dataset. Finally, the thesis concludes with important discussions and research directions in the area of human action recognition.
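    The transfer-learning method described above can be illustrated with a minimal fine-tuning sketch: a CNN pre-trained on ImageNet is reused as a fixed feature extractor and only a new classification head is trained on the target action classes. The choice of PyTorch/torchvision, ResNet-18, the number of classes and the dummy batch are assumptions made for illustration, not the exact models or datasets used in the thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

num_actions = 10  # assumed number of target action classes

# Reuse an ImageNet-pretrained CNN and freeze its convolutional feature extractor
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new, trainable head for the action classes
model.fc = nn.Linear(model.fc.in_features, num_actions)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of video frames (N, 3, 224, 224)
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_actions, (8,))
logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```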

    A Taxonomy of Deep Convolutional Neural Nets for Computer Vision

    Traditional architectures for solving computer vision problems, and the degree of success they enjoyed, have been heavily reliant on hand-crafted features. However, of late, deep learning techniques have offered a compelling alternative -- that of automatically learning problem-specific features. With this new paradigm, every problem in computer vision is now being re-examined from a deep learning perspective. Therefore, it has become important to understand what kind of deep networks are suitable for a given problem. Although general surveys of this fast-moving paradigm (i.e. deep networks) exist, a survey specific to computer vision is missing. We specifically consider one form of deep networks widely used in computer vision: convolutional neural networks (CNNs). We start with "AlexNet" as our base CNN and then examine the broad variations proposed over time to suit different applications. We hope that our recipe-style survey will serve as a guide, particularly for novice practitioners intending to use deep-learning techniques for computer vision. Comment: Published in Frontiers in Robotics and AI (http://goo.gl/6691Bm)
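    For readers unfamiliar with the survey's starting point, here is a compact AlexNet-style skeleton (five convolutional layers followed by three fully connected layers) written in PyTorch; it mirrors the standard AlexNet layer configuration but is an illustrative sketch, not code from the survey.

```python
import torch.nn as nn

class AlexNetStyle(nn.Module):
    """AlexNet-style CNN: five convolutional layers, then three fully connected layers."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):  # x: (N, 3, 224, 224)
        x = self.features(x)
        return self.classifier(x.flatten(1))
```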