2 research outputs found

    A new framework of human interaction recognition based on multiple stage probability fusion

    Visual-based human interactive behavior recognition is a challenging research topic in computer vision. Current interaction recognition algorithms suffer from important problems such as overly complex feature representation and inaccurate feature extraction caused by wrong human body segmentation. To address these problems, a novel human interaction recognition method based on multiple stage probability fusion is proposed in this paper. Taking the moment of bodily contact in the interaction as a cut-off point, the interaction process is divided into three stages: the start stage, the execution stage and the end stage. In the start and end stages, when there is no contact between the two persons, each person's motion is extracted and recognized separately; in the execution stage, the two persons' motion is extracted and recognized as a whole. The final recognition result is obtained by weighted fusion of the probabilities from the different stages. The proposed method not only simplifies feature extraction and representation, but also avoids incorrect feature extraction caused by occlusion. Experimental results on the UT-Interaction dataset demonstrate that the proposed method performs better than other recent interaction recognition methods.
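
    As a rough illustration of the stage-wise fusion idea described in this abstract (a minimal sketch, not the authors' implementation: the class labels, per-stage probabilities and fusion weights below are assumptions), the snippet combines the class probabilities obtained in the start, execution and end stages by a fixed weighted sum and picks the highest-scoring interaction class.

    import numpy as np

    # Hypothetical interaction classes and per-stage class probabilities for
    # one clip. In the paper, the start/end stages score the persons' motions
    # separately while the execution stage scores the pair as a whole; the
    # numbers here are made up purely for illustration.
    CLASSES = ["handshake", "hug", "kick", "punch", "push"]

    stage_probs = {
        "start":     np.array([0.40, 0.25, 0.10, 0.15, 0.10]),
        "execution": np.array([0.55, 0.20, 0.05, 0.10, 0.10]),
        "end":       np.array([0.35, 0.30, 0.10, 0.15, 0.10]),
    }

    # Assumed fusion weights for the three stages (chosen to sum to 1).
    stage_weights = {"start": 0.3, "execution": 0.4, "end": 0.3}

    def fuse_stage_probabilities(probs, weights):
        """Weighted fusion of per-stage class probability vectors."""
        fused = sum(weights[s] * p for s, p in probs.items())
        return fused / fused.sum()  # renormalise to a probability distribution

    fused = fuse_stage_probabilities(stage_probs, stage_weights)
    print("fused probabilities:", dict(zip(CLASSES, fused.round(3))))
    print("predicted interaction:", CLASSES[int(np.argmax(fused))])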

    Recognising human interaction from videos by a discriminative model

    This study addresses the problem of recognising human interactions between two people. The main difficulties lie in the partial occlusion of body parts and the motion ambiguity in interactions. The authors observed that the interdependencies existing at both the action level and the body-part level can greatly help disambiguate similar individual movements and facilitate human interaction recognition. Accordingly, they proposed a novel discriminative method, which models each person's action by a large-scale global feature and local body-part features, to capture such interdependencies for recognising the interaction of two people. A variant of the multi-class AdaBoost method is proposed to automatically discover class-specific discriminative three-dimensional body parts. The proposed approach is tested on the authors' newly introduced BIT-Interaction dataset and the UT-Interaction dataset. The results show that the proposed model is quite effective in recognising human interactions.
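
    As a loose sketch of the kind of boosting-style weighted voting such a discriminative model implies (the body parts, classifier scores and weights below are hypothetical, not the authors' formulation), the snippet combines a global-feature classifier with classifiers on a few discriminative body parts and takes the class with the largest combined score.

    import numpy as np

    # Hypothetical interaction classes, plus class scores from a global-feature
    # classifier and from classifiers on selected body parts. The parts, scores
    # and weights are assumptions used only to illustrate weighted voting.
    CLASSES = ["handshake", "hug", "kick", "punch"]

    part_scores = {
        "global":    np.array([0.50, 0.20, 0.15, 0.15]),
        "right_arm": np.array([0.60, 0.10, 0.10, 0.20]),
        "torso":     np.array([0.30, 0.40, 0.15, 0.15]),
    }

    # Assumed per-classifier weights, e.g. as a boosting procedure might assign
    # larger weights to more reliable classifiers.
    alphas = {"global": 1.0, "right_arm": 0.8, "torso": 0.5}

    def weighted_vote(scores, alphas):
        """Combine per-classifier class scores by weighted voting."""
        combined = sum(alphas[name] * s for name, s in scores.items())
        return CLASSES[int(np.argmax(combined))]

    print("predicted interaction:", weighted_vote(part_scores, alphas))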