
    Trajectory-based Human Action Recognition

    Human activity recognition has been an active research topic for some time; its many challenges make the task both hard and exciting. Sparse representation became popular over the past decade or so: sparse methods represent a video by a set of independent features, and the features used in the literature are usually low-level ones. Trajectories, as mid-level features, capture the motion of the scene, which is discriminative in most cases. Trajectories have also proven useful for aligning small neighbourhoods before computing traditional descriptors; in fact, trajectory-aligned descriptors show better discriminative power than the trajectory shape descriptors proposed in the literature. However, trajectories had not been investigated thoroughly, and their full potential had not been put to the test before this work. This thesis examines trajectories, defines better trajectory shape descriptors, and finally augments trajectories with disparity information. It formally defines three trajectory extraction methods, namely interest point trajectories (IP), Lucas-Kanade based trajectories (LK), and Farnebäck optical flow based trajectories (FB), and evaluates their discriminative power for the human activity recognition task. Our tests reveal that LK and FB produce similarly reliable results, although FB performs slightly better in particular scenarios; these experiments establish which method is suitable for the subsequent tests. The thesis also proposes a better trajectory shape descriptor, a superset of the descriptors existing in the literature; the evaluation reveals the superior discriminative power of this newly introduced descriptor. Finally, the thesis proposes a method to augment trajectories with disparity information. Disparity is relatively easy to extract from a stereo image pair, and it captures the 3D structure of the scene.
    This is the first time that disparity information has been fused with trajectories for human activity recognition. To test these ideas, a dataset of 27 activities performed by eleven actors was recorded and hand-labelled. The tests demonstrate the discriminative power of trajectories: the proposed disparity-augmented trajectories improve the discriminative power of traditional dense trajectories by about 3.11%.
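The thesis does not spell out its descriptor here, but the standard trajectory shape descriptor it builds on (the dense-trajectory style) can be sketched as the sequence of frame-to-frame displacement vectors, normalized by the sum of their magnitudes. This is a minimal illustrative sketch, not the thesis's extended descriptor:

```python
import numpy as np

def trajectory_shape_descriptor(points):
    """Classic normalized-displacement shape descriptor: the sequence of
    frame-to-frame displacements, divided by the sum of their magnitudes
    so the descriptor is invariant to the overall scale of the motion."""
    pts = np.asarray(points, dtype=float)       # (L+1, 2) tracked points
    disp = np.diff(pts, axis=0)                 # (L, 2) displacement vectors
    norm = np.sum(np.linalg.norm(disp, axis=1)) # total path length
    if norm < 1e-9:                             # static trajectory: all zeros
        return disp.ravel()
    return (disp / norm).ravel()                # flattened (2L,) descriptor
```

For a trajectory of L+1 points this yields a 2L-dimensional vector whose displacement magnitudes sum to one, which is what makes trajectories of different speeds comparable.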
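One plausible way to augment such a descriptor with disparity, as the abstract describes, is to sample the disparity map along the trajectory and append the normalized disparity changes. The function below is a hypothetical sketch of that idea (the thesis's actual fusion scheme is not given in the abstract; the sampling and normalization choices here are assumptions):

```python
import numpy as np

def augment_with_disparity(points, disparity_map):
    """Hypothetical sketch: append a normalized disparity-change profile,
    sampled along the trajectory, to the 2D shape descriptor, so the
    descriptor also reflects motion in depth."""
    pts = np.asarray(points, dtype=float)
    disp2d = np.diff(pts, axis=0)
    norm2d = np.sum(np.linalg.norm(disp2d, axis=1)) or 1.0
    shape_desc = (disp2d / norm2d).ravel()
    # Sample the disparity map at the rounded (x, y) trajectory coordinates.
    xs = np.clip(pts[:, 0].round().astype(int), 0, disparity_map.shape[1] - 1)
    ys = np.clip(pts[:, 1].round().astype(int), 0, disparity_map.shape[0] - 1)
    d = disparity_map[ys, xs].astype(float)
    # Normalize the frame-to-frame disparity changes the same way.
    d_norm = np.sum(np.abs(np.diff(d))) or 1.0
    depth_desc = np.diff(d) / d_norm
    return np.concatenate([shape_desc, depth_desc])  # (2L + L,) vector
```

A trajectory of L+1 points thus produces a 3L-dimensional descriptor: 2L values for image-plane motion and L values for the depth profile.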

    The things you do: Implicit person models guide online action observation.

    Experiments 1a, b, and d from Chapter Two have been published in a peer-reviewed journal: Schenke, K. C., Wyer, N. A., & Bach, P. (2016). The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation. PLoS One, 11(7), e0158910. http://dx.doi.org/10.1371/journal.pone.0158910. Full version: access restricted permanently due to 3rd party copyright restrictions; restriction set on 16.03.2017 by SC, Graduate School.
    Social perception is dynamic and ambiguous. Whilst previous research favoured bottom-up views in which observed actions are matched to higher-level (or motor) representations, recent accounts suggest top-down processes in which prior knowledge guides perception of others’ actions in a predictive manner. This thesis investigated how person-specific models of others’ typical behaviour in different situations are reactivated when those people are re-encountered and predict their actions, using strictly controlled computer-based action identification tasks, event-related potentials (ERPs), and recordings of participants’ actions via motion tracking (using the Microsoft Kinect sensor). The findings provide evidence that knowledge about a seen actor’s typical behaviour is used in action observation. First, actions were identified faster when performed by an actor who typically performed them than when performed by an actor who performed them only rarely (Chapters Two and Three). These effects were specific to meaningful actions with objects, not withdrawals from them, and were accompanied by action-related ERP responses (oERN, observer-related error negativity). Moreover, they occurred even though current actor identity was not relevant to the task, and were largely independent of participants’ ability to report the individual’s behaviour.
    Second, the findings suggested that these predictive person models are embodied, such that they influenced the observers’ own motor systems even when the relevant actors were not seen acting (Chapter Four). Finally, evidence for these person models was found when naturalistic responding was required: participants had to use their feet to ‘block’ an incoming ball (measured by the Microsoft Kinect sensor), and they made earlier and more pronounced movements when the observed actor behaved according to their usual action patterns (Chapter Five). The findings are discussed with respect to recent predictive coding theories of social perception, and a new model is proposed that integrates them.

    A Real-Time System for Motion Retrieval and Interpretation

    This paper proposes a new exemplar-based method for real-time human motion recognition using Motion Capture (MoCap) data. We formalize streamed recognizable actions, coming from an online MoCap engine, into a motion graph similar to an animation motion graph. This graph is used as an automaton both to recognize known actions and to add new ones. We define a spatio-temporal metric for similarity measurement, which yields more accurate feedback on classification. The proposed method has the advantage of being linear and incremental, making the recognition process very fast and the addition of a new action straightforward. Furthermore, actions can be recognized, with a score, even before they are fully completed. Thanks to the use of a skeleton-centric coordinate system, our recognition method is view-invariant. We have successfully tested our action recognition method on both synthetic and real data, and we have compared our results with four state-of-the-art methods on three well-known datasets for human action recognition. In particular, the comparisons clearly show the advantage of our method through better recognition rates.
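The incremental, before-completion scoring the abstract describes can be illustrated with a toy sketch: each known action is an exemplar sequence of pose vectors, an incoming stream is compared frame by frame, and a ranked score is available at any time. The class below is an illustrative assumption, not the paper's method; in particular, it uses plain Euclidean frame distance in place of the paper's spatio-temporal metric and motion-graph automaton:

```python
import numpy as np

class IncrementalMatcher:
    """Toy incremental exemplar matching: accumulates a per-template cost as
    frames stream in, so a best-guess label and score exist before the
    action is fully completed."""

    def __init__(self, templates):
        # templates: {action_name: sequence of pose vectors}
        self.templates = {k: np.asarray(v, float) for k, v in templates.items()}
        self.t = 0                                 # frames seen so far
        self.cost = {k: 0.0 for k in templates}    # accumulated distances

    def push(self, frame):
        """Consume one streamed pose frame; O(#templates) per frame."""
        frame = np.asarray(frame, float)
        for name, tpl in self.templates.items():
            idx = min(self.t, len(tpl) - 1)        # clamp if stream outruns template
            self.cost[name] += np.linalg.norm(frame - tpl[idx])
        self.t += 1

    def best(self):
        """Current best label and its average per-frame cost (lower is better)."""
        denom = max(self.t, 1)
        name = min(self.cost, key=lambda k: self.cost[k] / denom)
        return name, self.cost[name] / denom
```

Because the cost is a running sum, recognition stays linear in the stream length, and adding a new action is just adding one more template — the two properties the abstract highlights.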