
    View Invariant Gesture Recognition using the CSEM SwissRanger SR-2 Camera

    No full text
    This paper introduces the use of range information acquired by a CSEM SwissRanger SR-2 camera for view-invariant recognition of one- and two-arm gestures. The range data enable motion detection and a 3D representation of gestures. Motion is detected using double-difference range images and filtered with a hysteresis bandpass filter. Gestures are represented by concatenating harmonic shape contexts over time. This representation allows for view-invariant matching of the gestures. The system is trained on gestures from one viewpoint and evaluated on gestures from other viewpoints. The results show a recognition rate of 93.75%.
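
    As a rough illustration of the motion-detection step described above, the Python sketch below computes a double-difference mask from three consecutive range frames and applies a hysteresis rule. The threshold values, the use of the difference magnitude as the filtered quantity, and the flood-fill hysteresis step are assumptions for illustration, not the paper's parameters.

    import numpy as np

    def double_difference(prev_frame, curr_frame, next_frame, low=0.02, high=0.05):
        """Binary motion mask from three consecutive range frames.

        A pixel is marked as moving only if it changes in both the
        (prev, curr) and (curr, next) differences; hysteresis keeps weak
        responses (above `low`) only where they touch strong responses
        (above `high`). Thresholds are illustrative, not the paper's.
        """
        d1 = np.abs(curr_frame - prev_frame)
        d2 = np.abs(next_frame - curr_frame)
        diff = np.minimum(d1, d2)        # double difference: change in both frame pairs

        strong = diff > high             # confident motion pixels
        weak = diff > low                # candidate motion pixels

        # Hysteresis: grow the strong mask into the weak mask (4-connected).
        mask = strong.copy()
        prev_count = -1
        while mask.sum() != prev_count:
            prev_count = mask.sum()
            grown = mask.copy()
            grown[1:, :] |= mask[:-1, :]
            grown[:-1, :] |= mask[1:, :]
            grown[:, 1:] |= mask[:, :-1]
            grown[:, :-1] |= mask[:, 1:]
            mask = grown & weak
        return mask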

    Motion primitives for action recognition

    Get PDF
    Abstract. The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed, based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the action as a sequence of temporally isolated instances, denoted primitives. Each primitive is defined by four features extracted from motion images. The primitives are recognized in each frame by a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise, and the results show recognition rates of 88.7% and 85.5%, respectively.
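
    The Python sketch below illustrates the final classification step: an observed string of recognized primitive symbols is matched against one template string per action with an edit-distance measure, and the closest action wins. The uniform costs and the template strings are placeholders; the probabilistic weighting used in the paper is not reproduced here.

    def edit_distance(observed, template, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
        """Dynamic-programming edit distance between two symbol sequences."""
        n, m = len(observed), len(template)
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * del_cost
        for j in range(1, m + 1):
            dp[0][j] = j * ins_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = 0.0 if observed[i - 1] == template[j - 1] else sub_cost
                dp[i][j] = min(dp[i - 1][j - 1] + match,
                               dp[i - 1][j] + del_cost,
                               dp[i][j - 1] + ins_cost)
        return dp[n][m]

    def classify_action(observed, action_templates):
        """Return the action whose primitive template is closest to the observed string."""
        return min(action_templates, key=lambda a: edit_distance(observed, action_templates[a]))

    # Hypothetical primitive alphabet and templates, purely for illustration.
    templates = {"wave": "ABCD", "point": "AAB", "clap": "CDDC"}
    print(classify_action("ABBCD", templates))   # -> "wave"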

    Action Recognition in Semi-synthetic Images using Motion Primitives

    No full text
    This report presents an action recognition approach based on motion primitives. A few characteristic time instances are found in a sequence containing an action, and the action is classified from these instances. The characteristic instances are defined solely by the human motion, hence motion primitives. The motion primitives are extracted using double-difference images and represented by four features. In each frame the primitive, if any, that best explains the observed data is identified. This leads to a discrete recognition problem, since a video sequence is converted into a string of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The method is evaluated on five one-arm gestures. A test with semi-synthetic input data achieves a recognition rate of 96.5%.

    1 System overview. This technical report describes the details of an action recognition approach based on motion primitives. The approach does not rely on first reconstructing the human and the pose of his/her limbs and then performing recognition on joint-angle data; instead, recognition is done directly on the image data. We find a few characteristic time instances in an image sequence of a person performing an action, and the action is classified from these instances.
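
    The Python sketch below illustrates, under assumed feature prototypes, how per-frame feature vectors could be mapped to primitive symbols and pruned into a string before the Edit Distance classification; the actual four features, the matching rule, and the pruning criterion used in the report may differ.

    import numpy as np

    def frame_to_primitive(features, prototypes, max_dist=1.0):
        """Return the symbol of the closest primitive prototype, or None if all are too far."""
        best_symbol, best_dist = None, max_dist
        for symbol, proto in prototypes.items():
            dist = np.linalg.norm(np.asarray(features) - np.asarray(proto))
            if dist < best_dist:
                best_symbol, best_dist = symbol, dist
        return best_symbol

    def prune(symbols):
        """Collapse consecutive duplicates and drop frames with no primitive."""
        pruned = []
        for s in symbols:
            if s is not None and (not pruned or pruned[-1] != s):
                pruned.append(s)
        return "".join(pruned)

    # Hypothetical prototypes: one four-dimensional feature vector per primitive symbol.
    prototypes = {"A": [0.1, 0.2, 0.0, 0.3], "B": [0.8, 0.1, 0.4, 0.0]}
    frames = [[0.1, 0.2, 0.0, 0.3], [0.12, 0.19, 0.02, 0.31], [0.9, 0.1, 0.4, 0.0], [5, 5, 5, 5]]
    print(prune([frame_to_primitive(f, prototypes) for f in frames]))   # -> "AB"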

    Tracking of Individuals in Very Long Video Sequences

    No full text
    Abstract. In this paper we present an approach for automatically detecting and tracking humans in very long video sequences. Detection is based on background subtraction using a multi-mode codeword method. We enhance this method both in terms of representation and in terms of automatic background updating, allowing it to handle gradual and rapid changes. Tracking is conducted by building appearance-based models and matching them over time. Tests show promising detection and tracking results on a ten-hour video sequence.
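
    A much-simplified Python sketch of per-pixel codeword background modelling on grayscale frames is given below: each pixel keeps a small set of intensity codewords, and a new value is foreground when it matches none of them. The tolerance, the matched-codeword update, and the omission of colour and of the paper's enhanced update scheme for gradual and rapid changes are all simplifications for illustration.

    import numpy as np

    class CodewordBackground:
        def __init__(self, shape, tol=10.0):
            self.tol = tol
            # One list of intensity codewords per pixel.
            self.codewords = [[[] for _ in range(shape[1])] for _ in range(shape[0])]

        def train(self, frame):
            """Add a background frame's values to the per-pixel codeword sets."""
            for y in range(frame.shape[0]):
                for x in range(frame.shape[1]):
                    v = float(frame[y, x])
                    words = self.codewords[y][x]
                    for i, w in enumerate(words):
                        if abs(v - w) <= self.tol:
                            words[i] = 0.9 * w + 0.1 * v   # slowly adapt a matched codeword
                            break
                    else:
                        words.append(v)                    # unseen value: new background mode

        def foreground(self, frame):
            """Binary mask: True where a pixel matches no background codeword."""
            mask = np.zeros(frame.shape, dtype=bool)
            for y in range(frame.shape[0]):
                for x in range(frame.shape[1]):
                    v = float(frame[y, x])
                    mask[y, x] = all(abs(v - w) > self.tol for w in self.codewords[y][x])
            return mask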

    Action Recognition using Motion Primitives

    No full text
    Abstract. The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed, based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the action as a sequence of temporally isolated instances, denoted primitives. Each primitive is defined by four features extracted from motion images. The primitives are recognized in each frame by a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise, and the results show recognition rates of 88.7% and 85.5%, respectively.