
    Detect-and-Track: Efficient Pose Estimation in Videos

    This paper addresses the problem of estimating and tracking human body keypoints in complex, multi-person video. We propose an extremely lightweight yet highly effective approach that builds upon the latest advancements in human detection and video understanding. Our method operates in two stages: keypoint estimation in frames or short clips, followed by lightweight tracking to link keypoint predictions over the entire video. For frame-level pose estimation we experiment with Mask R-CNN, as well as our own proposed 3D extension of this model, which leverages temporal information over small clips to generate more robust frame predictions. We conduct extensive ablative experiments on the newly released multi-person video pose estimation benchmark, PoseTrack, to validate various design choices of our model. Our approach achieves an accuracy of 55.2% on the validation set and 51.8% on the test set using the Multi-Object Tracking Accuracy (MOTA) metric, and achieves state-of-the-art performance on the ICCV 2017 PoseTrack keypoint tracking challenge.
    Comment: In CVPR 2018. Ranked first in the ICCV 2017 PoseTrack challenge (keypoint tracking in videos). Code: https://github.com/facebookresearch/DetectAndTrack and webpage: https://rohitgirdhar.github.io/DetectAndTrack
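The second stage described above, linking per-frame keypoint detections into tracks, can be sketched as a greedy frame-to-frame assignment by keypoint distance. This is a simplified stand-in for the paper's lightweight tracker, not its exact method; the cost function and distance threshold here are illustrative assumptions:

```python
# Greedy frame-to-frame linking of pose detections into tracks.
# Each detection is a list of (x, y) keypoints; the matching cost is the
# mean Euclidean distance between corresponding keypoints.
# Illustrative simplification, not the paper's exact tracker.

def mean_keypoint_distance(a, b):
    """Mean Euclidean distance between two equal-length keypoint lists."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def link_tracks(frames, max_dist=20.0):
    """frames: list of per-frame detection lists. Returns per-frame track IDs."""
    next_id = 0
    prev = []          # (track_id, keypoints) pairs from the previous frame
    all_ids = []
    for dets in frames:
        ids = [None] * len(dets)
        used = set()
        # Sort all candidate (detection, previous-track) pairs by cost and
        # greedily accept the cheapest compatible matches.
        order = sorted(
            ((mean_keypoint_distance(d, kp), i, tid)
             for i, d in enumerate(dets)
             for tid, kp in prev),
            key=lambda t: t[0])
        for dist, i, tid in order:
            if dist <= max_dist and ids[i] is None and tid not in used:
                ids[i] = tid
                used.add(tid)
        for i in range(len(dets)):
            if ids[i] is None:   # unmatched detection starts a new track
                ids[i] = next_id
                next_id += 1
        prev = list(zip(ids, dets))
        all_ids.append(ids)
    return all_ids
```

For example, two people who swap list order between frames keep their track IDs as long as their keypoints move less than `max_dist` pixels.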

    Human-activity-centered measurement system: challenges from laboratory to the real environment in assistive gait wearable robotics

    Assistive gait wearable robots (AGWR) have shown great advancement in the development of intelligent devices to assist humans in their activities of daily living (ADLs). Rapid technological advancement in sensing, actuators, materials and computational intelligence has sped up this development towards more practical and smart AGWR. However, most assistive gait wearable robots are still confined to being controlled and assessed indoors, within laboratory environments, limiting their potential to provide the real assistance and rehabilitation that humans require in real environments. Gait assessment parameters play an important role not only in evaluating patient progress and assistive-device performance but also in controlling smart, self-adaptable AGWR in real time. Self-adaptable wearable robots must interactively conform to changing environments and to different users to provide optimal functionality and comfort. This paper discusses the performance parameters, such as comfort, safety, adaptability, and energy consumption, which are required for the development of an intelligent AGWR for outdoor environments. The challenges of measuring these parameters with current systems for data collection and analysis, using vision capture and wearable sensors, are presented and discussed.

    Lane Change Classification and Prediction with Action Recognition Networks

    Anticipating the lane change intentions of surrounding vehicles is crucial for efficient and safe driving decision making in an autonomous driving system. Previous works often adopt physical variables such as driving speed and acceleration for lane change classification. However, physical variables do not contain semantic information. Although 3D CNNs have been developing rapidly, few methods utilise action recognition models and appearance features for lane change recognition, and those that do all require additional information to pre-process the data. In this work, we propose an end-to-end framework including two action recognition methods for lane change recognition, using video data collected by cameras. Our method achieves the best lane change classification results using only the RGB video data of the PREVENTION dataset. Class activation maps demonstrate that action recognition models can efficiently extract lane change motions. A method to better extract motion cues is also proposed in this paper.
    Comment: Accepted by ECC
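Class activation maps of the kind used above are commonly computed by weighting the final convolutional feature maps with the classifier weights of the class of interest. A minimal NumPy sketch (the shapes and the `class_activation_map` helper are invented for illustration; the paper's exact CAM variant may differ):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a class activation map.

    features:   (C, H, W) feature maps from the last conv layer.
    fc_weights: (num_classes, C) weights of the final linear classifier.
    Returns an (H, W) map, normalized to [0, 1], highlighting the spatial
    regions that drive the score of class `class_idx`.
    """
    # Weighted sum of the C channel maps, contracting the channel axis.
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # scale to [0, 1] for visualization
    return cam
```

Upsampled to the input resolution and overlaid on a video frame, such a map shows where the network attends when it predicts a lane change.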