469 research outputs found

    Beyond just keeping hands on the wheel: Towards visual interpretation of driver hand motion patterns

    Observing hand activity in the car provides a rich set of patterns relating to vehicle maneuvering, secondary tasks, driver distraction, and driver intent inference. This work strives to develop a vision-based framework for analyzing such patterns in real time. First, hands are detected and tracked from a monocular camera. This provides position information of the left and right hands with no intrusion over long, naturalistic drives. Second, the motion trajectories are studied in settings of activity recognition, prediction, and higher-level semantic categorization.
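    As an illustration of the trajectory-analysis step described above, the following is a minimal sketch of how windows of tracked left/right hand positions could be turned into features and fed to a generic classifier. The window layout, feature choices, and activity labels are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: classifying hand-motion trajectory windows into activities.
# Assumes per-frame (x, y) positions for the left and right hands are already
# available from a detector/tracker; features and labels are illustrative.
import numpy as np
from sklearn.svm import SVC

def trajectory_features(window):
    """window: (T, 4) array of [lx, ly, rx, ry] per frame."""
    velocity = np.diff(window, axis=0)            # frame-to-frame motion
    return np.concatenate([
        window.mean(axis=0),                      # mean hand positions
        window.std(axis=0),                       # positional spread
        np.abs(velocity).mean(axis=0),            # average speed per coordinate
    ])

def train_activity_classifier(trajectories, labels):
    """trajectories: list of (T, 4) windows; labels: e.g. 'wheel', 'gear', 'radio'."""
    X = np.stack([trajectory_features(w) for w in trajectories])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```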

    Fast and Robust Object Detection Using Visual Subcategories

    Object classes generally contain large intra-class variation, which poses a challenge to object detection schemes. In this work, we study visual subcategorization as a means of capturing appearance variation. First, training data is clustered using color and gradient features. Second, the clustering is used to learn an ensemble of models that capture visual variation due to varying orientation, truncation, and occlusion degree. Fast object detection is achieved with integral image features and pixel lookup features. The framework is studied in the context of vehicle detection on the challenging KITTI dataset.
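    A minimal sketch of the subcategorization idea in this abstract: training crops are clustered on color and gradient (HOG) features, and one linear detector is trained per cluster. The feature design, cluster count, and classifier are assumptions for illustration; the paper itself uses integral image and pixel lookup features with its own learning setup.

```python
# Sketch of visual subcategorization: cluster positive training crops by
# appearance (color histogram + HOG), then train one linear detector per cluster.
import numpy as np
from skimage.feature import hog
from skimage.color import rgb2gray
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def appearance_features(crop):
    """crop: HxWx3 uint8 RGB array, resized to a fixed size beforehand."""
    color_hist = np.concatenate(
        [np.histogram(crop[..., c], bins=16, range=(0, 255))[0] for c in range(3)]
    )
    grad = hog(rgb2gray(crop), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([color_hist / color_hist.sum(), grad])

def train_subcategory_detectors(pos_crops, neg_crops, n_subcategories=8):
    pos_feats = np.stack([appearance_features(c) for c in pos_crops])
    neg_feats = np.stack([appearance_features(c) for c in neg_crops])
    # Cluster positives into visual subcategories (orientation, truncation, occlusion).
    clusters = KMeans(n_clusters=n_subcategories).fit_predict(pos_feats)
    detectors = []
    for k in range(n_subcategories):
        X = np.vstack([pos_feats[clusters == k], neg_feats])
        y = np.concatenate([np.ones((clusters == k).sum()), np.zeros(len(neg_feats))])
        detectors.append(LinearSVC().fit(X, y))   # one detector per subcategory
    return detectors
```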

    Detecting Hands in Egocentric Videos: Towards Action Recognition

    Recently, there has been a growing interest in analyzing human daily activities from data collected by wearable cameras. Since the hands are involved in a vast set of daily tasks, detecting hands in egocentric images is an important step towards the recognition of a variety of egocentric actions. However, hand detection is not a trivial task: besides extreme illumination changes in egocentric images, hand appearance is intrinsically highly variable. We propose a hand detector that exploits skin modeling for fast hand proposal generation and Convolutional Neural Networks for hand recognition. We tested our method on the UNIGE-HANDS dataset and showed that the proposed approach achieves competitive hand detection results.
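    The two-stage design (skin-based proposals followed by CNN verification) can be sketched as below. The YCrCb skin thresholds, the morphology step, and the classifier interface are illustrative assumptions rather than the parameters of the published detector.

```python
# Sketch: a skin-colour model proposes hand regions cheaply, and a CNN
# classifier (stand-in callable here) accepts or rejects each proposal.
import cv2
import numpy as np

# Rough skin range in YCrCb; would normally be learned/adapted per dataset.
SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

def skin_proposals(frame_bgr, min_area=900):
    """Return bounding boxes (x, y, w, h) of skin-coloured blobs."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def detect_hands(frame_bgr, cnn_is_hand):
    """cnn_is_hand: any callable crop -> bool, standing in for the CNN verifier."""
    boxes = []
    for x, y, w, h in skin_proposals(frame_bgr):
        crop = frame_bgr[y:y + h, x:x + w]
        if cnn_is_hand(crop):
            boxes.append((x, y, w, h))
    return boxes
```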

    Understanding head and hand activities and coordination in naturalistic driving videos

    In this work, we propose a vision-based analysis framework for recognizing in-vehicle activities such as interactions with the steering wheel, the instrument cluster, and the gear. The framework leverages two views for activity analysis: a camera looking at the driver's hand and another looking at the driver's head. The proposed techniques can be used by researchers to extract 'mid-level' information from video, that is, information that represents some semantic understanding of the scene but may still require an expert to distinguish difficult cases or to leverage the cues for drive analysis. In contrast, 'low-level' video is large in quantity and cannot be used unless it is processed in its entirety by an expert. This work aims to minimize manual labor so that researchers can better benefit from the accessibility of the data and perform larger-scale studies.
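    One way to read the two-view setup is as late fusion of per-frame predictions from a hand-facing and a head-facing model. The sketch below assumes each view already produces per-frame class scores; the label set and the weighting are hypothetical, not taken from the paper.

```python
# Minimal late-fusion sketch for the two-view setup: per-frame class scores
# from a hand-view model and a head-view model are combined into one
# mid-level activity label per frame.
import numpy as np

ACTIVITIES = ["wheel", "gear", "instrument_cluster"]  # example label set

def fuse_views(hand_scores, head_scores, hand_weight=0.6):
    """hand_scores, head_scores: (T, K) per-frame class scores from each view."""
    fused = hand_weight * hand_scores + (1.0 - hand_weight) * head_scores
    labels = fused.argmax(axis=1)                 # pick the strongest fused class
    return [ACTIVITIES[i] for i in labels]
```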

    Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis

    Automotive systems provide a unique opportunity for mobile vision technologies to improve road safety by understanding and monitoring the driver. In this work, we propose a real-time framework for early detection of driver maneuvers. This would allow for better behavior prediction and, therefore, the development of more effective advanced driver assistance and warning systems. Cues are extracted from an array of sensors observing the driver (head, hand, and foot), the environment (lane and surrounding vehicles), and the ego-vehicle state (speed, steering angle, etc.). Evaluation is performed on a real-world dataset with overtaking maneuvers, showing promising results. To gain better insight into the processes that characterize driver behavior, temporally discriminative cues are studied and visualized.
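    The cue-fusion step lends itself to a simple sketch: per-frame features from the driver, surround, and ego-vehicle sensors are stacked over a temporal window and passed to a classifier that flags an upcoming overtake. The window length, cue layout, and classifier choice are assumptions for illustration.

```python
# Sketch of early maneuver detection from multi-sensor cues: driver (head/hand/
# foot), surround (lane, vehicles) and ego-vehicle (speed, steering) features
# are stacked over a temporal window and classified as overtake vs. normal.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(driver, surround, ego, t, window=30):
    """Each input is a (T, d_i) per-frame cue array; returns features at frame t."""
    sl = slice(t - window, t)
    return np.concatenate([driver[sl].ravel(), surround[sl].ravel(), ego[sl].ravel()])

def train_maneuver_detector(sequences, labels, window=30):
    """sequences: list of (driver, surround, ego) tuples; labels: 1 = overtake, 0 = normal."""
    X = np.stack([window_features(d, s, e, t=len(d), window=window)
                  for d, s, e in sequences])
    return LogisticRegression(max_iter=1000).fit(X, np.asarray(labels))
```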