13,430 research outputs found

    Drive Video Analysis for the Detection of Traffic Near-Miss Incidents

    Because of their recent introduction, self-driving cars and advanced driver assistance system (ADAS) equipped vehicles have had little opportunity to learn the dangerous traffic scenarios (including near-miss incidents) that provide normal drivers with strong motivation to drive safely. Accordingly, as a means of providing learning depth, this paper presents a novel traffic database that contains information on a large number of traffic near-miss incidents that were obtained by mounting driving recorders in more than 100 taxis over the course of a decade. The study makes the following two main contributions: (i) In order to assist automated systems in detecting near-miss incidents based on database instances, we created a large-scale traffic near-miss incident database (NIDB) that consists of video clips of dangerous events captured by monocular driving recorders. (ii) To illustrate the applicability of NIDB traffic near-miss incidents, we provide two primary database-related improvements: parameter fine-tuning using various near-miss scenes from NIDB, and incorporation of foreground/background separation into the motion representation. Then, using our new database in conjunction with a monocular driving recorder, we developed a near-miss recognition method that provides automated systems with a performance level that is comparable to a human-level understanding of near-miss incidents (64.5% vs. 68.4% at near-miss recognition, 61.3% vs. 78.7% at near-miss detection).
    Comment: Accepted to ICRA 201
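    The abstract does not describe how the foreground/background separation step is implemented; the sketch below (OpenCV, with a hypothetical clip name and placeholder parameters) only illustrates how moving foreground could be separated from the static background of a driving-recorder clip to form a crude motion representation. It is not the NIDB pipeline itself.

```python
# Minimal sketch (not the paper's method): separate moving foreground from the
# static background of a driving-recorder clip and keep the foreground masks
# as a crude motion representation. File name and parameters are placeholders.
import cv2

cap = cv2.VideoCapture("near_miss_clip.mp4")   # hypothetical driving-recorder clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

motion_masks = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)          # 255 = moving foreground
    fg_mask = cv2.medianBlur(fg_mask, 5)       # suppress sensor noise
    motion_masks.append(fg_mask)               # stack of per-frame motion maps

cap.release()
# motion_masks could then accompany the RGB frames as input to a fine-tuned classifier.
```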

    Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models, and detail the issues we came across during our preliminary experiments.
    Comment: Proceedings of the First International Conference on Deep Learning and Music, Anchorage, US, May 2017 (arXiv:1706.08675v1 [cs.NE])
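    The network itself is not specified in the abstract; as a rough illustration of a 3D CNN that pools spatially but deliberately avoids temporal pooling (so every frame keeps its own onset score), here is a minimal PyTorch sketch with invented layer sizes, not the architecture from the paper.

```python
# Illustrative only: a tiny 3D CNN that never pools over time, so the time
# axis survives to the output and each frame gets its own onset logit.
# Layer sizes are invented; this is not the model from the paper.
import torch
import torch.nn as nn

class OnsetNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool space only, never time
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),    # collapse space, keep all frames
        )
        self.classifier = nn.Conv3d(32, 1, kernel_size=1)  # per-frame onset logit

    def forward(self, clip):                       # clip: (batch, 3, T, H, W)
        x = self.features(clip)
        return self.classifier(x).squeeze()        # (batch, T) onset logits

scores = OnsetNet3D()(torch.randn(2, 3, 16, 64, 64))   # -> shape (2, 16)
```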

    Summarizing First-Person Videos from Third Persons' Points of Views

    Video highlighting and summarization are interesting topics in computer vision that benefit a variety of applications such as viewing, searching, and storage. However, most existing studies rely on training data of third-person videos, which do not generalize easily to highlighting first-person videos. With the goal of deriving an effective model to summarize first-person videos, we propose a novel deep neural network architecture for describing and discriminating vital spatiotemporal information across videos with different points of view. Our proposed model is realized in a semi-supervised setting, in which fully annotated third-person videos, unlabeled first-person videos, and a small number of annotated first-person ones are presented during training. In our experiments, qualitative and quantitative evaluations on both benchmarks and our collected first-person video datasets are presented.
    Comment: 16+10 pages, ECCV 201
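    The abstract does not specify how the three data sources enter the training objective; the sketch below only illustrates, in generic terms, how supervised terms on the annotated third-person and first-person videos could be combined with an unsupervised term on the unlabeled first-person clips. The consistency term here is a placeholder assumption for illustration, not the authors' design.

```python
# Rough sketch only: one generic way to combine the three data sources from
# the abstract into a semi-supervised objective. The actual losses and the
# unlabeled-data term used in the paper are not specified here.
import torch
import torch.nn.functional as F

def training_loss(model, third_person, first_labeled, first_unlabeled, lam=0.1):
    # Supervised highlight loss on fully annotated third-person clips.
    clips_3p, labels_3p = third_person
    loss_3p = F.binary_cross_entropy_with_logits(model(clips_3p), labels_3p)

    # Supervised loss on the small annotated first-person set.
    clips_1p, labels_1p = first_labeled
    loss_1p = F.binary_cross_entropy_with_logits(model(clips_1p), labels_1p)

    # Placeholder unsupervised term on unlabeled first-person video:
    # encourage consistent highlight scores under two random augmentations.
    view_a, view_b = first_unlabeled
    loss_u = F.mse_loss(torch.sigmoid(model(view_a)), torch.sigmoid(model(view_b)))

    return loss_3p + loss_1p + lam * loss_u
```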

    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to improving accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to the baseline SVM and neural network models.
    Comment: INTERSPEECH 201
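    The exact form of the polynomial fusion layer is not given in the abstract; the PyTorch sketch below only illustrates one way first-order terms and pairwise second-order cross-modal terms could be combined before a classifier. All feature dimensions are placeholders, and the formulation in the paper may differ.

```python
# Hedged sketch of a second-order (polynomial) fusion of three modality
# feature vectors (face, speech, car signals). Dimensions are placeholders;
# this is not necessarily the layer described in the paper.
import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    def __init__(self, d_face=64, d_speech=64, d_car=16, d_out=2):
        super().__init__()
        # Linear (first-order) contribution of each modality.
        self.linear = nn.Linear(d_face + d_speech + d_car, d_out)
        # Second-order contribution from pairwise cross-modal interaction terms.
        pair_dim = d_face * d_speech + d_face * d_car + d_speech * d_car
        self.pairwise = nn.Linear(pair_dim, d_out)

    def forward(self, face, speech, car):
        first = self.linear(torch.cat([face, speech, car], dim=-1))
        # Outer products capture multiplicative cross-modal interactions.
        fs = torch.einsum("bi,bj->bij", face, speech).flatten(1)
        fc = torch.einsum("bi,bj->bij", face, car).flatten(1)
        sc = torch.einsum("bi,bj->bij", speech, car).flatten(1)
        second = self.pairwise(torch.cat([fs, fc, sc], dim=-1))
        return first + second    # distraction logits

logits = PolynomialFusion()(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 16))
```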

    Wayfinding in Complex Multi-storey Buildings: A vision-simulation-augmented wayfinding protocol study

    Wayfinding in complex multi-storey buildings often causes uncertainty and stress for newcomers and even some frequent visitors. However, little is understood about wayfinding in 3D structures that involve inter-storey and inter-building travel. This paper presents a vision-simulation-augmented wayfinding protocol method for studying such 3D structures and applies it to investigate pedestrians’ wayfinding behaviour in general-purpose complex multi-storey buildings. Taking Passini’s studies as a starting point, an exploratory quasi-experiment was developed and then conducted in a daily wayfinding context, adopting the wayfinding protocol method augmented with real-time vision simulation. The purpose is to identify people’s natural wayfinding strategies in natural settings, for both frequent visitors and newcomers. It is envisioned that the findings of the study can inspire potential design solutions for supporting pedestrians’ wayfinding in 3D indoor spaces. Using the new method and a new analytic framework, several findings were identified that differ from other wayfinding literature, such as: (1) people seem to directly “make sense” of wayfinding settings; (2) people can translate recurring actions into unconscious operational behaviours; and (3) physical rotation and constrained views, rather than vertical travel itself, appear to be the main problems in the wayfinding process.
    Keywords: Wayfinding Protocol; Real-time Vision Simulation; 3D Indoor Space; Activity Theory; Structure of Wayfinding Process