    Video Desnowing and Deraining via Saliency and Dual Adaptive Spatiotemporal Filtering

    Outdoor vision sensing systems often struggle with adverse weather such as snow and rain, which poses a great challenge to existing video desnowing and deraining methods. In this paper, we propose a novel video desnowing and deraining model that utilizes the saliency information of moving objects to address this problem. First, we remove snow and rain from the video by low-rank tensor decomposition, which makes full use of spatial location information and the correlation among the three color channels of the video. Second, because existing algorithms often mistake sparse snowflakes and rain streaks for moving objects, we inject saliency information into moving-object detection, which reduces both false alarms and missed detections. In addition, feature-point matching is used to mine the redundant information of moving objects across consecutive frames, and we propose a dual adaptive minimum filtering algorithm in the spatiotemporal domain to remove snow and rain in front of moving objects. Both qualitative and quantitative experimental results show that the proposed algorithm compares favorably with other state-of-the-art snow and rain removal methods.
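    The low-rank step lends itself to a minimal sketch. The code below is an assumption-level illustration, not the authors' implementation: it flattens the color video tensor into a frames-by-pixels matrix and uses a truncated SVD as the low-rank approximation, whereas the paper works with a tensor decomposition that also exploits the correlation among color channels. The function name and the rank parameter are hypothetical.

```python
# Minimal sketch: the quasi-static background of a video is approximately
# low-rank when frames are stacked as rows, while snow and rain are sparse
# transients left in the residual.
import numpy as np

def lowrank_background(video, rank=2):
    """video: float array (T, H, W, 3); returns (background, residual)."""
    T = video.shape[0]
    X = video.reshape(T, -1)                  # one vectorized frame per row
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r approximation
    background = L.reshape(video.shape)
    return background, video - background

# Toy usage: a static scene repeated over 10 frames, corrupted by sparse "snow".
rng = np.random.default_rng(0)
scene = rng.random((1, 32, 32, 3)).repeat(10, axis=0)
noisy = np.clip(scene + (rng.random(scene.shape) > 0.99), 0, 1)
background, residual = lowrank_background(noisy, rank=1)
```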

    On Spatio-Temporal Saliency Detection in Videos using Multilinear PCA

    Visual saliency is an attention mechanism that helps focus on regions of interest instead of processing the whole image or video. Detecting salient objects in still images has been widely addressed in the literature with several formulations and methods. However, visual saliency detection in videos has attracted little attention, although motion information is an important aspect of visual perception. A common approach for obtaining a spatio-temporal saliency map is to combine a static saliency map and a dynamic saliency map. In this paper, we extend a recent saliency detection approach based on principal component analysis (PCA) that has shown good results when applied to static images. In particular, we explore different strategies for including temporal information in the PCA-based approach. The proposed models have been evaluated on a publicly available dataset containing several videos of dynamic scenes with complex backgrounds, and the results show that processing the spatio-temporal data with multilinear PCA achieves competitive results against state-of-the-art methods.
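    As a rough illustration of PCA-based distinctness extended to the temporal domain, the NumPy-only sketch below scores space-time cuboids by the L1 norm of their coordinates in the volume's PCA basis, so cuboids far from the average cuboid are marked salient. This is a hedged simplification: the patch size, temporal depth, number of components, and the L1 score are illustrative choices, and the sketch does not reproduce the paper's multilinear PCA formulation.

```python
# Sketch: cuboids whose coordinates along the leading principal axes lie far
# from the mean cuboid are scored as distinct, i.e., salient.
import numpy as np

def spatiotemporal_pca_saliency(volume, patch=5, depth=3, n_comp=10):
    """volume: grayscale video (T, H, W); returns (scores, cuboid origins)."""
    T, H, W = volume.shape
    cuboids, origins = [], []
    for t in range(T - depth + 1):
        for y in range(H - patch + 1):
            for x in range(W - patch + 1):
                cuboids.append(volume[t:t+depth, y:y+patch, x:x+patch].ravel())
                origins.append((t, y, x))
    X = np.asarray(cuboids, dtype=float)
    X -= X.mean(axis=0)                         # center the cuboid samples
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs = X @ Vt[:n_comp].T                  # coordinates on leading axes
    return np.abs(coeffs).sum(axis=1), origins  # L1 distinctness per cuboid

# Toy usage on a small random volume.
scores, origins = spatiotemporal_pca_saliency(
    np.random.default_rng(1).random((5, 16, 16)))
```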

    Digging Deeper into Egocentric Gaze Prediction

    This paper digs deeper into the factors that influence egocentric gaze. Instead of training deep models for this purpose in a blind manner, we propose to inspect the factors that contribute to gaze guidance during daily tasks. Bottom-up saliency and optical flow are assessed against strong spatial-prior baselines. Task-specific cues such as the vanishing point, the manipulation point, and hand regions are analyzed as representatives of top-down information. We also look into the contribution of these factors by investigating a simple recurrent neural model for egocentric gaze prediction. First, deep features are extracted for all input video frames. Then, a gated recurrent unit is employed to integrate information over time and to predict the next fixation. We also propose an integrated model that combines the recurrent model with several top-down and bottom-up cues. Extensive experiments over multiple datasets reveal that (1) spatial biases are strong in egocentric videos, (2) bottom-up saliency models perform poorly in predicting gaze and underperform spatial biases, (3) deep features perform better than traditional features, (4) as opposed to hand regions, the manipulation point is a strongly influential cue for gaze prediction, (5) combining the proposed recurrent model with bottom-up cues, vanishing points and, in particular, the manipulation point yields the best gaze prediction accuracy over egocentric videos, (6) knowledge transfer works best when the tasks or sequences are similar, and (7) task and activity recognition can benefit from gaze prediction. Our findings suggest that (1) more emphasis should be placed on hand-object interaction and (2) the egocentric vision community should consider larger datasets with diverse stimuli and more subjects.
    Comment: presented at WACV 201
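    The recurrent component described above admits a compact sketch. The PyTorch model below is an assumption-level illustration of the stated pipeline (per-frame deep features integrated by a gated recurrent unit and decoded into the next fixation): the feature dimension, hidden size, single linear readout, and sigmoid normalization are hypothetical choices, not the paper's exact architecture.

```python
# Sketch: a GRU integrates per-frame CNN features over time; its hidden state
# is decoded into a normalized (x, y) fixation estimate at every step.
import torch
import torch.nn as nn

class GazeGRU(nn.Module):
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # (x, y) in [0, 1] image coords

    def forward(self, feats):
        """feats: (batch, time, feat_dim) per-frame features."""
        out, _ = self.gru(feats)              # integrate evidence over time
        return torch.sigmoid(self.head(out))  # fixation estimate per frame

model = GazeGRU()
clip_feats = torch.randn(4, 16, 512)          # 4 clips, 16 frames each
fixations = model(clip_feats)                 # shape (4, 16, 2)
```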