    Spatio-Temporal Image Boundary Extrapolation

    Boundary prediction in images and video has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a cornerstone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future, unobserved frames. This requires our model to learn about the fate of boundaries and to extrapolate motion patterns. We experiment on an established real-world video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiards scenario without assuming a strong parametric model or any notion of objects. We argue that, with minimal model assumptions, our model has derived a notion of 'intuitive physics' that can be applied to novel scenes.
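
    The following is a minimal sketch of the kind of learned extrapolation described above, not the authors' architecture: a small convolutional network (all layer sizes and the training target here are illustrative assumptions) that maps a stack of past boundary maps to a prediction for the next, unobserved frame.

```python
# Minimal sketch (not the authors' model): predict the next boundary map
# from a stack of K past boundary maps. Sizes and loss are assumptions.
import torch
import torch.nn as nn

class BoundaryExtrapolator(nn.Module):
    def __init__(self, k_past=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k_past, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),  # logits of the next boundary map
        )

    def forward(self, past_boundaries):
        # past_boundaries: (batch, k_past, H, W) soft boundary maps
        return self.net(past_boundaries)

model = BoundaryExtrapolator(k_past=4)
past = torch.rand(2, 4, 64, 64)      # 4 observed boundary maps per sample
next_logits = model(past)            # boundaries predicted for the unobserved frame
target = torch.rand(2, 1, 64, 64)    # placeholder ground truth for illustration
loss = nn.functional.binary_cross_entropy_with_logits(next_logits, target)
```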

    Geometry-Based Next Frame Prediction from Monocular Video

    We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth to generate the next frame. Our approach can produce rich next-frame predictions that include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered. Comment: To appear in the 2017 IEEE Intelligent Vehicles Symposium.
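
    The geometric step described above can be illustrated with a short sketch. Assuming known camera intrinsics K, a predicted per-pixel depth map for the current frame, and the relative camera motion (R, t) to the next frame, each pixel can be back-projected to 3D and re-projected into the next view. The function name and toy values below are illustrative, and the sampling/inpainting needed to actually render the next frame is omitted.

```python
# Sketch (assumptions throughout) of depth-based re-projection: given depth,
# intrinsics K, and relative pose (R, t), find where each current-frame pixel
# lands in the next frame. Image resampling and hole filling are omitted.
import numpy as np

def reproject_pixels(depth, K, R, t):
    """Return the (u, v) coordinates in the next frame for every pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project to 3D points in the current camera's coordinate frame.
    rays = np.linalg.inv(K) @ pix
    points = rays * depth.reshape(1, -1)

    # Transform into the next camera's frame and project back to pixels.
    points_next = R @ points + t.reshape(3, 1)
    proj = K @ points_next
    uv_next = proj[:2] / proj[2:3]
    return uv_next.T.reshape(h, w, 2)

# Toy usage with an identity pose (pixels map onto themselves):
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth = np.full((480, 640), 10.0)
coords = reproject_pixels(depth, K, np.eye(3), np.zeros(3))
```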

    Improved detection of small objects in road network sequences using CNN and super resolution

    The detection of small objects is one of the open problems in deep learning, owing to the context of the scene or the low number of pixels of the objects to be detected. As a consequence, current pre-trained models based on convolutional neural networks usually give poor average precision; for example, CenterNet HourGlass104 achieves a mean average precision of only 25.6%, and SSD-512 of 9%. This work focuses on the detection of small objects. In particular, our proposal targets vehicle detection from images captured by video surveillance cameras, using pre-trained models without modifying their structure, so it does not require retraining the network to improve the detection rate. For better performance, a technique has been developed which, starting from certain initial regions, detects a higher number of objects and improves their class inference without modifying or retraining the network. The neural network is integrated with processes that increase the resolution of the images to improve object detection performance. This solution has been tested on a set of traffic images containing elements of different scales to assess its efficiency according to the detections obtained by the model. Our proposal achieves good results in a wide range of situations, obtaining, for example, an average score of 45.1% with the EfficientDet-D4 model for the first video sequence, compared to the 24.3% accuracy initially provided by the pre-trained model.

    This work is partially supported by the Ministry of Science, Innovation and Universities of Spain [grant number RTI2018-094645-B-I00], project name "Automated detection with low-cost hardware of unusual activities in video sequences". It is also partially supported by the Autonomous Government of Andalusia (Spain) under project UMA18-FEDERJA-084, project name "Detection of anomalous behaviour agents by deep learning in low-cost video surveillance intelligent systems". All of them include funds from the European Regional Development Fund (ERDF). It is also partially supported by the University of Málaga (Spain) under grants B1-2019_01, project name "Anomaly detection on roads by moving cameras", and B1-2019_02, project name "Self-Organizing Neural Systems for Non-Stationary Environments". The authors thankfully acknowledge the computer resources, technical expertise and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of Málaga. The authors acknowledge the funding from the Universidad de Málaga. I.G.-A. is funded by a scholarship from the Autonomous Government of Andalusia (Spain) under the Young Employment operative program [grant number SNGJ5Y6-15]. They also gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan X GPUs. Funding for open access charge: Universidad de Málaga / CBUA.
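
    A minimal sketch of the region-upscaling idea described in the abstract: crop candidate regions, increase their resolution, run an unmodified pre-trained detector on the enlarged crops, and map the boxes back to full-image coordinates. The `run_detector` callable and the use of plain bicubic interpolation in place of a dedicated super-resolution stage are assumptions for illustration, not the paper's actual pipeline.

```python
# Sketch of detection on upscaled regions. `run_detector` stands in for any
# fixed pre-trained model (e.g. an EfficientDet checkpoint) returning boxes
# as (x, y, w, h, label, score) in crop coordinates; it is an assumed
# interface, and the scale factor is illustrative.
import cv2

def detect_in_upscaled_regions(image, regions, run_detector, scale=2):
    """Run an unmodified detector on enlarged crops and map the resulting
    boxes back to full-image coordinates."""
    detections = []
    for (x, y, w, h) in regions:
        crop = image[y:y + h, x:x + w]
        # Plain bicubic upscaling here; a super-resolution model could be
        # substituted at this step, as in the pipeline described above.
        up = cv2.resize(crop, (w * scale, h * scale),
                        interpolation=cv2.INTER_CUBIC)
        for (bx, by, bw, bh, label, score) in run_detector(up):
            detections.append((x + bx / scale, y + by / scale,
                               bw / scale, bh / scale, label, score))
    return detections
```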