    Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked, with its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes. Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
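
    As a rough illustration of the per-object fusion idea described above (not the authors' implementation), the sketch below accumulates back-projected depth points into a separate model for each object label, applying that object's own estimated pose. The masks, poses and camera intrinsics are assumed to come from upstream segmentation and tracking stages; a real system such as Co-Fusion fuses into surfel or volumetric models and refines poses per object.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3-D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth], axis=-1)                       # (h, w, 3)

def fuse_frame(models, depth, labels, poses, intrinsics):
    """Grow each object's model using only the pixels carrying its label.

    models     : dict label -> list of (N, 3) point arrays (the fused shape so far)
    labels     : (h, w) integer mask, 0 = background, >0 = object id
    poses      : dict label -> 4x4 transform for that object's independent motion
    intrinsics : (fx, fy, cx, cy) pinhole camera parameters
    """
    points = backproject(depth, *intrinsics)
    for label in np.unique(labels):
        mask = (labels == label) & (depth > 0)              # valid pixels of this object only
        pts = points[mask]                                   # (N, 3) camera-frame points
        pts_h = np.c_[pts, np.ones(len(pts))]                # homogeneous coordinates
        fused = (pts_h @ poses[int(label)].T)[:, :3]         # apply the object's own pose
        models.setdefault(int(label), []).append(fused)
    return models
```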

    Object Tracking

    Object tracking consists in estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods and systems. Both the state of the art of object tracking methods and the new trends in research are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The intention of the editor was to follow the very rapid progress in the development of methods as well as the extension of their applications

    Vision-based traffic surveys in urban environments

    This paper presents a state-of-the-art, vision-based vehicle detection and type classification system that performs traffic surveys from a roadside closed-circuit television camera. Vehicles are detected using background subtraction based on a Gaussian mixture model that can cope with vehicles that become stationary over a significant period of time. Vehicle silhouettes are described using a combination of shape and appearance features based on an intensity-based pyramid histogram of oriented gradients (HOG). Classification is performed using a support vector machine, which is trained on a small set of hand-labeled silhouette exemplars. These exemplars are identified using a model-based pre-classifier that utilizes calibrated images mapped by Google Earth to provide accurately surveyed scene geometry matched to visible image landmarks. Kalman filters track the vehicles to enable classification by majority voting over several consecutive frames. The system counts vehicles and separates them into four categories: car, van, bus, and motorcycle (including bicycles). Experiments with real-world data have been undertaken to evaluate system performance; a vehicle detection rate of 96.45% and a classification accuracy of 95.70% have been achieved on this data. The authors gratefully acknowledge the Royal Borough of Kingston for providing the video data. S.A. Velastin is grateful for funding received from the Universidad Carlos III de Madrid, the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement nº 600371, el Ministerio de Economía y Competitividad (COFUND2013-51509) and Banco Santander
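
    A condensed sketch of the kind of pipeline the abstract describes (GMM background subtraction, HOG features, SVM classification, majority voting over a track) is given below. The pre-trained SVM, the plain HOG descriptor standing in for the intensity-based pyramid HOG, and the fixed patch size are assumptions made for illustration, and the Kalman-filter tracking stage is omitted.

```python
import cv2
import numpy as np
from collections import Counter

CLASSES = ["car", "van", "bus", "motorcycle"]               # the four categories counted

bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)   # GMM model
hog = cv2.HOGDescriptor()                                    # stand-in for the pyramid HOG

def vehicle_patches(frame, min_area=500):
    """Background-subtract one frame and yield cropped vehicle silhouettes."""
    fg = bg.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            yield frame[y:y + h, x:x + w]

def classify_track(patches, svm):
    """Classify one tracked vehicle by majority vote over its per-frame predictions."""
    votes = []
    for patch in patches:
        gray = cv2.cvtColor(cv2.resize(patch, (64, 128)), cv2.COLOR_BGR2GRAY)
        features = hog.compute(gray).ravel()                 # HOG feature vector
        votes.append(CLASSES[int(svm.predict(features[None, :])[0])])
    return Counter(votes).most_common(1)[0][0]
```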

    Motion detection using periodic background estimation subtraction method

    This paper proposes a motion detection system using a periodic background estimation subtraction method for outdoor illumination conditions, implemented in MATLAB. The proposed method is robust to illumination changes, changes in the background, and noise. The method is fundamentally based on background subtraction. The background image is estimated every 0.8 seconds, provided that the sum of absolute differences (SAD) is less than the motion threshold. The input image is luminance-normalized before background subtraction. The results are converted into a binary image by auto-thresholding and enhanced with dilation and erosion. Blobs are created for each moving object. Experimental results using background images estimated by periodic background estimation demonstrate the robustness and effectiveness of the method in background subtraction for real-world scenes
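
    The steps listed above map fairly directly onto a short sketch (a Python reimplementation rather than the paper's MATLAB code). The 0.8-second refresh period comes from the abstract, while the motion threshold, frame rate, blob-area cutoff and min-max luminance normalization are assumed values standing in for the paper's choices.

```python
import cv2
import numpy as np

PERIOD_S = 0.8              # background re-estimated every 0.8 s (from the abstract)
MOTION_THRESHOLD = 5.0      # assumed per-pixel mean-SAD threshold for a "static" scene
MIN_BLOB_AREA = 100         # assumed minimum blob size in pixels

def detect_motion(video_path, fps=25):
    """Yield (frame index, blob stats) for every frame once a background exists."""
    cap = cv2.VideoCapture(video_path)
    period_frames = max(1, int(PERIOD_S * fps))
    background, prev, idx = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)    # luminance normalization
        # Periodically refresh the background, but only when the scene is nearly static.
        if idx % period_frames == 0 and prev is not None:
            sad = float(np.mean(cv2.absdiff(gray, prev)))            # mean absolute difference
            if background is None or sad < MOTION_THRESHOLD:
                background = gray.copy()
        if background is not None:
            diff = cv2.absdiff(gray, background)                      # background subtraction
            _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            binary = cv2.dilate(cv2.erode(binary, None), None)        # erosion/dilation clean-up
            n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
            blobs = [stats[i] for i in range(1, n)
                     if stats[i][cv2.CC_STAT_AREA] > MIN_BLOB_AREA]   # one blob per moving object
            yield idx, blobs
        prev, idx = gray, idx + 1
    cap.release()
```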

    An effective video analysis method for detecting red light runners

    This paper presents a novel method for automatic red light runner detection in video, which is fundamentally different from the concept of conventional red light camera systems. In principle, it extracts the state of the traffic lights and the vehicle motions without any physical or electronic interconnections to the traffic light control system or to buried loop detectors. Purely from the video, the new method first constructs a traffic light sequence and then estimates vehicle motions beyond the stop line while the light is red. In the former, the spatial and temporal relationships of individual traffic lights are utilized. In the latter, the concept of a virtual loop detector is introduced to emulate physical loop detectors. A prototype was implemented based on this method and was tested in a number of field trials. The results show that the new method is able to detect multiple red light runners in multiple lanes. It is also capable of tolerating a number of hostile but realistic situations such as: 1) a minimum number of visible traffic lights; 2) pseudo-motions due to shadows; 3) poor contrast; 4) pedestrian motions; and 5) turning vehicles.
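
    The virtual loop detector concept lends itself to a compact sketch: monitor a fixed image region just beyond the stop line for motion while the light state, read from the video itself, is red. Everything below (the frame-difference occupancy test, its threshold, the ROI coordinates and the external light-state reader) is an assumed, simplified stand-in for the method in the paper.

```python
import cv2
import numpy as np

class VirtualLoopDetector:
    """Emulates a buried inductive loop: flags motion inside a fixed image region."""

    def __init__(self, roi, motion_threshold=12.0):
        self.x, self.y, self.w, self.h = roi      # region just beyond the stop line (assumed coords)
        self.threshold = motion_threshold          # mean frame-difference that counts as "occupied"
        self.prev = None

    def occupied(self, gray):
        patch = gray[self.y:self.y + self.h, self.x:self.x + self.w]
        moving = (self.prev is not None and
                  float(np.mean(cv2.absdiff(patch, self.prev))) > self.threshold)
        self.prev = patch.copy()
        return moving

def red_light_runners(frames, light_is_red, loops):
    """Yield (frame index, lane) whenever a loop fires while the light is red.

    frames       : iterable of BGR frames
    light_is_red : callable frame -> bool, reading the light state from the video itself
    loops        : dict lane name -> VirtualLoopDetector, one per monitored lane
    """
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        red = light_is_red(frame)
        for lane, loop in loops.items():
            fired = loop.occupied(gray)            # update every loop on every frame
            if fired and red:
                yield i, lane
```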

    Online real-time crowd behavior detection in video sequences

    Automatically detecting events in crowded scenes is a challenging task in computer vision. A number of offline approaches have been proposed for solving the problem of crowd behavior detection; however, the offline assumption limits their application in real-world video surveillance systems. In this paper, we propose an online and real-time method for detecting events in crowded video sequences. The proposed approach is based on the combination of visual feature extraction and image segmentation, and it works without the need for a training phase. A quantitative experimental evaluation has been carried out on multiple publicly available video sequences, containing data from various crowd scenarios and different types of events, to demonstrate the effectiveness of the approach
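
    As a generic illustration of an online, training-free detector of this kind (not the authors' specific features or segmentation), the sketch below uses dense optical flow as the visual feature, thresholds it to segment the moving regions, and flags a frame as an event when its motion energy deviates strongly from an online running baseline; the flow parameters, the warm-up length and the k-sigma rule are assumptions.

```python
import cv2
import numpy as np

def detect_crowd_events(frames, alpha=0.05, k=3.0, warmup=50):
    """Flag events online from motion energy, with no training phase.

    A running mean/variance of the global motion energy serves as the baseline
    for "normal" crowd behaviour; a frame is flagged when its energy deviates
    by more than k standard deviations after a short warm-up period.
    """
    prev, mean, var = None, 0.0, 1.0
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)
            moving = (mag > 1.0).astype(np.uint8)             # segment the moving regions
            energy = float(mag[moving > 0].sum())              # global motion energy
            is_event = i > warmup and abs(energy - mean) > k * np.sqrt(var)
            # Update the baseline online (exponential moving statistics).
            mean = (1 - alpha) * mean + alpha * energy
            var = (1 - alpha) * var + alpha * (energy - mean) ** 2
            yield i, is_event, moving
        prev = gray
```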