    Robust pan/tilt compensation for foreground-background segmentation

    In this paper, we describe a robust method for compensating the panning and tilting motion of a camera, applied to foreground-background segmentation. First, the necessary internal camera parameters are determined through feature-point extraction and tracking. From these parameters, two motion models for points in the image plane are established: the first assumes a fixed tilt angle, whereas the second allows simultaneous pan and tilt. At runtime, these models are used to compensate for the motion of the camera in the background model. We show that these models provide a robust compensation mechanism and improve the foreground masks of an otherwise state-of-the-art unsupervised foreground-background segmentation method. The resulting algorithm obtains F1 scores above 80% on every daytime video in our test set when as few as eight feature matches are used to determine the background compensation, whereas standard approaches need significantly more feature matches to produce similar results.
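    The core of such a compensation step is estimating an image-plane transform from sparse feature matches and warping the background model with it before subtraction. The sketch below illustrates this idea with OpenCV, using ORB features and a generic RANSAC homography in place of the paper's calibrated pan-only and pan/tilt motion models; all function names and thresholds are illustrative, not the authors' implementation.

```python
# Minimal sketch: warp the background model to follow a pan/tilt camera.
# ORB + RANSAC homography are generic stand-ins for the paper's
# calibrated pan-only and pan/tilt motion models.
import cv2
import numpy as np

def compensate_camera_motion(prev_gray, curr_gray, background):
    """Warp `background` so it stays aligned with the current frame."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return background
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 8:  # the paper reports robustness down to 8 matches
        return background
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    return cv2.warpPerspective(background, H, (w, h))
```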

    Automatic detection, tracking and counting of birds in marine video content

    Robust automatic detection of moving objects in a marine context is a multi-faceted problem due to the complexity of the observed scene. The dynamic nature of the sea, caused by waves, boat wakes, and weather conditions, poses huge challenges for the development of a stable background model. Moreover, camera motion, reflections, lightning, and illumination changes may contribute to false detections. Dynamic background subtraction (DBGS) is widely considered a solution to this issue in the scope of vessel detection for maritime traffic analysis. In this paper, the DBGS techniques suggested for ships are investigated and optimized for the monitoring and tracking of birds in marine video content. In addition to background subtraction, foreground candidates are filtered by a classifier based on their feature descriptors in order to remove non-bird objects. Different types of classifiers have been evaluated, and results on a ground-truth-labeled dataset of challenging video fragments show precision and recall of about 95% for the best-performing classifier. The remaining foreground items are counted, and birds are tracked along the video sequence using spatio-temporal motion prediction. This allows marine scientists to study the presence and behavior of birds.
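    The described pipeline, background subtraction followed by a classifier that removes non-bird blobs, can be sketched as follows. This is not the paper's implementation: MOG2, the morphological cleanup, the area cutoff, and the `classifier` callable are all stand-in assumptions.

```python
# Minimal sketch: background subtraction followed by a classifier gate.
# MOG2, the 3x3 opening, the area cutoff, and `classifier` are stand-ins.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def bird_candidates(frame, classifier):
    """Return bounding boxes of foreground blobs the classifier accepts."""
    mask = bg_model.apply(frame)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress wave speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    birds = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 25:  # drop tiny sea-clutter blobs
            continue
        patch = frame[y:y + h, x:x + w]
        if classifier(patch):  # hypothetical feature-descriptor classifier
            birds.append((x, y, w, h))
    return birds
```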

    Color Separation for Background Subtraction

    Background subtraction is a vital step in many computer vision systems. In background subtraction, one is given two (or more) frames of a video sequence taken with a still camera. Due to the stationarity of the camera, any color change in the scene is mainly due to the presence of moving objects. The goal of background subtraction is to separate the moving objects (also called the foreground) from the stationary background. Many background subtraction approaches have been proposed over the years. They are usually composed of two distinct stages: background modeling and foreground detection. Most standard background subtraction techniques focus on background modeling. In this thesis, we focus on improving foreground detection performance. We formulate background subtraction as a pixel labeling problem, where the goal is to assign each image pixel either a foreground or a background label. We solve the pixel labeling problem using a principled energy minimization framework. We design an energy function composed of three terms: the data, smoothness, and color separation terms. The data term is based on motion information between image frames. The smoothness term encourages the foreground and background regions to have spatially coherent boundaries. These two terms have been used for background subtraction before. The main contribution of this thesis is the introduction of a new color separation term into the energy function for background subtraction. This term models the fact that the foreground and background regions tend to have different colors, so introducing it encourages the two regions not to share the same colors. The color separation term can help to correct mistakes made by the data term when the motion information is not entirely reliable. We model the color separation term with the L1 distance, using the technique developed by Tang et al. Color clustering is used to efficiently model the color space. Our energy function can be globally and efficiently optimized with graph cuts, a very effective method for solving binary energy minimization problems arising in computer vision. To prove the effectiveness of including the color separation term in the energy function for background subtraction, we conduct experiments on standard datasets. Our model depends on color clustering and background modeling, both of which can be performed in many ways, so we evaluate several combinations of popular color clustering and background modeling approaches. We find that incorporating spatial and motion information into the color clustering process can further improve the results. Our approach achieves a best performance of 97%, compared to 90% for the approach without color separation.
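    Tang et al.'s L1 color separation term admits an efficient graph construction: each color cluster gets one auxiliary node linked to every pixel of that cluster, so a single min-cut penalizes clusters whose pixels are split between labels. Below is a minimal sketch of that construction with PyMaxflow; the bin count, edge weights, and cost arrays are placeholders, not the thesis's actual parameters.

```python
# Minimal sketch: data + smoothness + one-cut style color separation,
# optimized with a single graph cut (PyMaxflow). All weights are placeholders.
import numpy as np
import maxflow

def segment(data_fg, data_bg, bins, smooth_w=1.0, color_w=0.5, n_bins=64):
    """data_fg/data_bg: per-pixel label costs; bins: per-pixel color-cluster index."""
    g = maxflow.Graph[float]()
    pix = g.add_grid_nodes(bins.shape)
    g.add_grid_edges(pix, smooth_w)           # smoothness: coherent boundaries
    g.add_grid_tedges(pix, data_fg, data_bg)  # data term from motion cues
    aux = g.add_nodes(n_bins)                 # one auxiliary node per color bin
    for (y, x), b in np.ndenumerate(bins):
        # these links realize the L1 color separation penalty: a bin split
        # between labels pays color_w per pixel on its minority side
        g.add_edge(pix[y, x], aux[b], color_w, color_w)
    g.maxflow()
    return g.get_grid_segments(pix)  # boolean labeling of the pixel grid
```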

    Using Computer Vision And Volunteer Computing To Analyze Avian Nesting Patterns And Reduce Scientist Workload

    This paper examines the use of feature detection and background subtraction algorithms to classify and detect events of interest within uncontrolled outdoor avian nesting video from the Wildlife@Home project. We tested feature detection using Speeded Up Robust Features (SURF) and a Support Vector Machine (SVM), along with four background subtraction algorithms (Mixture of Gaussians (MOG), Running Gaussian Average (AccAvg), ViBe, and Pixel-Based Adaptive Segmentation (PBAS)), as methods to automatically detect and classify events from surveillance cameras. AccAvg and a modified PBAS are shown to provide robust results and to compensate for issues caused by the cryptic coloration of the monitored species. Both methods utilize the Berkeley Open Infrastructure for Network Computing (BOINC) to provide the resources needed to analyze the 68,000+ hours of video in the Wildlife@Home project in a reasonable amount of time. The feature detection technique failed to handle the many challenges found in the low-quality, uncontrolled outdoor video, whereas the background subtraction work with AccAvg and the modified version of PBAS is shown to provide more accurate detection of events.
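    A Running Gaussian Average background model of the AccAvg flavor is essentially an exponentially weighted mean of past frames, with an event flagged whenever the foreground area grows large enough. The sketch below shows that idea with OpenCV's accumulateWeighted; the learning rate and thresholds are illustrative guesses, not the paper's tuned values.

```python
# Minimal sketch: a running Gaussian average background model (AccAvg-like)
# that flags frames with enough foreground as candidate nest events.
import cv2
import numpy as np

def detect_events(frames, alpha=0.01, thresh=25, min_area_frac=0.001):
    """Yield indices of frames whose foreground area exceeds min_area_frac."""
    avg = None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if avg is None:
            avg = gray.copy()
            continue
        cv2.accumulateWeighted(gray, avg, alpha)  # exponentially weighted mean
        diff = cv2.absdiff(gray, avg)
        if (diff > thresh).mean() > min_area_frac:  # enough motion -> event
            yield i
```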

    Sensor Selection and Integration to Improve Video Segmentation in Complex Environments

    Background subtraction is often considered a required stage of any video surveillance system used to detect objects in a single frame and/or track objects across multiple frames in a video sequence. Most current state-of-the-art techniques for object detection and tracking utilize some form of background subtraction, developing a model of the background at the pixel, region, or frame level and designating any elements that deviate from the background model as foreground. However, most existing approaches can segment a number of distinct components but are unable to distinguish between the desired object of interest and complex, dynamic background such as moving water and strong reflections. In this paper, we propose a technique to integrate spatiotemporal signatures of an object of interest from different sensing modalities into a video segmentation method in order to improve object detection and tracking in dynamic, complex scenes. Our proposed algorithm utilizes the dynamic interaction between the object of interest and the background to differentiate between mistakenly segmented components and the desired component. Experimental results on two complex data sets demonstrate that our proposed technique significantly improves the accuracy and utility of a state-of-the-art video segmentation technique. © 2014 Adam R. Reckley et al.
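    One simple way to picture this kind of integration is to keep only those background-subtraction components that overlap a signature map contributed by a second sensing modality. The sketch below uses that intersection rule purely for illustration; the paper's actual fusion of spatiotemporal signatures is richer, and `sensor_mask` is a hypothetical input (for example, a thermal or radar hit map reprojected into image coordinates).

```python
# Minimal sketch: keep only foreground components that overlap a signature
# map from a second sensor. The intersection rule and `sensor_mask` are
# illustrative assumptions, not the paper's fusion method.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorKNN()

def fused_foreground(frame, sensor_mask):
    """sensor_mask: uint8 {0,255} image marking where the other modality responds."""
    fg = bg_model.apply(frame)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    n, labels = cv2.connectedComponents(fg)
    keep = np.zeros_like(fg)
    for lbl in range(1, n):  # label 0 is the background component
        comp = np.where(labels == lbl, 255, 0).astype(np.uint8)
        if cv2.countNonZero(cv2.bitwise_and(comp, sensor_mask)):
            keep = cv2.bitwise_or(keep, comp)
    return keep
```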

    Moving Object Detection in Dynamic Background

    A new method for detecting moving objects in a dynamic background is proposed in this paper. First, an adaptive-threshold Harris algorithm is proposed to extract feature points; the SIFT algorithm is then used to describe the extracted feature points. A similarity function is used to match feature points, and the RANSAC algorithm is used to eliminate pseudo-matches. From the correct matches, we obtain the affine transformation matrix, which is used to compensate for the background motion caused by camera movement and to update the dynamic background model. Finally, the moving object is detected by background subtraction. Experimental results show that the presented method improves the accuracy of feature point extraction and accurately detects moving targets in dynamic backgrounds.
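    The described chain (feature extraction, SIFT description, match filtering with RANSAC, affine compensation, background subtraction) maps closely onto standard OpenCV calls. The sketch below substitutes OpenCV's stock SIFT detector for the paper's adaptive-threshold Harris extractor, so it is an approximation of the pipeline rather than a reproduction, and the thresholds are illustrative.

```python
# Minimal sketch of the described chain. OpenCV's stock SIFT replaces the
# paper's adaptive-threshold Harris detector; thresholds are illustrative.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def detect_moving(prev_gray, curr_gray, thresh=30):
    """Return a binary mask of motion not explained by the camera."""
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC discards pseudo-matches while estimating the affine matrix
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = curr_gray.shape
    warped = cv2.warpAffine(prev_gray, A, (w, h))  # compensate camera motion
    diff = cv2.absdiff(curr_gray, warped)
    return cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)[1]
```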