
    A spatially distributed model for foreground segmentation

    Foreground segmentation is a fundamental first processing stage for vision systems which monitor real-world activity. In this paper we consider the problem of achieving robust segmentation in scenes where the appearance of the background varies unpredictably over time. Variations may be caused by processes such as moving water or foliage moved by wind, and typically degrade the performance of standard per-pixel background models. Our proposed approach addresses this problem by modeling homogeneous regions of scene pixels as an adaptive mixture of Gaussians in color and space. Model components are used to represent both the scene background and moving foreground objects. Newly observed pixel values are probabilistically classified, such that the spatial variance of the model components supports correct classification even when the background appearance is significantly distorted. We evaluate our method over several challenging video sequences, and compare our results with both per-pixel and Markov Random Field based models. Our results show the effectiveness of our approach in reducing incorrect classifications.
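    The classification step can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each component is a Gaussian over the joint (x, y, R, G, B) space with a diagonal covariance, and a new pixel takes the background/foreground label of the component that best explains it.

```python
import numpy as np

class SpatialColorGaussian:
    """One mixture component over joint spatial-color space (illustrative)."""
    def __init__(self, mean, var, weight, is_background):
        self.mean = np.asarray(mean, float)   # 5-vector: x, y, R, G, B
        self.var = np.asarray(var, float)     # diagonal covariance entries
        self.weight = weight
        self.is_background = is_background

    def log_likelihood(self, z):
        d = z - self.mean
        return (np.log(self.weight)
                - 0.5 * np.sum(np.log(2 * np.pi * self.var))
                - 0.5 * np.sum(d * d / self.var))

def classify_pixel(components, x, y, rgb):
    """Assign the observation to its most likely component; the spatial
    variance lets a background component explain pixels that drift in space."""
    z = np.array([x, y, *rgb], float)
    best = max(components, key=lambda c: c.log_likelihood(z))
    return 'background' if best.is_background else 'foreground'
```

    Because the components carry spatial extent, a background pixel whose color is slightly distorted can still be claimed by a nearby, high-variance background component rather than being misclassified as foreground.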

    Rejection-Cascade of Gaussians: Real-time adaptive background subtraction framework

    Background-foreground classification is a well-studied problem in computer vision. Due to the pixel-wise nature of modeling and processing in the algorithm, it is usually difficult to satisfy real-time constraints. There is a trade-off between speed (because of model complexity) and accuracy. Inspired by the rejection cascade of the Viola-Jones classifier, we decompose the Gaussian Mixture Model (GMM) into an adaptive cascade of Gaussians (CoG). We achieve a good improvement in speed without compromising accuracy with respect to the baseline GMM model. We demonstrate a speed-up factor of 4-5x and a 17 percent average improvement in accuracy over the Wallflower surveillance datasets. The CoG is then demonstrated over the latent space representation of images of a convolutional variational autoencoder (VAE). We provide initial results over the CDW-2014 dataset, which could speed up background subtraction for deep architectures. Comment: Accepted for the National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG 2019).

    Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank Representation

    Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information, but existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic backgrounds. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single optimization process, which can be solved efficiently by an alternating algorithm. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms state-of-the-art approaches and works effectively on a wide range of complex scenarios. Comment: 30 pages.
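    A toy version of the alternating optimization can be sketched as follows. This is only a sketch of the idea: the MRF contiguity prior that makes DECOLOR robust is replaced here by a plain residual threshold, the low-rank step uses a hard-rank truncated SVD rather than the paper's formulation, and all parameter values are illustrative. Columns of the data matrix D are vectorized frames.

```python
import numpy as np

def decolor_like(D, rank=1, tau=60.0, iters=15):
    """Alternate (a) low-rank background estimation on non-outlier entries
    and (b) outlier (foreground) support re-estimation by thresholding."""
    S = np.zeros(D.shape, bool)              # outlier (foreground) support
    B = D.astype(float).copy()               # background estimate
    for _ in range(iters):
        X = np.where(S, B, D)                # fill outliers with background
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        B = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank background
        S = np.abs(D - B) > tau              # re-estimate outlier support
    return B, S
```

    Even this stripped-down loop shows the key property of the formulation: the background model and the foreground support are refined jointly, with no separate training sequence.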

    3D Spectrophotometry of Planetary Nebulae in the Bulge of M31

    We introduce crowded-field integral field (3D) spectrophotometry as a useful technique for the study of resolved stellar populations in nearby galaxies. As a methodological test, we present a pilot study with selected extragalactic planetary nebulae (XPN) in the bulge of M31, demonstrating how 3D spectroscopy is able to improve the limited accuracy of background subtraction which one would normally obtain with classical slit spectroscopy. It is shown that, owing to the absence of slit effects, 3D is a most suitable technique for spectrophotometry. We present spectra and line intensities for 5 XPN in M31, obtained with the MPFS instrument at the Russian 6m BTA, INTEGRAL at the WHT, and with PMAS at the Calar Alto 3.5m Telescope. Using 3D spectra of bright standard stars, we demonstrate that the PSF is sampled with high accuracy, providing a centroiding precision at the milli-arcsec level. Crowded-field 3D spectrophotometry with PSF fitting techniques is suggested as the method of choice for a number of similar observational problems, including luminous stars in nearby galaxies, supernovae, QSO host galaxies, gravitationally lensed QSOs, and others. Comment: (1) Astrophysikalisches Institut Potsdam, (2) University of Durham. 18 pages, 11 figures, accepted for publication in Ap
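    As an illustration of the centroiding step, a flux-weighted centroid over a well-sampled PSF image is sketched below. The paper relies on PSF fitting, so this simpler moment-based estimator with a crude median background subtraction is only a stand-in:

```python
import numpy as np

def centroid(image):
    """Flux-weighted centroid (y, x) of a well-sampled PSF image.
    A median subtraction stands in for proper background modeling."""
    img = np.asarray(image, float)
    img = img - np.median(img)      # crude background subtraction
    img[img < 0] = 0.0              # clip negative residuals
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total
```

    With a well-sampled PSF, even this simple moment estimator recovers the center to a small fraction of a pixel, which is the property the paper exploits (with full PSF fitting) to reach milli-arcsecond centroiding.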

    Review of small-angle coronagraphic techniques in the wake of ground-based second-generation adaptive optics systems

    Small-angle coronagraphy is technically and scientifically appealing because it enables the use of smaller telescopes, allows covering wider wavelength ranges, and potentially increases the yield and completeness of campaigns to detect and characterize circumstellar environments (exoplanets and disks). However, opening up this new parameter space is challenging. Here we will review the four posts of high contrast imaging and their intricate interactions at very small angles (within the first 4 resolution elements from the star). The four posts are: choice of coronagraph, optimized wavefront control, observing strategy, and post-processing methods. After detailing each of the four foundations, we will present the lessons learned from 10+ years of operations of zeroth- and first-generation adaptive optics systems. We will then tentatively show how informative the current integration of second-generation adaptive optics systems is, and which lessons can already be drawn from this fresh experience. Then, we will review the current state of the art by presenting world-record contrasts obtained in the framework of technological demonstrations for space-based exoplanet imaging and characterization mission concepts. Finally, we will conclude by emphasizing the importance of cross-breeding between techniques developed for both ground-based and space-based projects, which is relevant for future high contrast imaging instruments and facilities in space or on the ground. Comment: 21 pages, 7 figures.

    Bayesian Modeling of Dynamic Scenes for Object Detection

    Abstract—Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged and it is asserted that useful correlation exists in intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are directly modeled. We propose a model of the background as a single probability density. Second, temporal persistence is proposed as a detection criterion. Unlike previous approaches to object detection which detect objects by building adaptive models of the background, the foreground is modeled to augment the detection of objects (without explicit tracking) since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework, stressing spatial context as a condition of detecting interesting objects; the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes. Index Terms—Object detection, kernel density estimation, joint domain range, MAP-MRF estimation.
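    A minimal sketch of the joint domain-range density is given below, assuming a diagonal-bandwidth Gaussian kernel; the foreground model, temporal persistence, and the MAP-MRF stage are omitted, and all bandwidths and thresholds are illustrative:

```python
import numpy as np

def kde_likelihood(sample, data, bandwidth):
    """Mean Gaussian-kernel density of a 5-D sample (x, y, R, G, B)
    under the stored background samples (one row per sample)."""
    d = (data - sample) / bandwidth
    norm = np.prod(bandwidth) * (2 * np.pi) ** (data.shape[1] / 2)
    return np.exp(-0.5 * np.sum(d * d, axis=1)).sum() / (len(data) * norm)

def is_foreground(sample, bg_data, bandwidth, threshold=1e-10):
    """Flag a pixel as foreground when the joint domain-range background
    density at its (position, color) point falls below a threshold."""
    return kde_likelihood(np.asarray(sample, float), bg_data, bandwidth) < threshold
```

    Because position enters the density alongside color, a background sample at a neighboring location can explain a pixel whose color appears there due to dynamic background motion, which is exactly the spatial correlation the per-pixel independence assumption throws away.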

    Moving cast shadows detection methods for video surveillance applications

    Moving cast shadows are a major concern for a broad range of vision-based surveillance applications because they significantly complicate the object classification task. Several shadow detection methods have been reported in the literature in recent years. They are mainly divided into two domains. One usually works with static images, whereas the second uses image sequences, namely video content. Although both cases can be analyzed analogously, there is a difference in the application field. In the first case, shadow detection methods can be exploited to obtain additional geometric and semantic cues about the shape and position of the casting object ('shape from shadows') as well as the localization of the light source. In the second, the main purpose is usually change detection, scene matching or surveillance (usually in a background subtraction context). Shadows can in fact modify in a negative way the shape and color of the target object and therefore affect the performance of scene analysis and interpretation in many applications. This chapter mainly reviews shadow detection methods, as well as their taxonomies, related to the second case, thus aiming at those shadows which are associated with moving objects (moving shadows). Peer Reviewed
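    As a concrete instance of the video-oriented family, a chromaticity-style test in HSV space is sketched below: a moving pixel is labeled a cast shadow if it is darker than the background at the same position while keeping similar hue and saturation. The thresholds are invented for illustration and no specific published detector is reproduced here.

```python
import colorsys

def is_cast_shadow(fg_rgb, bg_rgb, alpha=0.4, beta=0.95, tau_s=0.15, tau_h=0.1):
    """HSV shadow test: darker value, similar saturation and hue.
    All thresholds are illustrative, not tuned values from the literature."""
    fh, fs, fv = colorsys.rgb_to_hsv(*(c / 255.0 for c in fg_rgb))
    bh, bs, bv = colorsys.rgb_to_hsv(*(c / 255.0 for c in bg_rgb))
    ratio = fv / bv if bv > 0 else 1.0
    hue_diff = min(abs(fh - bh), 1.0 - abs(fh - bh))   # hue is circular
    return (alpha <= ratio <= beta
            and abs(fs - bs) <= tau_s
            and hue_diff <= tau_h)
```

    The intuition is that a shadow attenuates the illumination on the background surface (value drops) without changing its intrinsic color (hue and saturation roughly preserved), whereas a foreground object typically changes all three.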

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In the process of developing our framework we also focus on two other topics: motion trajectory estimation toward global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and recognition of individuals and threats with the help of image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, in which we employ GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction using the computed background model. This technique produces superior results to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of only slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. To this end, a framework for SR Image Reconstruction of moving objects with such high levels of displacement is developed. Our assumption is that the LR images differ from each other due to local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps: suppression of the global motion; motion segmentation accompanied by background subtraction to extract moving objects; suppression of the local motion of the segmented-out regions; and super-resolving the accumulated information coming from the moving objects rather than the whole scene. This results in a reliable offline SR Image Reconstruction tool which handles several types of dynamic scene changes, compensates for the effects of camera systems, and provides data redundancy by removing the background. The framework proved superior to state-of-the-art algorithms, which put no significant effort toward dynamic scene representation with non-stationary camera systems.
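    The accumulation step at the end of this pipeline can be illustrated with a toy shift-and-add scheme. It assumes the motion suppression stages have already reduced each frame's displacement to a known integer offset on the HR grid, which is a strong simplification of the dissertation's method:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """frames: list of (h, w) LR arrays; shifts: per-frame (dy, dx)
    offsets in HR pixels; returns the (h*scale, w*scale) fused estimate
    by spreading each frame onto the upsampled grid and averaging."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::scale, dx::scale][:h, :w] += frame
        cnt[dy::scale, dx::scale][:h, :w] += 1
    cnt[cnt == 0] = 1          # leave unobserved HR pixels at zero
    return acc / cnt
```

    With sub-pixel-diverse shifts, each LR frame contributes samples to different HR grid positions, which is the data redundancy that makes SR reconstruction possible once the motion has been compensated.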

    High-resolution optical imaging of the core of the globular cluster M15 with FastCam

    We present high-resolution I-band imaging of the core of the globular cluster M15 obtained at the 2.5 m Nordic Optical Telescope with FastCam, a low readout noise L3CCD based instrument. Short exposure times (30 ms) were used to record 200000 images (512 x 512 pixels each) over a period of 2 hours 43 min. The lucky imaging technique was then applied to generate a final image of the cluster centre with FWHM ~ 0.1" and a 13" x 13" FoV. We obtained a catalogue of objects in this region with a limiting magnitude of I = 19.5. I-band photometry and astrometry are reported for 1181 stars. This is the deepest I-band observation of the M15 core at this spatial resolution. Simulations show that crowding limits the completeness of the catalogue. At shorter wavelengths, a similar number of objects has been reported using HST/WFPC observations of the same field. The cross-match with the available HST catalogues allowed us to produce colour-magnitude diagrams in which we identify new Blue Straggler star candidates and previously known stars of this class. Comment: 11 pages, 15 figures. Accepted for publication in Monthly Notices of the Royal Astronomical Society.
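    The selection-and-stacking step at the heart of lucky imaging can be sketched as follows; using the peak pixel as the sharpness proxy and a fixed keep fraction are illustrative choices, not the FastCam pipeline:

```python
import numpy as np

def lucky_stack(frames, keep_frac=0.1):
    """Rank short exposures by a sharpness proxy (peak pixel), keep the
    best fraction, align each kept frame on its brightest pixel, and
    average. Proxy and keep fraction are illustrative assumptions."""
    frames = [np.asarray(f, float) for f in frames]
    ranked = sorted(frames, key=lambda f: f.max(), reverse=True)
    best = ranked[:max(1, int(len(frames) * keep_frac))]
    h, w = best[0].shape
    out = np.zeros((h, w))
    for f in best:
        py, px = np.unravel_index(np.argmax(f), f.shape)
        # recenter the instantaneous speckle peak on the frame center
        out += np.roll(np.roll(f, h // 2 - py, axis=0), w // 2 - px, axis=1)
    return out / len(best)
```

    Discarding the seeing-degraded majority of the 200000 short exposures and co-adding only the sharpest ones is what lets a ground-based telescope approach its diffraction limit without adaptive optics.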