
    BSUV-Net: a fully-convolutional neural network for background subtraction of unseen videos

    Background subtraction is a basic task in computer vision and video processing, often applied as a pre-processing step for object tracking, people recognition, etc. Recently, a number of successful background-subtraction algorithms have been proposed; however, nearly all of the top-performing ones are supervised. Crucially, their success relies upon the availability of some annotated frames of the test video during training. Consequently, their performance on completely "unseen" videos is undocumented in the literature. In this work, we propose a new, supervised, background-subtraction algorithm for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we also introduce a new data-augmentation technique which mitigates the impact of illumination differences between the background frames and the current frame. On the CDNet-2014 dataset, BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos in terms of several metrics, including F-measure, recall and precision.
    Accepted manuscript
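    BSUV-Net itself is a neural network, but the task the abstract describes can be illustrated with a classical baseline: model the background as the per-pixel median of past frames and threshold the deviation of the current frame. The sketch below is a minimal illustration of that baseline, not the paper's method; the array shapes and threshold value are illustrative assumptions.

    ```python
    import numpy as np

    def background_subtract(frames: np.ndarray, current: np.ndarray,
                            thresh: float = 25.0) -> np.ndarray:
        """Classical baseline: estimate the background as the per-pixel
        median of past grayscale frames, then flag pixels of the current
        frame that deviate strongly from it.

        frames:  (T, H, W) stack of past grayscale frames
        current: (H, W) frame to segment
        returns: (H, W) boolean foreground mask (True = foreground)
        """
        background = np.median(frames, axis=0)        # per-pixel background estimate
        diff = np.abs(current.astype(float) - background)
        return diff > thresh

    # Toy example: a near-constant background with one bright "object"
    # appearing at pixel (1, 1) in the current frame.
    rng = np.random.default_rng(0)
    past = rng.normal(100.0, 2.0, size=(10, 4, 4))    # ~static background
    cur = past[-1].copy()
    cur[1, 1] += 80.0                                 # object appears here
    mask = background_subtract(past, cur)
    ```

    Supervised methods such as BSUV-Net replace this hand-tuned threshold with a learned per-pixel decision, which is what makes generalization to unseen videos the hard part.
    
    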

    Focal Spot, Summer 1987


    The Compression-Mode Giant Resonances and Nuclear Incompressibility

    The compression-mode giant resonances, namely the isoscalar giant monopole and isoscalar giant dipole modes, are examples of collective nuclear motion. Their main interest stems from the fact that one hopes to extrapolate from their properties the incompressibility of uniform nuclear matter, which is a key parameter of the nuclear Equation of State (EoS). Our understanding of these issues has undergone two major jumps: one in the late 1970s, when the Isoscalar Giant Monopole Resonance (ISGMR) was experimentally identified, and another around the turn of the millennium, since when theory has been able to attach reliable error bars to the incompressibility. However, mainly magic nuclei have been involved in the deduction of the incompressibility from the vibrations of finite nuclei. The present review deals with the developments beyond all this. Experimental techniques have been improved, and new open-shell, and deformed, nuclei have been investigated. The associated changes in our understanding of the problem of the nuclear incompressibility are discussed. New theoretical models, decay measurements, and the search for the evolution of compressional modes in exotic nuclei are also discussed.
    Comment: Review paper to appear in "Progress in Particle and Nuclear Physics"
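    For reference, the nuclear-matter incompressibility that the abstract refers to is conventionally defined from the curvature of the energy per nucleon at the saturation density; this is the standard textbook definition, not a formula taken from this review:

    ```latex
    % Incompressibility of uniform nuclear matter, defined from the curvature
    % of the energy per nucleon E/A at the saturation density \rho_0:
    K_\infty = 9\,\rho_0^2 \left.\frac{\partial^2 (E/A)}{\partial \rho^2}\right|_{\rho=\rho_0}
    ```

    It is this curvature of the EoS that one hopes to constrain by measuring the excitation energies of the compression modes in finite nuclei.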

    Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery

    PCA is one of the most widely used dimension-reduction techniques. A related, easier problem is "subspace learning" or "subspace estimation". Given relatively clean data, both are easily solved via the singular value decomposition (SVD). The problem of subspace learning or PCA in the presence of outliers is called robust subspace learning or robust PCA (RPCA). For long data sequences, if one tries to use a single lower-dimensional subspace to represent the data, the required subspace dimension may end up being quite large. For such data, a better model is to assume that it lies in a low-dimensional subspace that can change over time, albeit gradually. The problem of tracking such data (and the subspaces) while being robust to outliers is called robust subspace tracking (RST). This article provides a magazine-style overview of the entire field of robust subspace learning and tracking. In particular, solutions for three problems are discussed in detail: RPCA via sparse + low-rank matrix decomposition (S+LR), RST via S+LR, and "robust subspace recovery" (RSR). RSR assumes that an entire data vector is either an outlier or an inlier. The S+LR formulation instead assumes that outliers occur on only a few data-vector indices and hence are well modeled as sparse corruptions.
    Comment: To appear, IEEE Signal Processing Magazine, July 201
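    The S+LR decomposition at the heart of RPCA can be sketched with the standard ADMM iteration for principal component pursuit: alternate singular-value thresholding for the low-rank part L with entry-wise soft-thresholding for the sparse part S. This is a minimal sketch on a toy problem; the parameter choices and iteration count are illustrative assumptions, not the article's prescription.

    ```python
    import numpy as np

    def soft(x, tau):
        """Entry-wise soft-thresholding: shrink magnitudes toward zero by tau."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def rpca_pcp(M, n_iter=200):
        """Decompose M ~= L + S (L low-rank, S sparse) via ADMM for
        principal component pursuit, with the common default weights."""
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))               # sparsity weight
        mu = m * n / (4.0 * np.abs(M).sum())         # penalty parameter
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        Y = np.zeros_like(M)                         # dual variable
        for _ in range(n_iter):
            # Low-rank step: soft-threshold the singular values.
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt
            # Sparse step: soft-threshold the entries of the residual.
            S = soft(M - L + Y / mu, lam / mu)
            # Dual update enforces the constraint M = L + S.
            Y = Y + mu * (M - L - S)
        return L, S

    # Toy data: a rank-1 matrix plus two large sparse corruptions.
    rng = np.random.default_rng(1)
    u = rng.normal(size=(20, 1))
    v = rng.normal(size=(1, 20))
    L_true = u @ v
    S_true = np.zeros((20, 20))
    S_true[3, 7] = 10.0
    S_true[12, 2] = -8.0
    L_hat, S_hat = rpca_pcp(L_true + S_true)
    ```

    On this toy problem the recovered sparse component concentrates on the two corrupted entries, which is exactly the behavior the S+LR model assumes: outliers touch only a few indices of each data vector.
    
    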