8,153 research outputs found

    Counting the Number of Active Spermatozoa Movements Using Improvement Adaptive Background Learning Algorithm

    Get PDF
    The most important early stage in sperm infertility research is the detection of sperm objects. The success rate in separating sperm objects from the semen fluid plays an important role in further analysis. This research performed detection and counting of human spermatozoa; the detected sperm were the moving sperm in the video data. An improved Adaptive Background Learning algorithm was applied to detect the moving sperm. The purpose of this method is to improve the performance of the Adaptive Background Learning algorithm in the background subtraction process used to detect and count moving sperm in microscopic video of sperm fluid. This paper also compares several other background subtraction algorithms to determine the most appropriate one for sperm detection and counting. The pipeline consists of preprocessing with a Gaussian filter, followed by background subtraction and a morphological operation. To validate the detection results of each background subtraction algorithm, the foreground mask produced by the morphological operation was compared to the ground truth of the moving-sperm image. For visualization, every BLOB (white object in the binary foreground mask) was given a bounding box in the original frame, and the number of BLOB objects present in the foreground mask was counted, showing that the system is able to detect and count moving sperm. Based on the test results, the improved Adaptive Background Learning method achieved an F-measure of 0.9205 and extracted sperm shapes closer to their original form than the other methods.
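    A minimal sketch of the pipeline this abstract describes (Gaussian smoothing, background subtraction, morphology, BLOB counting with bounding boxes). OpenCV's MOG2 subtractor is used as a stand-in for the paper's improved Adaptive Background Learning model, and the video path, kernel size, and thresholds are illustrative assumptions.

```python
# Sketch of the described pipeline: Gaussian smoothing, background
# subtraction, morphological cleanup, then BLOB counting with bounding boxes.
# MOG2 stands in for the paper's improved Adaptive Background Learning model;
# "sperm_sample.avi" and all parameters are illustrative placeholders.
import cv2

cap = cv2.VideoCapture("sperm_sample.avi")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)           # preprocessing
    mask = subtractor.apply(blurred)                        # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # morphology step
    # Count BLOBs and draw a bounding box around each moving object.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
    print("moving objects in frame:", len(contours))
cap.release()
```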

    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Full text link
    Detecting camouflaged moving foreground objects has been known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background due to the small differences between them and thus suffer from under-detection of the camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that the small differences in the image domain can be highlighted in certain wavelet bands. Then the likelihood of each wavelet coefficient being foreground is estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrated that the proposed method significantly outperformed existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.8 for the other state-of-the-art methods.
    Comment: 13 pages, accepted by IEEE TI
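    A simplified illustration of the core idea, assuming a single-level PyWavelets decomposition: small image-domain differences between a frame and a background estimate are scored per wavelet band and then fused. The normalized absolute differences, plain averaging, and the final threshold are stand-ins for the paper's statistical per-band models and likelihood aggregation.

```python
# Toy wavelet-domain fusion: per-band differences between frame and
# background are turned into crude likelihoods and averaged into one map.
import numpy as np
import pywt


def camouflage_likelihood(frame_gray, background_gray, wavelet="haar"):
    f_coeffs = pywt.dwt2(frame_gray.astype(float), wavelet)
    b_coeffs = pywt.dwt2(background_gray.astype(float), wavelet)
    bands_f = (f_coeffs[0],) + f_coeffs[1]   # cA, cH, cV, cD of the frame
    bands_b = (b_coeffs[0],) + b_coeffs[1]   # same bands of the background
    likelihoods = []
    for bf, bb in zip(bands_f, bands_b):
        diff = np.abs(bf - bb)
        likelihoods.append(diff / (diff.max() + 1e-8))  # crude per-band score
    fused = np.mean(likelihoods, axis=0)     # naive fusion across bands
    # Upsample back to the image size and threshold to get a foreground mask.
    fused_full = np.kron(fused, np.ones((2, 2)))[:frame_gray.shape[0],
                                                 :frame_gray.shape[1]]
    return fused_full > 0.3                  # illustrative threshold
```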

    Adaptive-Rate Compressive Sensing Using Side Information

    Full text link
    We provide two novel adaptive-rate compressive sensing (CS) strategies for sparse, time-varying signals using side information. Our first method utilizes extra cross-validation measurements, and the second one exploits extra low-resolution measurements. Unlike the majority of current CS techniques, we do not assume that we know an upper bound on the number of significant coefficients that comprise the images in the video sequence. Instead, we use the side information to predict the number of significant coefficients in the signal at the next time instant. For each image in the video sequence, our techniques specify a fixed number of spatially-multiplexed CS measurements to acquire, and adjust this quantity from image to image. Our strategies are developed in the specific context of background subtraction for surveillance video, and we experimentally validate the proposed methods on real video sequences.
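    A toy sketch of the adaptive-rate idea under stated assumptions: low-resolution side information predicts how many significant foreground coefficients the next frame will have, and the number of random measurements is scaled accordingly. The oversampling factor, the difference threshold, and the dense Gaussian sensing matrix are illustrative choices, not the paper's exact design.

```python
# Predict sparsity from low-resolution side information, then size the
# measurement budget for the next frame proportionally to that prediction.
import numpy as np

rng = np.random.default_rng(0)


def predict_sparsity(lowres_frame, lowres_background, upscale, thresh=10):
    """Estimate the number of significant foreground coefficients."""
    active = np.abs(lowres_frame.astype(int) - lowres_background.astype(int)) > thresh
    return int(active.sum() * upscale)      # scale the count up to full resolution


def acquire(frame_vec, predicted_s, oversample=4):
    """Take m ~ oversample * predicted sparsity random CS measurements."""
    n = frame_vec.size
    m = min(n, max(1, oversample * predicted_s))
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
    return A, A @ frame_vec
```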

    Background Subtraction via Generalized Fused Lasso Foreground Modeling

    Full text link
    Background Subtraction (BS) is one of the key steps in video analysis. Many background models have been proposed and have achieved promising performance on public data sets. However, due to challenges such as illumination change and dynamic background, the resulting foreground segmentation often contains holes as well as background noise. In this regard, we consider generalized fused lasso regularization to recover intact, structured foregrounds. Together with certain assumptions about the background, such as the low-rank assumption or the sparse-composition assumption (depending on whether pure background frames are provided), we formulate BS as a matrix decomposition problem using regularization terms for both the foreground and background matrices. Moreover, under the proposed formulation, the two generally distinct background assumptions can be handled in a unified manner. The optimization is carried out by applying the augmented Lagrange multiplier (ALM) method in such a way that a fast parametric-flow algorithm is used for updating the foreground matrix. Experimental results on several popular BS data sets demonstrate the advantage of the proposed model compared to the state of the art.
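    An RPCA-style sketch of the matrix-decomposition view described above: frames stacked as columns of D are split into a low-rank background B and a foreground F inside an inexact ALM loop. Plain elementwise soft-thresholding stands in for the paper's generalized fused lasso update, which would require a parametric max-flow solver; all parameter defaults are illustrative.

```python
# Inexact-ALM decomposition D = B + F with low-rank B and sparse F.
import numpy as np


def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)


def decompose(D, lam=None, mu=None, iters=100):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)          # Lagrange multipliers
    F = np.zeros_like(D)          # foreground matrix
    for _ in range(iters):
        # Background update: singular value thresholding of D - F + Y/mu.
        U, s, Vt = np.linalg.svd(D - F + Y / mu, full_matrices=False)
        B = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Foreground update: shrinkage (the fused-lasso step in the paper).
        F = soft_threshold(D - B + Y / mu, lam / mu)
        Y += mu * (D - B - F)     # dual ascent on the constraint D = B + F
    return B, F
```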

    Adaptive low rank and sparse decomposition of video using compressive sensing

    Full text link
    We address the problem of reconstructing and analyzing surveillance videos using compressive sensing. We develop a new method that performs video reconstruction by low-rank and sparse decomposition adaptively, so that background subtraction becomes part of the reconstruction. In our method, a background model is used in which the background is learned adaptively as the compressive measurements are processed. The adaptive method has low latency and is more robust than previous methods. We present experimental results to demonstrate the advantages of the proposed method.
    Comment: Accepted ICIP 201
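    A toy sketch of the adaptive idea, under stated assumptions: each frame is observed only through compressive measurements y = Phi x, the background is maintained in the measurement domain and updated online, and the sparse foreground is recovered from the measurement residual with a few ISTA steps. The sensing matrix, learning rate, and shrinkage level are illustrative choices, not the paper's algorithm.

```python
# Online low-rank/sparse-style split from compressive measurements only:
# maintain a background estimate in measurement space, recover the sparse
# foreground from the residual, and update the background adaptively.
import numpy as np

rng = np.random.default_rng(0)


def ista(Phi, r, lam=0.05, steps=50):
    """Recover a sparse foreground s from residual measurements r ~ Phi @ s."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    s = np.zeros(Phi.shape[1])
    for _ in range(steps):
        s = s + step * Phi.T @ (r - Phi @ s)                    # gradient step
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0)  # shrinkage
    return s


def process_stream(frames, m, alpha=0.05):
    n = frames[0].size
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # fixed sensing matrix
    y_bg = None                                      # background, measurement space
    for x in frames:
        y = Phi @ x.ravel()                    # compressive measurements only
        if y_bg is None:
            y_bg = y.copy()
        s = ista(Phi, y - y_bg)                # foreground from the residual
        y_bg = (1 - alpha) * y_bg + alpha * (y - Phi @ s)  # adaptive background
        yield s.reshape(x.shape)               # per-frame foreground estimate
```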

    Foreground Detection in Camouflaged Scenes

    Full text link
    Foreground detection has been widely studied for decades due to its importance in many practical applications. Most of the existing methods assume that foreground and background show visually distinct characteristics, so that the foreground can be detected once a good background model is obtained. However, there are many situations where this is not the case. Of particular interest in video surveillance is the camouflage case, for example when an active attacker intentionally wears clothes that are visually similar to the background. In such cases, even given a decent background model, it is not trivial to detect foreground objects. This paper proposes a texture guided weighted voting (TGWV) method which can efficiently detect foreground objects in camouflaged scenes. The proposed method employs the stationary wavelet transform to decompose the image into frequency bands. We show that the small and hardly noticeable differences between foreground and background in the image domain can be effectively captured in certain wavelet frequency bands. To make the final foreground decision, a weighted voting scheme is developed based on the intensity and texture of all the wavelet bands, with carefully designed weights. Experimental results demonstrate that the proposed method achieves superior performance compared to the current state of the art.
    Comment: IEEE International Conference on Image Processing, 201
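    A rough sketch of per-band voting with the stationary (undecimated) wavelet transform, assuming a single decomposition level: each SWT band compares frame and background, casts a vote where they differ, and the votes are combined with fixed weights. The weights, thresholds, and single level are illustrative stand-ins for the paper's carefully designed, texture-guided weighting.

```python
# Weighted voting across stationary wavelet bands to flag camouflaged pixels.
import numpy as np
import pywt


def weighted_vote_mask(frame_gray, background_gray, wavelet="haar"):
    # swt2 needs dimensions divisible by 2**level; crop to be safe (level=1).
    h, w = (d - d % 2 for d in frame_gray.shape)
    f = frame_gray[:h, :w].astype(float)
    b = background_gray[:h, :w].astype(float)
    (fA, (fH, fV, fD)), = pywt.swt2(f, wavelet, level=1)
    (bA, (bH, bV, bD)), = pywt.swt2(b, wavelet, level=1)
    bands = [(fA, bA, 0.4), (fH, bH, 0.2), (fV, bV, 0.2), (fD, bD, 0.2)]
    votes = np.zeros_like(f)
    for bf, bb, weight in bands:
        diff = np.abs(bf - bb)
        votes += weight * (diff > 2.0 * diff.std())   # per-band vote
    return votes > 0.5                                # weighted majority
```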