664 research outputs found

    Low complexity object detection with background subtraction for intelligent remote monitoring

    Background Subtraction in Video Surveillance

    The aim of this thesis is the real-time detection of moving and static objects in unconstrained surveillance environments monitored with static cameras. This is achieved based on the results provided by background subtraction. For this task, Gaussian Mixture Models (GMMs) and Kernel Density Estimation (KDE) are used. A thorough review of state-of-the-art formulations of GMMs and KDE for background subtraction reveals further development opportunities, which are tackled in a novel GMM-based approach incorporating a variance-controlling scheme. The proposed approach covers both parametric and non-parametric models and yields better background subtraction, with higher accuracy and easier parametrization of the models for different environments; it also converges to more accurate models of the scenes. The detection of moving objects is achieved by using the results of background subtraction. For the detection of new static objects, two background models, learning at different rates, are used. This allows a multi-class pixel classification which follows the temporality of the changes detected by means of background subtraction. In a first approach, background subtraction is performed with the parametric model and its results are shown. The second approach uses the non-parametric model, where background subtraction is done with KDE. Furthermore, some video engineering is demonstrated: the background subtraction algorithm is employed to merge the background from one video with the foreground from another video to form a new video, and the same idea extends to more complex compositions of multiple videos. Finally, the results provided by region analysis can be used to improve the quality of the background models, thereby considerably improving the detection results.
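
    For illustration, the dual-rate idea described above (two background models learning at different rates to separate moving objects from newly static ones) can be sketched with OpenCV's stock GMM subtractor in place of the thesis's variance-controlled model; the file name, history lengths and thresholds below are assumptions.

```python
import cv2

# Two GMM background models at different learning rates (a stand-in for the
# thesis's variance-controlled models): the fast model absorbs stopped objects
# quickly, while the slow model keeps them as foreground for longer.
fast = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16)    # adapts quickly
slow = cv2.createBackgroundSubtractorMOG2(history=2000, varThreshold=16)   # adapts slowly

cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_fast = fast.apply(frame) > 200       # moving pixels (shadow label 127 excluded)
    fg_slow = slow.apply(frame) > 200       # moving plus recently stopped pixels
    moving = fg_fast & fg_slow              # foreground in both models: genuinely moving
    new_static = fg_slow & ~fg_fast         # only in the slow model: candidate new static object
cap.release()
```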

    Motion detection using periodic background estimation subtraction method

    This paper proposes a motion detection system using a periodic background estimation and subtraction method for outdoor illumination conditions, implemented in MATLAB. The proposed method is robust to illumination changes, changes in the background, and noise. The method is based on background subtraction. The background image is re-estimated every 0.8 seconds when the sum of absolute differences (SAD) is below the motion threshold. The input image is luminance-normalized before background subtraction. The results are converted into a binary image by automatic thresholding and enhanced with dilation and erosion. Blobs are created for each moving object. Experimental results using the background image estimated by periodic background estimation demonstrate the robustness and effectiveness of the approach for background subtraction in real-world scenes.
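
    A minimal sketch of the update rule described above, assuming OpenCV/NumPy rather than MATLAB: the background is refreshed on a fixed period only when the SAD between consecutive frames falls below a motion threshold, and the difference image is binarized with an automatic (Otsu) threshold followed by erosion and dilation. The file name, threshold value and luminance handling are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("outdoor.avi")             # hypothetical input
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
period = max(1, int(round(0.8 * fps)))            # candidate background update every ~0.8 s
motion_threshold = 5.0                            # mean absolute difference per pixel (assumed)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
background = prev_gray.copy()

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # luminance normalization would go here
    frame_idx += 1
    if frame_idx % period == 0:
        sad = np.mean(cv2.absdiff(gray, prev_gray))  # per-pixel SAD between consecutive frames
        if sad < motion_threshold:
            background = gray.copy()                 # scene is quiet: refresh the background
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # auto threshold
    mask = cv2.dilate(cv2.erode(mask, None), None)   # clean up with erosion then dilation
    prev_gray = gray
cap.release()
```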

    Human detection in surveillance videos and its applications - a review

    Detecting human beings accurately in a visual surveillance system is crucial for diverse application areas including abnormal event detection, human gait characterization, congestion analysis, person identification, gender classification and fall detection for elderly people. The first step of the detection process is to detect an object which is in motion. Object detection can be performed using background subtraction, optical flow or spatio-temporal filtering techniques. Once detected, a moving object can be classified as a human being using shape-based, texture-based or motion-based features. A comprehensive review with comparisons of available techniques for detecting human beings in surveillance videos is presented in this paper. The characteristics of a few benchmark datasets as well as future research directions on human detection have also been discussed.
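
    As one concrete example of the shape-based classification stage surveyed above, a hedged sketch using OpenCV's HOG descriptor with its built-in pedestrian SVM is shown below; this is a generic illustration rather than any specific method from the review, and the image path is an assumption.

```python
import cv2

# Classic shape-based human detector: HOG features with a pre-trained linear SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("surveillance_frame.jpg")       # hypothetical frame from a surveillance video
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # mark detected people
```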

    Video-based motion detection for stationary and moving cameras

    In real-world monitoring applications, moving object detection remains a challenging task due to factors such as background clutter and motion, illumination variations, weather conditions, noise, and occlusions. As a fundamental first step in many computer vision applications such as object tracking, behavior understanding, object or event recognition, and automated video surveillance, various motion detection algorithms have been developed, ranging from simple approaches to more sophisticated ones. In this thesis, we present two moving object detection frameworks. The first framework is designed for robust detection of moving and static objects in videos acquired from stationary cameras. This method exploits the benefits of fusing a motion computation method based on a spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison. This hybrid system (FTSG) can handle challenges such as shadows, illumination changes, dynamic backgrounds, and stopped and removed objects. Extensive testing on the CVPR 2014 Change Detection benchmark dataset shows that FTSG outperforms most state-of-the-art methods. The second framework adapts moving object detection to full-motion videos acquired from moving airborne platforms. This framework has two main modules. The first module stabilizes the video with respect to a set of base frames in the sequence. The stabilization is performed by estimating four-point homographies using prominent feature (PF) block matching, motion filtering, and RANSAC for robust matching. Once the frame-to-base-frame homographies are available, the flux tensor motion detection module, which uses local second-derivative information, is applied to detect moving salient features. Spurious responses from the frame boundaries are removed and other post-processing operations are applied to reduce false alarms and produce accurate moving blob regions that are useful for tracking.
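
    A minimal sketch of the stabilization step in the second framework is given below, assuming ORB features and OpenCV's RANSAC homography estimation in place of the prominent-feature block matching used in the thesis; the function name and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def stabilize_to_base(frame, base):
    """Estimate the homography warping `frame` onto `base` and return the warped frame."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame, None)
    kp2, des2 = orb.detectAndCompute(base, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # RANSAC rejects outlier matches
    h, w = base.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h)), H
```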

    Microparticle image processing and field profile optimisation for automated Lab-On-Chip magnetophoretic analytical systems

    The work described in this thesis concerns developments to an analytical microfluidic Lab-On-Chip platform originally developed by Prof Pamme's research group at the University of Hull. This work aims to move away from a traditional laboratory analysis system towards a more effective system design which is fully automated and therefore potentially deployable in applications such as point-of-care medical diagnosis. The microfluidic chip platform comprises an external permanent magnet and a chip with multiple parallel reagent streams through which magnetic micro-particles pass in sequence. These streams may include particles, analyte, fluorescent labels and wash solutions; together they facilitate an on-chip multi-step analytical procedure. Analyte concentration is measured via the fluorescent intensity of the exiting micro-particles. This has previously been experimentally proven for more than one analytical procedure. The work described here has addressed two issues which needed improvement, specifically optimizing the magnetic field and automating the measurement process. These topics are related by the fact that an optimal field will reduce anomalies such as aggregated particles which may degrade automated measurements.

    For this system, the optimal magnetic field is a homogeneous gradient of sufficient strength to pull the particles across the width of the device during fluid transit of its length. To optimise the magnetic field, COMSOL (a Multiphysics simulation program) was used to evaluate a number of multiple-magnet configurations and demonstrate an improved field profile. The simulation approach was validated against experimental data for the original single-magnet design.

    To analyse the results automatically, a software tool has been developed in C++ which takes the image files generated during an experiment and outputs a calibration curve or a specific measurement result. The process involves detection of the particles (using image segmentation) and object tracking. The intensity measurement follows the same procedure as the original manual approach, facilitating comparison, but also includes analysis of particle motion behaviour to allow automatic rejection of data from anomalous particles (e.g. stuck particles). For image segmentation, a novel texture-based technique called Temporal-Adaptive Median Binary Pattern (T-AMBP), combined with a Three Frame Difference method to model the background for representing the foreground, was proposed. The proposed approach is based on the previously developed Adaptive Median Binary Pattern (AMBP) and Gaussian Mixture Model (GMM) approaches for image segmentation. The proposed method successfully detects micro-particles even when they have very low fluorescent intensity, where most previous approaches fail, and is more robust to noise and artefacts. For tracking the micro-particles, we proposed a novel algorithm called "Hybrid Meanshift", which combines Meanshift, Histogram of Oriented Gradients (HOG) matching and optical flow techniques. A Kalman filter was also incorporated to make the tracking robust.

    The processing of an experimental data set to generate a calibration curve was demonstrated, producing effectively the same results in less than 5 minutes without requiring experimental experience, compared with at least 2 hours of work by an experienced experimenter using the manual approach.
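
    As an illustration of only the Kalman-filter component used to make the tracking robust (the full "Hybrid Meanshift" combination of Meanshift, HOG matching and optical flow is not reproduced here), a constant-velocity sketch is given below; the noise covariances are illustrative assumptions.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                        # state [x, y, vx, vy], measurement [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(measured_xy):
    """Predict the particle position, then correct with the segmented centroid if available."""
    prediction = kf.predict()
    if measured_xy is not None:                    # a detection exists for this frame
        kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
    return float(prediction[0, 0]), float(prediction[1, 0])
```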

    INTELLIGENT VIDEO SURVEILLANCE OF HUMAN MOTION: ANOMALY DETECTION

    Intelligent video surveillance is a system that can perform highlight extraction and video summarization, which require recognition of the activities occurring in the video without any human supervision. Surveillance systems are extremely helpful for guarding against and protecting from dangerous situations. In this project, we propose a system that can track and detect abnormal behavior in an indoor environment. By concentrating on an inside-house environment, we aim to detect abnormal behavior between an adult and a toddler in order to prevent abuse. In general, the framework of a video surveillance system includes the following stages: background estimation, segmentation, detection, tracking, behavior understanding and description. We use training behavior profiles to collect the descriptions and generate statistical behavior models to perform anomaly detection later. We begin by modeling the simplest actions, such as stomping, slapping, kicking and pointing sharp or blunt objects, which do not require sophisticated modeling. A method to model actions with more complex dynamics is then discussed. The system manages to track the adult figure, the toddler figure and a harmful object as a third subject. With this system, abnormal events can be brought to the attention of human security personnel. For future work, we recommend continuing to design methods for higher-level representation of complex activities in order to match anomaly detection with real-time video surveillance. We also propose embedding the system in a hardware solution that triggers the matching detection as an output.
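
    A minimal sketch of the statistical anomaly-detection idea described above, assuming simple per-frame motion features and a Mahalanobis-distance test against a learned "normal" profile; the features, placeholder training data and threshold are illustrative assumptions, not the project's actual behavior descriptors.

```python
import numpy as np

# Training: feature vectors (e.g. speed, adult-toddler distance) from normal behavior, shape (N, D).
normal_features = np.random.rand(500, 2)           # placeholder training data
mean = normal_features.mean(axis=0)
cov = np.cov(normal_features, rowvar=False) + 1e-6 * np.eye(2)
cov_inv = np.linalg.inv(cov)

def is_anomalous(feature, threshold=3.0):
    """Flag a feature vector whose Mahalanobis distance from the normal profile exceeds the threshold."""
    d = np.asarray(feature) - mean
    return float(np.sqrt(d @ cov_inv @ d)) > threshold
```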

    The IPAC Image Subtraction and Discovery Pipeline for the intermediate Palomar Transient Factory

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, "bogus" candidates from processing artifacts and imperfect image subtractions outnumber real transients by ~ 10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ~ 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.
    Comment: 66 pages, 21 figures, 7 tables, accepted by PAS
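
    A hedged sketch of the machine-learned real-bogus vetting stage is given below: a classifier is trained on features measured from subtraction-image candidates and its score is thresholded at an operating point chosen for the tolerable false-positive rate. The classifier choice, placeholder features and threshold are illustrative assumptions, not the IDE's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: per-candidate features (e.g. PSF-fit quality, elongation, flux ratios); y: 1 = real, 0 = bogus.
X = np.random.rand(1000, 5)                  # placeholder feature matrix
y = np.random.randint(0, 2, size=1000)       # placeholder vetting labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]          # real-bogus score in [0, 1] for each candidate
keep = scores > 0.5                          # threshold tuned to the tolerable false-positive rate
```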

    Reconstruction of 3D rigid body motion in a virtual environment from a 2D image sequence

    This research presents a procedure for interactive segmentation and automatic tracking of moving objects in a video sequence. The user outlines the region of interest (ROI) in the initial frame; the procedure builds a refined mask of the dominant object within the ROI. The refined mask is used to model a spline template of the object to be tracked. The tracking algorithm then employs a motion model to track the template through a sequence of frames and gathers the 3D affine motion parameters of the object from each frame. The extracted template is compared with a previously stored library of 3D shapes to determine the closest 3D object. If the extracted template is completely new, it is used to model a new 3D object which is added to the library. To recreate the motion, the motion parameters are applied to the 3D object in a virtual environment. The procedure described here can be applied to industrial problems such as traffic management and material flow congestion analysis.
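
    A minimal sketch of one way to realize the per-frame motion estimation described above, assuming feature points inside the user-outlined ROI are followed with Lucas-Kanade optical flow and a 2D affine motion model is fitted with RANSAC; the function and parameter values are illustrative assumptions rather than the procedure's actual spline-template tracker.

```python
import cv2
import numpy as np

def roi_affine_motion(prev_gray, next_gray, roi_mask):
    """Return the 2x3 affine transform of the masked region between two consecutive frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=roi_mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    A, inliers = cv2.estimateAffine2D(pts[good], nxt[good], method=cv2.RANSAC)
    return A                                   # affine motion parameters of the tracked object
```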