    A region based approach to background modeling in a wavelet multi-resolution framework

    In the field of detection and monitoring of dynamic objects in quasi-static scenes, background subtraction techniques in which the background is modeled at pixel level are extensively used, despite their significant limitations. In this work we propose a novel approach to background modeling that operates at region level in a wavelet-based multi-resolution framework. Based on a segmentation of the background, each region is characterized independently as a mixture of K Gaussian modes, modeling the approximation and detail coefficients at the different wavelet decomposition levels. The background region characterization is updated over time, and elements of interest are detected by computing the distance between the background region models and those of each incoming image in the sequence. The inclusion of context in the modeling scheme through the characterization of each region makes the model robust: it can cope not only with gradual illumination and long-term changes, but also with sudden illumination changes and the presence of strong shadows in the scene.
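
    A minimal sketch of the region-level idea described above, under assumptions not stated in the abstract: a single wavelet decomposition level, a precomputed region label map at the coefficient resolution, and scoring of incoming coefficients under per-region Gaussian mixtures (the paper compares region models to each other, which this simplification does not reproduce). It relies on PyWavelets and scikit-learn.

```python
# Hedged sketch, not the authors' implementation: per-region K-mode Gaussian
# mixtures over wavelet approximation/detail coefficients.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def region_features(frame, labels):
    """Stack approximation and detail coefficients; `labels` must be a region
    map at the coefficient resolution (half the frame size for one level)."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    feats = np.stack([cA, cH, cV, cD], axis=-1)
    return {r: feats[labels == r] for r in np.unique(labels)}

def fit_background(frames, labels, K=3):
    """Fit one K-mode Gaussian mixture per background region."""
    samples = {}
    for frame in frames:
        for r, x in region_features(frame, labels).items():
            samples.setdefault(r, []).append(x)
    return {r: GaussianMixture(n_components=K).fit(np.vstack(chunks))
            for r, chunks in samples.items()}

def region_scores(frame, labels, models):
    """Mean log-likelihood per region; low values flag regions of interest."""
    return {r: models[r].score(x)
            for r, x in region_features(frame, labels).items()}
```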

    Multitarget Tracking in Nonoverlapping Cameras Using a Reference Set

    Tracking multiple targets across nonoverlapping cameras is challenging, since observations of the same target are often separated in time and space. There may also be significant appearance changes of a target across camera views caused by variations in illumination, pose and camera imaging characteristics; consequently, the same target may appear very different in two cameras. Associating tracks in different camera views directly on the basis of their appearance similarity is therefore difficult and prone to error. In most previous methods, the appearance similarity is computed either using color histograms or using a pretrained brightness transfer function that maps color between cameras. In this paper, a novel reference-set-based appearance model is proposed to improve multitarget tracking in a network of nonoverlapping cameras. In contrast to previous work, a reference set is constructed for each pair of cameras, containing subjects appearing in both camera views. For track association, instead of directly comparing the appearance of two targets in different camera views, they are compared indirectly via the reference set. Besides global color histograms, texture and shape features are extracted at different locations on a target, and AdaBoost is used to learn the discriminative power of each feature. Thorough experiments on two challenging real-world multicamera video data sets demonstrate the effectiveness of the proposed method over the state of the art.
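
    A small sketch of the indirect-matching idea: rather than comparing two targets' appearances directly, each is compared to the same reference set and the resulting similarity vectors are correlated. The color-histogram feature and cosine similarity below are simplifications; the paper additionally uses texture and shape features weighted by AdaBoost.

```python
# Hedged sketch of reference-set-based matching (not the paper's exact model).
import numpy as np

def hist_feature(img, bins=8):
    """Simple per-channel color histogram, L1-normalized."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / (h.sum() + 1e-9)

def similarity_vector(feat, reference_feats):
    """Cosine similarity of one target to every subject in the reference set."""
    ref = np.stack(reference_feats)
    return ref @ feat / (np.linalg.norm(ref, axis=1) * np.linalg.norm(feat) + 1e-9)

def indirect_score(feat_a, feat_b, ref_a, ref_b):
    """ref_a / ref_b hold the same reference subjects as seen in cameras A and B;
    two tracks match if they relate to the reference set in the same way."""
    sa = similarity_vector(feat_a, ref_a)
    sb = similarity_vector(feat_b, ref_b)
    return float(np.corrcoef(sa, sb)[0, 1])
```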

    Automated Optical Inspection and Image Analysis of Superconducting Radio-Frequency Cavities

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. To investigate the inner surface of more than 100 cavities produced during cavity fabrication for the European XFEL and the ILC HiGrade Research Project, the optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed, and new variables describing the cavity surface were obtained. The accuracy of this code is up to 97% and the PPV 99% within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties, such as the electron-beam welding speed and the differences in surface roughness due to the different chemical treatments. In addition, a correlation of ρ = −0.93, with a significance of 6σ, was found between an obtained surface variable and the maximal accelerating field.
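
    For reference, the reported figures combine standard detection metrics with a correlation analysis; the snippet below only illustrates how accuracy, positive predictive value (PPV) and a Pearson correlation are computed, using placeholder numbers rather than the paper's data.

```python
# Illustrative only: metric definitions and a Pearson correlation with SciPy.
import numpy as np
from scipy.stats import pearsonr

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def ppv(tp, fp):
    return tp / (tp + fp)

# Placeholder values, not measured data: correlation between a surface
# variable and the maximal accelerating field.
surface_var = np.array([0.9, 0.7, 0.6, 0.4, 0.3])
max_field = np.array([20.0, 24.0, 27.0, 31.0, 35.0])
rho, p_value = pearsonr(surface_var, max_field)
print(f"rho = {rho:.2f}, p = {p_value:.3g}")
```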

    Implementation of an Automated Image Processing System for Observing the Activities of Honey Bees

    This research designed and implemented an automated system to collect data on honey bees using computer science techniques. The system uses image processing to extract data from videos taken in front of or at the top of the hive's entrance. Several web-based applications are used to obtain temperature and humidity data from the National Weather Service to supplement the data collected locally at the hive. All the weather data and the data extracted from the images are stored in a MySQL database for analysis and are accessed by an iPhone app designed as part of this research.
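
    A minimal sketch of the weather-logging part of such a pipeline, under assumptions the abstract does not specify: the public api.weather.gov observation endpoint, a hypothetical `weather` table, and the mysql-connector-python driver. The paper's actual data sources and schema may differ.

```python
# Hedged sketch: fetch an NWS observation and store it in MySQL.
import requests
import mysql.connector

def fetch_observation(station_id):
    """Latest temperature (degC) and relative humidity (%) for an NWS station."""
    url = f"https://api.weather.gov/stations/{station_id}/observations/latest"
    props = requests.get(url, timeout=10).json()["properties"]
    return props["temperature"]["value"], props["relativeHumidity"]["value"]

def store_observation(conn, station_id, temperature, humidity):
    """Insert one row; the `weather` table and its columns are assumed."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO weather (station_id, temperature_c, humidity_pct) "
        "VALUES (%s, %s, %s)",
        (station_id, temperature, humidity),
    )
    conn.commit()

if __name__ == "__main__":
    conn = mysql.connector.connect(host="localhost", user="bees",
                                   password="...", database="hive_data")
    temp, hum = fetch_observation("KBNA")  # station ID is a placeholder
    store_observation(conn, "KBNA", temp, hum)
```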

    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street-view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating the performance of future metrics developed to evaluate the perceived quality of reconstructed background images. Comment: Associated source code: https://github.com/ashrotre/RBQI; associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email for permissions: ashrotreasuedu).
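
    The probability summation model mentioned above can be illustrated with a toy pooling rule: per-scale distortion-visibility probabilities are combined as P = 1 − ∏(1 − p_s), and quality decreases as P grows. The Weibull-style mapping and the exponent below are assumptions for illustration, not the published RBQI definition.

```python
# Toy illustration of probability summation across scales (not RBQI itself).
import numpy as np

def scale_probability(err, beta=3.5):
    """Map a per-scale error measure in [0, 1] to a detection probability."""
    return 1.0 - np.exp(-(err ** beta))

def pooled_quality(per_scale_errors):
    p = [scale_probability(e) for e in per_scale_errors]
    p_detect = 1.0 - np.prod([1.0 - pi for pi in p])
    return 1.0 - p_detect  # higher = better reconstructed background

# Example: small errors at coarse scales, a larger error at the finest scale.
print(pooled_quality([0.1, 0.2, 0.6]))
```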

    A Universal Background Subtraction System

    Background subtraction is one of the fundamental pre-processing steps in video processing. It helps to distinguish between foreground and background in any given image and thus has numerous applications, including security, privacy, surveillance and traffic monitoring, to name a few. Unfortunately, no single algorithm exists that can handle the various challenges associated with background subtraction, such as illumination changes, dynamic backgrounds and camera jitter. In this work, we propose a Multiple Background Model based Background Subtraction (MB2S) system, which is universal in nature and robust against real-life challenges associated with background subtraction. It creates multiple background models of the scene, followed by both pixel-based and frame-based binary classification in both the RGB and YCbCr color spaces. The masks generated after processing these input images are then combined in a framework to classify background and foreground pixels. A comprehensive evaluation of the proposed approach on publicly available test sequences shows the superiority of our system over other state-of-the-art algorithms.
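
    The mask-combination step can be sketched as a simple vote across the binary masks produced by the different background models and color spaces; the majority rule below is an assumption, as the abstract does not state how the masks are fused.

```python
# Hedged sketch: fuse foreground masks from several background models by voting.
import numpy as np

def fuse_masks(masks, min_votes=None):
    """masks: list of HxW boolean arrays (e.g. pixel/frame models in RGB/YCbCr).
    A pixel is foreground if at least `min_votes` masks agree (default: majority)."""
    stack = np.stack(masks).astype(np.uint8)
    if min_votes is None:
        min_votes = len(masks) // 2 + 1
    return stack.sum(axis=0) >= min_votes

# e.g. fg = fuse_masks([m_rgb_pixel, m_rgb_frame, m_ycbcr_pixel, m_ycbcr_frame])
```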