Moving Object Detection in Wavelet Compressed Video
In many surveillance systems the video is stored in wavelet compressed form. In this paper, an algorithm for moving object and region detection in video compressed using a wavelet transform (WT) is developed. The algorithm estimates the WT of the background scene from the WTs of past image frames of the video. The WT of the current image is compared with the WT of the background, and moving objects are determined from the difference. The algorithm does not perform an inverse WT to obtain the actual pixels of either the current image or the estimated background. This leads to a computationally efficient method and system compared to existing motion estimation methods.
© 2005 Published by Elsevier B.V.
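The background-estimation-and-differencing idea described above can be illustrated with a minimal numpy sketch. This is not the paper's algorithm: the Haar transform, the exponential background update rate `alpha`, and the `k`-sigma threshold are all illustrative assumptions standing in for the paper's unspecified choices.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns stacked (LL, LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return np.stack([(a + b + c + d) / 4.0,   # LL: local average
                     (a + b - c - d) / 4.0,   # LH: horizontal detail
                     (a - b + c - d) / 4.0,   # HL: vertical detail
                     (a - b - c + d) / 4.0])  # HH: diagonal detail

def detect_moving(frames, alpha=0.05, k=3.0):
    """Wavelet-domain background subtraction sketch.

    Background wavelet coefficients are tracked with a running average;
    current-frame coefficients deviating by more than k standard deviations
    of the difference are flagged as moving regions. No inverse transform
    is ever computed, mirroring the efficiency argument in the abstract.
    """
    bg = None
    masks = []
    for f in frames:
        coeffs = haar2d(np.asarray(f, dtype=float))
        if bg is None:
            bg = coeffs.copy()
        diff = np.abs(coeffs - bg)
        thresh = k * diff.std() + 1e-9
        mask = (diff > thresh).any(axis=0)       # moving if any subband deviates
        bg = (1 - alpha) * bg + alpha * coeffs   # slow background update
        masks.append(mask)
    return masks
```

Note that each mask lives at subband resolution (half the frame size per level), which is usually sufficient for region-level detection.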
Moving object detection and tracking in wavelet compressed video
In many surveillance systems the video is stored in wavelet compressed form. An algorithm for moving object and region detection in video compressed using a wavelet transform (WT) is developed. The algorithm estimates the WT of the background scene from the WTs of past image frames of the video. The WT of the current image is compared with the WT of the background, and moving objects are determined from the difference. The algorithm does not perform an inverse WT to obtain the actual pixels of either the current image or the estimated background. This leads to a computationally efficient method and system compared to existing motion estimation methods. In a second aspect, the sizes and locations of moving objects and regions in video are estimated from the wavelet coefficients of the current image that differ from the estimated background wavelet coefficients. This is possible because wavelet coefficients of an image carry both frequency and space information. In this way, we are able to track the detected objects in video. Another feature of the algorithm is that it can determine slowing objects in video, which is important in many practical applications including highway monitoring and queue control.
Töreyin, Behçet Uğur. M.S.
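The localization aspect rests on the fact, noted in the abstract, that wavelet coefficients retain spatial position: a coefficient at (i, j) in a level-L subband corresponds to a 2^L x 2^L pixel block anchored at (i*2^L, j*2^L). A small sketch of that mapping (the function name and bounding-box convention are assumptions, not the thesis's notation):

```python
import numpy as np

def bbox_from_subband_mask(mask, level=1):
    """Map flagged wavelet coefficients back to an image-space bounding box.

    mask: boolean array over one subband, True where coefficients differ
    from the background. Returns (y0, x0, y1, x1) in pixel coordinates,
    or None if nothing is flagged.
    """
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    s = 2 ** level                     # each coefficient covers an s x s block
    return (ys.min() * s, xs.min() * s,
            (ys.max() + 1) * s - 1, (xs.max() + 1) * s - 1)
```

Tracking then reduces to associating such boxes across frames; a box that shrinks in frame-to-frame displacement while remaining flagged would indicate a slowing object.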
A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain
Detecting camouflaged moving foreground objects has been known to be
difficult due to the similarity between the foreground objects and the
background. Conventional methods cannot distinguish the foreground from
background due to the small differences between them and thus suffer from
under-detection of the camouflaged foreground objects. In this paper, we
present a fusion framework to address this problem in the wavelet domain. We
first show that the small differences in the image domain can be highlighted in
certain wavelet bands. Then the likelihood of each wavelet coefficient being
foreground is estimated by formulating foreground and background models for
each wavelet band. The proposed framework effectively aggregates the
likelihoods from different wavelet bands based on the characteristics of the
wavelet transform. Experimental results demonstrated that the proposed method
significantly outperformed existing methods in detecting camouflaged foreground
objects. Specifically, the average F-measure for the proposed algorithm was
0.87, compared with 0.71 to 0.80 for the other state-of-the-art methods. Comment: 13 pages, accepted by IEEE TI
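The per-band modelling and fusion described above can be sketched in numpy. This is a simplification, not the paper's method: a single Gaussian background model per band and a weighted average stand in for the paper's foreground/background formulation and its transform-aware fusion rule.

```python
import numpy as np

def band_likelihoods(band, bg_mean, bg_std):
    """Probability that each coefficient in one wavelet band is foreground,
    under a Gaussian background model: small image-domain differences can
    produce large normalized deviations in the right band."""
    z = np.abs(band - bg_mean) / (bg_std + 1e-9)
    return 1.0 - np.exp(-0.5 * z ** 2)   # near 1 when far from background

def fuse(likelihoods, weights=None):
    """Aggregate per-band foreground likelihoods into one map; a weighted
    average stands in for the paper's fusion framework."""
    L = np.stack(likelihoods)
    if weights is None:
        weights = np.ones(len(L)) / len(L)
    return np.tensordot(weights, L, axes=1)
```

Unequal weights would let bands known to highlight camouflage differences dominate the fused map.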
Frequency-modulated continuous-wave LiDAR compressive depth-mapping
We present an inexpensive architecture for converting a frequency-modulated
continuous-wave LiDAR system into a compressive-sensing based depth-mapping
camera. Instead of raster scanning to obtain depth-maps, compressive sensing is
used to significantly reduce the number of measurements. Ideally, our approach
requires two difference detectors, but it can operate with only one at the cost
of doubling the number of measurements. Due to the large flux entering the
detectors, the signal amplification from heterodyne detection, and the effects
of background subtraction from compressive sensing, the system can obtain
higher signal-to-noise ratios than detector-array based schemes while scanning
a scene faster than is possible through raster-scanning. Moreover, we show how
a single total-variation minimization and two fast least-squares minimizations,
instead of a single complex nonlinear minimization, can efficiently recover
high-resolution depth-maps with minimal computational overhead. By
efficiently storing only data points from measurements of a
pixel scene, we can easily extract depths by solving only two linear equations
with efficient convex-optimization methods.
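The depth information in FMCW LiDAR comes from the beat frequency between the outgoing chirp and the delayed return, a standard relation independent of the compressive-sensing machinery above. A minimal sketch (function and parameter names are ours, not the paper's):

```python
C = 299_792_458.0  # speed of light, m/s

def beat_to_range(f_beat_hz, bandwidth_hz, chirp_period_s):
    """Convert an FMCW beat frequency to target range.

    The chirp sweeps bandwidth B over period T, so its slope is S = B / T.
    A target at range R delays the return by tau = 2R / c, producing a beat
    frequency f = S * tau. Solving for R gives R = c * f * T / (2 * B).
    """
    return C * f_beat_hz * chirp_period_s / (2.0 * bandwidth_hz)
```

For example, with a 1 GHz sweep over 1 ms, a beat frequency of 66.7 kHz corresponds to a range of about 10 m.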
Structured Sparse Modelling with Hierarchical GP
In this paper a new Bayesian model for sparse linear regression with a
spatio-temporal structure is proposed. It incorporates structural
assumptions via a hierarchical Gaussian process prior on the spike-and-slab
coefficients. We design an inference algorithm based on Expectation Propagation
and evaluate the model on real data. Comment: SPARS 201
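The key idea of the prior above is that a Gaussian process controls where the spikes (non-zero coefficients) occur, so the support is spatially correlated rather than independent across coefficients. A simplified generative sketch of such a prior (the squared-exponential kernel, sigmoid link, and all parameter values are illustrative assumptions; the paper's inference uses Expectation Propagation, which is not shown here):

```python
import numpy as np

def sample_structured_spike_slab(n=50, length_scale=5.0, slab_std=1.0, seed=0):
    """Draw one coefficient vector from a simplified structured
    spike-and-slab prior: a GP sample, squashed through a sigmoid, gives
    smoothly varying inclusion probabilities, so non-zeros cluster."""
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale ** 2)
    g = rng.multivariate_normal(np.zeros(n), K + 1e-6 * np.eye(n))
    p = 1.0 / (1.0 + np.exp(-g))        # GP sample -> inclusion probabilities
    support = rng.random(n) < p         # spike indicators (correlated Bernoulli)
    slab = rng.normal(0.0, slab_std, n) # slab values for included coefficients
    return support * slab               # exact zeros off-support
```

Because `p` varies smoothly along `x`, included coefficients tend to appear in contiguous runs, which is the structural assumption the model exploits.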
Streaming Aerial Video Textures
We present a streaming compression algorithm for huge time-varying aerial imagery. New airborne optical sensors are capable of collecting billion-pixel images at multiple frames per second. These images must be transmitted through a low-bandwidth pipe requiring aggressive compression techniques. We achieve such compression by treating foreground portions of the imagery separately from background portions. Foreground information consists of moving objects, which form a tiny fraction of the total pixels. Background areas are compressed effectively over time using streaming wavelet analysis to compute a compact video texture map that represents several frames of raw input images. This map can be rendered efficiently using an algorithm amenable to GPU implementation. The core algorithmic contributions of this work are methods for fast, low-memory streaming wavelet compression and efficient display of wavelet video textures resulting from such compression.
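The temporal side of the background-compression idea can be sketched with a one-level Haar transform along the time axis: a static background concentrates its energy in the temporal averages, so the detail coefficients compress well. This numpy sketch is an illustration of the principle only; the paper's streaming, low-memory formulation and GPU rendering are not reproduced.

```python
import numpy as np

def temporal_haar_compress(stack, keep_frac=0.1):
    """Compress a (T, H, W) frame stack (even T) along time.

    Each pixel's time series gets a one-level Haar split into averages
    (temporal low-pass) and details (temporal high-pass); only the
    largest-magnitude details are kept, since static background pixels
    produce near-zero details.
    """
    even = stack[0::2].astype(float)
    odd = stack[1::2].astype(float)
    avg = (even + odd) / 2.0              # temporal averages
    det = (even - odd) / 2.0              # temporal details
    flat = np.abs(det).ravel()
    k = max(1, int(keep_frac * flat.size))
    cut = np.partition(flat, -k)[-k]      # magnitude threshold for top-k
    sparse_det = np.where(np.abs(det) >= cut, det, 0.0)
    return avg, sparse_det
```

Reconstruction inverts the split (even frames are `avg + det`, odd frames `avg - det`), exactly where details were kept and approximately elsewhere.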
Can we ID from CCTV? Image quality in digital CCTV and face identification performance
CCTV is used for an increasing number of purposes, and the new generation of digital systems can be tailored to serve a wide range of security requirements. However, configuration decisions are often made without considering specific task requirements, e.g. the video quality needed for reliable person identification. Our study investigated the relationship between video quality and the ability of untrained viewers to identify faces from digital CCTV images. The task required 80 participants to identify 64 faces belonging to 4 different ethnicities. Participants compared face images taken from high-quality photographs and low-quality CCTV stills, which were recorded at 4 different video quality bit rates (32, 52, 72 and 92 Kbps). We found that the number of correct identifications decreased by 12 (~18%) as MPEG-4 quality decreased from 92 to 32 Kbps, and by 4 (~6%) as Wavelet video quality decreased from 92 to 32 Kbps. To achieve reliable and effective face identification, we recommend that MPEG-4 CCTV systems be used over Wavelet, and that video quality not be lowered below 52 Kbps during video compression. We discuss the practical implications of these results for security, and contribute a contextual methodology for assessing CCTV video quality.