427,487 research outputs found
An Adaptive Dictionary Learning Approach for Modeling Dynamical Textures
Video representation is an important and challenging task in the computer
vision community. In this paper, we assume that image frames of a moving scene
can be modeled as a Linear Dynamical System. We propose a sparse coding
framework, named adaptive video dictionary learning (AVDL), to model a video
adaptively. The developed framework is able to capture the dynamics of a moving
scene by exploring both sparse properties and the temporal correlations of
consecutive video frames. The proposed method is compared with
state-of-the-art video processing methods on several benchmark data
sequences that exhibit appearance changes and heavy occlusions.
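The core alternation behind dictionary-learning methods of this kind can be sketched compactly. The following is a minimal illustration, not AVDL itself: it assumes a plain ISTA sparse-coding step and a gradient dictionary update, and omits the temporal-correlation terms that distinguish the proposed framework; all names and parameters are illustrative.

```python
import numpy as np

def ista_sparse_code(D, X, lam=0.1, n_iter=50):
    """Sparse codes A minimizing 0.5*||X - D A||_F^2 + lam*||A||_1 via ISTA.

    Illustrative assumption: AVDL's actual solver and its
    temporal-correlation terms are not reproduced here.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)           # gradient of the quadratic term
        A = A - grad / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-threshold
    return A

def dictionary_update(D, X, A, step=1e-2):
    """One gradient step on the reconstruction error, columns renormalized."""
    D = D - step * (D @ A - X) @ A.T
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

# Toy usage: frames flattened to columns of X, dictionary with 64 atoms.
rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 40))        # 40 frames of 32x32 pixels
D = rng.standard_normal((1024, 64))
D /= np.linalg.norm(D, axis=0, keepdims=True)
for _ in range(10):                        # alternate coding and update
    A = ista_sparse_code(D, X)
    D = dictionary_update(D, X, A)
```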
Moving Image Preservation and Cultural Capital
This article examines the changing landscape of moving image archiving
in the wake of recent developments in online video sharing
services such as YouTube and Google Video. The most crucial change
to moving image archives may not be in regard to the collections
themselves, but rather the social order that sustains cultural institutions
in their role as the creators and sustainers of objectified cultural
capital. In the future, moving image stewardship may no longer be
the exclusive province of institutions such as archives and libraries,
and may soon be accomplished in part through the work of other
interested individuals and organizations as they contribute to and
define collections. The technologies being built and tested in the
current Internet environment offer a new model for the reimagined
moving image archive, which foregrounds the user in the process of
creating the archive and strongly encourages the appropriation of
moving images for new works. This new archetype, which in theory
functions on democratic principles, considers moving images, along
with most other types of cultural heritage material, to be building
blocks of creative acts or public speech acts. One might argue that
the latter represents a new model for creating an archive; this new
democratic archive documents and facilitates social discourse.
In Girum (version/round 1.3, 2008) – Dir. Nick Cope: Video/DVD in collaboration with Composer Tim Howle, 6’05”. Awards: Abstracta International Abstract Cinema Exhibition, Rome, August 2009 – Honourable Mention of the Jury.
Short film/video exploring the encounter between electroacoustic music composition and moving image practice.
Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution
Image and video quality in Long Range Observation Systems (LOROS) suffers from
atmospheric turbulence that causes small neighbourhoods in image frames to
chaotically move in different directions and substantially hampers visual
analysis of such image and video sequences. The paper presents a real-time
algorithm for perfecting turbulence degraded videos by means of stabilization
and resolution enhancement. The latter is achieved by exploiting the turbulent
motion. The algorithm involves generating a reference frame; estimating,
for each incoming video frame, a local image displacement map with respect
to the reference frame; segmenting the displacement map into two classes,
stationary and moving objects; and enhancing the resolution of stationary
objects while preserving real motion. Experiments with synthetic and real-life
sequences have shown that the enhanced videos, generated in real time, exhibit
substantially better resolution and complete stabilization for stationary
objects while retaining real motion.

Comment: Submitted to The Seventh IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2007), August 2007, Palma de Mallorca, Spain.
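As a rough illustration of the pipeline's structure (reference frame, per-frame displacement map, segmentation into stationary and moving regions), here is a sketch that assumes OpenCV's Farneback optical flow in place of the paper's displacement estimator, and a running temporal average standing in for the resolution-enhancement step; both substitutions are assumptions, not the authors' method.

```python
import cv2
import numpy as np

def stabilize_turbulent(frames, motion_thresh=1.5):
    """Sketch: frames is a list of grayscale uint8 images.

    Assumptions: Farneback flow approximates the displacement map; a
    running average of stabilized frames stands in for super-resolution.
    """
    ref = np.median(np.stack(frames), axis=0).astype(np.uint8)  # reference frame
    h, w = ref.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    acc = np.zeros(ref.shape, np.float64)
    out = []
    for f in frames:
        flow = cv2.calcOpticalFlowFarneback(ref, f, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        moving = mag > motion_thresh            # segment the displacement map
        # Pull each pixel back along its estimated displacement (stabilization).
        stab = cv2.remap(f, (gx + flow[..., 0]).astype(np.float32),
                         (gy + flow[..., 1]).astype(np.float32),
                         cv2.INTER_LINEAR)
        acc += stab                             # temporal average sharpens
        avg = (acc / (len(out) + 1)).astype(np.uint8)
        out.append(np.where(moving, f, avg))    # keep real motion untouched
    return out
```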
Autonomous real-time surveillance system with distributed IP cameras
An autonomous Internet Protocol (IP) camera based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image
processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects
moving close to each other are also detected to extract their trajectories, which are then fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions, such as tracking and generating alerts when objects enter/leave regions or cross tripwires superimposed on the live video by the operator.
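A generic version of this detect-track-alert loop can be sketched with off-the-shelf components. The sketch below assumes OpenCV's MOG2 background subtractor (which labels shadows separately, echoing the shadow detection described above), a contour-centroid tracker and a single horizontal tripwire; none of this is the ABORAT implementation, and all thresholds are illustrative.

```python
import cv2

# Assumption: MOG2 stands in for ABORAT's detector. With detectShadows=True
# shadows are labeled 127 in the mask, foreground 255.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
TRIPWIRE_Y = 240                               # illustrative tripwire row

def process_frame(frame, prev_centroids):
    mask = subtractor.apply(frame)
    _, objects = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(objects, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    alerts, centroids = [], []
    for c in contours:
        if cv2.contourArea(c) < 100:           # ignore small noise blobs
            continue
        m = cv2.moments(c)
        cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
        centroids.append((cx, cy))
        for px, py in prev_centroids:          # crude association by proximity
            if abs(px - cx) < 30 and (py - TRIPWIRE_Y) * (cy - TRIPWIRE_Y) < 0:
                alerts.append((cx, cy))        # trajectory crossed the tripwire
    return centroids, alerts
```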
Vehicle Speed Measurement and Number Plate Detection using Real Time Embedded System
A real-time system is proposed to detect moving vehicles that violate the speed limit. A dedicated digital signal processing chip is used to apply computationally inexpensive image-processing techniques to the video sequence captured from a fixed-position video camera in order to estimate the speed of the moving vehicles. The moving vehicles are detected by analysing the binary image sequences that are constructed from the captured frames by employing the inter-frame difference or background subtraction techniques. The detected moving vehicles are tracked to estimate their speeds. This project deals with the tracking and following of a single object in a sequence of frames, and the velocity of the object is determined. The proposed method differs from existing methods in its tracking of moving objects, velocity determination and number plate detection. From the binary image generated, the moving vehicle is tracked using image segmentation of the video frames. The segmentation is done by applying thresholding and morphological operations to the video frames. The object is visualized and its centroid is calculated. The distance it moves from frame to frame is stored, and from this the velocity is calculated using the frame rate of the video. The images of the speeding vehicles are further analysed to detect license plate image regions. The entire simulation is done in MATLAB and Simulink. Keywords: morphological; thresholding; segmentation; centroid
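The speed computation described above (binary motion mask, morphology, centroid, displacement scaled by frame rate) reduces to a few lines. The sketch below uses Python with OpenCV rather than MATLAB/Simulink, and the METERS_PER_PIXEL calibration constant is a placeholder assumption.

```python
import cv2
import numpy as np

FPS = 30.0                  # camera frame rate
METERS_PER_PIXEL = 0.05     # illustrative calibration constant (assumption)

def vehicle_speed(prev_gray, curr_gray, prev_centroid):
    """Speed from centroid displacement between consecutive frames,
    following the inter-frame-difference route described above."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove noise
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill holes
    m = cv2.moments(binary)
    if m['m00'] == 0:
        return None, prev_centroid              # no moving object detected
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    if prev_centroid is None:
        return None, (cx, cy)                   # need two frames for a speed
    dx, dy = cx - prev_centroid[0], cy - prev_centroid[1]
    pixels = (dx * dx + dy * dy) ** 0.5
    speed_mps = pixels * METERS_PER_PIXEL * FPS  # distance per frame x fps
    return speed_mps, (cx, cy)
```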
Silhouette coverage analysis for multi-modal video surveillance
To improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information provided by visual, thermal and/or depth imaging sensors.
The multi-modal object detector of the system can be split into two consecutive parts: registration and coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, Cartesian-to-polar transformation and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross-correlation and contour scaling. Finally, the translation between the images is calculated by maximizing the binary correlation.
The silhouette coverage analysis also starts with moving object silhouette extraction. Then, it uses the registration information, i.e., rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analyzed over time to reduce false alarms and to improve object detection.
Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising. This paper shows that merging information from multi-modal video further improves detection results.
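The contour-vector and circular cross-correlation steps of the registration can be illustrated briefly. The sketch below assumes a simple angle-binned radial signature and an FFT-based circular correlation; it is a generic stand-in, not the paper's exact estimator. The scale factor could analogously be taken from the ratio of the two signatures' mean radii.

```python
import numpy as np

def contour_signature(mask, n_bins=360):
    """1D radial contour vector: distance from the silhouette centroid to its
    boundary, sampled over angle (the polar-transform step described above)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)
    radii = np.hypot(ys - cy, xs - cx)
    sig = np.zeros(n_bins)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    np.maximum.at(sig, bins, radii)           # outermost point per angle bin
    return sig

def rotation_between(sig_a, sig_b):
    """Rotation angle via circular cross-correlation of the two signatures,
    computed with FFTs; a sketch, not the paper's exact estimator."""
    corr = np.fft.ifft(np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))).real
    shift = int(np.argmax(corr))              # circular lag aligning b to a
    return 360.0 * shift / len(sig_a)         # degrees, given 360 bins
```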