SAVASA project @ TRECVID 2012: interactive surveillance event detection
In this paper we describe our participation in the interactive surveillance event detection task at TRECVid 2012. The system we developed comprised individual classifiers brought together behind a simple video search interface that enabled users to select relevant segments based on down-sampled animated GIFs. Two types of user, 'experts' and 'end users', performed the evaluations. Due to time constraints we focused on three events (ObjectPut, PersonRuns and Pointing) and two of the five available cameras (1 and 3). Results from the interactive runs are presented, along with a discussion of the performance of the underlying retrospective classifiers.
Particle-Filter-Based Intelligent Video Surveillance System
In this study, an intelligent video surveillance (IVS) system is designed based on the particle filter. The designed IVS system can gather information on the number of persons in the area and the area's hot spots. First, the Gaussian mixture background model is utilized to detect moving objects by background subtraction. A moving object appearing at the margin of the video frame is considered a new person. A new particle filter is then assigned to track each new person when they are detected, and a particle filter is cancelled when the corresponding tracked person leaves the video frame. Moreover, the Kalman filter is utilized to estimate the position of a person while they are occluded. Information on the number of persons in the area and the hot spots is gathered by tracking persons in the video frame. Finally, a user interface is designed to feed back the gathered information to users of the IVS system. By applying the proposed IVS system, the workload of security guards can be reduced. Moreover, through hot spot analysis, a business operator can understand customer habits in order to plan traffic flow and adjust product placement to improve the customer experience.
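The detect-then-track loop described above can be sketched as a bootstrap particle filter. The random-walk motion model, noise levels, particle count, and scene coordinates below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    tracking a 2-D position (a minimal sketch, not the paper's exact filter)."""
    # Predict: random-walk motion model (assumed).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by a Gaussian measurement likelihood.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_std ** 2))
    weights /= weights.sum()
    # Resample: draw particles in proportion to weight, reset to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a (hypothetical) person standing near pixel (100, 50).
particles = rng.uniform(0.0, 200.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    particles, weights = particle_filter_step(
        particles, weights, np.array([100.0, 50.0]))
estimate = particles.mean(axis=0)  # posterior mean position
```

In the full system described above, the measurement would come from the background-subtraction foreground mask rather than a fixed point, and a Kalman filter would take over during occlusions.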
Controlling a remotely located Robot using Hand Gestures in real time: A DSP implementation
Telepresence is a present-day necessity, since we cannot be everywhere at once, and it can help save human lives in dangerous places. A robot that can be controlled from a distant location, whether over radio links or networking methods, addresses these problems. Control should also be smooth and in real time so that the robot can act effectively on every minor signal. This paper discusses a method to control a robot over the network from a distant location. The robot was controlled by hand gestures captured by a live camera. A TMS320DM642EVM DSP board was used to implement image pre-processing and to speed up the whole system. PCA was used for gesture classification, and robot actuation was carried out according to predefined procedures. In the experiment, the classification information was sent over the network. This method is robust and could be used to control any kind of robot at a distance.
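The PCA-based gesture classification step can be sketched on toy data. The gesture names, feature dimensionality, class structure, and nearest-neighbour decision rule below are illustrative assumptions rather than the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_pca(X, n_components):
    """PCA via SVD of the mean-centred data matrix; returns the mean
    and the top principal axes as a projection basis."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def classify(x, mean, basis, train_proj, train_labels):
    """Nearest-neighbour classification in the PCA subspace."""
    z = basis @ (x - mean)
    d = np.linalg.norm(train_proj - z, axis=1)
    return train_labels[np.argmin(d)]

# Toy "gesture images": two hypothetical classes of 64-pixel feature vectors.
stop = rng.normal(0.0, 0.1, (20, 64)); stop[:, :32] += 1.0  # "stop" gesture
go = rng.normal(0.0, 0.1, (20, 64));   go[:, 32:] += 1.0    # "go" gesture
X = np.vstack([stop, go])
y = np.array(["stop"] * 20 + ["go"] * 20)

mean, basis = fit_pca(X, n_components=5)
train_proj = (X - mean) @ basis.T

sample = rng.normal(0.0, 0.1, 64); sample[:32] += 1.0  # unseen "stop" sample
label = classify(sample, mean, basis, train_proj, y)
```

On the DSP board, the learned basis would be fixed offline and only the projection and distance computation run per frame, which keeps the per-gesture cost low.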
Multispectral object segmentation and retrieval in surveillance video
This paper describes a system for object segmentation and feature extraction for surveillance video. Segmentation is performed by a dynamic vision system that fuses information from thermal infrared video with standard CCTV video in order to detect and track objects. Separate background modelling in each modality and dynamic mutual information based thresholding are used to provide initial foreground candidates for tracking. The belief in the validity of these candidates is ascertained using knowledge of foreground pixels and temporal linking of candidates. The transferable belief model is used to combine these sources of information and segment objects. Extracted objects are subsequently tracked using adaptive thermo-visual appearance models. In order to facilitate search and classification of objects in large archives, retrieval features from both modalities are extracted for tracked objects. Overall system performance is demonstrated in a simple retrieval scenario.
K-Space at TRECVid 2007
In this paper we describe K-Space participation in
TRECVid 2007. K-Space participated in two tasks, high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as Face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVM). Finally we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance.
The first of the two systems was a "shot"-based interface, where the results from a query were presented as a ranked list of shots. The second interface was "broadcast"-based, where results were presented as a ranked list of broadcasts.
Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
Online video streaming for human tracking based on weighted resampling particle filter
© 2018 The Authors. Published by Elsevier Ltd. This paper proposes a weighted resampling method for a particle filter applied to human tracking on an active camera. The proposed system consists of three major parts: human detection, human tracking, and camera control. A codebook matching algorithm is used to extract the human region in the human detection stage, and the particle filter algorithm estimates the position of the human in every input image. During resampling, the proposed system selects the particles with high weights, because they provide more accurate tracking features. Moreover, a proportional-integral-derivative (PID) controller steers the active camera by minimizing the difference between the centre of the image and the position of the object obtained from the particle filter. The proposed system converts this position difference into a pan-tilt speed to drive the active camera and keep the human in the camera's field of view (FOV). Because the intensity of the image changes over time while tracking, the proposed system uses a Gaussian mixture model (GMM) to update the human feature model. In addition, the temporal occlusion problem is solved using feature similarity and the resampled particles. Since the particle filter estimates the position of the human in every input frame, the active camera is driven smoothly. The accuracy and robustness of the proposed system's tracking can be seen in the experimental results.
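The PID camera control described above can be sketched as follows. The gains, frame width, and tracked position are illustrative assumptions, not values from the paper:

```python
class PID:
    """Minimal PID controller converting a pixel error into a pan speed
    (a sketch of the control loop described above; gains are illustrative)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term and differentiate the error.
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive pan speed from the horizontal offset between the tracked person
# and the image centre (a 640-pixel-wide frame at 30 fps is assumed).
pan = PID(kp=0.05, ki=0.001, kd=0.01)
center_x = 320
target_x = 400                      # person's x from the particle filter
speed = pan.step(target_x - center_x, dt=1 / 30)
```

A positive output pans the camera toward the person; in the full system the analogous tilt loop and the pan-tilt speed limits of the camera would also apply.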
- …