A Neural System for Automated CCTV Surveillance
This paper overviews a new system, the “Owens
Tracker,” for automated identification of suspicious
pedestrian activity in a car-park.
Centralized CCTV systems relay multiple video streams
to a central point for monitoring by an operator. The
operator receives a continuous stream of information,
mostly related to normal activity, making it difficult to
maintain concentration at a sufficiently high level.
While it is difficult to place quantitative boundaries on
the number of scenes and time period over which
effective monitoring can be performed, Wallace and
Diffley [1] give some guidance, based on empirical and
anecdotal evidence, suggesting that the number of
cameras monitored by an operator be no greater than 16,
and that the period of effective monitoring may be as
low as 30 minutes before recuperation is required.
An intelligent video surveillance system should
therefore act as a filter, suppressing inactive scenes and
scenes showing normal activity. By presenting the
operator only with unusual activity, his or her attention is
effectively focussed, and the ratio of cameras to
operators can be increased.
The Owens Tracker learns to recognize environment-specific
normal behaviour, and refers sequences of
unusual behaviour for operator attention. The system
was developed using standard low-resolution CCTV
cameras operating in the car-parks of Doxford Park
Industrial Estate (Sunderland, Tyne and Wear), and
targets unusual pedestrian behaviour.
The modus operandi of the system is to highlight
excursions from a learned model of normal behaviour in
the monitored scene. The system tracks objects and
extracts their centroids; behaviour is defined as the
trajectory traced by an object centroid; normality as the
trajectories typically encountered in the scene. The
essential stages in the system are: segmentation of
objects of interest; disambiguation and tracking of
multiple contacts, including the handling of occlusion
and noise, and successful tracking of objects that
“merge” during motion; identification of unusual
trajectories. These three stages are discussed in more
detail in the following sections, and the system
performance is then evaluated.
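The trajectory-based notion of normality described above can be sketched in code. The following is a toy illustration, not the Owens Tracker's actual learned model: trajectories are resampled to a fixed length, and a trajectory is flagged as unusual when its mean distance to the nearest normal exemplar exceeds a threshold (both the resampling length and the threshold are assumptions).

```python
import math

def resample(traj, n=8):
    """Resample a centroid trajectory [(x, y), ...] to n evenly spaced points."""
    if len(traj) == 1:
        return traj * n
    out = []
    for i in range(n):
        t = i * (len(traj) - 1) / (n - 1)
        j = int(t)
        f = t - j
        x0, y0 = traj[j]
        x1, y1 = traj[min(j + 1, len(traj) - 1)]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out

def traj_distance(a, b, n=8):
    """Mean point-to-point distance between two resampled trajectories."""
    ra, rb = resample(a, n), resample(b, n)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / n

def is_unusual(traj, normal_trajs, threshold):
    """Flag a trajectory whose nearest normal exemplar is farther than threshold."""
    return min(traj_distance(traj, t) for t in normal_trajs) > threshold
```

A trajectory parallel to a learned exemplar scores a small distance and passes; one crossing a region never seen in training exceeds the threshold and is referred to the operator.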
Tree models for difference and change detection in a complex environment
A new family of tree models is proposed, which we call "differential trees."
A differential tree model is constructed from multiple data sets and aims to
detect distributional differences between them. The new methodology differs
from the existing difference and change detection techniques in its
nonparametric nature, model construction from multiple data sets, and
applicability to high-dimensional data. Through a detailed study of an arson
case in New Zealand, where an individual is known to have been laying
vegetation fires within a certain time period, we illustrate how these models
can help detect changes in the frequencies of event occurrences and uncover
unusual clusters of events in a complex environment.
Comment: Published at http://dx.doi.org/10.1214/12-AOAS548 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
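The core idea of detecting distributional differences between data sets via tree splits can be illustrated with a toy one-level "differential stump". This is only a sketch of the flavour of the approach, not the paper's estimator: events from two data sets are pooled with labels, and the axis-aligned split maximizing the difference in label proportions between its two sides is chosen.

```python
def best_differential_split(points_a, points_b):
    """
    Toy one-level 'differential stump' over 2D points: find the
    axis-aligned threshold whose left and right sides differ most in
    the proportion of points drawn from data set A versus data set B.
    Returns (axis, threshold, score), where score is the absolute
    difference in B-fractions between the two sides.
    """
    data = [(p, 0) for p in points_a] + [(p, 1) for p in points_b]
    best = (0, None, -1.0)
    for axis in (0, 1):
        for p, _ in data:
            thr = p[axis]
            left = [lab for q, lab in data if q[axis] <= thr]
            right = [lab for q, lab in data if q[axis] > thr]
            if not left or not right:
                continue
            # difference in the fraction of B-points on each side
            score = abs(sum(left) / len(left) - sum(right) / len(right))
            if score > best[2]:
                best = (axis, thr, score)
    return best
```

A high score indicates a region of the space where the two data sets differ in event frequency; recursing on each side would grow a full tree.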
Video-based evidence analysis and extraction in digital forensic investigation
As a result of the popularity of smart mobile devices and the low cost of surveillance systems, visual data are increasingly being used in digital forensic investigation. Digital videos have been widely used as key evidence sources in evidence identification, analysis, presentation, and reporting. The main goal of this paper is to develop advanced forensic video analysis techniques to assist the forensic investigation. We first propose a forensic video analysis framework that employs an efficient video/image enhancing algorithm for the analysis of low-quality footage. An adaptive video enhancement algorithm based on contrast limited adaptive histogram equalization (CLAHE) is introduced to improve closed-circuit television (CCTV) footage quality for use in digital forensic investigation. To assist the video-based forensic analysis, a deep-learning-based object detection and tracking algorithm is proposed that can detect and identify potential suspects and tools from footage.
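The contrast-limiting step that distinguishes CLAHE from plain histogram equalization can be sketched for a single grayscale tile. This is a minimal stdlib illustration of the principle, not the paper's implementation (which would operate on overlapping tiles with bilinear interpolation): histogram bins above a clip limit are truncated and the excess redistributed, bounding the slope of the equalization mapping and hence the noise amplification.

```python
def clahe_tile(tile, clip_limit=4, levels=256):
    """
    Contrast-limited histogram equalization for one grayscale tile
    (a list of rows of ints in [0, levels)). Bins above clip_limit
    are clipped and the excess redistributed uniformly (remainder
    dropped in this sketch), which bounds the mapping's slope.
    """
    pixels = [v for row in tile for v in row]
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    # clip and redistribute excess counts
    excess = sum(max(0, h - clip_limit) for h in hist)
    hist = [min(h, clip_limit) for h in hist]
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # cumulative distribution -> intensity mapping
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [[round(cdf[v] * scale) for v in row] for row in tile]
```

A low-contrast tile (values clustered in a narrow band) is stretched across the full intensity range, while the clip limit prevents near-uniform regions from being amplified into noise.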
Modelling of content-aware indicators for effective determination of shot boundaries in compressed MPEG videos
In this paper, a content-aware approach is proposed to design multiple test conditions for shot cut detection, which are organized into a multiple-phase decision tree for abrupt cut detection and a finite state machine for dissolve detection. In comparison with existing approaches, our algorithm is characterized by two categories of content difference indicators and testing. While the first category indicates the content changes that are directly used for shot cut detection, the second category indicates the contexts under which the content change occurs. As a result, indications of frame differences are tested with context awareness, making the detection of shot cuts adaptive to both content and context changes. Evaluations announced by TRECVID 2007 indicate that our proposed algorithm achieved comparable performance to those using machine learning approaches, yet with a simpler feature set and straightforward design strategies. This validates the effectiveness of modelling content-aware indicators for decision making, which also provides a good alternative to conventional approaches in this topic.
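The interplay of the two indicator categories can be sketched with a toy abrupt-cut detector. This is an illustration of the idea, not the paper's decision tree: the content indicator is the inter-frame difference itself, the context indicator is the mean difference in a sliding window, and a cut is declared only when the content change stands well above its local context, so the effective threshold adapts to busy versus static footage. The window size, ratio, and floor values are assumptions.

```python
def detect_cuts(frame_diffs, window=5, ratio=3.0, floor=10.0):
    """
    Toy two-indicator abrupt-cut detector. A frame is a cut candidate
    when its inter-frame difference exceeds both an absolute floor and
    `ratio` times the mean difference of its neighbours (the context
    indicator), adapting the test to local scene activity.
    """
    cuts = []
    for i, d in enumerate(frame_diffs):
        lo, hi = max(0, i - window), min(len(frame_diffs), i + window + 1)
        neighbours = [frame_diffs[j] for j in range(lo, hi) if j != i]
        context = sum(neighbours) / len(neighbours)
        if d > floor and d > ratio * max(context, 1e-9):
            cuts.append(i)
    return cuts
```

In high-motion footage the context term rises and suppresses spurious detections; in static footage even a modest spike clears the relative test, but the absolute floor still filters sensor noise.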
Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network
Drone systems have been deployed by various law enforcement agencies to
monitor hostiles, spy on foreign drug cartels, conduct border control
operations, etc. This paper introduces a real-time drone surveillance system to
identify violent individuals in public areas. The system first uses the Feature
Pyramid Network to detect humans from aerial images. The image region with the
human is used by the proposed ScatterNet Hybrid Deep Learning (SHDL) network
for human pose estimation. The orientations between the limbs of the estimated
pose are next used to identify the violent individuals. The proposed deep
network can learn meaningful representations quickly using ScatterNet and
structural priors with relatively fewer labeled examples. The system detects
the violent individuals in real-time by processing the drone images in the
cloud. This research also introduces the aerial violent individual dataset used
for training the deep network, which may encourage researchers
interested in applying deep learning to aerial surveillance. The pose estimation
and violent individual identification performance is compared with the
state-of-the-art techniques.
Comment: To appear in the Efficient Deep Learning for Computer Vision (ECV)
workshop at IEEE Computer Vision and Pattern Recognition (CVPR) 2018. YouTube
demo at this: https://www.youtube.com/watch?v=zYypJPJipY
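The step from estimated pose to limb orientations can be sketched as follows. Joint names and the raised-forearm rule are illustrative assumptions, not the SHDL network's actual features or classifier: each limb is a pair of 2D keypoints, and its orientation is the angle of the segment between them.

```python
import math

def limb_angles(keypoints, limbs):
    """
    Orientation (degrees) of each limb segment from 2D pose keypoints.
    `keypoints` maps joint name -> (x, y); `limbs` is a list of joint
    pairs. Joint names here are illustrative assumptions.
    """
    angles = {}
    for a, b in limbs:
        (xa, ya), (xb, yb) = keypoints[a], keypoints[b]
        angles[(a, b)] = math.degrees(math.atan2(yb - ya, xb - xa))
    return angles

def looks_violent(angles,
                  forearms=(("r_elbow", "r_wrist"), ("l_elbow", "l_wrist")),
                  raised_deg=-45.0):
    """
    Crude illustrative rule (not the paper's classifier): flag a pose
    whose forearm points steeply upward. Image y grows downward, so a
    raised forearm segment has a strongly negative orientation angle.
    """
    return any(angles.get(f, 0.0) < raised_deg for f in forearms)
```

In the paper the limb-orientation vector feeds a learned classifier rather than a hand-set rule; the sketch only shows how the orientation features are derived from the estimated pose.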
Human shape modelling for carried object detection and segmentation
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. This thesis presents novel methods to detect and segment carried objects in surveillance videos. The contributions are divided into three main chapters. In the first, we introduce our carried object detector, which detects a generic class of objects. We formulate carried object detection as a contour classification problem: moving object contours are classified into two classes, carried object and person. A probability mask for a person's contours is generated from an ensemble of contour exemplars (ECE) of walking or standing humans in different viewing directions. Contours that do not fall in the generated hypothesis mask are considered candidates for carried object contours. A region is then assigned to each candidate contour using Biased Normalized Cut (BNC), with a probability obtained by a weighted function of its overlap with the person's contour hypothesis mask and the segmented foreground. Finally, carried objects are detected by applying Non-Maximum Suppression (NMS), which eliminates low-scoring candidates. The second contribution presents an approach to detect carried objects with an innovative method for extracting features from foreground regions based on their local contours and superpixel information. Initially, a moving object in a video frame is segmented into multi-scale superpixels. Then human-like regions in the foreground area are identified by matching a set of features extracted from superpixels against a codebook of local shapes. Here the definition of human-like regions is equivalent to the person probability map in our first proposed method (ECE).
Our second carried object detector benefits from the novel feature descriptor to produce a more accurate probability map. The complement of the superpixels' matching probabilities to human-like regions in the foreground is taken as a carried object probability map. Each group of neighbouring superpixels with a high carried object probability and strong edge support is then merged to form a carried object. Finally, in the third contribution we present a method to detect and segment carried objects. The proposed method adopts the new superpixel-based descriptor to identify carried-object-like candidate regions using human shape modelling. Using spatio-temporal information of the candidate regions, the consistency of recurring carried object candidates viewed over time is obtained and serves to detect carried objects. Last, the detected carried object regions are refined by integrating information on their appearance and location over time with a spatio-temporal extension of GrabCut. This final stage is used to accurately segment carried objects in frames. Our methods are fully automatic, and make minimal assumptions about the person, the carried objects, and the videos. We evaluate the aforementioned methods using two available datasets, PETS 2006 and i-Lids AVSS, and compare our detector and segmentation methods against the state of the art. Experimental evaluation on the two datasets demonstrates that both our carried object detection and segmentation methods significantly outperform competing algorithms.
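The merging of high-probability neighbouring superpixels into candidate objects amounts to a connected-components grouping, which can be sketched with a union-find pass. This is a simplification: the edge-support weighting from the thesis is omitted, and the threshold is an assumption.

```python
def merge_carried_regions(probs, adjacency, threshold=0.7):
    """
    Group neighbouring superpixels whose carried-object probability
    exceeds `threshold` into connected components (candidate objects).
    `probs` maps superpixel id -> probability; `adjacency` is a list of
    (id, id) neighbour pairs.
    """
    # only superpixels above threshold participate
    parent = {s: s for s, p in probs.items() if p >= threshold}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in adjacency:
        if a in parent and b in parent:
            parent[find(a)] = find(b)  # union the two components

    groups = {}
    for s in parent:
        groups.setdefault(find(s), set()).add(s)
    return list(groups.values())
```

Each returned set is one carried-object candidate; a subsequent size or edge-support filter would discard spurious single-superpixel components.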
Detecting microcalcification clusters in digital mammograms: Study for inclusion into computer aided diagnostic prompting system
Among the signs of breast cancer encountered in digital mammograms, radiologists point to microcalcification clusters (MCCs). Their detection is a challenging problem from both the medical and the image processing points of view. This work presents two concurrent methods for MCC detection, and studies their possible inclusion in a computer aided diagnostic prompting system. One considers a Wavelet Domain Hidden Markov Tree (WHMT) for modelling microcalcification edges. The model is used for differentiation between MC and non-MC edges based on weighted maximum likelihood (WML) values. The classification of objects is carried out using spatial filters. The second method employs the SUSAN edge detector in the spatial domain for mammogram segmentation. Classification of objects as calcifications is carried out using another set of spatial filters and a feedforward neural network (NN). The same distance filter is employed in both methods to find true clusters. The analysis of the two methods is performed on 54 image regions from mammograms selected randomly from the DDSM database, including benign and cancerous cases as well as cases which can be classified as hard from both the radiologists' and the computer's perspectives. WHMT/WML is able to detect 98.15% true positive (TP) MCCs at 1.85% false positives (FP), whereas the SUSAN/NN method achieves 94.44% TP at the cost of 1.85% FP. The comparison of these two methods suggests WHMT/WML for computer aided diagnostic prompting. It also confirms the low false positive rates of both methods, implying fewer biopsy tests per patient.
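The distance filter that turns individual calcification detections into clusters can be sketched as single-linkage grouping: points within a distance bound join the same cluster, and only clusters with enough members count as MCCs. The distance bound and minimum size here are illustrative assumptions, not the paper's parameters.

```python
import math

def distance_clusters(points, max_dist=10.0, min_size=3):
    """
    Group detected calcification centres into clusters: a point joins a
    cluster when it is within max_dist of any member (single linkage),
    and only clusters with at least min_size members are kept as MCCs.
    """
    unassigned = list(points)
    clusters = []
    while unassigned:
        stack = [unassigned.pop()]
        cluster = []
        while stack:
            p = stack.pop()
            cluster.append(p)
            # pull in all still-unassigned points close to p
            near = [q for q in unassigned if math.dist(p, q) <= max_dist]
            for q in near:
                unassigned.remove(q)
            stack.extend(near)
        clusters.append(cluster)
    return [c for c in clusters if len(c) >= min_size]
```

The minimum-size filter is what suppresses isolated detections, which are rarely clinically significant compared with true clusters.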
Automatic human behaviour anomaly detection in surveillance video
This thesis work focusses upon developing the capability to automatically evaluate and detect anomalies in human behaviour from surveillance video. We work with static monocular cameras in crowded urban surveillance scenarios, particularly airports and commercial shopping areas. Typically a person is 100 to 200 pixels high in a scene ranging from 10 to 20 meters in width and depth, populated by 5 to 40 people at any given time. Our procedure evaluates human behaviour unobtrusively to determine outlying behavioural events, flagging abnormal events to the operator.

In order to achieve automatic human behaviour anomaly detection we address the challenge of interpreting behaviour within the context of the social and physical environment. We develop and evaluate a process for measuring social connectivity between individuals in a scene using motion and visual attention features. To do this we use mutual information and Euclidean distance to build a social similarity matrix which encodes the social connection strength between any two individuals. We develop a second contextual basis which acts by segmenting a surveillance environment into behaviourally homogeneous subregions representing high-traffic slow regions and queuing areas. We model the heterogeneous scene in homogeneous subgroups using both contextual elements. We bring the social contextual information, the scene context, the motion, and the visual attention features together to demonstrate a novel human behaviour anomaly detection process which finds outlier behaviour from a short sequence of video. The method, Nearest Neighbour Ranked Outlier Clusters (NN-RCO), is based upon modelling behaviour as a time-independent sequence of behaviour events, and can be trained in advance or set upon a single sequence. We find that in a crowded scene the application of mutual-information-based social context prevents self-justifying groups and propagates anomalies through a social network, granting a greater anomaly detection capability. Scene context uniformly improves the detection of anomalies in all the datasets we test upon.

We additionally demonstrate that our work is applicable to other data domains, using Automatic Identification Signal data in the maritime domain. Our work is capable of identifying abnormal shipping behaviour using joint motion dependency as an analogue for social connectivity, and similarly segments the shipping environment into homogeneous regions.
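The social similarity matrix described above can be sketched for its proximity component alone. This is a simplified assumption-laden illustration: the thesis combines mutual information over motion and visual-attention features with Euclidean distance, whereas the sketch below uses only mean Euclidean distance between per-person centroid tracks, mapped to a similarity in (0, 1].

```python
import math

def social_similarity(tracks):
    """
    Symmetric social-similarity matrix from per-person centroid tracks
    (equal-length lists of (x, y)). Proximity cue only: similarity is
    1 / (1 + mean Euclidean distance between corresponding positions);
    the thesis additionally uses mutual information over motion and
    visual-attention features.
    """
    n = len(tracks)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        sim[i][i] = 1.0
        for j in range(i + 1, n):
            d = sum(math.dist(p, q) for p, q in zip(tracks[i], tracks[j]))
            d /= len(tracks[i])
            sim[i][j] = sim[j][i] = 1.0 / (1.0 + d)
    return sim
```

People moving together score near 1, strangers far apart score near 0; thresholding this matrix yields the social groups over which anomalies can propagate.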
An Intelligent Reconnaissance Framework for Homeland Security
Cross border terrorism and internal terrorist attacks are critical issues for any country to deal with. In India, such incidents that breach homeland security are increasing nowadays. Tracking and combating such incidents depends only on radio communications and the manual operations of security agencies. These security agencies face various challenges in obtaining the real-time location of targeted vehicles, their direction of flight, etc. This paper proposes a novel application for automatic tracking of suspicious vehicles in real time. The proposed application tracks vehicles based on their registration number, type, colour, and RFID tag. The proposed approach for vehicle recognition based on image processing achieves 92.45 per cent accuracy. The RFID-based vehicle identification technique achieves 100 per cent accuracy. This paper also proposes an approach for vehicle classification; the average classification accuracy obtained by the proposed approach is 93.3 per cent. An integrated framework for tracking any vehicle at the request of security agencies is also proposed. Security agencies can track any vehicle in a specific time period by using the user interface of the application.
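The fusion of the two identification channels described above can be sketched as a watch-list matcher. The record schema and the one-character OCR tolerance are illustrative assumptions, not the paper's design: an exact RFID-tag hit is trusted outright (consistent with its reported 100 per cent accuracy), while an image-derived plate read tolerates a single character error.

```python
def match_vehicle(watchlist, plate_read=None, rfid_tag=None):
    """
    Toy watch-list matcher fusing the two channels: an exact RFID hit
    is accepted immediately; an OCR plate read of the same length
    matches when it differs in at most one character.
    Returns (record, channel) or (None, None).
    """
    for rec in watchlist:
        if rfid_tag and rec.get("rfid") == rfid_tag:
            return rec, "rfid"
        if plate_read and len(plate_read) == len(rec["plate"]):
            mismatches = sum(a != b for a, b in zip(plate_read, rec["plate"]))
            if mismatches <= 1:
                return rec, "plate"
    return None, None
```

Tolerating one mismatched character absorbs common OCR confusions (e.g. O/0, B/8) without letting unrelated plates match.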