Applying psychological science to the CCTV review process: a review of cognitive and ergonomic literature
As CCTV cameras are used increasingly to improve security in communities, police are spending a larger proportion of their resources, including time, on processing CCTV images when investigating crimes that have occurred (Levesley & Martin, 2005; Nichols, 2001). As with all tasks, there are ways of approaching this task that will facilitate performance and others that will degrade it, either by increasing errors or by unnecessarily prolonging the process. A clearer understanding of the psychological factors influencing the effectiveness of footage review will support future training in best practice for reviewing CCTV footage. The goal of this report is to provide such an understanding by reviewing research on footage review, research on related tasks that require similar skills, and experimental laboratory research on the cognitive skills underpinning the task. The report is organised around five challenges to the effectiveness of CCTV review: the degraded nature of CCTV footage, distractions and interruptions, the length of the task, inappropriate mindset, and variability in people’s abilities and experience. Recommendations for optimising CCTV footage review include (1) conducting a cognitive task analysis to better understand the ways in which performance might be limited, (2) exploiting technological advances to maximise the perceptual quality of the footage, (3) training people to improve the flexibility of their mindset as they perceive and interpret the images seen, (4) monitoring performance either on an ongoing basis, using psychophysiological measures of alertness, or periodically, by testing screeners’ ability to find evidence in footage developed for such testing, and (5) evaluating the relevance of possible selection tests to distinguish effective from ineffective screeners.
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development over the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetics under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in the light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of detection systems, speed-up techniques, and recent state-of-the-art detection methods. The paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and gives an in-depth analysis of their challenges as well as technical improvements in recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible publication.
Large Scale Pattern Detection in Videos and Images from the Wild
PhD thesis. Pattern detection is a well-studied area of computer vision, but current methods remain unstable on images of poor quality. This thesis describes improvements over contemporary methods in the fast detection of unseen patterns in a large corpus of videos that vary tremendously in colour and texture definition, captured “in the wild” by mobile devices and surveillance cameras.
We focus on three key areas of this broad subject.
First, we identify consistency weaknesses in existing techniques when processing an image and its horizontally reflected (mirror) image. This is important in police investigations, where subjects change their appearance to try to avoid recognition, and we propose that invariance to horizontal reflection should be more widely considered in image description and recognition tasks. We observe the behaviour of online deep-learning systems in this respect and provide a comprehensive assessment of 10 popular low-level feature detectors.
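The reflection-consistency weakness can be illustrated with a toy descriptor (an illustrative sketch, not any of the detectors assessed in the thesis): a signed left-to-right difference descriptor changes under mirroring, while an unsigned, order-insensitive variant is identical for an image and its mirror.

```python
def mirror(img):
    """Horizontally reflect an image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def naive_descriptor(img):
    # Signed left-to-right intensity differences: NOT mirror-invariant,
    # since reflection reverses and negates the differences.
    return [row[i + 1] - row[i] for row in img for i in range(len(row) - 1)]

def reflection_tolerant(img):
    # Unsigned differences, order-normalised: identical for an image
    # and its horizontal mirror.
    return sorted(abs(d) for d in naive_descriptor(img))
```

Making the descriptor insensitive to the sign and order of local differences is one simple way to obtain the reflection invariance argued for above, at the cost of some discriminative power.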
Second, we develop simple and fast algorithms that combine to provide memory- and processing-efficient feature matching. These involve static-scene elimination in the presence of noise and on-screen time indicators, a blur-sensitive feature detector that finds a greater number of corresponding features in images of varying sharpness, and a combinatorial texture-and-colour feature matching algorithm that matches features when either attribute may be poorly defined. A comprehensive evaluation is given, showing some improvements over existing feature correspondence methods.
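The combinatorial idea can be sketched as follows: accept a match when either the texture descriptors or the colour descriptors agree, so a correspondence survives even when one attribute is degraded. All names and thresholds here are illustrative assumptions, not the thesis's actual implementation.

```python
def l1(a, b):
    """L1 distance between two descriptor vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def combinatorial_match(f1, f2, tex_thresh=0.3, col_thresh=0.3):
    """f1, f2: dicts with 'texture' and 'colour' descriptor vectors.
    Match if EITHER attribute is close, so a blurred (texture-poor) or
    washed-out (colour-poor) feature can still find its correspondence."""
    tex_ok = l1(f1["texture"], f2["texture"]) < tex_thresh
    col_ok = l1(f1["colour"], f2["colour"]) < col_thresh
    return tex_ok or col_ok
```

The OR-combination trades precision for recall; a real matcher would follow it with geometric verification to discard the extra false matches it admits.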
Finally, we study random decision forests for pattern detection. A new method of indexing patterns in video sequences is devised and evaluated. We automatically label positive and negative image training data, reducing a task of unsupervised learning to one of supervised learning, and devise a node split function that is invariant to mirror reflection and to rotation through 90-degree angles. A high-dimensional vote accumulator encodes the hypothesis support, yielding implicit back-projection for pattern detection.
Funding: European Union’s Seventh Framework Programme, specific topic “framework and tools for (semi-) automated exploitation of massive amounts of digital data for forensic purposes”, under grant agreement number 607480 (LASIE IP project).
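One way to obtain a split response invariant to mirror reflection and 90-degree rotations is to average a probe over its whole symmetry orbit (the dihedral group D4), so transforming the patch permutes the sampled pixels without changing their mean. This is a minimal sketch in the spirit of the node split function described above; the names and the specific probe are assumptions, not the thesis's implementation.

```python
def d4_orbit(dx, dy):
    """The up-to-8 images of an offset under 90-degree rotations
    and horizontal mirroring (the D4 symmetry group)."""
    pts = set()
    x, y = dx, dy
    for _ in range(4):
        x, y = -y, x          # rotate 90 degrees about the origin
        pts.add((x, y))
        pts.add((-x, y))      # horizontal mirror of that rotation
    return pts

def orbit_response(patch, cx, cy, dx, dy):
    """Mean intensity over the symmetry orbit of one probe offset.
    Mirroring or rotating the patch about (cx, cy) permutes the
    sampled pixels, so the mean (and any split on it) is unchanged."""
    vals = [patch[cy + oy][cx + ox] for ox, oy in d4_orbit(dx, dy)]
    return sum(vals) / len(vals)
```

A node would then split on `orbit_response(...) > threshold`; because the response is symmetry-averaged, a pattern and its mirror follow the same path through the tree.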
An Object-Based Multimedia Forensic Analysis Tool
With the enormous increase in the use and volume of photographs and videos, multimedia-based digital evidence now plays an increasingly fundamental role in criminal investigations. However, with this growth, it is becoming time-consuming and costly for investigators to analyse content manually. Within the research community, work on multimedia content has tended to focus on highly specialised scenarios such as tattoo identification, number plate recognition, and child exploitation. An investigator’s ability to search multimedia data by keywords (an approach that already exists within forensic tools for character-based evidence) could provide a simple and effective way of identifying relevant imagery.
This thesis proposes and demonstrates the value of a multi-algorithmic approach, fusing the outputs of several systems to achieve the best image annotation performance. The results show that, of the existing systems, the highest average recall was achieved by Imagga at 53%, while the proposed multi-algorithmic system achieved 77% across the selected datasets.
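The fusion step can be sketched as simple late fusion: pool the tags proposed by each annotation system and keep a tag once enough systems vote for it. With `min_votes=1` (a plain union) recall is maximised, which is the general idea behind combining systems; the function and parameter names are illustrative assumptions, not the thesis's actual fusion scheme.

```python
from collections import Counter

def fuse_annotations(outputs, min_votes=1):
    """outputs: one set of predicted tags per annotation system.
    Returns tags proposed by at least `min_votes` systems: min_votes=1
    maximises recall (union); higher values trade recall for precision."""
    votes = Counter(tag for tags in outputs for tag in set(tags))
    return {tag for tag, n in votes.items() if n >= min_votes}
```

A production system would typically weight each system's vote by its measured per-tag reliability rather than counting votes equally.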
Subsequently, a novel Object-based Multimedia Forensic Analysis Tool (OM-FAT) architecture was proposed. The OM-FAT automates the identification and extraction of annotation-based evidence from multimedia content. Besides making multimedia data searchable, the OM-FAT system enables investigators to perform various forensic analyses (search using annotations, metadata, object matching, text similarity, and geo-tracking) to help them understand the relationships between artefacts, thus reducing the time taken to perform an investigation and the investigator’s cognitive load. It enables investigators to ask higher-level and more abstract questions of the data, and then to find answers to the essential questions in an investigation: what, who, why, how, when, and where. The research includes a detailed illustration of the architectural requirements, the engines, and the complete design of the system workflow, which represents a full case management system.
To highlight the ease of use and to demonstrate the system’s ability to correlate multimedia content, a prototype was developed. The prototype integrates the functionalities of the OM-FAT tool and demonstrates how the system would help digital investigators find pieces of evidence among a large number of images, from the acquisition stage through to the reporting stage, with less effort and in less time.
Funding: The Higher Committee for Education Development in Iraq (HCED).
Unsupervised Detection and Localization of Anomalous Motion Patterns in Surveillance Video
Master of Science thesis.
Testing and training lifeguard visual search
Lifeguards play a crucial role in drowning prevention. However, current U.K. lifeguard qualifications are limited in how they train and assess visual surveillance skills, and little is known about how lifeguards successfully detect drowning swimmers. To improve our understanding of lifeguard visual search skill, and to explore the potential for improving this skill through training, this thesis had the following aims: (a) to identify whether visual skills for drowning detection improve with lifeguard experience, (b) to understand why such differences occur, and (c) to design and validate a visual training intervention to improve drowning detection on the basis of these results.
The first two studies investigated drowning-detection skills of participants with differing levels of lifeguard experience in a dynamic search task with simulated drownings. Lifeguards were found to detect drownings faster and more often than non-lifeguards. In three follow-up studies these results were replicated with more naturalistic stimuli. Video footage from an American wave pool was extracted, which showed genuine instances of swimmer distress. Results again demonstrated lifeguard superiority in detecting the drowning targets.
Eye tracking measures, recorded on both the simulated and naturalistic clips, failed to reveal any differences between lifeguards and non-lifeguards, suggesting that superior drowning detection for lifeguards did not result from better scanning strategies per se.
Following this, two cognitive mechanisms that may underlie drowning-detection skill were investigated. Lifeguard and non-lifeguard performance on Multiple Object Avoidance (MOA) and Functional Field of View (FFOV) tests was assessed. Although lifeguards had better MOA task performance compared to non-lifeguards, only the lifeguards’ accuracy at detecting the central target in the FFOV task predicted performance on a subsequent drowning detection task. It was concluded that superior drowning detection was a result of better classification recognition of drowning swimmers (which was the central task in the FFOV test).
Based on these findings the final experiment explored the effectiveness of an intense classification training task to improve drowning detection. An intervention was designed that required participants to differentiate between videos of isolated drowning and non-drowning swimmers. Non-lifeguards trained in this intervention showed greater improvement on a subsequent drowning-detection task compared to untrained control participants, who completed an active-control task.
The results of this thesis suggest that drowning-detection skill can be reliably assessed, and that foveal processing of drowning characteristics is key to lifeguards’ superior performance. Isolating and training this key sub-skill improves drowning-detection performance and offers a method for training future lifeguards.