See no Evil: Challenges of security surveillance and monitoring
While intelligent technologies can augment human capabilities in security surveillance, they do not entirely replace the operator; when developing surveillance support, it is therefore critical that the limitations of the human cognitive system are taken into account. The current article reviews the cognitive challenges associated with the task of a CCTV operator: visual search and cognitive/perceptual overload, attentional failures, vulnerability to distraction, and decision-making in a dynamically evolving environment. Although not yet applied directly to surveillance, we suggest that the NSEEV (noticing: salience, effort, expectancy, value) model of attention could provide a useful theoretical basis for understanding the challenges faced in detection and monitoring tasks. Having identified the cognitive limitations of the human operator, this review sets out a research agenda for further understanding the cognitive functioning related to surveillance, and highlights the need to consider the human element at the design stage when developing technological solutions to security surveillance.
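The four NSEEV factors can be illustrated with a toy calculation. The linear weighting and clamping below are assumptions made for demonstration only, not the published model's actual formulation:

```python
# Toy sketch of the NSEEV (noticing: salience, effort, expectancy, value)
# idea. The weights and the linear combination are illustrative
# assumptions, not the published model's equations.

def noticing_score(salience, effort, expectancy, value,
                   weights=(0.3, 0.2, 0.3, 0.2)):
    """Combine the four NSEEV factors into a single noticing score.

    All inputs are assumed normalised to [0, 1]; effort (e.g. the cost
    of shifting gaze to a distant monitor) reduces the chance of
    noticing, so it enters with a negative sign.
    """
    w_s, w_e, w_x, w_v = weights
    score = w_s * salience - w_e * effort + w_x * expectancy + w_v * value
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# A salient, expected, high-value event on an effortful-to-monitor display
score = noticing_score(salience=0.9, effort=0.7, expectancy=0.8, value=0.9)
```

Such a score could, for instance, rank which of several monitored feeds an operator is most likely to notice an event on.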
Factors affecting the probability of detecting a counterfeit banknote: attitude, situation and design
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 324)
This bibliography lists 200 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during May 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.
Attention capture by multiple events using dynamic displays
Being able to select relevant visual information from among irrelevant information is critical for the successful accomplishment of many day-to-day activities. However, the locus of attentional selection is not always under the control of the observer. Certain events and stimuli in the visual environment have been shown to control selection against observers' intentions and goals; these are said to capture attention in an automatic, stimulus-driven manner. The events and stimuli that capture attention can be static (colour, shape, size, etc.) or dynamic (motion, flicker, etc.).
This thesis examines the effect of dynamic stimuli on attentional selection using a visual search paradigm. The findings suggest that neither motion per se nor the onset of motion captures attention. They also suggest that when low-refresh-rate motion is used, capture occurs, but this effect cannot be attributed to capture by motion onset (Chapter 3). The second study further suggests that attention capture is observed with low-refresh-rate motion onsets because, unlike the static items in the display, they are not masked; capture is therefore attributed to relatively better visual quality and stimulus encoding rather than to motion (Chapter 4). The findings also suggest that when back-and-forth oscillatory motion is used, capture re-emerges, but this effect is best attributed to a change in direction that happens to be temporally unique (Chapter 5). Another important finding is that in attention capture by abrupt onset, only one onset is prioritised in search (Chapter 6). Overall, the findings argue for a strong role of low-level factors in attention capture by dynamic stimuli.
Testing and training lifeguard visual search
Lifeguards play a crucial role in drowning prevention. However, current U.K. lifeguard qualifications are limited in training and assessing visual surveillance skills, and little is known about how lifeguards successfully detect drowning swimmers. To improve our understanding of lifeguard visual search skill, and to explore the potential for improving this skill through training, this thesis had the following aims: (a) to identify whether visual skills for drowning detection improve with lifeguard experience, (b) to understand why such differences occur, and (c) to design and validate a visual training intervention to improve drowning detection on the basis of these results.
The first two studies investigated the drowning-detection skills of participants with differing levels of lifeguard experience in a dynamic search task with simulated drownings. Lifeguards were found to detect drownings faster and more often than non-lifeguards. In three follow-up studies these results were replicated with more naturalistic stimuli: video footage showing genuine instances of swimmer distress was extracted from an American wave pool. The results again demonstrated lifeguard superiority in detecting drowning targets.
Eye tracking measures, recorded on both the simulated and naturalistic clips, failed to reveal any differences between lifeguards and non-lifeguards, suggesting that superior drowning detection for lifeguards did not result from better scanning strategies per se.
Following this, two cognitive mechanisms that may underlie drowning-detection skill were investigated. Lifeguard and non-lifeguard performance on Multiple Object Avoidance (MOA) and Functional Field of View (FFOV) tests was assessed. Although lifeguards had better MOA task performance compared to non-lifeguards, only the lifeguards’ accuracy at detecting the central target in the FFOV task predicted performance on a subsequent drowning detection task. It was concluded that superior drowning detection was a result of better classification recognition of drowning swimmers (which was the central task in the FFOV test).
Based on these findings the final experiment explored the effectiveness of an intense classification training task to improve drowning detection. An intervention was designed that required participants to differentiate between videos of isolated drowning and non-drowning swimmers. Non-lifeguards trained in this intervention showed greater improvement on a subsequent drowning-detection task compared to untrained control participants, who completed an active-control task.
The results of this thesis suggest that drowning-detection skill can be reliably assessed, and that foveal processing of drowning characteristics is key to lifeguards' superior performance. Isolating and training this key sub-skill improves drowning-detection performance and offers a method for training future lifeguards.
Survey of Human Models for Verification of Human-Machine Systems
We survey the landscape of human operator modeling, ranging from the early cognitive models developed in artificial intelligence to more recent formal task models developed for model-checking of human-machine interactions. We review human performance modeling and human factors studies in the context of aviation, and models of how the pilot interacts with automation in the cockpit. The purpose of the survey is to assess the applicability of available state-of-the-art models of the human operator for the design, verification and validation of future safety-critical aviation systems that exhibit higher levels of autonomy but still require human operators in the loop. These systems include single-pilot aircraft and NextGen air traffic management. We discuss the gaps in existing models and propose future research to address them.
Face Centered Image Analysis Using Saliency and Deep Learning Based Techniques
Image analysis begins with the goal of building vision machines that can perceive like humans, intelligently inferring general principles and sensing the surrounding situation from imagery. This dissertation studies face-centered image analysis as a core problem in high-level computer vision research and addresses it by tackling several challenging questions: Is there anything interesting in the image, and if so, what is it? If a person is present, who are they? What expression are they making? Can we estimate their age? Answering these questions leads to saliency-based object detection, deep-learning-based object categorization and recognition, human facial landmark detection, and multi-task biometrics.
To implement object detection, a three-level saliency detection method based on the self-similarity technique (SMAP) is first proposed. The first level of SMAP uses statistical methods to generate proto-background patches, followed by a second level that computes local contrast based on the image's self-similarity characteristics. Finally, a spatial color-distribution constraint is applied to realize the saliency detection. The outcome of the algorithm is a full-resolution image with highlighted salient objects and well-defined edges.
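The local-contrast step that pipelines like this build on can be sketched minimally. This is not the SMAP algorithm itself (its three-level pipeline is not specified in the abstract); it only illustrates scoring each pixel by its difference from its neighbourhood:

```python
# Minimal local-contrast saliency sketch (illustrative, not SMAP itself).
import numpy as np

def local_contrast_saliency(image, patch=3):
    """Score each pixel by how much it differs from its neighbourhood mean."""
    h, w = image.shape
    pad = patch // 2
    padded = np.pad(image, pad, mode="edge")
    saliency = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + patch, j:j + patch]
            saliency[i, j] = abs(image[i, j] - window.mean())
    # Normalise to [0, 1] so the map can be viewed as an image
    return saliency / saliency.max() if saliency.max() > 0 else saliency

img = np.zeros((8, 8))
img[4, 4] = 1.0                      # single bright pixel on a dark ground
sal = local_contrast_saliency(img)   # peaks at the odd-one-out pixel
```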
For object recognition, the Adaptive Deconvolution Network (ADN) is implemented to categorize the objects extracted from saliency detection. To improve system performance, an L1/2-norm-regularized ADN is proposed and tested in different applications. The results demonstrate the efficiency and significance of the new structure.
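The L1/2 penalty referred to above can be written down directly. Only the penalty term is sketched here, with an assumed regularization strength; the deconvolution network and its training procedure are beyond the scope of this illustration:

```python
# Sketch of the L1/2 (half-norm) penalty used for regularization.
# lam is an illustrative hyperparameter, not a value from the dissertation.
import numpy as np

def l_half_penalty(weights, lam=0.1):
    """lam * sum(|w|^(1/2)): a non-convex penalty that, relative to the
    L1 norm, penalises small nonzero weights more heavily and so drives
    them to exactly zero (e.g. |0.04|^(1/2) = 0.2 vs |0.04| = 0.04)."""
    return lam * np.sum(np.sqrt(np.abs(weights)))

w = np.array([1.0, 0.04, 0.0, -0.25])
penalty = l_half_penalty(w)   # 0.1 * (1.0 + 0.2 + 0.0 + 0.5)
```

In practice the non-convexity of this penalty means it is handled with specialised thresholding or reweighting schemes rather than plain gradient descent.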
To fully understand the facial-biometrics-related activity contained in an image, low-rank matrix decomposition is introduced to help locate the landmark points on face images. Natural extensions of this work benefit research on human facial expression recognition and facial feature parsing.
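The basic building block such methods rely on, low-rank approximation of a matrix, can be sketched via truncated SVD. The landmark-localisation method itself is not specified in the abstract, so this shows only the generic decomposition:

```python
# Low-rank matrix approximation via truncated SVD (generic sketch, not
# the dissertation's specific landmark-localisation formulation).
import numpy as np

def low_rank_approx(M, k):
    """Best rank-k approximation of M in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# A matrix that is exactly rank 1 is recovered perfectly at k = 1
M = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
approx = low_rank_approx(M, k=1)
```

In landmark work, stacking aligned shape vectors as rows of `M` lets the low-rank part capture the shared face structure while the residual absorbs outliers.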
To facilitate understanding of the detected facial image, automatic facial image analysis becomes essential. We present a novel deeply learned, tree-structured face representation that uniformly models the human face with different semantic meanings. We show that the proposed feature yields a unified representation in multi-task facial biometrics and that the multi-task learning framework is applicable to many other computer vision tasks.
Neural dynamics of selective attention to speech in noise
This thesis investigates how the neural system instantiates selective attention to speech in challenging acoustic conditions, such as spectral degradation and the presence of background noise. Four studies using behavioural measures and magneto- and electroencephalography (M/EEG) recordings were conducted in younger (20–30 years) and older (60–80 years) participants. The overall results can be summarized as follows. An EEG experiment demonstrated that slow negative potentials reflect participants' enhanced allocation of attention when they are faced with more degraded acoustics; this basic mechanism of attention allocation was preserved at an older age. A follow-up experiment in younger listeners indicated that attention allocation can be further enhanced when task relevance is increased through monetary incentives. A subsequent study focused on brain oscillatory dynamics in a demanding speech comprehension task. The power of neural alpha oscillations (~10 Hz) reflected a decrease in demands on attention with increasing acoustic detail and, critically, also with increasing predictiveness of the upcoming speech content. Older listeners' behavioural responses and alpha power dynamics were more strongly affected by acoustic detail than younger listeners', indicating that selective attention at an older age is particularly dependent on the sensory input signal. An additional analysis of listeners' neural phase-locking to the temporal envelopes of attended speech and unattended background speech revealed that younger and older listeners show a similar segregation of attended and unattended speech at the neural level. A dichotic listening experiment in the MEG investigated how neural alpha oscillations support selective attention to speech: lateralized alpha power modulations in parietal and auditory cortex regions predicted listeners' focus of attention (i.e., left vs. right). This suggests that alpha oscillations implement an attentional filter mechanism that enhances the signal and suppresses noise. A final behavioural study asked whether acoustic and semantic aspects of task-irrelevant speech determine how much it interferes with attention to task-relevant speech. Results demonstrated that younger and older adults were more distracted when the acoustic detail of irrelevant speech was enhanced, whereas the predictiveness of irrelevant speech had no effect. All findings of this thesis are integrated into an initial framework for the role of attention in speech comprehension under demanding acoustic conditions.
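Alpha-power analyses like those summarized above rest on estimating band-limited power. A minimal FFT-based sketch is given below; the sampling rate, signal length, and band edges are illustrative assumptions, not the thesis's actual M/EEG pipeline:

```python
# Minimal sketch of band-limited power estimation for a single channel.
# Sampling rate, duration, and band edges are illustrative assumptions.
import numpy as np

def band_power(signal, fs, band=(8.0, 12.0)):
    """Summed FFT power of `signal` within the given frequency band (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

fs = 250.0                                  # assumed sampling rate in Hz
t = np.arange(0, 4.0, 1.0 / fs)             # 4 s of data
alpha_like = np.sin(2 * np.pi * 10.0 * t)   # 10 Hz: inside the alpha band
gamma_like = np.sin(2 * np.pi * 40.0 * t)   # 40 Hz: outside the alpha band
```

Contrasting `band_power` over left- and right-hemisphere channels gives the kind of lateralization index used to predict the attended ear in dichotic listening.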