
    Development of methodologies to analyze and visualize air traffic controllers’ visual scanning strategies

    The Federal Aviation Administration (FAA) estimates an air traffic volume of 60 million flights by 2040. However, the available workforce of expert air traffic controllers (ATCs) might not be sufficient to manage this anticipated high traffic volume. Thus, to maintain the same safety standard and service level for air travel, more ATCs will need to be trained quickly. Previous research shows that eye tracking technology can be used to enhance the training of ATCs by reducing their false alarm rate, thereby helping to mitigate the impact of increasing demand. Methods need to be developed to better understand experts' eye movement (EM) data so as to incorporate them effectively into ATCs' training. However, analyzing ATCs' EM data is challenging for several reasons: (i) aircraft representations on the radar display (i.e. targets) are dynamic, as their shape and position change with time; (ii) raw EM data are very complex to visualize, even for a meaningfully small duration (e.g. a task completion time of 1 min); (iii) in the absence of any predefined order of visual scanning, each ATC employs a variety of scanning strategies to manage traffic, making it challenging to extract relevant patterns that can be taught. To address these issues, a threefold framework was developed: (i) a dynamic network-based approach that maps expert ATCs' EM data to dynamic targets, enabling the representation of how visual scanning strategies evolve over time; (ii) a novel density-based clustering method that reduces the inherent complexity of ATCs' raw EM data to enhance its visualization; and (iii) a new modified n-gram-based similarity analysis method to evaluate the consistency and similarity of visual scanning strategies among experts. 
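The network-based idea described above can be sketched in a few lines: nodes are radar targets and directed edges count gaze transitions between them. The fixation sequence, target IDs, and the degree-based importance measure below are illustrative assumptions, not the dissertation's actual data or its centrality metrics (closeness, betweenness).

```python
from collections import Counter

# Hypothetical fixation sequence: the target (aircraft) an expert's gaze
# landed on, in temporal order within one time window.
fixations = ["AC1", "AC2", "AC1", "AC3", "AC2", "AC1", "AC4", "AC1"]

# Edges of the scanning network: one directed edge per gaze transition.
transitions = Counter(
    (a, b) for a, b in zip(fixations, fixations[1:]) if a != b
)

# Weighted degree as a simple stand-in for network importance:
# how often attention flows into or out of a target.
degree = {t: 0 for t in set(fixations)}
for (a, b), w in transitions.items():
    degree[a] += w
    degree[b] += w

ranking = sorted(degree, key=degree.get, reverse=True)
print(ranking[0])  # AC1: attention repeatedly returns to this target
```

On a real scan path the same edge list would feed a graph library to compute the closeness and betweenness measures the abstract mentions.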
Two experiments were conducted at the FAA Civil Aerospace Medical Institute in Oklahoma City, where EM data of 15 veteran ATCs (> 20 years of experience) were collected using eye trackers (Facelab and Tobii) while they controlled high-fidelity simulated air traffic. The first experiment involved an en-route traffic scenario (aircraft above 18,000 feet) and the second involved airport tower traffic (aircraft within a 30-mile radius of an airport). The dynamic network analysis showed three important results: (i) it can effectively represent which targets are important and how their significance evolves over time; (ii) in dynamic scenarios, where targets spend variable amounts of time on the display, traditional target importance measures (i.e. the number and duration of eye fixations) can be misleading; and (iii) importance measures derived from the network-based approach (e.g. closeness, betweenness) can be used to understand how ATCs' visual attention moves between targets. The results from the density-based clustering method show that by controlling its two parameter values (i.e. spatial and temporal approximation), the visualization of raw EM data can be substantially simplified. This approximate representation can be used for training, where an expert ATC's visual scanning strategy is visualized with reduced complexity, thereby enhancing novices' understanding while preserving its significant patterns (key for visual pattern mining). Moreover, the model parameters enable the decision-maker to incorporate context-dependent factors by adjusting the spatial (in pixels) and temporal (in milliseconds) thresholds used for the visual scanning approximation. 
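The spatial/temporal approximation can be illustrated with a minimal sketch: consecutive fixations closer than a pixel threshold and a millisecond threshold are merged into one representative point, simplifying the raw scan path. The data, threshold values, and greedy merging rule are illustrative assumptions, not the dissertation's exact algorithm.

```python
def simplify_scanpath(fixations, eps_px=50.0, eps_ms=300.0):
    """fixations: list of (x, y, t_ms) tuples in temporal order."""
    clusters = []  # each cluster is a list of member fixations
    for fx in fixations:
        if clusters:
            # Centroid of the current cluster and time of its last member.
            cx = sum(p[0] for p in clusters[-1]) / len(clusters[-1])
            cy = sum(p[1] for p in clusters[-1]) / len(clusters[-1])
            last_t = clusters[-1][-1][2]
            near = ((fx[0] - cx) ** 2 + (fx[1] - cy) ** 2) ** 0.5 <= eps_px
            soon = fx[2] - last_t <= eps_ms
            if near and soon:
                clusters[-1].append(fx)
                continue
        clusters.append([fx])
    # Represent each cluster by its centroid and first timestamp.
    return [(sum(p[0] for p in c) / len(c),
             sum(p[1] for p in c) / len(c),
             c[0][2]) for c in clusters]

raw = [(100, 100, 0), (110, 105, 120), (300, 400, 250),
       (305, 398, 380), (900, 80, 900)]
print(simplify_scanpath(raw))  # 5 raw fixations reduced to 3 points
```

Tightening `eps_px` and `eps_ms` recovers the raw scan path, which is how the parameters let a decision-maker trade detail against readability.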
The modified n-gram approach allows a twofold similarity analysis of EM data: (i) detecting similar EM patterns due to exact sequential matches, in which targets are focused on and/or grouped together visually because of several eye fixation transitions among them, and (ii) uncovering similar visual scanning behaviors that are slightly perturbed versions of each other, arising from the idiosyncrasies of individual ATCs. This method is therefore more robust than other prevalent approaches, which employ strict definitions of similarity that are difficult to observe empirically in real-life scenarios. To summarize, the three methods developed provide a comprehensive framework for understanding the evolving nature of visual scanning strategies in complex environments (e.g. the air traffic control task) by: (i) identifying target importance and its evolution; (ii) simplifying the visualization of complex EM strategies for easier comprehension; and (iii) evaluating the similarity among visual scanning strategies in dynamic scenarios.
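The tolerance for "slightly perturbed" scan paths can be sketched as follows: an n-gram of one sequence counts as matched if some n-gram of the other sequence differs from it in at most one position. The sequences and the one-substitution rule are illustrative assumptions, not the dissertation's exact modification.

```python
def ngrams(seq, n=3):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def near(a, b):
    """n-grams of equal length differing in at most one position."""
    return sum(x != y for x, y in zip(a, b)) <= 1

def scan_similarity(s1, s2, n=3):
    """Fraction of s1's n-grams with an exact or near match in s2."""
    g1, g2 = ngrams(s1, n), ngrams(s2, n)
    if not g1 or not g2:
        return 0.0
    hits = sum(any(near(a, b) for b in g2) for a in g1)
    return hits / len(g1)

expert_a = ["AC1", "AC2", "AC3", "AC1", "AC4"]
expert_b = ["AC1", "AC2", "AC5", "AC1", "AC4"]  # one-target perturbation
print(scan_similarity(expert_a, expert_b))      # 1.0
```

A strict exact-match comparison would score these two scan paths as having no trigram in common, which is the brittleness the modified approach is designed to avoid.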

    Eight-Channel Multispectral Image Database for Saliency Prediction

    Saliency prediction is a very important and challenging task within the computer vision community. Many models exist that try to predict the salient regions of a scene from its RGB image values. Several new models are being developed, and spectral imaging techniques may potentially overcome the limitations found when using RGB images. However, the experimental study of such models based on spectral images is difficult because of the lack of available data. This article presents the first eight-channel multispectral image database of outdoor urban scenes, together with gaze data recorded with an eye tracker from several observers performing different visualization tasks. In addition, the information from this database is used to study whether the complexity of the images has an impact on the saliency maps retrieved from the observers. Results show that more complex images do not correlate with higher differences in the saliency maps obtained. Funding: Spanish Ministry of Science, Innovation, and Universities (MICINN) RTI2018-094738-B-I00; European Commissio
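Quantifying "differences in the saliency maps obtained" is commonly done with the Pearson correlation coefficient (CC) between two maps; a minimal sketch follows. The toy maps are assumptions for illustration, not data from the database, and CC is one of several metrics the study could have used.

```python
import math

def correlation(map_a, map_b):
    """Pearson CC between two equally sized saliency maps (flat lists)."""
    n = len(map_a)
    ma, mb = sum(map_a) / n, sum(map_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(map_a, map_b))
    va = math.sqrt(sum((a - ma) ** 2 for a in map_a))
    vb = math.sqrt(sum((b - mb) ** 2 for b in map_b))
    return cov / (va * vb)

# Two toy 2x2 saliency maps, flattened row-major.
m1 = [0.1, 0.8, 0.3, 0.9]
m2 = [0.2, 0.7, 0.4, 0.8]
print(round(correlation(m1, m2), 3))  # close to 1: very similar maps
```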

    A Multi-scale colour and Keypoint Density-based Approach for Visual Saliency Detection.

    In the first seconds of observation of an image, several visual attention processes are involved in identifying the visual targets that pop out from the scene. Saliency is the quality that makes certain regions of an image stand out from the visual field and grab our attention. Saliency detection models, inspired by visual cortex mechanisms, employ both colour and luminance features. Furthermore, both the locations of pixels and the presence of objects influence visual attention processes. In this paper, we propose a new saliency method based on the combination of the distribution of interest points in the image with multi-scale analysis, a centre-bias module, and a machine learning approach. We use perceptually uniform colour spaces to study how colour impacts the extraction of saliency. To investigate eye movements and assess the performance of saliency methods on object-based images, we conduct experimental sessions on our dataset ETTO (Eye Tracking Through Objects). Experiments show our approach to be accurate in detecting saliency compared with state-of-the-art methods and accessible eye-movement datasets. The performance on object-based images is excellent and remains consistent on generic pictures. Besides, our work reveals interesting findings on the relationships between saliency and perceptually uniform colour spaces.
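The keypoint-density idea can be sketched as follows: saliency at a pixel is estimated by summing Gaussian kernels centred on detected interest points, evaluated at several scales. The keypoints here are hard-coded stand-ins for a real detector, and this simple sum omits the paper's centre-bias module and learning stage.

```python
import math

def density_saliency(x, y, keypoints, scales=(10.0, 30.0)):
    """Multi-scale Gaussian density of interest points at pixel (x, y)."""
    s = 0.0
    for sigma in scales:
        for kx, ky in keypoints:
            d2 = (x - kx) ** 2 + (y - ky) ** 2
            s += math.exp(-d2 / (2 * sigma * sigma)) / sigma
    return s

# Two clustered keypoints and one isolated keypoint (toy detector output).
kps = [(50, 50), (52, 48), (200, 200)]
# A pixel inside the cluster scores higher than the isolated point.
print(density_saliency(50, 50, kps) > density_saliency(200, 200, kps))  # True
```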

    Cognitive Image Fusion and Assessment


    Saliency Prediction in the Data Visualization Design Process

    The abstract is in the attachment.

    Color in context and spatial color computation

    The purpose of this dissertation is to contribute to the field of spatial color computation models. We begin by introducing an overview of different approaches to the definition of computational models of color in digital imaging. In particular, we present a recent, accurate mathematical definition of the Retinex algorithm, which led to the definition of a new computational model called Random Spray Retinex (RSR). We then introduce the tone mapping problem, discussing the need for color computation in the implementation of a perceptually correct computational model. To this end, we present the HDR Retinex algorithm, which addresses tone mapping and color constancy at the same time. Finally, we present some experiments analyzing the influence of HDR Retinex spatial color computation on tristimulus colors obtained using different Color Matching Functions (CMFs) on spectral luminance distributions generated by a photometric ray tracer.
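A minimal single-channel sketch of the spray idea behind Random Spray Retinex: each pixel is rescaled by the maximum intensity found in a random sample ("spray") of surrounding pixels, which acts as a local white reference. The tiny image, uniform sampling, and single spray per pixel are toy assumptions; the actual RSR algorithm uses per-channel sprays with a radial density and averages over many sprays.

```python
import random

def rsr_channel(img, n_samples=20, seed=0):
    """Toy spray-based rescaling of one channel (2-D list of floats)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Random spray of surrounding pixels, plus the pixel itself.
            spray = [img[rng.randrange(h)][rng.randrange(w)]
                     for _ in range(n_samples)] + [img[y][x]]
            out[y][x] = img[y][x] / max(spray)  # local white reference
    return out

img = [[0.2, 0.4], [0.6, 0.8]]
print(rsr_channel(img))  # every value rescaled into (0, 1]
```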

    Modelling eye movements and visual attention in synchronous visual and linguistic processing

    This thesis focuses on modelling visual attention in tasks in which vision interacts with language and other sources of contextual information. The work is based on insights provided by experimental studies in visual cognition and psycholinguistics, particularly cross-modal processing. We present a series of models of eye movements in situated language comprehension capable of generating human-like scan-paths. Moreover, we investigate the existence of high-level structure in these scan-paths and the applicability of tools used in Natural Language Processing to the analysis of this structure. We show that scan-paths carry interesting information that is currently neglected in both experimental and modelling studies. This information, studied at a level beyond simple statistical measures such as the proportion of looks, can be used to extract knowledge of more complicated patterns of behaviour, and to build models capable of simulating human behaviour in the presence of linguistic material. We also revisit the classical saliency model and its extensions, in particular the Contextual Guidance Model of Torralba et al. (2006), and extend it with memory of target positions in visual search. We show that models of contextual guidance should contain components responsible for short-term learning and memorisation. We also investigate the applicability of this type of model to the prediction of human behaviour in tasks with incremental stimuli, as in situated language comprehension. Finally, we investigate the issue of objectness and object saliency, including their effects on eye movements and human responses to experimental tasks. In a simple experiment we show that, using an object-based notion of saliency, it is possible to predict fixation locations better than with pixel-based saliency as formulated by Itti et al. (1998). 
In addition, we show that object-based saliency fits into current theories such as cognitive relevance and can be used to build unified models of cross-referential visual and linguistic processing. This thesis forms a foundation for a more detailed study of scan-paths within an object-based framework such as the Cognitive Relevance Framework (Henderson et al., 2007, 2009) by providing models capable of explaining human behaviour, and by delivering tools and methodologies to predict which objects will be attended to during synchronous visual and linguistic processing.
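The contrast between pixel-based and object-based saliency can be sketched as follows: instead of selecting the single most salient pixel, pixel-level saliency is pooled over object regions and the objects themselves are ranked as fixation candidates. The object masks and the saliency map are toy stand-ins for a real segmentation and an Itti-style map, and mean pooling is just one plausible aggregation choice.

```python
def rank_objects(saliency, masks):
    """saliency: 2-D list; masks: {name: set of (row, col) pixels}.
    Returns object names ordered by mean saliency over their pixels."""
    score = {name: sum(saliency[r][c] for r, c in px) / len(px)
             for name, px in masks.items()}
    return sorted(score, key=score.get, reverse=True)

# Toy 3x3 saliency map and two hypothetical object regions.
sal = [[0.1, 0.9, 0.8],
       [0.1, 0.7, 0.2],
       [0.3, 0.1, 0.1]]
masks = {"cup":   {(0, 1), (0, 2), (1, 1)},
         "table": {(1, 0), (2, 0), (2, 1), (2, 2)}}
print(rank_objects(sal, masks))  # ['cup', 'table']
```

Under an object-based account, the model predicts a fixation on the "cup" region as a whole, rather than on whichever single pixel happens to peak.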