55 research outputs found

    Scan path visualization and comparison using visual aggregation techniques

    We demonstrate the use of different visual aggregation techniques to obtain uncluttered visual representations of scanpaths. First, fixation points are clustered using the mean-shift algorithm. Second, saccades are aggregated using the Attribute-Driven Edge Bundling (ADEB) algorithm, which can use a saccade's direction, onset timestamp, magnitude, or a combination of these as the edge compatibility criterion. Flow direction maps, computed during bundling, can be visualized separately (vertical or horizontal components) or as a single image using the Oriented Line Integral Convolution (OLIC) algorithm. Furthermore, the cosine similarity between two flow direction maps provides a similarity map for comparing two scanpaths. Last, we provide examples with basic patterns, a visual search task, and art perception. Used together, these techniques provide valuable insights into scanpath exploration and informative illustrations of the eye movement data.
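    A minimal sketch (not the authors' code) of two of the steps described above: clustering fixation points with mean shift and comparing two flow direction maps with per-pixel cosine similarity. The fixation data, bandwidth value, and flow-map shapes are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import MeanShift

        # fixations: (N, 2) array of x/y screen coordinates (synthetic here)
        fixations = np.random.rand(200, 2) * [1920, 1080]

        # Step 1: aggregate fixation points into clusters with mean shift.
        ms = MeanShift(bandwidth=60)            # bandwidth in pixels (assumed)
        labels = ms.fit_predict(fixations)
        cluster_centers = ms.cluster_centers_

        # Step 2: compare two flow direction maps (H x W x 2 vector fields,
        # e.g. produced during edge bundling) via per-pixel cosine similarity.
        def flow_similarity_map(flow_a, flow_b, eps=1e-9):
            dot = (flow_a * flow_b).sum(axis=-1)
            norm = np.linalg.norm(flow_a, axis=-1) * np.linalg.norm(flow_b, axis=-1)
            return dot / (norm + eps)           # values in [-1, 1]

        flow_a = np.random.randn(108, 192, 2)
        flow_b = np.random.randn(108, 192, 2)
        similarity = flow_similarity_map(flow_a, flow_b)
        print(labels.max() + 1, "clusters; mean similarity:", similarity.mean())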

    Using Eye Movement Data Visualization to Enhance Training of Air Traffic Controllers: A Dynamic Network Approach

    The Federal Aviation Administration (FAA) has forecast a substantial increase in US air traffic volume, creating high demand for Air Traffic Control Specialists (ATCSs). Training times and passing rates for ATCSs might be improved if expert ATCSs' eye movement (EM) characteristics can be utilized to support effective training. However, effective EM visualization is difficult for a dynamic task (e.g., aircraft conflict detection and mitigation) that involves interrogating multi-element targets that move, appear, disappear, and overlap within a display. To address these issues, a dynamic network-based approach is introduced that integrates adapted visualizations (i.e., time-frame networks and normalized dot/bar plots) with measures used in network science (i.e., indegree, closeness, and betweenness) to provide in-depth EM analysis. The proposed approach was applied to an aircraft conflict task on a high-fidelity simulator with veteran ATCSs and pseudo-pilots. Results show that ATCSs' visual attention to multi-element dynamic targets can be effectively interpreted and supported through multiple lines of evidence obtained from the various visualizations and associated measures. In addition, we found that fewer or shorter eye fixations on a target do not necessarily indicate that the target is less important when the flow of visual attention is analyzed within a network. The results show promise for cohesively analyzing and visualizing various eye movement characteristics to better support training.
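    A hedged sketch of the time-frame network idea: nodes are display targets (e.g., aircraft), directed edges are eye-fixation transitions between targets within one time frame, and standard network measures summarise attention. The target names and transition list below are invented for illustration; only the measures (indegree, closeness, betweenness) come from the abstract.

        import networkx as nx

        # (from_target, to_target) fixation transitions observed in one time frame
        transitions = [("AC1", "AC2"), ("AC2", "AC3"), ("AC1", "AC3"),
                       ("AC3", "AC1"), ("AC2", "AC1"), ("AC4", "AC1")]

        G = nx.DiGraph()
        for src, dst in transitions:
            if G.has_edge(src, dst):
                G[src][dst]["weight"] += 1   # repeated transitions increase weight
            else:
                G.add_edge(src, dst, weight=1)

        indegree = dict(G.in_degree(weight="weight"))   # attention received by a target
        closeness = nx.closeness_centrality(G)          # how quickly attention reaches a target
        betweenness = nx.betweenness_centrality(G)      # targets bridging the flow of attention

        for target in G.nodes:
            print(target, indegree[target],
                  round(closeness[target], 2), round(betweenness[target], 2))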

    Exploring the cognitive processes of map users employing eye tracking and EEG

    Sonography data science

    Fetal sonography remains a highly specialised skill in spite of its necessity and importance. Because of differences in fetal and maternal anatomy, and in human psychomotor skills, there is intra- and inter-sonographer variability amongst expert sonographers. By understanding their similarities and differences, we want to build more interpretive models to assist a sonographer who is less experienced in scanning. This thesis's contributions to the field of fetal sonography can be grouped into two themes. First, I have used data visualisation and machine learning methods to show that a sonographer's search strategy depends on the anatomical plane being sought. Second, I show that a sonographer's style and human skill of scanning are not easily disentangled. We first examine task-specific spatio-temporal gaze behaviour through the use of data visualisation, where a task is defined as a specific anatomical plane the sonographer is searching for. The qualitative analysis is performed at both a population and an individual level, where we show that the task being performed determines the sonographer's gaze behaviour. In our population-level analysis, we use unsupervised methods to identify meaningful gaze patterns and visualise task-level differences. In our individual-level analysis, we use a deep learning model to provide context to the eye-tracking data with respect to the ultrasound image. We then use an event-based visualisation to understand differences between gaze patterns of sonographers performing the same task. In some instances, sonographers adopt a different search strategy, which is seen in the misclassified instances of an eye-tracking task classification model. Our task classification model supports the qualitative behaviour seen in our population-level analysis, where task-specific gaze behaviour is quantitatively distinct. We also investigate the use of time-based skill definitions and their appropriateness in fetal ultrasound sonography; a time-based skill definition uses years of clinical experience as an indicator of skill. The developed task-agnostic skill classification model differentiates gaze behaviour between sonographers in training and fully qualified sonographers. The preliminary results also show that fetal sonography scanning remains an operator-dependent skill, where the notion of human skill and individual scanning stylistic differences cannot be easily disentangled. Our work demonstrates how and where sonographers look whilst scanning, which can be used as a stepping stone for building style-agnostic skill models.
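    The following is not the thesis's model, only a minimal sketch of the general idea of classifying which anatomical plane (task) a sonographer is searching for from summary gaze features. The feature names, synthetic data, and classifier choice are assumptions made for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # one row per scan clip: [mean fixation duration, fixation count,
        #                         mean saccade amplitude, scanpath length]
        X = rng.normal(size=(120, 4))
        y = rng.integers(0, 3, size=120)   # 3 hypothetical tasks (anatomical planes)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print("cross-validated task accuracy:", scores.mean())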

    Development of methodologies to analyze and visualize air traffic controllers’ visual scanning strategies

    The Federal Aviation Administration (FAA) estimates an air traffic volume of 60 million by 2040. However, the available workforce of expert air traffic controllers (ATCs) might not be sufficient to manage this anticipated high traffic volume. Thus, to maintain the same safety standard and service level for air travel, more ATCs will need to be trained quickly. Previous research shows that eye tracking technology can be used to enhance the training of ATCs by reducing their false alarm rate, thereby helping to mitigate the impact of increasing demand. Methods need to be developed to better understand experts' eye movement (EM) data so as to incorporate them effectively into the ATC training process. However, analyzing ATCs' EM data is challenging for several reasons: (i) aircraft representations on the radar display (i.e., targets) are dynamic, as their shapes and positions change with time; (ii) raw EM data are very complex to visualize, even for meaningfully short durations (e.g., a task completion time of 1 min); (iii) in the absence of any predefined order of visual scanning, each ATC employs a variety of scanning strategies to manage traffic, making it challenging to extract relevant patterns that can be taught. To address these issues, a threefold framework was developed: (i) a dynamic network-based approach that maps expert ATCs' EM data onto dynamic targets, enabling the representation of how visual scanning strategies evolve over time; (ii) a novel density-based clustering method that reduces the inherent complexity of ATCs' raw EM data to enhance its visualization; and (iii) a new modified n-gram-based similarity analysis method to evaluate the consistency and similarity of visual scanning strategies among experts. Two experiments were conducted at the FAA Civil Aerospace Medical Institute in Oklahoma City, where EM data of 15 veteran ATCs (>20 years of experience) were collected using eye trackers (Facelab and Tobii) while they controlled high-fidelity simulated air traffic. The first experiment involved an en-route traffic scenario (aircraft above 18,000 feet), and the second involved airport tower traffic (aircraft within a 30-mile radius of an airport). The dynamic network analysis showed three important results: (i) it can effectively represent which targets are important and how their significance evolves over time; (ii) in dynamic scenarios, where targets have variable time on display, traditional target-importance measures (i.e., the number and duration of eye fixations) can be misleading; and (iii) importance measures derived from the network-based approach (e.g., closeness, betweenness) can be used to understand how ATCs' visual attention moves between targets. The results from the density-based clustering method show that by controlling its two parameter values (i.e., the spatial and temporal approximation), the visualization of the raw EM data can be substantially simplified. This approximate representation can be used for training purposes, as expert ATCs' visual scanning strategies can be visualized with reduced complexity, thereby enhancing novices' understanding while preserving significant patterns (key for visual pattern mining). Moreover, the model parameters enable the decision-maker to incorporate context-dependent factors by adjusting the spatial (in pixels) and temporal (in milliseconds) thresholds used for the visual scanning approximation.
The modified n-gram approach allows for a twofold similarity analysis of EM data: (i) detecting similar EM patterns arising from exact sequential matches, in which targets are focused and/or grouped together visually because of several eye fixation transitions among them, and (ii) unearthing similar visual scanning behaviors that are slightly perturbed versions of each other, arising from the idiosyncrasies of individual ATCs. Thus, this method is more robust than other prevalent approaches, which employ strict definitions of similarity that are difficult to observe empirically in real-life scenarios. To summarize, the three methods developed provide a comprehensible framework for understanding the evolving nature of visual scanning strategies in complex environments (e.g., the air traffic control task) by: (i) identifying target importance and its evolution; (ii) simplifying the visualization of complex EM strategies for easier comprehension; and (iii) evaluating similarity among various visual scanning strategies in dynamic scenarios.
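    A simplified sketch of an n-gram-based scanpath similarity measure, not the modified algorithm from this dissertation: scanpaths are sequences of target labels, and similarity is the Jaccard overlap of their n-gram sets. Treating each n-gram as an unordered set (the ordered=False option below) is one crude, assumed way to tolerate small perturbations in the sequence.

        def ngrams(sequence, n=2, ordered=True):
            """Return the set of n-grams in a scanpath (sequence of target labels)."""
            grams = [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]
            return {g if ordered else tuple(sorted(g)) for g in grams}

        def scanpath_similarity(path_a, path_b, n=2, ordered=True):
            """Jaccard index over the two scanpaths' n-gram sets."""
            a, b = ngrams(path_a, n, ordered), ngrams(path_b, n, ordered)
            if not a and not b:
                return 1.0
            return len(a & b) / len(a | b)

        expert = ["AC1", "AC2", "AC3", "AC1", "AC4"]   # invented target sequences
        novice = ["AC2", "AC1", "AC3", "AC4", "AC1"]
        print(scanpath_similarity(expert, novice, ordered=True))    # strict sequential match
        print(scanpath_similarity(expert, novice, ordered=False))   # perturbation-tolerant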

    Landmark Visualization on Mobile Maps – Effects on Visual Attention, Spatial Learning, and Cognitive Load during Map-Aided Real-World Navigation of Pedestrians

    Even though they are day-to-day activities, humans find navigation and wayfinding to be cognitively challenging. To facilitate their everyday mobility, humans increasingly rely on ubiquitous mobile maps as navigation aids. However, the over-reliance on and habitual use of omnipresent navigation aids deteriorate humans' short-term ability to learn new information about their surroundings and induce a long-term decline in spatial skills. This deterioration in spatial learning is attributed to the fact that these aids capture users' attention and cause them to enter a passive navigation mode. Another factor that limits spatial learning during map-aided navigation is the lack of salient landmark information on mobile maps. Prior research has already demonstrated that wayfinders rely on landmarks—geographic features that stand out from their surroundings—to facilitate navigation and build a spatial representation of the environments they traverse. Landmarks serve as anchor points and help wayfinders to visually match the spatial information depicted on the mobile map with the information collected during the active exploration of the environment. Considering the acknowledged significance of landmarks for human wayfinding due to their visibility and saliency, this thesis investigates an open research question: how to graphically communicate landmarks on mobile map aids to cue wayfinders' allocation of attentional resources to these task-relevant environmental features. From a cartographic design perspective, landmarks can be depicted on mobile map aids on a graphical continuum ranging from abstract 2D text labels to realistic 3D buildings with high visual fidelity. Based on the importance of landmarks for human wayfinding and the rich cartographic body of research concerning their depiction on mobile maps, this thesis investigated how various landmark visualization styles affect the navigation process of two user groups (expert and general wayfinders) in different navigation use contexts (emergency and general navigation tasks). Specifically, I conducted two real-world map-aided navigation studies to assess the influence of various landmark visualization styles on wayfinders' navigation performance, spatial learning, allocation of visual attention, and cognitive load. In Study I, I investigated how depicting landmarks as abstract 2D building footprints or realistic 3D buildings on the mobile map affected expert wayfinders' navigation performance, visual attention, spatial learning, and cognitive load during an emergency navigation task. I asked expert navigators recruited from the Swiss Armed Forces to follow a predefined route using a mobile map depicting landmarks as either abstract 2D building footprints or realistic 3D buildings and to identify the depicted task-relevant landmarks in the environment. I recorded the experts' gaze behavior with a mobile eye tracker and their cognitive load with EEG during the navigation task, and I captured their incidental spatial learning at the end of the task. The wayfinding experts exhibited high navigation performance and low cognitive load during the map-aided navigation task regardless of the landmark visualization style. Their gaze behavior revealed that wayfinding experts navigating with realistic 3D landmarks focused more on the visualizations of landmarks on the mobile map than those who navigated with abstract 2D landmarks, while the latter focused more on the depicted route.
Furthermore, when the experts focused for longer on the environment and the landmarks, their spatial learning improved regardless of the landmark visualization style. I also found that the spatial learning of experts with self-reported low spatial abilities improved when they navigated with landmarks depicted as realistic 3D buildings. In Study II, I investigated the influence of abstract and realistic 3D landmark visualization styles on wayfinders sampled from the general population. As in Study I, I investigated wayfinders' navigation performance, visual attention, spatial learning, and cognitive load. In contrast to Study I, the participants in Study II were exposed to both landmark visualization styles in a navigation context that mimics everyday navigation. Furthermore, the participants were informed that their spatial knowledge of the environment would be tested after navigation. As in Study I, the wayfinders in Study II exhibited high navigation performance and low cognitive load regardless of the landmark visualization style. Their visual attention revealed that wayfinders with low spatial abilities and wayfinders familiar with the study area fixated on the environment longer when they navigated with realistic 3D landmarks on the mobile map. Spatial learning improved when wayfinders with low spatial abilities were assisted by realistic 3D landmarks. Also, when wayfinders were assisted by realistic 3D landmarks and paid less attention to the map aid, their spatial learning improved. Taken together, the present real-world navigation studies provide ecologically valid results on the influence of various landmark visualization styles on wayfinders. In particular, the studies demonstrate how visualization style modulates wayfinders' visual attention and facilitates spatial learning across various user groups and navigation use contexts. Furthermore, the results of both studies highlight the importance of individual differences in spatial abilities as predictors of spatial learning during map-assisted navigation. Based on these findings, the present work provides design recommendations for future mobile maps that go beyond the traditional concept of "one size fits all." Indeed, the studies support the case for landmark depictions that direct individual wayfinders' visual attention to task-relevant landmarks to further enhance spatial learning. This would be especially helpful for users with low spatial skills. In doing so, future mobile maps could dynamically adapt the visualization style of landmarks according to wayfinders' spatial abilities for cued visual attention, thus meeting individuals' spatial learning needs.

    Analysis of the Learning Process through Eye Tracking Technology and Feature Selection Techniques

    In recent decades, the use of technological resources such as the eye tracking methodology has been providing cognitive researchers with important tools to better understand the learning process. However, the interpretation of the metrics requires the use of supervised and unsupervised learning techniques. The main goal of this study was to analyse the results obtained with the eye tracking methodology by applying statistical tests and supervised and unsupervised machine learning techniques, and to contrast the effectiveness of each one. The parameters of fixations, saccades, blinks, and scan path were obtained, along with the results of a puzzle task. The statistical study concluded that no significant differences were found between participants in solving the crossword puzzle task; significant differences were detected only in the parameters of minimum saccade amplitude and minimum saccade velocity. On the other hand, the supervised machine learning techniques in this study provided possible features for analysis, some of them different from those used in the statistical study. Regarding the clustering techniques, a good fit was found between the algorithms used (k-means++, fuzzy k-means, and DBSCAN). These algorithms grouped the participants into three types of learning profile (students over 50 years old, and students and teachers under 50 years of age). Therefore, the use of both types of data analysis is considered complementary. European Project "Self-Regulated Learning in SmartArt" 2019-1-ES01-KA204-065615
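    A minimal sketch of the clustering step, assuming a table of per-participant eye-tracking metrics (fixation, saccade, and blink summaries). It uses k-means++ and DBSCAN from scikit-learn; fuzzy k-means (available in packages such as scikit-fuzzy) is omitted here. The data and parameter values are placeholders, not those of the study.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans, DBSCAN

        rng = np.random.default_rng(1)
        # rows: participants; columns: e.g. fixation count, mean fixation duration,
        # mean saccade amplitude, blink rate, scanpath length
        X = StandardScaler().fit_transform(rng.normal(size=(40, 5)))

        kmeans_labels = KMeans(n_clusters=3, init="k-means++", n_init=10,
                               random_state=0).fit_predict(X)
        dbscan_labels = DBSCAN(eps=1.2, min_samples=4).fit_predict(X)

        print("k-means++ profile sizes:", np.bincount(kmeans_labels))
        print("DBSCAN labels (-1 = noise):", set(dbscan_labels))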

    Supporting Situation Awareness and Decision Making in Weather Forecasting

    Weather forecasting is full of uncertainty, and as in domains such as air traffic control or medical decision making, decision support systems can affect a forecaster's ability to make accurate and timely judgments. Well-designed decision aids can help forecasters build situation awareness (SA), a construct regarded as a component of decision making. SA involves the ability to perceive elements within a system, comprehend their significance, and project their meaning into the future in order to make a decision. However, how SA is affected by uncertainty within a system has received little attention. This tension between managing uncertainty, situation assessment, and the impact that technology has on the two is the focus of this dissertation. To address this tension, this dissertation is centered on the evaluation of a set of coupled models that integrate rainfall observations and hydrologic simulations, coined "the FLASH system" (Flooded Locations and Simulated Hydrographs project). Prediction of flash flooding differs from forecasting other weather-related threats due to its multi-disciplinary nature. In the United States, some weather forecasters have limited hydrologic forecasting experience. Unlike FLASH, current flash flood forecasting tools are based upon rainfall rates, and with the recent expansion into coupled rainfall and hydrologic models, forecasters have to learn quickly how to incorporate these new data sources into their work. New models may help forecasters to increase their prediction skill, but no matter how far the technology advances, forecasters must be able to accept and integrate the new tools into their work in order to gain any benefit. A focus on human factors principles in the design stage can help to ensure that, by the time the product is transitioned into operational use, the decision support system addresses users' needs while minimizing task time, workload, and attention constraints. This dissertation discusses three qualitative and quantitative studies designed to explore the relationship between flash flood forecasting, decision aid design, and SA. The first study assessed the effects of visual data aggregation methods on perception and comprehension of a flash flood threat. Next, a mixed methods approach described how forecasters acquire SA and mitigate situational uncertainty during real-time forecasting operations. Lastly, the third study used eye tracking assessment to identify the effects of an automated forecasting decision support tool on SA and information scanning behavior. Findings revealed that uncertainty management in forecasting involves individual, team, and organizational processes. We make several recommendations for future decision support systems to promote SA and performance in the weather forecasting domain.