
    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated with a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources. These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive, computer-based multitasking environments.
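    The mitigation idea at the core of AMMO, re-routing redundant cues to spare sensory and WM resources when one channel is overloaded, can be illustrated with a toy selection rule. The Python sketch below is a hypothetical illustration only: the load estimates, threshold, and channel names are assumptions, not AMMO's published strategies.

```python
# Hypothetical sketch of a multimodal mitigation rule in the spirit of AMMO:
# when the visual channel is heavily loaded, present redundant cues on the
# least-loaded non-visual channels. All values and names are illustrative.
def choose_cue_modalities(channel_load, threshold=0.7):
    """channel_load maps modality name -> estimated load in [0, 1]."""
    if channel_load.get("visual", 0.0) <= threshold:
        return ["visual"]
    # Offload: add redundant cues on the least-loaded non-visual channels.
    spare = sorted((m for m in channel_load if m != "visual"),
                   key=channel_load.get)
    return ["visual"] + spare[:2]

print(choose_cue_modalities({"visual": 0.9, "auditory": 0.3, "tactile": 0.2}))
# -> ['visual', 'tactile', 'auditory']
```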

    Gaze-Aware Cognitive Assistant for Multiscreen Surveillance

    Surveillance operators must scan multiple camera feeds to ensure timely detection of incidents; however, variability in scanning behavior can lead to untimely or failed detection of critical information in feeds that were neglected for a long period. Using an eye tracker to monitor screen fixations, we can calculate in real time the time elapsed since the last scan of each particular feed, allowing targeted countermeasures to be set up contingent on operator oculomotor behavior. One avenue is to provide operators with timely alerts to modulate the scan pattern to avoid attentional tunneling and inattentional blindness. We test such an adaptive solution within a major event surveillance simulation, and preliminary results show that operator scan behavior can be modulated, although further investigation is required to determine the warning frequency and modality that optimize the balance between saliency and workload increase. Future work will focus on adding a real-time vigilance detection and countermeasure capability.
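    The core mechanism described here, tracking the elapsed time since each feed's last fixation and alerting when a feed has been neglected, can be sketched in a few lines of Python. The feed identifiers, the 20 s threshold, and the polling scheme below are illustrative assumptions, not the study's implementation.

```python
# Minimal sketch of gaze-contingent stale-feed detection: record the last
# fixation time per camera feed and report feeds neglected too long.
import time

STALE_THRESHOLD_S = 20.0  # assumed neglect threshold per feed

class FeedScanMonitor:
    def __init__(self, feed_ids):
        now = time.monotonic()
        self.last_fixation = {fid: now for fid in feed_ids}

    def on_fixation(self, feed_id):
        """Call whenever the eye tracker reports a fixation on a feed."""
        self.last_fixation[feed_id] = time.monotonic()

    def stale_feeds(self):
        """Feeds not fixated for longer than the threshold; alert on these."""
        now = time.monotonic()
        return [fid for fid, t in self.last_fixation.items()
                if now - t > STALE_THRESHOLD_S]

monitor = FeedScanMonitor(["cam1", "cam2", "cam3"])
monitor.on_fixation("cam1")
# Poll monitor.stale_feeds() periodically to trigger targeted alerts.
```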

    The Integration Of Audio Into Multimodal Interfaces: Guidelines And Applications Of Integrating Speech, Earcons, Auditory Icons, and Spatial Audio (SEAS)

    The current research is aimed at providing validated guidelines to direct the integration of audio into human-system interfaces. This work first discusses the utility of integrating audio to support multimodal human information processing. Next, an auditory interactive computing paradigm utilizing Speech, Earcons, Auditory icons, and Spatial audio (SEAS) cues is proposed, and guidelines for the integration of SEAS cues into multimodal systems are presented. Finally, the results of two studies are presented that evaluate the utility of using SEAS cues, developed following the proposed guidelines, in relieving perceptual and attentional processing bottlenecks when conducting Unmanned Air Vehicle (UAV) control tasks. The results demonstrate that SEAS cues significantly enhance human performance on UAV control tasks, particularly response accuracy and reaction time on a secondary monitoring task. The results suggest that SEAS cues may be effective in overcoming perceptual and attentional bottlenecks, with the advantages most evident under high-workload conditions. The theories and principles provided in this paper should be of interest to audio system designers and anyone involved in the design of multimodal human-computer systems.
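    As a rough illustration of the SEAS taxonomy, the snippet below pairs each cue class with its conventional role in auditory display design. The mapping reflects standard definitions of these cue types and a hypothetical selection heuristic, not the paper's specific guidelines.

```python
# Conventional roles of the four SEAS cue classes (standard definitions),
# plus a hypothetical selection heuristic; not the paper's guidelines.
SEAS_CUES = {
    "speech":        "precise, unambiguous semantic content (spoken messages)",
    "earcon":        "abstract musical motifs mapped to system events",
    "auditory_icon": "everyday sounds that resemble the event they signal",
    "spatial_audio": "3-D localized sound for direction and orienting",
}

def pick_cue(needs_semantics: bool, needs_location: bool) -> str:
    # Illustrative rule: location needs dominate, then semantic precision.
    if needs_location:
        return "spatial_audio"
    return "speech" if needs_semantics else "earcon"

print(pick_cue(needs_semantics=False, needs_location=True))  # -> spatial_audio
```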

    Attention Restraint, Working Memory Capacity, and Mind Wandering: Do Emotional Valence or Intentionality Matter?

    Attention restraint appears to mediate the relationship between working memory capacity (WMC) and mind wandering (Kane et al., 2016). Prior work has identified two dimensions of mind wandering: emotional valence and intentionality. However, less is known about how WMC and attention restraint correlate with these dimensions. The current study examined the relationship between WMC, attention restraint, and mind wandering by emotional valence and intentionality. A confirmatory factor analysis demonstrated that WMC and attention restraint were strongly correlated, but only attention restraint was related to overall mind wandering, consistent with prior findings. However, when examining the emotional valence of mind wandering, attention restraint and WMC were related to negatively and positively valenced, but not neutral, mind wandering. Attention restraint was also related to intentional but not unintentional mind wandering. These results suggest that WMC and attention restraint predict some, but not all, types of mind wandering.
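    A latent-variable analysis of this kind can be specified compactly. The sketch below uses the semopy library with hypothetical indicator and column names (the actual task battery comes from the study); it is a minimal illustration of correlated WMC and attention-restraint factors predicting a mind-wandering rate, not the authors' exact model.

```python
# Minimal SEM/CFA sketch with semopy: two correlated latent factors
# (WMC, attention restraint) predicting an observed mind-wandering rate.
# Indicator and column names are hypothetical placeholders.
import pandas as pd
import semopy

DESC = """
WMC =~ ospan + symspan + rotspan
Restraint =~ antisaccade + sart + flanker
WMC ~~ Restraint
mw_rate ~ WMC + Restraint
"""

df = pd.read_csv("task_scores.csv")  # assumed: one row per participant
model = semopy.Model(DESC)
model.fit(df)
print(model.inspect())  # loadings, factor covariance, regression paths
```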

    The Virtual Driver: Integrating Physical and Cognitive Human Models to Simulate Driving with a Secondary In-Vehicle Task.

    Models of human behavior provide insight into people’s choices and actions and form the basis of engineering tools for predicting performance and improving interface design. Most human models are either cognitive, focusing on the information processing underlying the decisions made when performing a task, or physical, representing the postures and motions used to perform the task. In general, cognitive models contain a highly simplified representation of the physical aspects of a task and are best suited for analysis of tasks with only minor motor components. Physical models require a person experienced with the task and the software to enter detailed information about how and when movements should be made, a process that can be costly, time consuming, and inaccurate. Many tasks have both cognitive and physical components, which may interact in ways that could not be predicted using a cognitive or physical model alone. This research proposes a solution by combining a cognitive model, the Queuing Network – Model Human Processor (QN-MHP), and a physical model, the Human Motion Simulation (HUMOSIM) Framework, to produce an integrated cognitive-physical human model that makes it possible to study complex human-machine interactions. The physical task environment is defined using the HUMOSIM Framework, which communicates relevant information such as movement times and difficulty to the QN-MHP. Action choice and movement sequencing are performed in the QN-MHP. The integrated model’s more natural movements, generated by motor commands from the QN-MHP, and more realistic cognitive decisions, made using physical information from the Framework, make it useful for evaluating different designs for tasks, spaces, systems, and jobs. The Virtual Driver is the application of the integrated model to driving with an in-vehicle task. A driving simulator experiment was used to tune and evaluate the integrated model. Increasing the visual and physical difficulty of the in-vehicle task affected the resource-sharing strategies drivers used and resulted in deterioration in driving and in-vehicle task performance, especially for shorter drivers. The Virtual Driver replicates basic driving, in-vehicle task, and resource-sharing behaviors and provides a new way to study driver distraction. The model has applicability to interface design and predictions about staffing requirements and performance.
    Ph.D., Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75847/1/hjaf_1.pd
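    The division of labor described above (the physical model estimating movement times and difficulty, the cognitive model choosing and sequencing actions) can be schematized as a simple hand-off loop. All class, method, and parameter names below are illustrative stand-ins, not the actual QN-MHP or HUMOSIM Framework APIs.

```python
# Schematic of the cognitive-physical hand-off: the physical side supplies
# movement-time/difficulty estimates; the cognitive side picks the action.
from dataclasses import dataclass

@dataclass
class MovementEstimate:
    target: str
    duration_s: float   # predicted movement time from the physical model
    difficulty: float   # e.g., a reach/visual difficulty index

class PhysicalModel:
    """Stand-in for the HUMOSIM Framework side of the integration."""
    TABLE = {  # toy estimates for two in-vehicle reach targets
        "steering_wheel": MovementEstimate("steering_wheel", 0.4, 1.0),
        "radio_button":   MovementEstimate("radio_button",   0.9, 2.5),
    }
    def estimate(self, target):
        return self.TABLE[target]

class CognitiveModel:
    """Stand-in for the QN-MHP side: action choice and sequencing."""
    def choose_action(self, estimates):
        # e.g., schedule the cheapest pending movement first
        return min(estimates, key=lambda e: e.duration_s * e.difficulty)

def step(cog, phys, candidate_targets):
    estimates = [phys.estimate(t) for t in candidate_targets]
    return cog.choose_action(estimates)  # motor command returns to physical side

action = step(CognitiveModel(), PhysicalModel(), ["steering_wheel", "radio_button"])
print(action.target)  # -> steering_wheel
```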

    The role of phonology in visual word recognition: evidence from Chinese

    Posters - Letter/Word Processing V: abstract no. 5024
    The hypothesis of bidirectional coupling of orthography and phonology predicts that phonology plays a role in visual word recognition, as observed in the effects of feedforward and feedback spelling-to-sound consistency on lexical decision. However, because orthography and phonology are closely related in alphabetic languages (homophones in alphabetic languages are usually orthographically similar), it is difficult to exclude an influence of orthography on phonological effects in visual word recognition. Chinese languages contain many written homophones that are orthographically dissimilar, allowing a test of the claim that phonological effects can be independent of orthographic similarity. We report a study of visual word recognition in Chinese based on a mega-analysis of lexical decision performance with 500 characters. The results from multiple regression analyses, after controlling for orthographic frequency, stroke number, and radical frequency, showed main effects of feedforward and feedback consistency, as well as interactions between these variables and phonological frequency and number of homophones. Implications of these results for resonance models of visual word recognition are discussed.
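    The regression structure described above can be expressed as a model formula. The sketch below uses statsmodels with hypothetical column names; the predictors mirror the reported controls (orthographic frequency, stroke number, radical frequency), the consistency effects, and their interactions with phonological frequency and homophone count.

```python
# Hedged sketch of the mega-analysis regression; column names and the
# input file are assumptions, not the study's actual data layout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lexical_decision_500chars.csv")  # assumed item-level file

model = smf.ols(
    "rt ~ orth_freq + strokes + radical_freq"   # control variables
    " + ff_consistency * phon_freq"             # feedforward consistency terms
    " + fb_consistency * n_homophones",         # feedback consistency terms
    data=df,
).fit()
print(model.summary())
```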

    Designing Attentive Information Dashboards with Eye Tracking Technology


    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. This paper is intended both to focus educational game designers on the features of games that are genuinely useful for education and to introduce a successful form of teaching with which this audience may not yet be familiar.

    Applied and laboratory-based autonomic and neurophysiological monitoring during sustained attention tasks

    Fluctuations in sustained attention can cause momentary lapses in performance, which can have a significant impact on safety and wellbeing. However, it is less clear how unrelated tasks impact current task processes, and whether potential disturbances can be detected by autonomic and central nervous system measures in naturalistic settings. In a series of five experiments, I sought to investigate how prior attentional load impacts semi-naturalistic tasks of sustained attention, and whether neurophysiological and psychophysiological monitoring of continuous task processes and performance could capture attentional lapses. The first experiment explored various non-invasive electrophysiological and subjective methods during multitasking. The second experiment employed a manipulation of multitasking, task switching, to attempt to unravel the negative lasting impacts of multitasking on neural oscillatory activity, while the third experiment employed a similar paradigm in a semi-naturalistic environment of simulated driving. The fourth experiment explored the feasibility of measuring changes in autonomic processing during a naturalistic sustained monitoring task, autonomous driving, while the fifth experiment investigated the visual demands and acceptability of a biologically based monitoring system. The results revealed several findings. The first experiment demonstrated that only self-report ratings were able to disentangle attentional load during multitasking; the second and third experiments revealed deficits in parieto-occipital alpha activity and continuous performance depending on the attentional load of a previous unrelated task. The fourth experiment demonstrated increased sympathetic activity and a narrower distribution of fixations during an unexpected event in autonomous driving, while the fifth experiment established the acceptability of a biologically based monitoring system, although further research is needed to unpick its effects on attention. Overall, the results of this thesis provide insight into how autonomic and central processes manifest during semi-naturalistic sustained attention tasks, and they support the case for a neuro- or biofeedback system to improve safety and wellbeing.

    Interactive effects of orthography and semantics in Chinese picture naming

    Posters - Language Production/Writing: abstract no. 4035
    Picture-naming performance in English and Dutch is enhanced by presentation of a word that is similar in form to the picture name. However, it is unclear whether this facilitation has an orthographic or a phonological locus. We investigated the loci of the facilitation effect in Cantonese Chinese speakers by manipulating semantic, orthographic, and phonological similarity at three SOAs (−100, 0, and +100 msec). We identified an effect of orthographic facilitation that was independent of and larger than phonological facilitation across all SOAs. Semantic interference was also found at SOAs of −100 and 0 msec. Critically, an interaction of semantics and orthography was observed at an SOA of +100 msec. This interaction suggests that independent effects of orthographic facilitation on picture naming are located either at the level of semantic processing or at the lemma level and are not due to the activation of picture name segments at the level of phonological retrieval.