58 research outputs found

    Effect of Tactile Feedback on Performance

    Humans interact with their environment by obtaining information from various sensory modalities. These modalities combine to facilitate manipulation of, and interaction with, objects and the environment. The way humans interact with computers mirrors this environmental interaction, except that feedback from the tactile channel is largely absent. Most computer operation is completed visually because the primary feedback humans currently receive from computers is through the eyes. This strong dependence on the visual modality can cause visual fatigue and fixation on displays, resulting in errors and decreased performance. Distributing tasks and information across sensory modalities could address this problem. This study added tactile feedback to the human-computer interface through vibration of a mouse, to more accurately reflect a human's multi-sensory interaction with the environment. The investigation used time off target to measure performance in a pursuit-tracking task. The independent variables were type of feedback, with two levels (tactile feedback vs. no tactile feedback), and target speed, with three levels (slow, medium, and fast). Tactile feedback improved pursuit-tracking performance by 6%. Significant main effects were found for both the speed and feedback factors, but no significant interaction between speed and feedback was obtained. This improvement in performance is consistent with previous research and lends further support to the advantages multimodal feedback may offer man-machine interfaces.
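To make the reported effect size concrete, the following minimal Python sketch shows how a 6% improvement would be computed from condition means. The time-off-target values are hypothetical placeholders, not the study's data:

```python
# Hypothetical mean time-off-target (seconds) per speed condition;
# illustrative values only, not data from the study described above.
no_feedback = {"slow": 10.0, "medium": 14.0, "fast": 20.0}
tactile = {"slow": 9.4, "medium": 13.16, "fast": 18.8}

mean_no = sum(no_feedback.values()) / len(no_feedback)
mean_tac = sum(tactile.values()) / len(tactile)

# Percent improvement = reduction in time off target relative to baseline.
improvement = (mean_no - mean_tac) / mean_no * 100
print(f"{improvement:.1f}% improvement")  # 6.0% with these illustrative values
```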

    The workload implications of haptic displays in multi-display environments such as the cockpit: Dual-task interference of within-sense haptic inputs (tactile/proprioceptive) and between-sense inputs (tactile/proprioceptive/auditory/visual)

    Visual workload demand within the cockpit is reaching saturation, whereas the haptic sense (proprioceptive and tactile sensation) is relatively untapped, despite studies suggesting the benefits of haptic displays. Multiple Resource Theory (MRT) suggests that inputs from haptic displays will not interfere with inputs from visual or auditory displays. MRT is based on the premise that multisensory integration occurs only after unisensory processing. However, recent neuroscientific findings suggest that the distinction between unisensory and multisensory processing is much more blurred than previously thought. This programme of work had two research objectives: (1) to examine whether multiple haptic inputs can be processed at the same time without performance decrement (Study One); and (2) to examine whether haptic inputs can be processed at the same time as visual or auditory inputs without performance decrement (Study Two). In Study One, participants performed dual-tasks consisting of same-sense tasks (tactile or proprioceptive) or different-sense tasks (tactile and proprioceptive). These tasks also varied in terms of processing code, in line with MRT. The results showed significantly more performance decrement for the same-sense dual-tasks than for the different-sense dual-tasks, in accordance with MRT, suggesting that performance will suffer if two haptic displays of the same type are used concurrently. An adjustment to the MRT model is suggested to incorporate these results. In Study Two, participants performed different-sense dual-tasks consisting of auditory or visual tasks paired with tactile or proprioceptive tasks. The tasks also varied in terms of processing code. Contrary to MRT, the results showed significant performance decrement for all of the dual-tasks when processing code was different, but not when it was the same. These results reveal an exception to two key MRT rules: the sensory resource rule and the processing code rule.
It is suggested that MRT may be oversimplified and that other factors highlighted by recent neuroscientific research should be taken into account in theories of dual-task performance.
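The dual-task decrement measure underlying both studies can be sketched as follows. All scores below are hypothetical, chosen only to illustrate the same-sense vs. different-sense pattern the abstract reports for Study One:

```python
# Hypothetical performance scores (proportion of time on target);
# illustrative values only, not data from the studies described above.
single_task = {"tactile": 0.95, "proprioceptive": 0.93}
dual_task = {
    ("tactile", "tactile"): 0.70,          # same-sense pairing
    ("tactile", "proprioceptive"): 0.88,   # different-sense pairing
}

def decrement(primary, pairing):
    """Dual-task performance decrement relative to the single-task baseline."""
    return (single_task[primary] - dual_task[pairing]) / single_task[primary]

same_sense = decrement("tactile", ("tactile", "tactile"))
different_sense = decrement("tactile", ("tactile", "proprioceptive"))

# Study One's reported pattern: same-sense decrement exceeds different-sense.
assert same_sense > different_sense
```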

    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced, and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when drivers are engaged in non-driving-related tasks. This research project expands on existing work on uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties in the presence of non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety. In a first step, the impact of visually conveying uncertainties was investigated with respect to workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, in order to reduce monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework.
    Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual uncertainty display in a driving simulator study. Eye-tracking and subjective workload data indicate that the peripheral awareness display reduces monitoring effort relative to the visual display, while driving performance and trust data show that the benefits of uncertainty communication are maintained. Further, the project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail of uncertainty information. Expanding on this approach, an augmented reality display concept was developed, and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed by applying a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented-reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.
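The finding that colour hue orders consistently from low to high uncertainty suggests a simple display mapping. The sketch below is a hypothetical illustration, not the thesis's actual palette: it linearly interpolates hue between two endpoint colours as uncertainty rises:

```python
import colorsys

def uncertainty_to_rgb(u, low_hue=1/3, high_hue=0.0):
    """Map an uncertainty level u in [0, 1] to an RGB colour by linearly
    interpolating hue from green (low uncertainty) to red (high).
    The endpoint hues are illustrative choices, not the study's design."""
    u = min(max(u, 0.0), 1.0)          # clamp out-of-range inputs
    hue = low_hue + (high_hue - low_hue) * u
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

low_r, low_g, low_b = uncertainty_to_rgb(0.0)    # dominated by green
high_r, high_g, high_b = uncertainty_to_rgb(1.0) # pure red
```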

    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centred, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are:
    Immersive audio: the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies.
    Sonic interaction: the human-computer interplay through auditory feedback in VEs.
    VR systems: these naturally support multimodal integration, impacting different application domains.
    Sonic Interactions in Virtual Environments features state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, the humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.
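As a toy illustration of spatial audio rendering (a hypothetical example, not taken from the book), equal-power stereo panning is one of the simplest building blocks; real auralization additionally involves HRTFs and room acoustics:

```python
import math

def equal_power_pan(sample, azimuth):
    """Equal-power stereo panning: azimuth ranges from -1 (full left)
    to +1 (full right). The cos/sin gain pair keeps the total power
    constant as the source moves across the stereo field."""
    theta = (azimuth + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)

left, right = equal_power_pan(1.0, 0.0)  # centred source
# At the centre both channels carry equal gain (~0.707), so power sums to 1.
```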

    Holistic Approach for Authoring Immersive and Smart Environments for the Integration in Engineering Education

    The fourth industrial revolution and rapid technological progress are challenging established educational structures and traditional educational practices. In engineering education in particular, lifelong learning requires continually improving one's knowledge and skills in order to remain competitive in the labour market. A paradigm shift in education and training towards new technologies such as virtual reality and artificial intelligence is needed. However, incorporating these technologies into an educational programme is not as simple as investing in new hardware or software. New educational programmes must be created, or old ones redesigned from the ground up. These are complex and extensive processes involving decision-making, design, and development, and they come with considerable challenges that require overcoming many obstacles. This thesis presents a methodology that addresses the challenges of using virtual reality and artificial intelligence as key technologies in engineering education. The methodology aims to guide the main stakeholders in improving the learning process and enabling novel, efficient learning experiences. Since every educational programme is unique, the methodology follows a holistic approach to support the creation of tailored courses or training. To this end, it considers the interactions between different aspects, grouped into three levels: education, technology, and management. The methodology emphasises the influence of the technologies on instructional design and on management processes. It provides methods for decision-making based on a comprehensive pedagogical, technological, and economic analysis.
    Furthermore, it supports the process of didactic design through a comprehensive categorisation of the advantages and disadvantages of immersive learning environments, showing which of their properties can improve the learning process. Particular emphasis is placed on the systematic design of immersive systems and the efficient creation of immersive applications using methods from the field of artificial intelligence. Four use cases with different training programmes are presented to validate the methodology. Each educational programme has its own objectives, and in combination they cover the validation of all levels of the methodology. The methodology was iteratively developed and improved with each validation project. The results show that it is reliable and transferable to many scenarios as well as to most educational levels and domains. By applying the methods presented in this thesis, stakeholders can integrate immersive technologies effectively and efficiently into their teaching practice. Moreover, based on the proposed approaches, they can save effort, time, and cost in planning, developing, and maintaining immersive systems. The technology shifts the role of the teacher towards that of a facilitator. In addition, teachers gain the opportunity to support learners individually and to focus on their higher-order cognitive skills. As the main outcome, learners receive an appropriate, high-quality, and up-to-date education that makes them more qualified, successful, and satisfied.