3 research outputs found

    A system for synthetic vision and augmented reality in future flight decks

    Rockwell Science Center is investigating novel human-computer interaction techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that deliver vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to it. Such Augmented Reality (AR) techniques can be employed in bad weather to permit flying under Visual Flight Rules (VFR) in conditions that would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUDs). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems present a constant virtual view in which inconsistencies are hard to detect. The development of components for such a system led to a demonstrator implemented on a PC. A camera captures video images, which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible airplane implementation, on-board attitude information can be used to obtain correct registration. If visibility is sufficient, computer vision modules can fine-tune the registration by matching visual cues against database features. This technology would be especially useful for landing approaches. The current demonstrator achieves a frame rate of 15 fps, using a live video feed as background with an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc-second digital elevation model database can be overlaid to provide synthetic vision in cases of limited visibility. For true outdoor testing (at ground level), the system has also been implemented on a wearable computer.
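
    As a concrete illustration of the registration step described above (attitude from inclinometer and magnetometer, position from GPS, overlay by projection), here is a minimal Python/NumPy sketch. It assumes a local north-east-down world frame with the GPS position already converted to metres, a simple pinhole camera, and illustrative intrinsics; it is not the demonstrator's actual code.

        import numpy as np

        def world_to_camera(heading, pitch, roll):
            """Rotation taking local NED world vectors into camera axes
            (x right, y down, z forward), from aerospace yaw-pitch-roll."""
            cy, sy = np.cos(heading), np.sin(heading)   # heading: magnetometer
            cp, sp = np.cos(pitch), np.sin(pitch)       # pitch: inclinometer
            cr, sr = np.cos(roll), np.sin(roll)         # roll: inclinometer
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            body_to_world = Rz @ Ry @ Rx
            # Body axes (x forward, y right, z down) -> camera axes.
            body_to_cam = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
            return body_to_cam @ body_to_world.T

        def project(feature_ned, cam_ned, R_wc, f=800.0, cx=320.0, cy=240.0):
            """Pixel coordinates of a world feature, or None if behind the camera."""
            x, y, z = R_wc @ (feature_ned - cam_ned)
            return None if z <= 0 else (cx + f * x / z, cy + f * y / z)

        # Overlay a feature 1.2 km ahead (north) and 40 m below the camera,
        # with the camera pitched 3 degrees nose-down.
        R = world_to_camera(heading=0.0, pitch=np.radians(-3.0), roll=0.0)
        print(project(np.array([1200.0, 0.0, 40.0]), np.zeros(3), R))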

    Enabling technologies for audio augmented reality systems

    Audio augmented reality (AAR) refers to technology that embeds computer-generated auditory content into a user's real acoustic environment. An AAR system has specific requirements that set it apart from regular human-computer interfaces: an audio playback system that allows the simultaneous perception of real and virtual sounds; motion tracking to enable interactivity and location-awareness; the design and implementation of an auditory display to deliver AAR content; and spatial rendering to display spatialised AAR content. This thesis presents a series of studies on enabling technologies to meet these requirements. A binaural headset with integrated microphones is assumed as the audio playback system, as it allows mobility and precise control over the ear input signals. Here, user position and orientation tracking methods are proposed that rely on speech signals recorded at the binaural headset microphones. To evaluate the proposed methods, the head orientations and positions of three conferees engaged in a discussion were tracked; the binaural microphones improved tracking performance substantially. The proposed methods are applicable to acoustic tracking with other forms of user-worn microphones. Results are also reported from a listening test investigating the effect of auditory display parameters on user performance. The parameters studied were derived from the design choices to be made when implementing an auditory display. The results indicate that users are able to detect a sound sample among distractors and estimate sample numerosity accurately with both speech and non-speech audio, provided the samples are presented with adequate temporal separation. Whether or not samples were separated spatially had no effect on user performance; however, with spatially separated samples, users were able to detect a sample among distractors and simultaneously localise it. The results of this study are applicable to a variety of AAR applications that require conveying sample presence or numerosity. Spatial rendering is commonly implemented by convolving virtual sounds with head-related transfer functions (HRTFs). Here, a framework is proposed that interpolates HRTFs measured at arbitrary directions and distances. The framework employs Delaunay triangulation to group HRTFs into subsets suitable for interpolation and uses barycentric coordinates as interpolation weights. The proposed interpolation framework allows the real-time rendering of virtual sources in the near-field via HRTFs measured at various distances.
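
    To give a flavour of the acoustic tracking described above, here is a minimal Python/NumPy sketch that estimates a talker's azimuth from the time difference of arrival (TDOA) between the two binaural headset microphones. The free-field far-field model, the fixed microphone spacing, and all names are simplifying assumptions; the thesis methods are considerably more involved.

        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s
        EAR_SPACING = 0.18       # m, assumed spacing between the two microphones

        def tdoa_azimuth(left, right, fs):
            """Source azimuth in radians (0 = straight ahead, positive to the
            right) from the lag maximising the ear-signal cross-correlation."""
            xcorr = np.correlate(left, right, mode="full")
            lag = np.argmax(xcorr) - (len(right) - 1)   # samples; > 0: right ear leads
            tau = lag / fs                              # seconds
            # Free-field model: tau = (EAR_SPACING / c) * sin(azimuth).
            return np.arcsin(np.clip(tau * SPEED_OF_SOUND / EAR_SPACING, -1.0, 1.0))

        # Synthetic check: noise that reaches the right microphone 4 samples
        # earlier than the left one at 16 kHz.
        rng = np.random.default_rng(1)
        s = rng.standard_normal(4096)
        left, right = s[:-4], s[4:]                     # right ear leads by 4 samples
        print(np.degrees(tdoa_azimuth(left, right, fs=16000)))   # ~ +28.5 degrees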
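
    The interpolation framework is concrete enough to sketch: a Delaunay triangulation groups the measured directions into simplices, and the barycentric coordinates of the query direction supply the blending weights. The sketch below, using scipy, treats the measurement grid as two-dimensional (azimuth, elevation) and the HRTFs as time-domain impulse responses; these simplifications and all names are assumptions, not the thesis implementation.

        import numpy as np
        from scipy.spatial import Delaunay

        def make_interpolator(grid, hrirs):
            """grid: (N, 2) measured (azimuth, elevation) directions in degrees.
            hrirs: (N, taps) impulse responses, one per measured direction."""
            tri = Delaunay(grid)

            def interpolate(azimuth, elevation):
                q = np.array([azimuth, elevation])
                s = tri.find_simplex(q)
                if s == -1:
                    raise ValueError("query direction outside the measured grid")
                T = tri.transform[s]                 # affine map to barycentric coords
                b = T[:2] @ (q - T[2])
                w = np.append(b, 1.0 - b.sum())      # three weights summing to 1
                return w @ hrirs[tri.simplices[s]]   # blend the enclosing triangle

            return interpolate

        # Toy grid: four directions with 256-tap random "HRIRs" standing in
        # for measured data.
        rng = np.random.default_rng(0)
        grid = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
        interp = make_interpolator(grid, rng.standard_normal((4, 256)))
        print(interp(10.0, 10.0).shape)              # (256,)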

    The Effectiveness of Aural Instructions with Visualisations in E-Learning Environments

    Based on Mayer’s (2001) model of more effective learning through the brain’s dual sensory channels for information processing, this research investigates the effectiveness of using aural instructions together with visualisation in teaching the difficult concepts of data structures to novice computer science students. A small number of previous studies have examined the use of audio and visualisation in teaching and learning environments, but none has explored the integration of both technologies in teaching data structures programming to reduce the cognitive load on learners’ working memory. A prototype learning tool, known as the Data Structure Learning (DSL) tool, was developed and first used in a short preliminary study, which showed that, used together with visualisations of algorithms, aural instructions produced faster student response times than textual instructions did. This result suggested that the additional use of the auditory sensory channel did indeed reduce the cognitive load. The tool was then used in a second, longitudinal study over two academic terms, in which students taking the Data Structures module were offered the opportunity to use the DSL approach with either aural or textual instructions. Their use of the approach was recorded by the DSL system, and feedback was invited at the end of every visualisation task. The collected data showed that the tool was used extensively by the students. A comparison of the students’ DSL use with their end-of-year assessment marks revealed that academically weaker students had tended to use the tool most. This suggests that less able students are keen to use any useful and available instrument to aid their understanding, especially of difficult concepts. Both the quantitative data provided by the automatic recording of DSL use and an end-of-study questionnaire showed students’ appreciation of the help the tool had provided and enthusiasm for its future use and development. These findings were supported by qualitative data from students’ written feedback at the end of each task, by interviews at the end of the experiment, and by the lecturer’s interest in integrating use of the tool into the teaching of the module. A variety of suggestions are made for further work and development of the DSL tool; further research using a control group and/or pre- and post-tests would be particularly useful.
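
    As a purely illustrative sketch (not the actual DSL tool), the following Python fragment shows the kind of usage logging and analysis the study describes: each visualisation task is recorded with its instruction modality, and a student's total use can later be compared against end-of-year marks. All structures and names here are assumptions.

        from dataclasses import dataclass, field
        from statistics import correlation   # Python 3.10+

        @dataclass
        class TaskRecord:
            student: str
            task: str
            modality: str          # "aural" or "textual"
            response_time_s: float
            feedback: str = ""

        @dataclass
        class UsageLog:
            records: list = field(default_factory=list)

            def log(self, rec: TaskRecord) -> None:
                self.records.append(rec)

            def tasks_completed(self, student: str) -> int:
                return sum(r.student == student for r in self.records)

        def usage_mark_correlation(log: UsageLog, marks: dict) -> float:
            """Association between tool use and end-of-year marks (the study
            found heavier use among academically weaker students)."""
            students = sorted(marks)
            return correlation([log.tasks_completed(s) for s in students],
                               [marks[s] for s in students])

        log = UsageLog()
        log.log(TaskRecord("s1", "binary tree insert", "aural", 41.0))
        log.log(TaskRecord("s1", "linked list delete", "aural", 37.5))
        print(log.tasks_completed("s1"))   # 2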