8 research outputs found

    Effects of Lower Frame Rates in a Remote Tower Environment

    In the field of aviation, Remote Tower is a current and fast-growing concept offering cost-efficient Air Traffic Services (ATS) for aerodromes. At its core, it relies on optical camera sensors whose video images are relayed from the aerodrome to an ATS facility, situated anywhere, and displayed on a video panorama, providing ATS independent of the out-of-the-tower-window view. Bandwidth, often limited and costly, plays a crucial role in such a cost-efficient system. Reducing the Frame Rate (FR, expressed in fps) of the relayed video stream is one way to save bandwidth, but at the cost of video quality. The present article therefore evaluates how far FR can be reduced without compromising operational performance or raising human-factors issues. In our study, seven Air Traffic Control Officers watched real air-traffic videos recorded by the Remote Tower field-test platform of the German Aerospace Center (DLR e.V.) at Braunschweig-Wolfsburg Airport (BWE). In a passive shadow mode, they executed ATS-relevant tasks under four FR conditions (2 fps, 5 fps, 10 fps and 15 fps) so that their visual detection performance could be measured objectively, while they subjectively assessed their current physiological state, the perceived video quality and the perceived system operability. The results show that reducing the FR impairs neither visual detection performance nor physiological state; only perceived video quality and perceived system operability drop when the FR is reduced to 2 fps. These findings will help to better adjust video parameters in bandwidth-limited applications in general, and in particular to facilitate large-scale deployment of Remote Towers in a safe and cost-efficient way.
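The bandwidth saving from frame-rate reduction can be illustrated with a rough back-of-the-envelope calculation; the figures below (resolution, bits per pixel, compression ratio) are illustrative assumptions, not values from the study:

```python
# Rough bandwidth estimate for one remote-tower camera stream.
# All parameters are illustrative assumptions, not values from the study.

def bitrate_mbps(width, height, fps, bits_per_pixel=24, compression_ratio=100):
    """Approximate compressed bitrate in Mbit/s for a single video stream."""
    raw_bits_per_second = width * height * bits_per_pixel * fps
    return raw_bits_per_second / compression_ratio / 1e6

for fps in (2, 5, 10, 15):  # the four frame-rate conditions tested
    rate = bitrate_mbps(1920, 1080, fps)
    print(f"{fps:2d} fps -> ~{rate:.1f} Mbit/s per camera")
```

Since the estimate scales linearly with fps, dropping from 15 fps to 2 fps cuts the required bandwidth by the same factor, which is why FR is an attractive tuning knob when links are costly.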

    Scene-motion thresholds during head yaw for immersive virtual environments

    To better understand how scene motion is perceived in immersive virtual environments, we measured scene-motion thresholds under different conditions across three experiments. Thresholds were measured during quasi-sinusoidal head yaw, single left-to-right or right-to-left head yaw, different phases of head yaw, slow to fast head yaw, scene motion relative to head yaw, and two scene-illumination levels. Across these conditions we found that 1) thresholds are greater when the scene moves with head yaw (a positive gain), and 2) thresholds increase as head motion increases.
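Perceptual thresholds of this kind are commonly estimated with an adaptive staircase procedure; the sketch below uses a deterministic simulated observer and invented step sizes, and is not the authors' actual protocol:

```python
def staircase(true_threshold, start=2.0, step=0.1, reversals_needed=8):
    """Simple 1-up/1-down staircase converging on a detection threshold.
    The simulated observer detects scene motion above `true_threshold`
    (deg/s); a real experiment would query a human participant instead."""
    level, direction, reversals, reversal_levels = start, -1, 0, []
    while reversals < reversals_needed:
        detected = level > true_threshold  # deterministic stand-in observer
        new_direction = -1 if detected else +1
        if new_direction != direction:    # response flipped: record a reversal
            reversals += 1
            reversal_levels.append(level)
        direction = new_direction
        level = max(0.0, level + direction * step)
    # Average the reversal levels as the threshold estimate.
    return sum(reversal_levels) / len(reversal_levels)

estimate = staircase(true_threshold=0.8)
```

The estimate settles near the simulated observer's threshold because the staircase oscillates around the point where detection flips.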

    A measurement-error correction method applied to a FASTRAK 3SPACE system

    The correction method developed at LIO reduces the translational and rotational measurement errors produced by electromagnetic distortions, which affect the accuracy of Fastrak sensors. The method characterizes the measurement errors of an electromagnetic field and calibrates a volume of 0.458 m³ in under 35 seconds. It can reduce the mean error of a set of measurements, without increasing their variability, provided the measurement error does not exceed 35 mm in translation. We reduced this error to 6.1 mm +/- 3.0 mm in translation and 0.49° +/- 0.17° in rotation with only 1000 calibration points over a set of 712 measurements. The method works on an unlimited number of measurements located up to 115 cm from the transmitter. The number of calibration measurements used was shown to influence the performance of the correction method, the appropriate number lying between 100 and 1000 measurements. At 115 cm from the transmitter, the best results were obtained by dividing the overall calibration volume into two distinct sub-volumes. In that case, the polynomials are computed over smaller sections, which eliminates distortions at the boundaries of the calibration volume under the optimal conditions of the LIO correction method. For distances under 75 cm from the transmitter, a single set of polynomials can be used.
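The idea of a polynomial correction fitted to calibration points can be illustrated in one dimension; the distortion model and sample data below are invented for illustration and are not the LIO method itself:

```python
import numpy as np

# Calibration data: true positions vs. positions reported by a distorted
# sensor. The quadratic distortion below is invented for illustration only.
true_pos = np.linspace(0.0, 115.0, 200)               # cm from the emitter
measured = true_pos + 0.002 * (true_pos - 60.0) ** 2  # systematic distortion

# Fit a correction polynomial mapping measured -> true (least squares).
coeffs = np.polyfit(measured, true_pos, deg=3)
corrected = np.polyval(coeffs, measured)

residual = np.abs(corrected - true_pos).mean()
print(f"mean residual after correction: {residual:.4f} cm")
```

The fitted polynomial inverts the systematic distortion, so the mean residual after correction is far smaller than the uncorrected error, mirroring how calibration points shrink the Fastrak error.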

    Postural costs of performing cognitive tasks in non-coincident reference frames

    An extensive literature exists attesting to the limited-capacity performance of everyday tasks, such as looking and mental manipulation. Only relatively recently has empirical interest turned towards the capacity limitations of the body coordinations (such as posture control) that provide the physical substrate for cognitive operations (and so mandatorily coexist with cognition). What are the capacity implications for the body’s safety and mobility, for example, in accommodating the need to stabilize the eye-head apparatus for looking, or when mentally manipulating objects in 3-D space? Specifically, what are the postural costs in having to position and orientate the body in its own task space while supporting spatial operations in cognitive task space? What are the performance implications, in turn, for everyday cognitive tasks when posture control is challenged in this way? The purpose of this thesis was to establish a theoretical and methodological basis for examining any postural costs that may arise from the sharing or partitioning of spatial reference frames between these two components (a frame co-registration cost hypothesis). In 7 experiments, young adults performed either conjunction visual search or mental rotation tasks (cognitive component) while standing upright (postural component). Visual search probed cognitive operations in extrapersonal space and mental rotation probed operations in representational space. Immersive visualization was used to operationalise postural and cognitive task contexts, by arranging for the two tasks (under varying postural and cognitive task-load conditions) to be carried out with respect to two spatial reference frames that were either coincident or noncoincident with each other. 
Aside from the expected performance trade-offs due to task-load manipulations, non-coincidence of reference frames was found to significantly add to postural costs for cognitive operations in extrapersonal space (visual search) and for representational space (mental rotation). These results demonstrate that the maintenance of multiple task-spaces can be a source of interference in posture-cognition dual-tasking. Such interference may arise, it is suggested, from the dynamics of time-sharing between underlying spatial coordinations required for these tasks. Beyond its importance within embodied cognition research, this work may have theoretical and methodological relevance to the study of posture-cognition in the elderly, and to the study of balance and coordination problems in learning difficulties such as those encountered in dyslexia and the autistic spectrum.

    Scene-motion- and latency-perception thresholds for head-mounted displays

    A fundamental task of an immersive virtual environment (IVE) system is to present images of the virtual world that change appropriately as the user's head moves. Current IVE systems, especially those using head-mounted displays (HMDs), often produce spatially unstable scenes, resulting in simulator sickness, degraded task performance, degraded visual acuity, and breaks in presence. In HMDs, instability resulting from latency is greater than all other causes of instability combined. The primary way users perceive latency in an HMD is by improper motion of scenes that should be stationary in the world. Whereas latency-induced scene motion is well defined mathematically, less is understood about how much scene motion and/or latency can occur without subjects noticing, and how this varies under different conditions. I built a simulated HMD system with zero effective latency---no scene motion occurs due to latency. I intentionally and artificially inserted scene motion into the virtual environment in order to determine how much scene motion and/or latency can occur without subjects noticing. I measured perceptual thresholds of scene motion and latency under different conditions across five experiments. Based on the study of latency, head motion, scene motion, and perceptual thresholds, I developed a mathematical model of latency thresholds as an inverse function of peak head-yaw acceleration. Psychophysics studies showed that measured latency thresholds correlate with this inverse function better than with a linear function. The work reported here readily enables scientists and engineers to measure, under their particular conditions, latency thresholds as a function of head motion using an off-the-shelf projector system. Latency requirements can thus be determined before designing HMD systems.
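The inverse relationship between latency thresholds and peak head-yaw acceleration can be sketched as follows; the constant `k` and the sample accelerations are illustrative assumptions, not fitted values from the dissertation:

```python
def latency_threshold_ms(peak_yaw_accel, k=600.0):
    """Latency threshold modelled as inversely proportional to peak
    head-yaw acceleration (deg/s^2). k is an illustrative constant,
    not a value fitted in the dissertation."""
    return k / peak_yaw_accel

# Faster head turns imply tighter latency requirements:
for accel in (20.0, 60.0, 180.0):
    print(f"{accel:6.1f} deg/s^2 -> threshold ~{latency_threshold_ms(accel):.1f} ms")
```

The practical consequence of such a model is that a single worst-case head acceleration, measured for a given application, yields the latency budget the HMD system must meet.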

    Latency guidelines for touchscreen virtual button feedback

    Touchscreens are very widely used, especially in mobile phones. They feature many interaction methods, pressing a virtual button being one of the most popular. In addition to inherent visual feedback, a virtual button can provide audio and tactile feedback. Since mobile phones are essentially computers, their processing causes latencies in interaction. However, it has not been known whether latency is an issue in mobile touchscreen virtual-button interaction, or what the latency recommendations for visual, audio and tactile feedback are. The research in this thesis investigated multimodal latency in mobile touchscreen virtual-button interaction. For the first time, an affordable but accurate tool was built to measure all three feedback latencies in touchscreens. For the first time, simultaneity perception of touch and feedback, as well as the effect of latency on the perceived quality of virtual buttons, was studied and thresholds were found for both unimodal and bimodal feedback. The results from these studies were combined into latency guidelines for the first time. These guidelines enable interaction designers to establish requirements for mobile phone engineers to optimise latencies to the right level. The latency measurement tool consisted of a high-speed camera, a microphone and an accelerometer for visual, audio and tactile feedback measurements. It was built with off-the-shelf components and, in addition, it was portable. It could therefore be copied at low cost or moved wherever needed. The tool enables touchscreen interaction designers to validate latencies in their experiments, making their results more valuable and accurate. It could also benefit touchscreen phone manufacturers, since it enables engineers to validate latencies during mobile phone development. The tool has been used in mobile phone R&D within Nokia Corporation and for validation of a research device within the University of Glasgow. 
The guidelines established for unimodal feedback were as follows: visual feedback latency should be between 30 and 85 ms, audio between 20 and 70 ms, and tactile between 5 and 50 ms. The guidelines were found to differ for bimodal feedback: visual feedback latency should be 95 ms and audio 70 ms when the feedback is visual-audio; visual 100 ms and tactile 55 ms when the feedback is visual-tactile; and tactile 25 ms and audio 100 ms when the feedback is tactile-audio. These guidelines will help engineers and interaction designers to select and optimise latencies to be low enough, but not too low. Designers using these guidelines can ensure that most users will both perceive the feedback as simultaneous with their touch and experience high-quality virtual buttons. The results from this thesis show that latency has a remarkable effect on touchscreen virtual buttons and is a key part of virtual-button feedback design. These novel results enable researchers, designers and engineers to master the effect of latencies in research and development. This will lead to more accurate and reliable research results and help mobile phone manufacturers make better products.
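The unimodal guideline ranges quoted above can be captured in a small checker; the function and constant names below are my own, not from the thesis:

```python
# Latency guideline ranges (ms) for unimodal touchscreen feedback,
# as quoted in the abstract above.
UNIMODAL_GUIDELINES_MS = {
    "visual": (30, 85),
    "audio": (20, 70),
    "tactile": (5, 50),
}

def within_guideline(modality, latency_ms):
    """Return True if a measured feedback latency falls inside the
    recommended range for the given modality."""
    low, high = UNIMODAL_GUIDELINES_MS[modality]
    return low <= latency_ms <= high

# Example: an 18 ms tactile latency is within range; 95 ms visual is not.
```

A checker like this is the kind of requirement engineers could run against measurements from the high-speed-camera tool described above.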