
    Measuring latency in virtual environments

    Abstract—Latency in interactive computer systems results from the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user’s sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3 ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high-speed video (to within 1 ms). We use the mechanical simulator to measure the accuracy of Steed’s and Di Luca’s measurement techniques, proposing improvements where they may be made. We use the methods to measure the latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable; however, Steed’s method is both accurate and easy to use without requiring specialised hardware.
    Index Terms—Latency, measurement
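
    The abstract leaves the mechanics of Automated Frame Counting to the paper itself. As a rough illustration of the underlying idea only (an assumption, not the authors' implementation), the sketch below estimates latency by cross-correlating a tracked physical motion signal with the corresponding on-screen motion signal extracted from high-speed video frames.

```python
import numpy as np

def estimate_latency_ms(physical, displayed, fps):
    """Estimate latency by cross-correlating two per-frame motion signals.

    physical  -- position of the real object in each video frame (1D array)
    displayed -- position of its on-screen image in each frame (1D array)
    fps       -- frame rate of the high-speed camera
    """
    # Remove the mean so the correlation is driven by motion, not offset.
    p = physical - np.mean(physical)
    d = displayed - np.mean(displayed)
    # Full cross-correlation; the peak gives the display's lag in frames.
    corr = np.correlate(d, p, mode="full")
    lag_frames = np.argmax(corr) - (len(p) - 1)
    return 1000.0 * lag_frames / fps

# A 240 fps camera resolves latency to ~4 ms per frame; the ~1 ms
# precision reported in the paper implies footage near 1000 fps.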

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperation, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of performing teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need not wear any device, providing minimal intrusiveness and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents, and by user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras and robotic arm. Given the purpose of the system, calibration accuracy must be within the millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica with commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-scanning 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency introduced during data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1; the predictive control algorithm is then derived by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
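
    The "system of linear equations with a smoothing coefficient" is not spelled out in the abstract; one plausible reading is an exponential-smoothing one-step-ahead predictor, sketched below with hypothetical names (commands, alpha). This is an illustrative assumption, not the dissertation's algorithm.

```python
def one_step_ahead(commands, alpha=0.6):
    """One-step-ahead prediction with a smoothing coefficient alpha in [0, 1].

    The predicted robot state blends the newest operator command with the
    previous prediction, trading responsiveness (alpha -> 1) against
    smoothness (alpha -> 0) to mask sensing and communication latency.
    """
    prediction = commands[0]
    predictions = []
    for u in commands:
        prediction = alpha * u + (1.0 - alpha) * prediction
        predictions.append(prediction)
    return predictions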

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely upon audition for a variety of purposes, among which is the use of sound to identify the position of objects in their surrounding environment. This is not limited to localising sound-emitting objects: obstacles and environmental boundaries can also be located, thanks to the ability to extract information from reverberation and sound reflections, all of which can contribute to effective and safe navigation, as well as serving a function in certain assistive technologies built on binaural auditory virtual reality. It is known that head movements in the presence of sound elicit changes in the acoustical signals which arrive at each ear, and these changes can mitigate common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether the visually impaired naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation, a questionnaire assessing the self-reported use of head movement in auditory perception by visually impaired individuals (each comparing visually impaired and sighted participants), and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. It is found that visually impaired people self-report using head movement for auditory distance perception. This is supported by head movements observed during the field study, whilst the acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener were reduced as head angle or distance to the sound source increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed.
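
    For readers unfamiliar with the acoustical analysis mentioned above, a minimal sketch of an interaural cross-correlation measure follows. The function name and the +/-1 ms lag window are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient between ear signals.

    Searches lags within +/-1 ms (roughly the maximum interaural time
    difference for a human head) and returns the peak of the normalized
    cross-correlation; lower values indicate less correlated ear signals.
    """
    max_lag = int(fs * max_lag_ms / 1000.0)
    l = left - np.mean(left)
    r = right - np.mean(right)
    denom = np.sqrt(np.sum(l**2) * np.sum(r**2))
    corr = np.correlate(l, r, mode="full") / denom
    mid = len(r) - 1  # index of zero lag in the full correlation
    return np.max(corr[mid - max_lag: mid + max_lag + 1])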

    A comprehensive study on light signals of opportunity for subdecimetre unmodulated visible light positioning

    Enabling visible light positioning (VLP) on an existing illumination infrastructure currently requires a costly retrofit. Intensity-modulation systems not only necessitate changes to the internal LED driving module, but decrease the LEDs' radiant flux as well. This hinders the infrastructure's ability to meet the maintained-illuminance standards. Ideally, the LEDs could be left unmodulated, i.e., unmodulated VLP (uVLP). uVLP systems, inherently low-cost, exploit the characteristics of light signals of opportunity (LSOOP) to infer a position. In this paper, it is shown that proper signal processing allows using an LED's characteristic frequency (CF) as a discriminative feature in photodiode (PD)-based received signal strength (RSS) uVLP. This manuscript investigates and compares the aptitude of (future) RSS-based uVLP and VLP systems in terms of their feasibility, cost and accuracy. It demonstrates that CF-based uVLP exhibits an acceptable loss of accuracy compared to (regular) VLP. For point-source-like LEDs, uVLP only worsens the trilateration-based median root-mean-square error p50 from 5.3 cm to 7.9 cm (+50%) and the 90th percentile p90 from 9.6 cm to 15.6 cm (+62%) in the 4 m x 4 m room under consideration. A large experimental validation shows that employing a robust model-based fingerprinting localisation procedure, instead of trilateration, further improves uVLP's p50 and p90 accuracy to 5.0 cm and 10.6 cm. Compared with VLP's p50 = 3.5 cm and p90 = 6.8 cm, uVLP exhibits comparable positioning performance at a significantly lower cost and a higher maintained illuminance, all of which underline uVLP's high adoption potential. With this work, a significant step is taken towards the development of an accurate and low-cost tracking system.
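
    The trilateration step referred to above can be illustrated with a standard linearized least-squares position fix from RSS-derived ranges. This is the generic textbook formulation, not necessarily the paper's exact solver.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2D position fix from anchor positions and ranges.

    anchors   -- (n, 2) array of known LED positions
    distances -- length-n array of RSS-derived distance estimates
    Linearizes the circle equations |x - a_i|^2 = d_i^2 by subtracting
    the last one, then solves the overdetermined linear system.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    ref, d_ref = anchors[-1], d[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (d_ref**2 - d[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(ref**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with four ceiling LEDs in a 4 m x 4 m room (hypothetical values):
# trilaterate([[0, 0], [4, 0], [0, 4], [4, 4]], [2.9, 2.8, 2.9, 2.8])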

    Spatial-Temporal Characteristics of Multisensory Integration

    We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, and teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased the perceived stiffness, while a variable visual delay made participants depend more on the haptic sensations in stiffness perception. We also found that participants judged springs to be stiffer when they interacted with them at faster speeds, and that interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs despite the inputs being spatially separated, resulting in improved typing performance. These results show the limitations of the Maximum-Likelihood Estimation model and suggest that a Bayesian inference model should be used instead.
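
    As context for the Maximum-Likelihood Estimation model whose limitations these studies expose, a minimal sketch of the classic inverse-variance cue-combination rule follows (generic formulation, not the dissertation's code).

```python
def mle_combine(est_v, var_v, est_h, var_h):
    """Classic MLE cue combination for visual and haptic estimates.

    Each cue is weighted by its reliability (inverse variance); the
    combined variance is lower than that of either cue alone. Delayed
    or spatially separated cues violate the model's assumptions.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    w_h = 1.0 - w_v
    combined = w_v * est_v + w_h * est_h
    combined_var = 1.0 / (1.0 / var_v + 1.0 / var_h)
    return combined, combined_var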

    Evaluation of an Augmented Reality Audio Headset and Mixer

    Augmented Reality Audio (ARA) is a concept defined as a real-time combination of the real and virtual auditory worlds; that is, the everyday sound environment can be extended with virtual sounds. The hardware used in this study for augmented reality audio consists of a pair of headphones and a controlling unit, called an ARA mixer. The ARA headphones are composed of binaural earphone elements with integrated microphones. The ARA mixer provides all the connections and signal-processing electronics needed in ARA applications. The basic operating principle of the ARA headset is that the binaural microphones should relay the sound signals unaltered to the earphones in order to create an accurate copy of the surrounding sound environment. Unfortunately, the ARA headset introduces some alterations to the copied representation of the real sound environment. Because of these alterations, the ARA mixer is needed to equalize the headphones. Furthermore, the ARA mixer enables the addition of virtual sound objects. Virtual sound objects can be embedded into the real environment either so that the user can distinguish them from the real sound environment or so that the user cannot tell the difference between the real and virtual sounds. The aim of this thesis is to perform full-scale laboratory measurements and a usability evaluation of the ARA hardware. The objective is to collect technical data about the hardware and to gather knowledge concerning how users perceive the usability of the ARA headset in everyday-life situations. With the gathered information it is possible to further improve the usability and sound quality of the ARA hardware.

    A Programmable Display-Layer Architecture for Virtual-Reality Applications

    Two important technical objectives of virtual-reality systems are to provide compelling visuals and effective 3D user interaction. In this respect, modern virtual-reality system architectures suffer from a number of shortcomings. The reduction of end-to-end latency, crosstalk and judder are especially difficult challenges, each of which negatively affects visual quality or user interaction. In order to provide higher-quality visuals, complex scenes consisting of large models are often used. Rendering such a complex scene is a time-consuming process, resulting in high end-to-end latency and thereby hampering user interaction. Classic virtual-reality architectures cannot adequately address these challenges due to their inherent design principles. In particular, the tight coupling between input devices, the rendering loop and the display system inhibits these systems from addressing all the aforementioned challenges simultaneously. In this thesis, a virtual-reality architecture design is introduced that is based on the addition of a new logical layer: the Programmable Display Layer (PDL). The governing idea is that an extra layer is inserted between the rendering system and the display. In this way, the display can be updated at a fast rate and in a custom manner, independent of the other components in the architecture, including the rendering system. To generate intermediate display updates at a fast rate, the PDL performs per-pixel depth-image warping using the application data. Image warping is the process of computing a new image by transforming individual depth-pixels from a closely matching previous image to their updated locations. The PDL architecture can be used for a range of algorithms and to solve problems that are not easily solved using classic architectures. In particular, techniques to reduce crosstalk, judder and latency are examined using algorithms implemented on top of the PDL. Concerning user-interaction techniques, several six-degrees-of-freedom input methods exist, of which optical tracking is a popular option. However, optical tracking methods also introduce several constraints that depend on the camera setup, such as line-of-sight requirements, the volume of the interaction space and the achievable tracking accuracy. These constraints generally cause a decline in the effectiveness of user interaction. To investigate the effectiveness of optical tracking methods, an optical tracker simulation framework has been developed, including a novel optical tracker to test this framework. In this way, different optical tracking algorithms can be simulated and quantitatively evaluated under a wide range of conditions. A common approach in virtual reality is to implement an algorithm and then evaluate its efficacy by either subjective, qualitative metrics or quantitative user experiments, after which an updated version of the algorithm may be implemented and the cycle repeated. A different approach is followed here: throughout this thesis, an attempt is made to automatically detect and quantify errors using completely objective and automated quantitative methods, and to subsequently resolve these errors dynamically.
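
    A minimal sketch of the per-pixel depth-image warping the PDL performs is given below, assuming a pinhole camera model; names like T_new_from_old are illustrative, and the sketch omits the hole filling and z-buffering a production warper would need.

```python
import numpy as np

def warp_depth_image(color, depth, K, T_new_from_old):
    """Forward-warp a color+depth image to a new viewpoint, pixel by pixel.

    color          -- (h, w, 3) image from the last full render
    depth          -- (h, w) per-pixel depth in camera units
    K              -- 3x3 camera intrinsics matrix
    T_new_from_old -- 4x4 transform from the old to the new camera pose
    Returns the warped color image; disoccluded pixels stay black.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Unproject every pixel into 3D using its depth value.
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    # Move the points into the new camera's frame and reproject.
    proj = K @ (T_new_from_old @ pts_h)[:3]
    z = proj[2]
    valid = z > 1e-6
    uv = np.zeros((2, z.size), dtype=int)
    uv[:, valid] = np.round(proj[:2, valid] / z[valid]).astype(int)
    ok = valid & (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    out = np.zeros_like(color)
    # Scatter colors to their new locations (no z-buffering in this sketch).
    out[uv[1, ok], uv[0, ok]] = color.reshape(-1, 3)[ok]
    return out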

    Adaptive Hear-Through Headphones

    Hear-through equalization can be used to make a headset acoustically transparent, i.e. to produce a sound perception that is similar to perception without the headset. The headset must have microphones outside the earpieces to capture the ambient sounds, which are then reproduced through the headset transducers after equalization. The reproduced signal is called the hear-through signal. Equalization is needed because the headset affects the acoustics of the outer ear. In addition to the external microphones, the headset used in this study has additional internal microphones. Together these microphones can be used to estimate the attenuation of the headset online and to detect a poor fit. Since a poor fit causes leaks and decreased attenuation, the combined effect of the leaked sound and the hear-through signal changes when compared to a proper-fit situation. Therefore, the isolation estimate is used to control the hear-through equalization in order to produce better acoustical transparency. Furthermore, the proposed adaptive hear-through algorithm includes manual controls for the equalizers and for the volume of the hear-through signal. The proposed algorithm is found to make the headset acoustically transparent. The equalization controls improve the performance of the headset when the fit is poor or when the volume of the hear-through signal is adjusted, by reducing the comb-filtering effect caused by the summation of the leaked sound and the hear-through signal inside the ear canal. The behavior of the proposed algorithm can be demonstrated with an implemented Matlab simulator.
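
    The abstract does not give the estimator's details; a simplified sketch of one way to form an online isolation estimate from the two microphone signals follows. It compares magnitude spectra and ignores the hear-through playback picked up by the internal microphone, which a real implementation would have to account for.

```python
import numpy as np

def isolation_estimate_db(external, internal, fs, nfft=1024):
    """Rough online estimate of headset attenuation from the two microphones.

    Compares magnitude spectra of the external (outside the earpiece) and
    internal (inside the ear canal) microphone signals; a small difference
    indicates leakage, i.e. a poorly fitted earpiece, which can then drive
    the hear-through equalizer settings.
    """
    ext = np.abs(np.fft.rfft(external * np.hanning(len(external)), nfft))
    int_ = np.abs(np.fft.rfft(internal * np.hanning(len(internal)), nfft))
    eps = 1e-12  # avoid log of zero in silent bins
    atten = 20.0 * np.log10((ext + eps) / (int_ + eps))
    return np.mean(atten)  # average attenuation in dB across frequency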