
    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much more like meeting face to face than what conventional teleconferencing systems offer.

    Relative Auditory Distance Discrimination With Virtual Nearby Sound Sources

    In this paper a psychophysical experiment targeted at exploring relative distance discrimination thresholds with binaurally rendered virtual sound sources in the near field is described. Pairs of virtual sources are spatialized around 6 different spatial locations (2 directions × 3 reference distances) through a set of generic far-field Head-Related Transfer Functions (HRTFs) coupled with a near-field correction model proposed in the literature, known as DVF (Distance Variation Function). Individual discrimination thresholds for each spatial location and for each of the two orders of presentation of stimuli (approaching or receding) are calculated on 20 subjects through an adaptive procedure. Results show that thresholds are higher than those reported in the literature for real sound sources, and that approaching and receding stimuli behave differently. In particular, when the virtual source is close (< 25 cm), thresholds for the approaching condition are significantly lower compared to thresholds for the receding condition, while the opposite behaviour appears for greater distances (~ 1 m). We hypothesize such an asymmetric bias to be due to variations in the absolute stimulus level.
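    As a rough illustration of the kind of adaptive procedure mentioned above, the Python sketch below estimates a distance discrimination threshold with a simple 1-up/2-down staircase run against a toy simulated listener. The staircase rule, step size, and observer model are assumptions chosen for illustration, not the paper's actual protocol.

```python
# Minimal sketch of a 1-up/2-down adaptive staircase for estimating a relative
# distance discrimination threshold. All parameters and the toy observer are
# hypothetical; the paper's exact adaptive rule is not reproduced here.
import random

def simulated_listener(delta_cm, true_jnd_cm=5.0):
    """Toy observer: discriminates correctly more often as the distance difference grows."""
    p_correct = 0.5 + 0.5 * min(delta_cm / (2 * true_jnd_cm), 1.0)
    return random.random() < p_correct

def staircase(start_delta_cm=20.0, step_cm=2.0, n_reversals=8):
    delta, correct_streak, direction = start_delta_cm, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_streak += 1
            if correct_streak == 2:            # two correct -> make the task harder
                correct_streak = 0
                if direction == +1:
                    reversals.append(delta)    # direction change counts as a reversal
                direction = -1
                delta = max(delta - step_cm, 0.5)
        else:                                  # one error -> make the task easier
            correct_streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step_cm
    return sum(reversals) / len(reversals)     # threshold ~ mean of reversal points

print(f"Estimated discrimination threshold: {staircase():.1f} cm")
```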

    Upper limits on stray force noise for LISA

    We have developed a torsion pendulum facility for LISA gravitational reference sensor ground testing that allows us to put significant upper limits on residual stray forces exerted by LISA-like position sensors on a representative test mass and to characterize specific sources of disturbances for LISA. We present here the details of the facility, the experimental procedures used to maximize its sensitivity, and the techniques used to characterize the pendulum itself that allowed us to reach a torque sensitivity below 20 fN m/√Hz from 0.3 to 10 mHz. We also discuss the implications of the obtained results for LISA. Comment: To be published in Classical and Quantum Gravity, special issue on Amaldi5 2003 conference proceedings (10 pages, 6 figures).
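    The quoted sensitivity of 20 fN m/√Hz is an amplitude spectral density of torque noise over the 0.3–10 mHz band. The sketch below shows one way such a figure could be estimated from a torque time series using Welch's method; the sampling rate, record length, and synthetic data are assumptions for illustration only.

```python
# Hedged sketch: estimating a torque amplitude spectral density (ASD) in the
# 0.3-10 mHz band from a torsion-pendulum torque time series with Welch's
# method. The data below are a synthetic white-noise stand-in, not real data.
import numpy as np
from scipy.signal import welch

fs = 1.0                                      # assumed sampling rate, Hz
t = np.arange(0, 2 * 24 * 3600, 1 / fs)       # two days of samples (synthetic)
torque = 2e-14 * np.random.randn(t.size)      # white torque noise at the ~1e-14 N m level

f, psd = welch(torque, fs=fs, nperseg=t.size // 8)   # one-sided PSD, (N m)^2 / Hz
asd = np.sqrt(psd)                                   # ASD in N m / sqrt(Hz)

band = (f >= 3e-4) & (f <= 1e-2)                     # 0.3-10 mHz band
print(f"median torque ASD in band: {np.median(asd[band]):.2e} N m/rtHz")
```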

    Timing and correction of stepping movements with a virtual reality avatar

    Research into the ability to coordinate one’s movements with external cues has focussed on the use of simple rhythmic, auditory and visual stimuli, or on interpersonal coordination with another person. Coordinating movements with a virtual avatar has not been explored in the context of responses to temporal cues. To determine whether cueing of movements using a virtual avatar is effective, people’s ability to accurately coordinate with the stimuli needs to be investigated. Here we focus on temporal cues, as timing studies show that visual cues can be difficult to follow in a timing context. Real stepping movements were mapped onto an avatar using motion capture data. Healthy participants were then motion captured whilst stepping in time with the avatar’s movements, as viewed through a virtual reality headset. The timing of one of the avatar step cycles was accelerated or decelerated by 15% to create a temporal perturbation, for which participants would need to correct in order to remain in time. Step onset times of participants relative to the corresponding step onsets of the avatar were used to measure the timing errors (asynchronies) between them. Participants completed either a visual-only condition or an auditory-visual condition with footstep sounds included, at two stepping tempos (fast: 400 ms interval; slow: 800 ms interval). Participants’ asynchronies exhibited slow drift in the visual-only condition, but became stable in the auditory-visual condition. Moreover, we observed a clear corrective response to the phase perturbation in both the fast and slow tempo auditory-visual conditions. We conclude that an avatar’s movements can be used to influence a person’s own motion, but should include relevant auditory cues congruent with the movement to ensure a suitable level of entrainment is achieved. This approach has applications in physiotherapy, where virtual avatars present an opportunity to provide guidance that assists patients in adhering to prescribed exercises.
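    The asynchrony measure described above is simply the difference between each participant step onset and the corresponding avatar step onset. The sketch below computes such a series around a single avatar cycle lengthened by 15%; the onset times are invented for illustration and are not data from the study.

```python
# Minimal sketch of step-onset asynchronies around a 15% phase perturbation.
# All onset times are hypothetical, chosen only to illustrate the measure.
import numpy as np

interval = 0.8                        # slow tempo: 800 ms between avatar steps
avatar = np.cumsum([interval] * 10)   # avatar step onsets, seconds
avatar[5:] += 0.15 * interval         # one step cycle decelerated by 15%

# Hypothetical participant onsets: small lag, then correction over ~2 steps.
participant = avatar + np.array(
    [0.02, 0.02, 0.03, 0.02, 0.02, 0.14, 0.08, 0.04, 0.02, 0.02])

asynchrony = participant - avatar     # positive = participant steps late
for i, a in enumerate(asynchrony, start=1):
    print(f"step {i:2d}: asynchrony = {a * 1000:+.0f} ms")
```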

    Testing Lorentz and CPT symmetry with hydrogen masers

    We present details from a recent test of Lorentz and CPT symmetry using hydrogen masers. We have placed a new limit on Lorentz and CPT violation of the proton in terms of a recent standard model extension by bounding the sidereal variation of the F = 1 Zeeman frequency in hydrogen. Here, the theoretical standard model extension is reviewed. The operating principles of the maser and the double resonance technique used to measure the Zeeman frequency are discussed. The characterization of systematic effects is described, and the method of data analysis is presented. We compare our result to other recent experiments, and discuss potential steps to improve our measurement. Comment: 26 pages, 16 figures.
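    A bound on sidereal variation of this kind is typically obtained by fitting a sinusoid at the sidereal frequency to the measured Zeeman frequencies and taking the fitted amplitude. The sketch below shows a minimal least-squares version of that idea on synthetic data; the numbers and fit details are illustrative assumptions, not the experiment's actual analysis pipeline.

```python
# Hedged sketch: bounding the sidereal modulation amplitude of a measured
# Zeeman frequency by linear least squares. The data are synthetic and the
# nominal frequency value is illustrative only.
import numpy as np

SIDEREAL_DAY_S = 86164.0905
omega = 2 * np.pi / SIDEREAL_DAY_S

t = np.arange(0, 10 * SIDEREAL_DAY_S, 600.0)          # ~10 sidereal days, 10 min points
zeeman_hz = 850.0 + 0.001 * np.random.randn(t.size)   # synthetic Zeeman frequency data

# Model: f(t) = c0 + A cos(omega t) + B sin(omega t); solve for c0, A, B.
design = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
coef, *_ = np.linalg.lstsq(design, zeeman_hz, rcond=None)
amplitude = np.hypot(coef[1], coef[2])                # sidereal modulation amplitude

print(f"fitted sidereal modulation amplitude: {amplitude * 1e6:.2f} uHz")
```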