386 research outputs found

    Effect of footstep vibrations and proprioceptive vibrations used with an innovative navigation method

    Get PDF
    This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations to decrease cybersickness and increase the sense of presence. In this study, we test the same vibration modalities with a new navigation method. The results show that proprioceptive vibrations affect neither the sense of presence nor cybersickness, while footstep vibrations increase the sense of presence and decrease cybersickness to some extent.
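    The abstract does not describe how the footstep vibrations were generated. As a rough illustration of the general technique, the sketch below (with a hypothetical send_haptic_pulse callback standing in for whatever vibrotactile API the hardware exposes) emits one haptic pulse per accumulated step length of virtual displacement:

```python
# Hypothetical sketch, not the paper's implementation: fire a haptic pulse
# each time the user's virtual displacement accumulates one step length.
STEP_LENGTH_M = 0.7  # assumed average virtual step length


class FootstepVibration:
    def __init__(self, step_length=STEP_LENGTH_M):
        self.step_length = step_length
        self.distance_since_step = 0.0

    def update(self, frame_displacement_m, send_haptic_pulse):
        """Call once per frame with the virtual distance moved this frame."""
        self.distance_since_step += frame_displacement_m
        while self.distance_since_step >= self.step_length:
            self.distance_since_step -= self.step_length
            # `send_haptic_pulse` is a placeholder for the device API.
            send_haptic_pulse(duration_ms=80, amplitude=0.6)  # one footstep
```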

    Multimodality in VR: A survey

    Get PDF
    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR and its role and benefits in user experience, together with the applications that leverage multimodality in many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Walking with virtual humans: understanding human response to virtual humanoids' appearance and behaviour while navigating in immersive VR

    Get PDF
    In this thesis, we present a set of studies whose results have allowed us to analyze how to improve the realism, navigation, and behaviour of avatars in an immersive virtual reality environment. In our simulations, participants perform a series of tasks while we analyze perceptual and behavioural data. The results of these studies have allowed us to deduce which improvements need to be incorporated into the original simulations in order to enhance the perception of realism, the navigation technique, the rendering of the avatars, and their behaviour and animations. The most reliable technique for simulating avatars' behaviour in a virtual reality environment should be based on the study of how humans behave within the environment. For this purpose, it is necessary to build virtual environments where participants can navigate safely and comfortably with a proper metaphor and, if the environment is populated with avatars, to simulate their behaviour accurately. Together, these aspects make participants behave in a way that is closer to how they would behave in the real world. Moreover, the integration of these concepts could provide an ideal platform for developing different types of applications, with and without collaborative virtual reality, such as emergency simulation, teaching, architecture, or design. In the first contribution of this thesis, we carried out an experiment to study human decision making during an evacuation. We were interested in evaluating to what extent the behaviour of a virtual crowd can affect individuals' decisions. From the second contribution, in which we studied the perception of realism with bots and humans performing either locomotion alone or varied animations, we conclude that combining human-like avatars with animation variety can increase the overall realism of a crowd simulation, its trajectories, and its animations. The preliminary study presented in the third contribution showed that realistic rendering of the environment and the avatars does not appear to increase the participants' perception of realism, which is consistent with previously published work. The preliminary results of our walk-in-place contribution showed a seamless and natural transition between walk-in-place and normal walking. Our system provided a velocity mapping function that closely resembles natural walking, and a pilot study indicated that the system reduces motion sickness and enhances immersion. Finally, the results of the contribution on locomotion in collaborative virtual reality showed that animation synchronism and the footstep sounds of the avatars representing the participants do not seem to have a strong impact on presence or the feeling of avatar control. However, in our experiment, incorporating natural animations and footstep sounds resulted in smaller clearance values in VR than those reported in previous work. The main objective of this thesis was to improve different factors of virtual reality experiences so that participants feel more comfortable in the virtual environment. These factors include the behaviour and appearance of the virtual avatars and the navigation through the simulated space. By increasing the realism of the avatars and facilitating navigation, high presence scores are achieved during the simulations. This provides an ideal framework for developing collaborative virtual reality applications or emergency simulations that require participants to feel as if they were in real life.
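    The thesis mentions a velocity mapping function that closely resembles natural walking but does not reproduce it here. The following minimal sketch, with illustrative constants rather than the thesis's calibrated values, shows one common way to map detected step cadence to virtual forward speed, plus a low-pass filter that keeps the standing-to-walking transition seamless:

```python
import math

# Illustrative walk-in-place velocity mapping, not the thesis's function:
# step cadence (detected from a tracker signal) is mapped to forward speed
# so that speed grows with cadence, as in natural gait.

def cadence_to_speed(steps_per_second, step_length_m=0.7):
    """Map detected stepping cadence to virtual forward speed (m/s)."""
    if steps_per_second <= 0.0:
        return 0.0
    # Natural walking: speed ~= cadence * step length; cap at a light jog.
    return min(steps_per_second * step_length_m, 3.0)

def smoothed_speed(prev_speed, target_speed, dt, time_constant=0.3):
    """Low-pass the speed so starting and stopping feel seamless."""
    alpha = 1.0 - math.exp(-dt / time_constant)
    return prev_speed + alpha * (target_speed - prev_speed)
```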

    Multimodality in VR: A Survey

    Get PDF
    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium as it reaches the consumer level. VR is inherently different from traditional media: it offers a more immersive experience and can elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. The sensory cues available in a virtual environment can therefore be leveraged to enhance the final experience: increasing realism or the sense of presence; predicting or guiding the user's attention through the experience; or improving performance when the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality and its role and benefits in the final user experience. The works reviewed here encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sensory modalities yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed and VR experiences are created and consumed.

    Into the Machine

    Get PDF
    A thesis presented to the faculty of the Caudill College of Humanities at Morehead State University in partial fulfillment of the requirements for the Degree of Master of Arts by Scarlett Stewart on November 26, 2002

    Visually-guided walking reference modification for humanoid robots

    Get PDF
    Humanoid robots are expected to assist humans in the future. As for any mobile robot, autonomy is an invaluable feature for a humanoid interacting with its environment. Autonomy, along with components from artificial intelligence, requires information from sensors. Vision sensors are widely accepted as the richest source of information about a robot's surroundings. Visual information can be exploited in tasks ranging from object recognition, localization, and manipulation to scene interpretation, gesture identification, and self-localization. Any autonomous action of a humanoid pursuing a high-level goal requires the robot to move between arbitrary waypoints, and it inevitably relies on the robot's self-localization abilities. Because disturbances accumulate along the path, this can only be achieved by gathering feedback from the environment. This thesis proposes a path planning and correction method for bipedal walkers based on visual odometry. A stereo camera pair is used to find distinguishable 3D scene points and track them over time, in order to estimate the six-degrees-of-freedom position and orientation of the robot. The algorithm is developed and assessed on a benchmark stereo video sequence taken from a wheeled robot, and then tested in experiments with the humanoid robot SURALP (Sabanci University Robotic ReseArch Laboratory Platform).
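    The abstract outlines the visual-odometry pipeline without implementation detail. The sketch below illustrates the core per-frame pose update under the stated setup (3D landmarks triangulated from the previous stereo pair, re-observed in the current left image), using OpenCV's RANSAC PnP solver; it is an illustration of the technique, not the thesis's code:

```python
import numpy as np
import cv2

# Illustrative stereo visual-odometry step: recover the relative 6-DoF camera
# motion from 3D-2D correspondences with RANSAC PnP, then accumulate it into
# the global pose (R_world, t_world), which maps camera coordinates to world.

def update_pose(R_world, t_world, pts3d_prev, pts2d_curr, K):
    """pts3d_prev: Nx3 landmarks in the previous camera frame;
    pts2d_curr: Nx2 pixel observations in the current left image;
    K: 3x3 camera intrinsics (rectified, undistorted images assumed)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d_prev.astype(np.float64), pts2d_curr.astype(np.float64),
        K, None)                        # None: no distortion coefficients
    if not ok:
        return R_world, t_world         # keep the last pose if the solve fails
    R_rel, _ = cv2.Rodrigues(rvec)
    # PnP returns the transform taking previous-frame points into the current
    # camera frame; invert it to get the camera's own motion, then accumulate.
    R_inv = R_rel.T
    t_inv = -R_inv @ tvec
    t_world = t_world + R_world @ t_inv
    R_world = R_world @ R_inv
    return R_world, t_world
```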

    A virtual reality approach to the study of visually driven postural control in developing and aging humans

    Full text link
    Maintaining upright stance is essential for the accomplishment of several goal-directed behaviours, such as walking. Humans use three distinct sensory systems to regulate their posture: the somatosensory, the vestibular, and the visual systems. The role of vision in postural regulation remains poorly understood, notably its variability across the life-span, developmental type, and neurological insult. Hence, visually driven postural regulation was examined in typically developing and aging participants (5-85 years old), in atypically developing individuals with autism (12-33 years old), and in children having sustained mild traumatic brain injury (mTBI; 9-18 years old). To do so, participants' postural reactivity was assessed in response to a fully immersive virtual tunnel moving at three different velocities; control conditions were also included in which the tunnel was either static or absent. Results show that visually induced postural reactivity was strongest in young children, attenuated to adult-like values between 16 and 19 years of age, and increased again linearly with age after 45 years, becoming strong again around 60 years. Moreover, at the highest tunnel velocity, younger autistic participants showed significantly less postural reactivity than age-matched controls; this difference was not present in older participants (16-33 years old). Finally, children having sustained mTBI, who were initially moderately symptomatic, exhibited increased visually induced instability compared to matched controls up to 12 weeks post-injury, although most of them (89%) were no longer highly symptomatic by then. Altogether, this suggests an important transition period in the maturation of the systems underlying the sensorimotor integration involved in postural control around 16 years of age, and further sensorimotor changes after 60 years of age; this over-reliance on vision for postural regulation in childhood and late adulthood could guide the design of age-appropriate facilities and activities. Furthermore, the fact that the postural hypo-reactivity to visual information present in autism is contingent on both the visual environment and chronological age enhances our understanding of autism-specific sensory anomalies. Additionally, the fact that children with mTBI show balance anomalies up to 3 months post-injury, even when no longer highly symptomatic, may be related to altered processing of dynamic visual information and could have implications for the clinical management of mTBI patients, since symptom resolution is commonly used as a criterion for return to activities. Finally, results from populations with atypical development (autism) and so-called transient neurological insult (mild TBI) not only enhance our understanding of the sensorimotor integration mechanisms underlying postural control, but could also serve as sensitive and specific markers of dysfunction in these populations. Keywords: posture, balance, vision, sensorimotor development/aging, autism, symptomatic mTBI, virtual reality.

    The effects of expectancy and control on the perception of ego-motion in space: a combined postural and electrophysiological study

    Get PDF
    This work began with a scientific question: does the amount of control over visual self-motion cues influence their processing and/or perception? In Experiment 1, we explored using optic flow as a visual motion cue and asked whether sensory attenuation or modulation can be observed at the behavioural level in trials in which the optic flow was self-initiated, with putatively different levels of control (instructed or uninstructed button presses), compared to passive flow. While this experiment could not demonstrate sensory modulation and had several important limitations, it was an important basis for planning Experiment 2 and a proof of concept that the method can address our research question and is feasible with our facilities. In Experiment 2, we overcame some of those limitations, further improved reproducibility (e.g. of stimuli and instructions), and extended our methodology to neurophysiological and postural measurements, probing processing at the physiological as well as the behavioural level. This experiment presented evidence that self-motion cues with the same physical properties are processed differently at the cortical level depending on whether they are self-initiated. In Experiment 3, besides overcoming further limitations (e.g. adding a no-optic-flow control condition and using a standard EEG setup alongside the mobile setup from Experiment 2), we reproduced our findings in different subjects, in a larger population, and under a different posture. We also showed that our results are highly robust: for example, removing half the participants from the analysis did not change the pattern of agency-dependent modulation of evoked desynchronization and evoked potential amplitude. A further outcome of our study is that the scientific community can place more trust in mobile EEG setups, given robust effects and diligent artifact removal. Additionally, we contributed findings on the relationship between vection and visually induced motion sickness (VIMS) and tried to bridge the gap between the highly relevant research fields of visual motion perception and the sense of agency. This might act as an exploratory foundation for further research, which will be essential for the economic and medical applicability of VR devices and for a deeper understanding of locomotion and navigation per se. The ability to perceive self-motion cues and dissociate them from cues for motion in the environment is fundamental for taking action in the complex, dynamic environments of our daily lives. Indeed, it can be seen as a classic example of the dynamic coupling of action and perception to reach goals, one of the most fundamental abilities not only for humans but throughout the animal kingdom, which may have laid the evolutionary basis for the later development of the human brain in all the complexity we see today (Godfrey-Smith 2016).
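    The abstract refers to event-related desynchronization (ERD) without defining it. For readers unfamiliar with the measure, the sketch below computes an ERD% time course with the classic band-power method (band power change relative to a pre-stimulus baseline); array shapes and parameters are illustrative, not the study's actual analysis settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Minimal ERD% computation in the classic band-power style: filter epochs to
# a frequency band, square to get power, average over trials, and express the
# result as percent change from a baseline window. Negative values indicate
# desynchronization (a power decrease relative to baseline).

def erd_percent(epochs, fs, band=(8.0, 12.0), baseline=(0.0, 1.0)):
    """epochs: (n_trials, n_samples) single-channel EEG, time-locked to
    stimulus onset; fs: sampling rate in Hz; baseline: window in seconds."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    power = filtered ** 2                   # instantaneous band power
    avg_power = power.mean(axis=0)          # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = avg_power[i0:i1].mean()           # baseline reference power
    return 100.0 * (avg_power - ref) / ref  # ERD% time course
```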

    NASA Tech Briefs, June 1992

    Get PDF
    Topics covered include: New Product Ideas; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences

    GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework

    Full text link
    In the last few decades, a variety of assistive technologies (AT) have been developed to improve the quality of life of visually impaired people, including providing an independent means of travel and thus better access to education and places of work. There is, however, no metric for comparing and benchmarking these technologies, especially multimodal systems. In this dissertation, we propose GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation, a framework which allows developers and consumers to assess their technologies in a functional and objective manner. The framework rests on three foundations: multimodality, gamification, and virtual reality. It facilitates fuller and more controlled data collection, rapid prototyping and testing of multimodal ATs, benchmarking of heterogeneous ATs, and conversion of these evaluation tools into simulation or training tools. Our contributions include: (1) a unified evaluation framework, via an evaluative approach for multimodal visual ATs; (2) a sustainable evaluation, employing virtual environments and gamification techniques to create engaging games for users while collecting experimental data for analysis; (3) a novel psychophysics evaluation, enabling researchers to conduct psychophysics evaluation even though the experiment is a navigational task; and (4) a novel collaborative environment, enabling developers to rapidly prototype and test their ATs with users through early stakeholder involvement that fosters communication between developers and users. The dissertation first provides background on assistive technologies and the motivation for the framework, followed by a detailed description of the GIVE-ME framework, with particular attention to its user interfaces, foundations, and components. Four applications are then presented that describe how the framework is applied, with results and discussion for each. Finally, conclusions and directions for future work are presented in the last chapter.
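    GIVE-ME's interfaces are not reproduced in this abstract. As a hypothetical illustration (not the framework's API) of the kind of per-trial logging a gamified, multimodal evaluation needs, the sketch below ties each game stimulus to the user's response so psychophysics measures can be computed offline:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical per-trial logging for a gamified navigation evaluation: every
# event records which modality presented the cue, what the user did, and how
# long it took, so accuracy and response-time curves can be derived later.

@dataclass
class TrialEvent:
    trial_id: int
    modality: str          # e.g. "audio", "haptic", "audio+haptic"
    stimulus: str          # cue presented to the user
    response: str          # action the user took
    correct: bool
    response_time_s: float

class EvaluationLog:
    def __init__(self, path):
        self.path = path
        self.events = []

    def record(self, event: TrialEvent):
        self.events.append(asdict(event))

    def save(self):
        with open(self.path, "w") as f:
            json.dump({"saved_at": time.time(), "events": self.events},
                      f, indent=2)
```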