The role of multisensory feedback in the objective and subjective evaluations of fidelity in virtual reality environments.
The use of virtual reality in academic and industrial research has been expanding rapidly in recent years; therefore, evaluations of the quality and effectiveness of virtual environments are required. Assessment is usually performed through user evaluations measured while the user engages with the system. The limitations of this method, in terms of its variability and pre- and post-experience user bias, have been recognised in the research literature. There is therefore a need to design more objective measures of system effectiveness that could complement subjective measures and provide a conceptual framework for fidelity assessment in VR. Many technological and perceptual factors can influence the overall experience in virtual environments. The focus of this thesis was to investigate how multisensory feedback, provided during VR exposure, can modulate a user's qualitative and quantitative experience in the virtual environment. In a series of experimental studies, the role of visual, audio, haptic and motion cues in objective and subjective evaluations of fidelity in VR was investigated. In all studies, objective measures of performance were collected and compared to subjective measures of user perception. The results showed that the explicit evaluation of environmental and perceptual factors available within VR environments modulated user experience. In particular, the results showed that a user's postural responses can be used as the basis for an objective measure of fidelity. Additionally, the role of augmented sensory cues was investigated during a manual assembly task. By recording and analysing the objective and subjective measures, it was shown that augmented multisensory feedback positively modulated the user's acceptance of the virtual environment and increased overall task performance.
Furthermore, the presence of augmented cues mitigated the negative effects of inaccurate motion tracking and simulation sickness. In a follow-up study, the beneficial effects of virtual training with augmented sensory cues were observed in the transfer of learning when the same task was performed in a real environment. Similarly, when the effects of six-degrees-of-freedom motion cuing on user experience were investigated in a high-fidelity flight simulator, consistent findings between objective and subjective data were recorded. By measuring the pilot's accuracy in following the desired path during a slalom manoeuvre while perceived task demand was increased, it was shown that motion cuing is related to effective task performance and modulates the levels of workload, sickness and presence. The overall findings revealed that multisensory feedback plays an important role in the overall perception and fidelity evaluations of VR systems, and as such user experience needs to be included when investigating the effectiveness of sensory feedback signals. Throughout this thesis it was consistently shown that subjective measures of user perception in VR are directly comparable to objective measures of performance, and therefore both should be used in order to obtain robust results when investigating the effectiveness of VR systems. This conceptual framework can provide an effective method to study human perception, which can in turn provide a deeper understanding of the environmental and cognitive factors that can influence the overall user experience, in terms of fidelity requirements, in virtual reality environments.
Mediated Physicality: Inducing Illusory Physicality of Virtual Humans via Their Interactions with Physical Objects
The term virtual human (VH) generally refers to a human-like entity composed of computer graphics and/or a physical body. In the associated research literature, a VH can be further classified as an avatar (a human-controlled VH) or an agent (a computer-controlled VH). Because of their resemblance to humans, people naturally distinguish VHs from non-human objects and often treat them in ways similar to real humans. Sometimes people develop a sense of co-presence or social presence with the VH - a phenomenon that is often exploited for training simulations where the VH assumes the role of a human. Prior research associated with VHs has primarily focused on the realism of various visual traits, e.g., appearance, shape, and gestures. However, our sense of the presence of other humans is also affected by other physical sensations conveyed through nearby space or physical objects. For example, we humans can perceive the presence of other individuals via the sound or tactile sensation of approaching footsteps, or by the presence of complementary or opposing forces when carrying a physical box with another person. In my research, I exploit the fact that these sensations, when correlated with events in the shared space, affect one's feeling of social/co-presence with another person. In this dissertation, I introduce novel methods for utilizing direct and indirect physical-virtual interactions with VHs to increase the sense of social/co-presence with the VHs - an approach I refer to as mediated physicality. I present results from controlled user studies, in various virtual environment settings, that support the idea that mediated physicality can increase a user's sense of social/co-presence with the VH and/or induce realistic social behavior. I discuss relationships to prior research, possible explanations for my findings, and areas for future research.
Virtual reality and body rotation: 2 flight experiences in comparison
Embodied interfaces, represented by devices that incorporate bodily motion and proprioceptive stimulation, are promising for Virtual Reality (VR) because they can improve immersion and user experience while at the same time reducing simulator sickness compared to more traditional handheld interfaces (e.g., gamepads). The aim of this study is to evaluate a novel embodied interface called VitruvianVR. The machine is composed of two separate rings that allow its users to rotate bodily about three different axes. The suitability of the VitruvianVR was tested in a Virtual Reality flight scenario. To reach this goal, we compared the VitruvianVR to a gamepad using performance measures (i.e., accuracy, fails), head movements and body position. Furthermore, data from questionnaires on sense of presence, user experience, cognitive load, usability and cybersickness were collected.
A perspective review on integrating VR/AR with haptics into STEM education for multi-sensory learning
As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world's students were not in school for several weeks. Schools and universities are thus increasing their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material. There is a lack of support for hands-on laboratory work and practical experience. This is particularly important for all disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective on achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of a novel viewpoint on established pedagogical notions. We want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality. This work was supported by the European Union through the Erasmus+ Program under Grant 2020-1-NO01-KA203-076540, project title Integrating virtual and AUGMENTED reality with WEARable technology into engineering EDUcation (AugmentedWearEdu), https://augmentedwearedu.uia.no/ [34] (accessed on 27 March 2022), and by the Top Research Centre Mechatronics (TRCM), University of Agder (UiA), Norway.
An enactive approach to perceptual augmentation in mobility
Event predictions are an important constituent of situation awareness, which is a key objective for many applications in human-machine interaction, in particular in driver assistance. This work focuses on facilitating event predictions in dynamic environments. Its primary contributions are 1) the theoretical development of an approach for enabling people to expand their sampling and understanding of spatiotemporal information, 2) the introduction of exemplary systems that are guided by this approach, 3) the empirical investigation of the effects that functional prototypes of these systems have on human behavior and safety in a range of simulated road traffic scenarios, and 4) a connection of the investigated approach to work on cooperative human-machine systems. More specific contents of this work are summarized as follows:
The first part introduces several challenges for the formation of situation awareness as a requirement for safe traffic participation. It reviews existing work on these challenges in the domain of driver assistance, resulting in an identification of the need to better inform drivers about dynamically changing aspects of a scene, including event probabilities, spatial and temporal distances, as well as a suggestion to expand the scope of assistance systems to start informing drivers about relevant scene elements at an early stage. Novel forms of assistance can be guided by different fundamental approaches that target either replacement, distribution, or augmentation of driver competencies. A subsequent differentiation of these approaches concludes that an augmentation-guided paradigm, characterized by an integration of machine capabilities into human feedback loops, can be advantageous for tasks that rely on active user engagement, the preservation of awareness and competence, and the minimization of complexity in human-machine interaction. Consequently, findings and theories about human sensorimotor processes are connected to develop an enactive approach that is consistent with an augmentation perspective on human-machine interaction. The approach is characterized by enabling drivers to exercise new sensorimotor processes through which safety-relevant spatiotemporal information may be sampled.
In the second part of this work, a concept and functional prototype for augmenting the perception of traffic dynamics is introduced as a first example for applying principles of this enactive approach. As a loose expression of functional biomimicry, the prototype utilizes a tactile interface that communicates temporal distances to potential hazards continuously through stimulus intensity. In a driving simulator study, participants quickly gained an intuitive understanding of the assistance without instructions and demonstrated higher driving safety in safety-critical highway scenarios. But this study also raised new questions, such as whether benefits are due to a continuous time-intensity encoding and whether utility generalizes to intersection scenarios or highway driving with low-criticality events. Effects of an expanded assistance prototype with lane-independent risk assessment and an option for binary signaling were thus investigated in a separate driving simulator study. Subjective responses confirmed quick signal understanding and a perception of spatial and temporal stimulus characteristics. Surprisingly, even for a binary assistance variant with a constant intensity level, participants reported perceiving a danger-dependent variation in stimulus intensity. They further felt supported by the system in the driving task, especially in difficult situations. But in contrast to the first study, this support was not expressed by changes in driving safety, suggesting that perceptual demands of the low-criticality scenarios could be satisfied by existing driver capabilities. But what happens if such basic capabilities are impaired, e.g., due to poor visibility conditions or other situations that introduce perceptual uncertainty? In a third driving simulator study, the driver assistance was employed specifically in such ambiguous situations and produced substantial safety advantages over unassisted driving.
Additionally, an assistance variant that adds an encoding of spatial uncertainty was investigated in these scenarios. Participants had no difficulty understanding and utilizing this added signal dimension to improve safety. Despite being inherently less informative than spatially precise signals, users rated uncertainty-encoding signals as equally useful and satisfying. This appreciation for transparency of variable assistance reliability is a promising indicator for the feasibility of an adaptive trust calibration in human-machine interaction and marks one step towards a closer integration of driver and vehicle capabilities.
A complementary step on the driver side would be to increase transparency about the driver's mental states and thus allow for mutual adaptation. The final part of this work discusses how such prerequisites of cooperation may be achieved by monitoring mental state correlates observable in human behavior, especially in eye movements. Furthermore, the outlook for an addition of cooperative features also raises new questions about the bounds of identity as well as practical consequences of human-machine systems in which co-adapting agents may exercise sensorimotor processes through one another.
Measuring user experience for virtual reality
In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for input as well as output devices is market ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings by design spaces and guidelines for choosing optimal interfaces and controls in VR.
m- and e-Health applications in diagnosis and rehabilitation of balance disorders
Background: Balance is a primary human sense which requires multisensory integration from the vestibular, visual and proprioceptive systems, and involvement of the cerebellum and several other neural circuits. Additionally, a number of reflexes, such as the vestibulo-ocular and vestibulospinal reflexes, along with many other higher cerebral functions, are engaged. Under certain circumstances, a peripheral or central lesion can occur in the system, leading to instability and vertigo symptomatology. Since the 1940s, when Cooksey and Cawthorne began to investigate rehabilitation following such lesions, many things have changed in the field of balance rehabilitation, and during the last decades modern technologies have been incorporated into this effort. Nowadays, a great number of mHealth, eHealth and Virtual Reality applications have been developed, aiming to contribute to the diagnosis and/or rehabilitation of patients with vestibular disorders.
Methods: The electronic database MEDLINE was searched for relevant studies from January 1, 2015 to April 15, 2021. The papers included in this review were selected according to predefined inclusion and exclusion criteria.
Results: The initial search strategy yielded a total of 187 studies, of which 43 were considered eligible and included in this review. These were subdivided into 5 major categories and discussed further.
Discussion: Gaming consoles, such as the Nintendo Wii, Nintendo Wii Fit and Sony PlayStation 2 EyeToy, and Internet-based applications have been used in recent years to assist in the diagnosis and rehabilitation of patients with balance disorders. As novel technologies emerge and smartphones become an essential part of our everyday lives, vestibular diagnosis and rehabilitation will rely more and more on head-mounted display, mobile phone and sophisticated platform applications.