23 research outputs found

    Machine learning methods for the study of cybersickness: a systematic review

    This systematic review offers a world-first critical analysis of machine learning methods and systems, along with future directions, for the study of cybersickness induced by virtual reality (VR). VR is becoming increasingly popular and is an important part of current advances in human training, therapies, entertainment, and access to the metaverse. Usage of this technology is limited by cybersickness, a common debilitating condition experienced upon VR immersion. Cybersickness is accompanied by a mix of symptoms including nausea, dizziness, fatigue, and oculomotor disturbances. Machine learning can be used to identify cybersickness and is a step towards overcoming these physiological limitations. Practical implementation of this is possible with optimised data collection from wearable devices and appropriate algorithms that incorporate advanced machine learning approaches. The present systematic review focuses on 26 selected studies concerning machine learning of biometric and neuro-physiological signals obtained from wearable devices for the automatic identification of cybersickness. The methods, data processing, and machine learning architectures are examined, along with suggestions for future work on the detection and prediction of cybersickness. A wide range of immersion environments, participant activities, features, and machine learning architectures were identified. Although models for cybersickness detection have been developed, the literature still lacks a model for the prediction of first-instance events. Future research is pointed towards goal-oriented data selection and labelling, as well as the use of brain-inspired spiking neural network models, to achieve better accuracy and understanding of the complex spatio-temporal brain processes related to cybersickness.
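
    As a rough illustration of the kind of pipeline the reviewed studies describe (hand-crafted features from wearable biosignals fed to a conventional classifier), a minimal sketch follows; the file name, feature columns, and SVM choice are illustrative assumptions rather than details taken from any of the 26 studies.

        import pandas as pd
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical per-window features exported from wearable devices,
        # with a binary label indicating whether cybersickness was reported.
        df = pd.read_csv("wearable_features.csv")
        X = df[["hr_mean", "eda_peak_count", "eeg_alpha_power"]]  # illustrative columns
        y = df["cybersick"]

        # Scale the features, then classify; several reviewed studies use SVM-style models.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores = cross_val_score(model, X, y, cv=5)
        print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")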

    VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research

    Researchers have used machine learning approaches to identify motion sickness in VR experiences. These approaches demand an accurately labeled, real-world, and diverse dataset for high accuracy and generalizability. As a starting point to address this need, we introduce VR.net, a dataset offering approximately 12 hours of gameplay videos from 10 real-world games in 10 diverse genres. For each video frame, a rich set of motion sickness-related labels, such as camera/object movement, depth field, and motion flow, is accurately assigned. Building such a dataset is challenging, since manual labeling would require an infeasible amount of time. Instead, we utilize a tool to automatically and precisely extract ground truth data from 3D engines' rendering pipelines without accessing VR games' source code. We illustrate the utility of VR.net through several applications, such as risk factor detection and sickness level prediction. We continuously expand VR.net and envision its next version offering 10X more data than the current form. We believe that the scale, accuracy, and diversity of VR.net can offer unparalleled opportunities for VR motion sickness research and beyond.
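
    To make the intended use concrete, here is a minimal sketch of training a sickness-level regressor from per-frame labels; VR.net's actual file layout and field names are not specified in this abstract, so the JSON-lines export and the keys below are assumptions for illustration only.

        import json

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Hypothetical export: one JSON record per video frame with its labels.
        with open("vrnet_labels.jsonl") as f:
            frames = [json.loads(line) for line in f]

        # Motion-sickness-related labels per frame -> feature vector (assumed keys).
        X = np.array([[f["camera_speed"], f["object_motion"],
                       f["mean_depth"], f["optical_flow_mag"]] for f in frames])
        y = np.array([f["sickness_level"] for f in frames])  # assumed per-frame rating

        model = GradientBoostingRegressor().fit(X, y)
        print("Training R^2:", model.score(X, y))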

    Toward Predicting Motion Sickness Using Virtual Reality and a Moving Platform Assessing Brain, Muscles, and Heart Signals.

    Motion sickness (MS) and postural control (PC) conditions are common complaints among those who passively travel. Many theories explaining a probable cause for MS have been proposed, but the most prominent is the sensory conflict theory, stating that a mismatch between vestibular and visual signals causes MS. Few measurements have been made to understand and quantify the interplay between muscle activation, brain activity, and heart behavior during this condition. We introduce here a novel multimetric system called BioVRSea, based on virtual reality (VR), a mechanical platform, and several biomedical sensors, to study the physiology associated with MS and seasickness. This study reports the results from 28 individuals: the subjects stand on the platform wearing VR goggles, a 64-channel EEG dry-electrode cap, two EMG sensors on the gastrocnemius muscles, and a sensor on the chest that captures the heart rate (HR). The virtual environment shows a boat surrounded by waves whose frequency and amplitude are synchronized with the platform movement. Three measurement protocols are performed by each subject, after each of which they answer the Motion Sickness Susceptibility Questionnaire. Nineteen parameters are extracted from the biomedical sensors (5 from EEG, 12 from EMG, and 2 from HR) and 13 from the questionnaire. Eight binary indexes are computed to quantify the symptoms, combining all of them in the Motion Sickness Index (I_MS). These parameters create the MS database, composed of 83 measurements. All indexes undergo univariate statistical analysis, with EMG parameters being the most significant, in contrast to EEG parameters. Machine learning (ML) gives good results in the classification of the binary indexes, finding random forest to be the best algorithm (accuracy of 74.7% for I_MS). The feature importance analysis showed that muscle parameters are the most relevant and, for the EEG analysis, beta wave results were the most important. The present work serves as the first step in identifying the key physiological factors that differentiate those who suffer from MS from those who do not, using the novel BioVRSea system. Coupled with ML, BioVRSea is of value in the evaluation of PC disruptions, which are among the most disturbing and costly health conditions affecting humans.
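
    A minimal sketch of the reported analysis idea, a random forest classifying the binary I_MS from the sensor-derived parameters and then ranking feature importances, is given below; the CSV layout and column names are placeholders, not the study's actual data export.

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Hypothetical table: 83 measurements, 19 sensor parameters plus the binary I_MS.
        df = pd.read_csv("biovrsea_measurements.csv")
        X = df.drop(columns=["I_MS"])
        y = df["I_MS"]

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # study reports ~74.7%

        # Rank features; in the study, EMG-derived parameters were the most relevant.
        clf.fit(X, y)
        ranking = sorted(zip(clf.feature_importances_, X.columns), reverse=True)
        print("Top features:", ranking[:5])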

    Advancing clinical evaluation and diagnostics with artificial intelligence technologies

    Machine Learning (ML) is extensively used in diverse healthcare applications to aid physicians in diagnosing and identifying associations, sometimes hidden, between different biomedical parameters. This PhD thesis investigates the interplay of medical images and biosignals to study the mechanisms of aging, knee cartilage degeneration, and Motion Sickness (MS). The first study shows the predictive power of soft tissue radiodensitometric parameters from mid-thigh CT scans. We used data from the AGES-Reykjavik study, correlating soft tissue numerical profiles from 3,000 subjects with cardiac pathophysiologies, hypertension, and diabetes. The results show the role of fat, muscle, and connective tissue in the evaluation of healthy aging. Moreover, we classify patients experiencing gait symptoms, neurological deficits, and a history of stroke in a Korean population, revealing the significant impact of cognitive dual-gait analysis when coupled with single-gait analysis. The second study establishes new paradigms for knee cartilage assessment, correlating 2D and 3D medical image features obtained from CT and MRI scans. Within the framework of the EU project RESTORE, we were able to classify degenerative, traumatic, and healthy cartilage based on bone and cartilage features, and to determine the basis for the development of a patient-specific cartilage profile. Finally, in the MS study, based on a virtual reality simulation synchronized with a moving platform and EEG, heart rate, and EMG recordings, we extracted over 3,000 features and analyzed their importance in predicting MS symptoms, concussion in female athletes, and lifestyle influence. The MS features are extracted from the brain, muscles, heart, and the movement of the center of pressure during the experiment, and demonstrate their potential value in advancing the quantitative evaluation of the postural control response. This work demonstrates, through various studies, the importance of ML technologies in improving clinical evaluation and diagnosis, contributing to advancing our understanding of the mechanisms associated with pathological conditions.
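
    The third study's pipeline, screening a very large feature set before classification, can be sketched roughly as below; the mutual-information filter and the synthetic data are illustrative stand-ins, not the feature-selection method or data actually used in the thesis.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in: ~3,000 candidate features, only a few of them informative.
        X, y = make_classification(n_samples=200, n_features=3000,
                                   n_informative=30, random_state=0)

        # Keep the 50 highest-scoring features, then classify.
        pipe = make_pipeline(SelectKBest(mutual_info_classif, k=50),
                             RandomForestClassifier(n_estimators=300, random_state=0))
        print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())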

    Understanding user interactivity for the next-generation immersive communication: design, optimisation, and behavioural analysis

    Recent technological advances have opened the gate to a novel way to communicate remotely while still feeling connected. In these immersive communications, humans are at the centre of virtual or augmented reality with a full sense of immersion and the possibility to interact with the new environment as well as with other humans virtually present. These next-generation communication systems hold a huge potential that can benefit major economic sectors. However, they also pose many new technical challenges, mainly due to the new role of the final user: from merely passive to fully active in requesting and interacting with the content. Thus, we need to go beyond traditional quality-of-experience research and develop user-centric solutions, in which the whole multimedia experience is tailored to the final interactive user. With this goal in mind, a better understanding of how people interact with immersive content is needed, and it is the focus of this thesis. In this thesis, we study the behaviour of interactive users in immersive experiences and its impact on next-generation multimedia systems. The thesis covers a deep literature review on immersive services and user-centric solutions, before developing three main research strands. First, we implement novel tools for behavioural analysis of users navigating in a 3-DoF Virtual Reality (VR) system. In detail, we study behavioural similarities among users by proposing a novel clustering algorithm. We also introduce information-theoretic metrics for quantifying similarities for the same viewer across contents. As a second direction, we show the impact and advantages of taking user behaviour into account in immersive systems. Specifically, we formulate optimal user-centric solutions i) from a server-side perspective and ii) through a navigation-aware adaptation logic for VR streaming platforms. We conclude by exploiting the aforementioned behavioural studies towards a more interactive immersive technology: 6-DoF VR. Overall, experimental results based on real navigation trajectories show key advantages of understanding hidden patterns of user interactivity, which can eventually be exploited in engineering user-centric solutions for immersive systems.
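
    A generic sketch of trajectory-based user clustering is shown below to make the idea concrete; it uses plain Euclidean distances and off-the-shelf agglomerative clustering, not the novel clustering algorithm or the information-theoretic metrics proposed in the thesis, and the head-yaw data are randomly generated.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        # Hypothetical data: one head-yaw trajectory per user (users x timesteps), in degrees.
        rng = np.random.default_rng(0)
        trajectories = rng.uniform(-180.0, 180.0, size=(20, 300))

        # Pairwise distance between users' trajectories (ignores angular wrap-around for brevity).
        condensed = pdist(trajectories, metric="euclidean")

        # Agglomerative clustering into behaviourally similar groups of viewers.
        labels = fcluster(linkage(condensed, method="average"), t=4, criterion="maxclust")
        print("Cluster assignment per user:", labels)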

    Naturalistic depth perception and binocular vision

    Humans continuously move both their eyes to redirect their foveae to objects at new depths. To correctly execute these complex combinations of saccades, vergence eye movements, and accommodation changes, the visual system makes use of multiple sources of depth information, including binocular disparity and defocus. Furthermore, during development, both the fine-tuning of oculomotor control and correct eye growth are likely driven by complex interactions between eye movements, accommodation, and the distributions of defocus and depth information across the retina. I have employed photographs of natural scenes taken with a commercial plenoptic camera to examine depth perception while varying perspective, blur, and binocular disparity. Using a gaze-contingent display with these natural images, I have shown that disparity and peripheral blur interact to modify eye movements and facilitate binocular fusion. By decoupling visual feedback for each eye, I have found it possible to induce both conjugate and disconjugate changes in saccadic adaptation, which helps us understand to what degree the eyes can be individually controlled. To understand the aetiology of myopia, I have developed geometric models of emmetropic and myopic eye shape, from which I have derived psychophysically testable predictions about visual function. I have then tested the myopic against the emmetropic visual system and have found that some aspects of visual function decrease in the periphery at a faster rate in best-corrected myopic observers than in emmetropes. To study the effects of different depth cues on visual development, I have investigated the accommodation response and sensitivity to blur in normal and myopic subjects. This body of work furthers our understanding of oculomotor control and 3D perception, has applied implications regarding discomfort in the use of virtual reality, and provides clinically relevant insights regarding the development of refractive error and potential approaches to prevent incorrect emmetropization.
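
    For illustration, a toy version of a gaze-contingent peripheral blur is sketched below: the foveal region stays sharp and the periphery is blurred around the current gaze point. The actual experiments used plenoptic-camera photographs and calibrated defocus and disparity manipulations, so this is only a simplified stand-in.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaze_contingent_blur(image, gaze_xy, fovea_radius=60, sigma=4.0):
            """Blend a sharp and a blurred copy of `image` around the gaze point."""
            blurred = gaussian_filter(image, sigma=sigma)
            ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
            dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
            # Weight 0 inside the fovea (sharp), ramping to 1 in the far periphery (blurred).
            weight = np.clip((dist - fovea_radius) / fovea_radius, 0.0, 1.0)
            return (1 - weight) * image + weight * blurred

        frame = np.random.rand(480, 640)              # stand-in for a natural-scene photograph
        display = gaze_contingent_blur(frame, gaze_xy=(320, 240))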

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
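
    A toy sketch of the general approach, fuzzy appraisal of in-game events mapped to an emotion estimate, follows; the membership functions, inputs, and rules are invented for illustration and are not the paper's FLAME-based implementation.

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def estimate_satisfaction(goal_progress, damage_taken):
            """Both inputs normalised to [0, 1]; returns a value in [-1, 1]."""
            progress_high = tri(goal_progress, 0.4, 1.0, 1.6)
            damage_high = tri(damage_taken, 0.4, 1.0, 1.6)
            # Rule 1: high progress and low damage -> satisfied.
            satisfied = min(progress_high, 1.0 - damage_high)
            # Rule 2: low progress and high damage -> frustrated.
            frustrated = min(1.0 - progress_high, damage_high)
            return satisfied - frustrated  # > 0 satisfied, < 0 frustrated

        print(estimate_satisfaction(goal_progress=0.8, damage_taken=0.2))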