Virtual reality interfaces for seamless interaction with the physical reality
In recent years, head-mounted displays (HMDs) for virtual reality (VR) have made the transition from research to consumer product and are increasingly used for productive purposes such as 3D modeling in the automotive industry and teleconferencing. VR allows users to create and experience realistic models of products, and enables them to have immersive social interactions with distant colleagues. These solutions are a promising alternative to physical prototypes and meetings, as they require less investment in time and material.
VR uses our visual dominance to deliver these experiences, making users believe that they are in another reality. However, while their mind is present in VR, their body remains in the physical reality. From the user's perspective, this brings considerable uncertainty to the interaction. Currently, users are forced to take off their HMD in order to, for example, see who is observing them and to understand whether their physical integrity is at risk. This disrupts their interaction in VR, leading to a loss of presence – a main quality measure for the success of VR experiences. In this thesis, I address this uncertainty by developing interfaces that enable users to stay in VR while supporting their awareness of the physical reality. They maintain this awareness without having to take off the headset – which I refer to as seamless interaction with the physical reality. The overarching research vision that guides this thesis is, therefore, to reduce this disconnect between the virtual and the physical reality.
My research is motivated by a preliminary exploration of user uncertainty towards using VR in co-located, public places. This exploration revealed three main foci: (a) security and privacy, (b) communication with physical collaborators, and (c) managing presence in both the physical and virtual reality. Each theme represents a section of my dissertation in which I identify central challenges and give directions towards overcoming them, as they emerged from the work presented here.
First, I investigate security and privacy in co-located situations by revealing to what extent bystanders are able to observe general tasks. In this context, I explicitly investigate the security considerations of authentication mechanisms. I review how existing authentication mechanisms can be transferred to VR and present novel approaches that are more usable and secure than existing solutions from prior work.
Second, to support communication between VR users and physical collaborators, I add to the field design implications for VR interactions that enable observers to choose opportune moments to interrupt HMD users. Moreover, I contribute methods for displaying interruptions in VR and discuss their effect on presence and performance. I also found that different virtual presentations of co-located collaborators have an effect on social presence, performance and trust.
Third, I close my thesis by investigating methods to manage presence in both the physical and virtual realities. I propose systems and interfaces for transitioning between them that empower users to decide how much they want to be aware of the other reality. Finally, I discuss the opportunity to systematically allocate senses to these two realities: the visual one for VR and the auditory and haptic one for the physical reality. Moreover, I provide specific design guidelines on how to use these findings to alert VR users about physical borders and obstacles.
Assisting Navigation and Object Selection with Vibrotactile Cues
Our lives have been drastically altered by information technology in the last
decades, leading to evolutionary mismatches between human traits and the
modern environment. One particular mismatch occurs when visually
demanding information technology overloads the perceptual, cognitive or
motor capabilities of the human nervous system. This information overload
could be partly alleviated by complementing visual interaction with haptics.
The primary aim of this thesis was to investigate how to assist movement
control with vibrotactile cues. Vibrotactile cues refer to technology-mediated
vibrotactile signals that notify users of perceptual events, prompt
them to make decisions, and give them feedback on their actions. To explore
vibrotactile cues, we carried out five experiments in two contexts of
movement control: navigation and object selection. The goal was to find
ways to reduce information load in these tasks, thus helping users to
accomplish the tasks more effectively. We employed measurements such as
reaction times, error rates, and task completion times. We also used
subjective rating scales, short interviews, and free-form participant
comments to assess the vibrotactile assisted interactive systems.
The findings of this thesis can be summarized as follows. First, if the context
of movement control allows the use of both feedback and feedforward cues,
feedback cues are a reasonable first option. Second, when using vibrotactile
feedforward cues, low-level abstractions and support from other
modalities can keep the information load as low as
possible. Third, the temple area is a feasible actuation location for
vibrotactile cues in movement control, including navigation cues and object
selection cues with head turns. However, the usability of the area depends
on contextual factors such as spatial congruency, the actuation device, and
the pace of the interaction task.
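The temple-area navigation cues discussed above can be illustrated with a toy mapping from target bearing to actuator intensity. This is a hypothetical sketch, not the thesis's actual implementation: the function name, the linear intensity mapping, and the ±90° saturation range are all invented for illustration.

```python
# Hypothetical sketch of a temple-area vibrotactile navigation cue:
# the target's bearing relative to the user's heading is mapped to
# intensities for actuators on the left and right temples. The linear
# mapping and the +/-90 degree saturation range are assumptions.

def temple_cue_intensities(bearing_deg, max_angle=90.0):
    """Return (left, right) actuator intensities in [0, 1].

    bearing_deg: target direction relative to the current heading;
    negative values mean the target is to the left.
    """
    # Clamp the bearing to the supported angular range.
    b = max(-max_angle, min(max_angle, bearing_deg))
    magnitude = abs(b) / max_angle
    if b < 0:
        return (magnitude, 0.0)  # target left: left temple vibrates
    if b > 0:
        return (0.0, magnitude)  # target right: right temple vibrates
    return (0.0, 0.0)            # on course: no cue
```

In a real system these intensities would drive the actuators each frame, ideally rate-limited so that the cue pace matches the pace of the interaction task.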
Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video
EXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use, and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS).
This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360° video, which also enables the collection and analysis of captured metrics emerging from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that control parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users. A partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results also demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV is statistically significant in mitigating VIMS, while maintaining desired levels of Presence. Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate
future directions in making 360° video a rich design space for interaction and narrative.
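The dynamic restricted-FoV idea above can be sketched as a simple controller that narrows the field of view as visual motion increases. This is an illustrative sketch only: the thresholds, the linear mapping, and the smoothing factor are assumptions, and the pipeline's actual optical-flow and visual-SLAM modules are not reproduced here.

```python
# Illustrative sketch of a dynamic restricted-FoV controller, assuming
# a per-frame "visual motion magnitude" (e.g. mean optical-flow speed)
# is already available from upstream modules. All numeric values below
# are invented for illustration.

def restricted_fov(motion, fov_full=110.0, fov_min=60.0,
                   m_low=0.5, m_high=5.0):
    """Map a motion magnitude to a field of view in degrees.

    Below m_low the full FoV is kept; above m_high the FoV is fully
    restricted; in between it is linearly interpolated.
    """
    if motion <= m_low:
        return fov_full
    if motion >= m_high:
        return fov_min
    t = (motion - m_low) / (m_high - m_low)
    return fov_full + t * (fov_min - fov_full)

def smooth(prev_fov, target_fov, alpha=0.2):
    """Exponentially smooth FoV changes so the vignette does not pop."""
    return prev_fov + alpha * (target_fov - prev_fov)
```

Each frame, the controller would compute `restricted_fov` from the current motion estimate and feed it through `smooth` before rendering the vignette, so Presence is not broken by abrupt visual changes.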
Exploring Engineering Applications of Visual Analytics in Virtual Reality
Recent advancements and technological breakthroughs in the development of so-called immersive interfaces, such as augmented (AR), mixed (MR), and virtual reality (VR), coupled with the growing mass-market adoption of such devices, have started to attract attention from academia and industry alike. Of these technologies, VR offers the most mature option in terms of both hardware and software, as well as the best available range of off-the-shelf offerings. VR is a term used interchangeably to denote both head-mounted displays (HMDs) and the fully immersive, bespoke 3D environments to which these devices transport their users. With modern devices, developers can leverage a range of interaction modalities, including visual, audio, and even haptic feedback, in the creation of these virtual worlds. With such a rich interaction space, it is natural to think of VR as a well-suited environment for interactive visualisation and analytical reasoning over complex multidimensional data.
Research in visual analytics (VA) combines these two themes. Spanning the last decade and a half, it has produced a number of findings, including a range of new, advanced, and effective visualisation and analysis tools for ever more complex, noisier, and larger data sets. Furthermore, extending this research with immersive interfaces to facilitate visual analytics has spun off a new field of research: immersive analytics (IA). Immersive analytics leverages the potential of immersive interfaces to aid the user in swift and effective data analysis.
Some of the most promising application domains of such immersive interfaces in industry are various branches of engineering, including aerospace design and civil engineering. The range of potential applications is vast and growing as new stakeholders adopt these immersive tools. However, the use of these technologies brings its own challenges. One such difficulty is the design of appropriate interaction techniques. There is no single optimal choice; instead, the choice varies depending on the available hardware, the user's prior experience, their task at hand, and the nature of the dataset.
To this end, my PhD work has focused on designing and analysing various interactive, VR-based immersive systems for engineering visual analytics. One of the key elements of such an immersive system is the selection of an adequate interaction method. In a series of both qualitative and quantitative studies, I have explored the potential of various interaction techniques that can be used to support the user in swift and effective data analysis.
Here, I have investigated the feasibility of using hand-held controllers, gaze-tracking, and hand-tracking input methods, solo or in combination, in various challenging use cases and scenarios. For instance, I developed and verified the usability and effectiveness of the AeroVR system for aerospace design in VR. This research has allowed me to trim the very large design space of such systems, which has not been sufficiently explored thus far. Moreover, building on top of this work, I have designed, developed, and tested a system for digital twin assessment in aerospace that coupled gaze-tracking and hand-tracking, achieved via an additional sensor attached to the front of the VR headset, with no need for the user to hold a controller. The analysis of the results obtained from a qualitative study with domain experts allowed me to distill and propose design implications for developing similar systems. Furthermore, I worked towards designing an effective VR-based visualisation of complex, multidimensional abstract datasets. Here, I developed and evaluated an immersive version of the well-known Parallel Coordinates Plots visualisation technique (IPCP). The results of a series of qualitative user studies allowed me to obtain a list of design suggestions for IPCP, as well as provide tentative evidence that IPCP can be an effective tool for multidimensional data analysis. Lastly, I also worked on the design, development, and verification of a system allowing its users to capture information in the context of conducting engineering surveys in VR.
Furthermore, conducting a meaningful evaluation of immersive analytics interfaces remains an open problem. It is difficult, and often not feasible, to use traditional A/B comparisons in controlled experiments, as the aim of immersive analytics is to provide its users with new insights into their data rather than to optimise easily quantifiable factors. To this end, I developed a generative process for synthesising clustered datasets for VR analytics experiments that can be used in the process of interface evaluation. I further validated this approach by designing and carrying out two user studies. The statistical analysis of the gathered data revealed that this generative process did indeed result in datasets that can be used in experiments without the datasets themselves being the dominant contributor to the variability between conditions.

Engineering and Physical Sciences Research Council (EPSRC-1788814); Trinity Hall and Cambridge Commonwealth, European & International Trust; Cambridge Philosophical Society
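As a rough illustration of what a generative process for synthesising clustered datasets might look like, the sketch below samples isotropic Gaussian clusters inside a unit cube. All parameters (cluster count, spread, bounds) and the function name are invented for illustration and do not reflect the actual process developed in the thesis.

```python
import random

# Minimal sketch of generating a synthetic clustered dataset for an
# interface-evaluation study: isotropic Gaussian clusters in 3D,
# clamped to the unit cube. Parameters are illustrative assumptions.

def make_clusters(n_clusters=4, points_per_cluster=50, dims=3,
                  spread=0.05, seed=42):
    """Return (points, labels): points in [0,1]^dims with cluster labels."""
    rng = random.Random(seed)  # fixed seed for reproducible conditions
    points, labels = [], []
    for c in range(n_clusters):
        # Keep centres away from the edges so clusters stay inside the cube.
        centre = [rng.uniform(0.2, 0.8) for _ in range(dims)]
        for _ in range(points_per_cluster):
            p = [min(1.0, max(0.0, rng.gauss(mu, spread))) for mu in centre]
            points.append(p)
            labels.append(c)
    return points, labels
```

Seeding the generator per condition would allow an experimenter to produce many statistically similar datasets, so that differences between interface conditions are not driven by the datasets themselves.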
Research Activities Report of the Research Institute of Electrical Communication, Tohoku University, No. 29 (FY 2022)
Departmental bulletin paper
Explaining Self-Motion Perception using Virtual Reality in Patients with Ocular Disease
Safe mobility requires accurate object and self-motion perception. This involves processing the retinal motion generated by optic flow (which changes with eye and head movements) and correctly integrating it with vestibular and proprioceptive cues. Poor sensory feedback about self-motion can lead to an increased risk of accidents, which impacts quality of life. This is particularly problematic for those with visual deficits, such as central or peripheral vision loss or impaired binocular vision. The expansion of healthcare into virtual reality (VR) has allowed the assessment of sensory and motor performance in a safe environment. An advantage of VR is its ability to generate vection (perceived illusory self-motion) and presence (the sense of being 'there'). However, a limitation is the potential to develop cybersickness.
Initially, the project examined how binocular vision influences vection in a virtual environment. Observers with or without stereopsis (the ability to judge depth binocularly) were asked to compare their perceptual experiences based on psychophysical judgements of magnitude estimation. The findings suggest that the absence of stereopsis impairs accurate judgement of self-motion and reduces perceived presence, although it was protective against cybersickness.
The project then examined the impact of central and peripheral vision loss on self-motion perception by comparing those with age-related macular degeneration (AMD) and glaucoma, respectively. The effects of these visual deficits on sensory conflicts involving visual-vestibular interactions were then assessed. Sensory conflict was imposed by altering the gain of simulated linear head position and angular head orientation to be either compatible or incompatible with head movement, in two separate experiments. Fixation was used to control gaze during changes in angular head orientation. Vection and presence were higher in those with AMD than in those with glaucoma, indicating the importance of regional specificity of visual deficits in self-motion perception.
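The gain manipulation described above can be sketched in a few lines: the virtual camera follows the physical head movement scaled by a gain factor, and any gain sufficiently far from 1.0 makes vision and the vestibular sense disagree about self-motion. The function names and the tolerance value are illustrative assumptions, not the study's actual parameters.

```python
# Hypothetical sketch of a visual-vestibular gain manipulation: the
# virtual camera displacement is the physical head displacement scaled
# by a gain. gain = 1.0 is compatible with head movement; other values
# impose a sensory conflict. Names and tolerance are assumptions.

def virtual_head_position(physical_displacement, gain):
    """Scale each axis of the physical head displacement by the gain."""
    return [gain * d for d in physical_displacement]

def is_conflicting(gain, tolerance=0.05):
    """A gain sufficiently far from 1.0 makes the visual self-motion
    signal disagree with the vestibular one."""
    return abs(gain - 1.0) > tolerance
```

The same scaling idea applies to angular orientation, with fixation used to control gaze while the gain on head rotation is varied.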
Across studies, vection and presence were predominantly visually mediated despite changes in visual-vestibular sensory conflict. The vestibular system, however, appeared to play a larger role in the development of cybersickness. The altered perception of self-motion may worsen mobility, particularly with disease progression. We therefore provide a framework and recommendations for a multidisciplinary, patient-centric model of care to maximise quality of life.