
    Assessing the feasibility of online SSVEP decoding in human walking using a consumer EEG headset.

    Background: Bridging the gap between laboratory brain-computer interface (BCI) demonstrations and real-life applications has gained increasing attention in translational neuroscience. An urgent need is to explore the feasibility of using a low-cost, easy-to-use electroencephalogram (EEG) headset to monitor individuals' EEG signals in their natural head and body positions and movements. This study aimed to assess the feasibility of using a consumer-level EEG headset to realize an online steady-state visual evoked potential (SSVEP)-based BCI during human walking. Methods: The study adopted a 14-channel Emotiv EEG headset to implement a four-target online SSVEP decoding system and used treadmill walking at speeds of 0.45, 0.89, and 1.34 meters per second (m/s) to induce walking locomotion. Seventeen participants were instructed to perform the online BCI tasks while standing or walking on the treadmill. To maintain a constant viewing distance to the visual targets, participants held the hand-grip of the treadmill during the experiment. Along with online BCI performance, the concurrent SSVEP signals were recorded for offline assessment. Results: Despite walking-related attenuation of the SSVEPs, the online BCI achieved an information transfer rate (ITR) above 12 bits/min during slow walking (below 0.89 m/s). Conclusions: SSVEP-based BCI systems are deployable to users during treadmill walking that mimics natural walking, rather than only in highly controlled laboratory settings. This study considerably promotes the use of consumer-level EEG headsets for real-life BCI applications.
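    For context, ITR in SSVEP BCI studies is commonly computed with the Wolpaw formula, which combines the number of targets, the per-selection accuracy, and the time per selection. The abstract does not state which variant, accuracy, or selection time produced the reported 12 bits/min, so the sketch below is only a generic illustration with assumed numbers.

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate in bits/min.

    n_targets: number of selectable stimuli (4 in the system described above).
    accuracy: per-selection classification accuracy, in (1/n_targets, 1].
    trial_seconds: time spent per selection, including gaze shifting.
    """
    if accuracy >= 1.0:
        bits_per_selection = math.log2(n_targets)
    else:
        bits_per_selection = (
            math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_targets - 1))
        )
    return bits_per_selection * 60.0 / trial_seconds

# Illustrative numbers only (the abstract does not report accuracy or trial length):
# 85% accuracy at 5.5 s per selection gives roughly 12.6 bits/min.
print(round(itr_bits_per_min(4, 0.85, 5.5), 1))
```

    Because the same accuracy yields very different ITRs depending on how the per-selection time is counted, such figures are only comparable across studies when the timing convention is stated.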

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world in a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
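    The claim that both translational and rotational body-based information are needed can be read as a path-integration (dead-reckoning) argument: keeping track of where you are requires integrating each step's rotation and translation. The sketch below is a minimal 2-D illustration of that update, not code from the study; all values are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0        # metres
    y: float = 0.0        # metres
    heading: float = 0.0  # radians

def integrate_step(pose: Pose, turn: float, distance: float) -> Pose:
    """One path-integration update: rotate by `turn`, then translate `distance`
    along the new heading. Dropping either input (as the tethered-HMD and
    desktop conditions effectively do) breaks the position estimate."""
    heading = pose.heading + turn
    return Pose(
        x=pose.x + distance * math.cos(heading),
        y=pose.y + distance * math.sin(heading),
        heading=heading,
    )

# Illustrative walk: two 1 m steps with a 90-degree left turn in between.
p = Pose()
p = integrate_step(p, 0.0, 1.0)
p = integrate_step(p, math.pi / 2, 1.0)
print(round(p.x, 2), round(p.y, 2))  # 1.0 1.0
```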

    An Omnidirectional System for Navigation in Virtual Environments

    Virtual Reality (VR) is one of the newest technological domains, with revolutionary applicability for tomorrow's Future Internet, including visions of the Internet of Things. The sensation of total immersion in a Virtual Environment (VE) is still an unresolved problem. Our work therefore proposes a new omnidirectional locomotion interface for navigation in VEs. The novel interface was built from an ordinary unidirectional treadmill, a new mechanical device, a motion capture system to track human walking, and a control method based on artificial intelligence techniques. A neural network is used to predict the motion of the new interface from the user's body motion and information about the VE. The feasibility of the proposed system is verified through experiments, and the preliminary results suggest that the new interface performs very well in a simple VE under our control method.
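    The control method described here hinges on a neural network that maps tracked body motion (plus VE information) to motion commands for the interface. The abstract does not give the network's inputs or architecture, so the sketch below is only a generic regression setup under assumed feature and output layouts (a small PyTorch MLP trained on stand-in data), not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical feature layout: tracked pelvis/foot velocities plus the heading of
# the nearest virtual path segment (6 body-motion values + 2 VE values).
N_FEATURES, N_OUTPUTS = 8, 2   # assumed outputs: belt speed and platform rotation rate

class LocomotionPredictor(nn.Module):
    """Small regression MLP mapping tracked body motion (plus VE context)
    to treadmill control commands; all sizes are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, N_OUTPUTS),
        )

    def forward(self, x):
        return self.net(x)

model = LocomotionPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data (real training would use
# motion-capture frames paired with recorded treadmill commands).
features = torch.randn(64, N_FEATURES)
commands = torch.randn(64, N_OUTPUTS)
loss = loss_fn(model(features), commands)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```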

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and design decisions. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces and models as proxies of real ones without real-world hazards or dependence on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. This thesis therefore investigates which VR interaction forms are reasonable (i.e., offer adequate dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, empirical evidence is lacking. Additionally, this thesis provides a survey of the different interactions. After considering human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and user-centered development and evaluation are used as classifiers. The following chapters are motivated by exploratory approaches at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion). They report empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities as well as challenges for further improvement are derived. On the basis of this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intent of human motions to provide inputs to VR systems. Thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes head gestures in real time on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented. For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation has two main contributions towards improving the naturalism of interaction in VR systems. Firstly, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Secondly, the results of the user studies of the presented VR interfaces also serve as guidelines for VR researchers and engineers in designing future VR systems.
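    The head gesture recognizer above is built on Cascaded Hidden Markov Models over HMD tracking data. As a rough illustration of the underlying idea rather than the dissertation's cascaded architecture, the sketch below trains one Gaussian HMM per gesture and classifies a new trace by maximum log-likelihood using the hmmlearn package; the gesture vocabulary, yaw/pitch-velocity features, and all sizes are assumptions made for illustration.

```python
import numpy as np
from hmmlearn import hmm  # assumes the hmmlearn package is installed

GESTURES = ["nod", "shake"]   # illustrative vocabulary, not the dissertation's
N_STATES, N_FEATURES = 3, 2   # hidden states per gesture; yaw/pitch velocity features

def train_gesture_models(examples_by_gesture):
    """Fit one Gaussian HMM per gesture from lists of (T, N_FEATURES) sequences."""
    models = {}
    for name, seqs in examples_by_gesture.items():
        X = np.concatenate(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, sequence):
    """Pick the gesture whose HMM assigns the sequence the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))

# Stand-in data: random yaw/pitch velocity traces instead of real HMD tracking.
rng = np.random.default_rng(0)
examples = {
    "nod":   [rng.normal(0, [0.1, 1.0], size=(30, N_FEATURES)) for _ in range(5)],
    "shake": [rng.normal(0, [1.0, 0.1], size=(30, N_FEATURES)) for _ in range(5)],
}
models = train_gesture_models(examples)
test = rng.normal(0, [0.1, 1.0], size=(30, N_FEATURES))  # nod-like trace
print(classify(models, test))  # expected to print 'nod'
```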

    Virtual reality-based assessment and rehabilitation of functional mobility

    The advent of virtual reality (VR) as a tool for real-world training dates back to the mid-twentieth century and the early years of driving and flight simulators. These simulation environments, while far below the quality of today's visual displays, proved advantageous to the learner because of the safe training conditions they provided. More recently, such training environments have proven beneficial in the transfer of user-learned skills from the simulated environment to the real world [5, 31, 48, 51, 57]. Of course, the VR technology of today has come a long way. Contemporary displays boast high resolution, wide-angle fields of view, and increased portability. This has led to the evolution of new VR research and training applications in many different arenas, several of which are covered in other chapters of this book. This is true of clinical assessment and rehabilitation as well, as the field has recognized the potential advantages of incorporating VR technologies into patient training for almost 20 years [7, 10, 18, 45, 78].

    Locomotion in virtual reality in full space environments

    Virtual reality is a technology that allows the user to explore and interact with a virtual environment in real time as if they were there. It is used in various fields such as entertainment, education, and medicine because of its immersion and ability to represent reality. Still, problems such as virtual simulation sickness and a lack of realism make this technology less appealing. Locomotion in virtual environments is one of the main factors responsible for an immersive and enjoyable virtual reality experience. Several locomotion methods have been proposed; however, they have flaws that end up negatively affecting the experience. This study compares natural locomotion in complete spaces with joystick locomotion and natural locomotion in impossible spaces through three tests, in order to identify the best locomotion method in terms of immersion, realism, usability, spatial knowledge acquisition, and level of virtual simulation sickness. The results show that natural locomotion is the method that most positively influences the experience when compared with the other locomotion methods.