
    An expandable walking in place platform

    The control of locomotion in 3D virtual environments should be an ordinary task from the user's point of view. Several navigation metaphors have been explored to control locomotion naturally, such as real walking, the use of simulators, and walking in place. These have shown that the more natural the approach used to control locomotion, the more immersed the user will feel inside the virtual environment. To overcome the high cost and complexity of most approaches in the field, we introduce a walking-in-place platform that identifies the orientation, displacement speed and lateral steps of a person mimicking a walking pattern. This information is detected without any additional sensors attached to the user's body. Our device is simple to mount, inexpensive, and allows almost natural use with lazy steps, leaving the hands free for other tasks. We also explore and test a passive tactile surface for safe use of our platform. The platform was conceived as an interface to control navigation in virtual environments and augmented reality. Extending our device and techniques, we elaborated a redirected walking metaphor to be used together with a cave automatic virtual environment (CAVE). Another metaphor allowed our technique to be used for navigating point clouds for data tagging. We tested our technique with two navigation modes: human walking and vehicle driving. In the human walking mode, the virtual orientation inhibits displacement when the user makes sharp turns. In vehicle mode, virtual orientation and displacement occur together, similar to driving a vehicle. We tested 52 subjects for navigation-mode preference and ability to use our device, and identified a preference for the vehicle driving mode. Statistical analysis revealed that users easily learned our technique for navigation. Users walked faster in vehicle mode, but human mode allowed more precise movement in the virtual test environment. The tactile platform proved to allow safe use of our device, being an effective and simple solution for the field. More than 200 people tested our device: at UFRGS Portas Abertas in 2013 and 2014, an event that presents academic work to the local community, and during 3DUI 2014, where our work was used together with a tool for point-cloud manipulation. The main contributions of our work are a new approach for walking-in-place detection that allows simple use with natural movements, is expandable to large areas (such as public spaces), and efficiently supplies orientation and speed for virtual environments or augmented reality with inexpensive hardware.
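The two navigation modes described above differ only in how turning couples to forward displacement. A minimal per-step update sketch follows; the sharp-turn threshold and the update model are assumptions for illustration, not values from the work:

```python
import math

def update_pose(x, y, heading, step_speed, turn_rate, dt, mode,
                sharp_turn_threshold=math.radians(45)):
    """Advance a virtual pose one time step under one of two modes.

    mode == "human":   sharp turns inhibit forward displacement, so the
                       user reorients roughly in place before moving on.
    mode == "vehicle": orientation and displacement are applied together,
                       as when steering a car.
    """
    heading += turn_rate * dt
    if mode == "human" and abs(turn_rate) > sharp_turn_threshold:
        speed = 0.0  # reorient in place during a sharp turn
    else:
        speed = step_speed
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```

Under this sketch, a gentle turn moves the avatar in both modes, while a turn above the threshold freezes displacement only in human mode.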

    Erg-O: ergonomic optimization of immersive virtual environments

    Interaction in VR involves large body movements, easily inducing fatigue and discomfort. We propose Erg-O, a manipulation technique that leverages visual dominance to maintain the visual location of the elements in VR while making them accessible from more comfortable locations. Our solution works in an open-ended fashion (no prior knowledge of the object the user wants to touch), can be used with multiple objects, and still allows interaction with any other point within the user's reach. We use optimization approaches to compute the best physical location to interact with each visual element, and space-partitioning techniques to distort the visual and physical spaces based on those mappings and allow multi-object retargeting. In this paper we describe the Erg-O technique, propose two retargeting strategies, and report the results of a user study on 3D selection under different conditions, elaborating on their potential and application to specific usage scenarios.
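The core assignment step can be sketched as a small cost-minimization: each visual target gets a distinct physical proxy point that trades off comfort (distance to a rest pose) against the visual/physical offset the retargeting must hide. The greedy solver, the cost weights, and the point sets below are illustrative assumptions, not the paper's actual optimizer:

```python
def assign_proxies(visual_targets, candidates, rest_pose,
                   w_comfort=1.0, w_offset=0.5):
    """Assign each visual target a distinct physical proxy point.

    Cost per candidate = w_comfort * distance-to-rest-pose
                       + w_offset  * visual/physical offset.
    Greedy, cheapest-first; a global optimizer would do better.
    """
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    free = list(candidates)
    mapping = {}
    for t in visual_targets:
        best = min(free, key=lambda p: w_comfort * dist(p, rest_pose)
                                       + w_offset * dist(p, t))
        mapping[tuple(t)] = best
        free.remove(best)  # each proxy serves one target
    return mapping
```

With weights like these, a nearby comfortable point wins over a distant point even when the distant one sits exactly under the visual target.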

    LoCoMoTe – a framework for classification of natural locomotion in VR by task, technique and modality

    Virtual reality (VR) research has provided overviews of locomotion techniques, how they work, their strengths, and the overall user experience. Considerable research has investigated new methodologies, particularly machine learning, to develop redirection algorithms. To best support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, it can be difficult to identify, select and compare relevant research without a pre-existing framework in an ever-growing research field. Therefore, this work aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to study methodology descriptions from 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions. The framework should be continually updated to categorise and systematise knowledge and to aid in identifying research gaps and discussions.

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for drift effects in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.

    Assisted Viewpoint Interaction for 3D Visualization

    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To effectively control the viewpoint, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted exploration of a virtual environment. Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design. The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences.

    Monte-Carlo Redirected Walking: Gain Selection Through Simulated Walks

    We present Monte-Carlo Redirected Walking (MCRDW), a gain-selection algorithm for redirected walking. MCRDW applies the Monte-Carlo method to redirected walking by simulating a large number of simple virtual walks, then inversely applying redirection to the virtual paths. Different gain levels and directions are applied, producing differing physical paths. Each physical path is scored, and the results are used to select the best gain level and direction. We provide a simple example implementation and a simulation-based study for validation. In our study, when compared with the next best technique, MCRDW reduced the incidence of boundary collisions by over 50% while reducing total rotation and position gain.
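The selection loop can be sketched as follows. The random-walk model, the scoring by boundary collisions, and the candidate gain levels are simplified assumptions standing in for the paper's actual simulation, which also varies redirection direction:

```python
import math
import random

def score_gain(rotation_gain, tracked_half_width=2.0, n_walks=200,
               n_steps=20, step_len=0.3, rng=None):
    """Score one rotation-gain level by simulating random virtual walks,
    mapping each to the physical path it would produce, and counting
    tracked-space boundary violations (lower is better)."""
    rng = rng or random.Random(0)  # seeded for repeatable scores
    collisions = 0
    for _ in range(n_walks):
        x = y = 0.0
        virt_heading = 0.0
        for _ in range(n_steps):
            virt_heading += rng.gauss(0.0, 0.3)  # simple random virtual walk
            # Inverse redirection: the physical heading is the virtual
            # heading scaled down by the rotation gain.
            phys_heading = virt_heading / rotation_gain
            x += step_len * math.cos(phys_heading)
            y += step_len * math.sin(phys_heading)
            if abs(x) > tracked_half_width or abs(y) > tracked_half_width:
                collisions += 1
                break
    return collisions

def best_gain(candidates=(0.8, 1.0, 1.25)):
    """Monte-Carlo gain selection: pick the candidate whose simulated
    physical paths hit the tracked-space boundary least often."""
    return min(candidates, key=score_gain)
```

Scoring each gain over many simulated walks rather than one trades computation for robustness: a gain that only helps on a lucky path will not win the aggregate score.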

    The OctaVis: a VR-device for rehabilitation and diagnostics of visuospatial impairments

    Dyck E. The OctaVis: a VR-device for rehabilitation and diagnostics of visuospatial impairments. Bielefeld: Universität Bielefeld; 2017