17 research outputs found

    Acceleration-level control of the CyberCarpet

    Full text link
    The CyberCarpet is an actuated platform that allows unconstrained locomotion of a walking user for VR exploration. The platform has two actuating devices (linear and angular) and the motion control problem is dual to that of nonholonomic wheeled mobile robots. The main control objective is to keep the walker close to the platform center. We first recall global kinematic control schemes developed at the velocity level, i.e., with the linear and angular velocities of the platform as input commands. Then, we use backstepping techniques and the theory of cascaded systems to move the design to control laws at the acceleration level. Acceleration control is better suited to taking into account the limitations imposed on the platform motion by the actuation system and/or the physiological bounds on the human walker. In particular, the availability of platform accelerations allows the analytical computation of the apparent accelerations felt by the user.
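    The abstract only outlines the approach; as a rough, hypothetical illustration of lifting a velocity-level law to the acceleration level (the backstepping idea mentioned above), the sketch below uses a simplified 1-D model of the walker's offset from the platform center, with invented gains k_p and k_d and an arbitrary acceleration bound. It is not the control law of the paper.

```python
import numpy as np

# Hypothetical 1-D toy model: x is the walker's offset from the platform
# center, v is the platform's linear velocity (the actuated quantity).
# Velocity-level law: v_des = k_p * x, so the belt pulls the walker back
# toward the center (x_dot = -v).  Acceleration-level law (backstepping
# flavour): command a = dv_des/dt + k_d * (v_des - v), i.e. track the
# velocity-level command instead of applying it directly.

k_p, k_d = 1.0, 4.0            # invented gains
dt, T = 0.01, 10.0

x, v = 1.0, 0.0                # initial offset [m], platform velocity [m/s]
v_des_prev = k_p * x

for _ in range(int(T / dt)):
    x_dot = -v                 # walker is carried back as the belt moves
    v_des = k_p * x
    v_des_dot = (v_des - v_des_prev) / dt
    a = v_des_dot + k_d * (v_des - v)   # acceleration-level command
    a = np.clip(a, -2.0, 2.0)           # arbitrary actuation/comfort bound
    v += a * dt
    x += x_dot * dt
    v_des_prev = v_des

print(f"final offset: {x:.4f} m, final platform velocity: {v:.4f} m/s")
```

    Bounding the acceleration, as in the clipping step above, is where the physiological and actuation limits discussed in the abstract would enter.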

    Feedback/Feedforward Schemes for Motion Control of the CyberCarpet

    Full text link

    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    Get PDF
    This paper presents a new real-scale 3D system and sums up first results concerning multi-modal navigation and interaction interfaces. The work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) and user gesture. A study of the system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cybersickness than device-based interfaces. Gesture-based interfaces are therefore more efficient than device-based interfaces.

    The benefits of using a walking interface to navigate virtual environments

    No full text
    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (move by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles, even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.

    Calibrating Dynamic Pedestrian Route Choice with an Extended Range Telepresence System

    Get PDF
    In this contribution we present the results of a pilot study in which an Extended Range Telepresence System is used to calibrate parameters of a pedestrian model for simulation. The parameters control a model element that is intended to make simulated agents walk in the direction of the estimated smallest remaining travel time. We use this, first, to show that an Extended Range Telepresence System can serve for such a task in general and, second, to find simulation parameters that yield realistic results.
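    The abstract does not detail how the remaining travel time is estimated; the sketch below is a minimal, hypothetical illustration of the steering rule "walk in the direction of the smallest estimated remaining travel time", using an invented grid, goal, and a unit-cost breadth-first travel-time field rather than anything taken from the paper.

```python
from collections import deque

# Hypothetical grid world: 0 = walkable, 1 = obstacle.
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
rows, cols = len(grid), len(grid[0])
goal = (0, 4)

# Breadth-first search from the goal gives the remaining travel time per
# cell (unit cost per step).
INF = float("inf")
ttime = [[INF] * cols for _ in range(rows)]
ttime[goal[0]][goal[1]] = 0
queue = deque([goal])
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
            if ttime[nr][nc] > ttime[r][c] + 1:
                ttime[nr][nc] = ttime[r][c] + 1
                queue.append((nr, nc))

# A simulated agent repeatedly steps toward the neighbour with the
# smallest remaining travel time (greedy descent on the field).
pos = (4, 0)
path = [pos]
while pos != goal:
    r, c = pos
    pos = min(
        ((r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
         if 0 <= r + dr < rows and 0 <= c + dc < cols),
        key=lambda p: ttime[p[0]][p[1]],
    )
    path.append(pos)
print(path)
```

    In the study, the free parameters of such a model element are what the telepresence experiments are used to calibrate.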

    An expandable walking in place platform

    Get PDF
    The control of locomotion in 3D virtual environments should be an ordinary task from the user's point of view. Several navigation metaphors have been explored to control locomotion naturally, such as real walking, the use of simulators, and walking in place. These have shown that the more natural the approach used to control locomotion, the more immersed the user feels in the virtual environment. To overcome the high cost and complexity of most approaches in the field, we introduce a walking-in-place platform that is able to identify orientation, displacement speed, and lateral steps of a person mimicking a walking pattern. This information is detected without additional sensors attached to the user's body. Our device is simple to mount, inexpensive, and allows almost natural use, with lazy steps, thus leaving the hands free for other tasks. We also explore and test a passive tactile surface for safe use of our platform. The platform was conceived as an interface to control navigation in virtual environments and augmented reality. Extending our device and techniques, we elaborated a redirected walking metaphor to be used together with a cave automatic virtual environment (CAVE). Another metaphor allowed the use of our technique for navigating in point clouds for data tagging. We tested our technique with two navigation modes: human walking and vehicle driving. In the human walking mode, the virtual orientation inhibits displacement when the user makes sharp turns. In vehicle mode, virtual orientation and displacement occur together, closer to driving a vehicle. We ran tests with 52 subjects to detect navigation-mode preferences and the ability to use our device, and identified a preference for the vehicle driving mode. Statistical analysis revealed that users easily learned to use our technique for navigation. Users walked faster in vehicle mode, but human mode allowed more precise walking in the virtual test environment. The tactile platform proved to allow safe use of our device, being an effective and simple solution for the field. More than 200 people tested our device: at UFRGS Portas Abertas in 2013 and 2014, an event that presents academic work to the local community, and during 3DUI 2014, where our work was used together with a tool for point cloud manipulation. The main contributions of our work are a new approach for the detection of walking in place that is simple to use, preserves natural movements, is expandable to large areas (such as public spaces), and efficiently supplies orientation and speed for use in virtual environments or augmented reality with inexpensive hardware.
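    As a hypothetical illustration of the two navigation modes described above (not taken from the thesis), the sketch below maps an estimated step speed and orientation change rate to a virtual motion command, inhibiting displacement during sharp turns in "human" mode and combining translation and rotation in "vehicle" mode; the threshold and the exact mapping are invented.

```python
def navigation_command(step_speed, yaw_rate, mode="vehicle",
                       sharp_turn_threshold=1.0):
    """Map walking-in-place measurements to a virtual navigation command.

    step_speed : forward speed estimated from the stepping pattern [m/s]
    yaw_rate   : rate of change of the user's orientation [rad/s]
    mode       : "human" or "vehicle"

    The mapping and the threshold are invented for illustration; the
    thesis only states that sharp turns inhibit displacement in the
    human-walking mode, while orientation and displacement are combined
    in the vehicle-driving mode.
    """
    if mode == "human":
        # Attenuate forward motion as the turn sharpens; a turn at or
        # above the threshold stops displacement entirely.
        scale = max(0.0, 1.0 - abs(yaw_rate) / sharp_turn_threshold)
        return step_speed * scale, yaw_rate
    # Vehicle mode: translate and rotate at the same time.
    return step_speed, yaw_rate

# Example: moderate step speed while turning at 0.6 rad/s.
print(navigation_command(1.2, 0.6, mode="human"))    # (0.48, 0.6) -- reduced
print(navigation_command(1.2, 0.6, mode="vehicle"))  # (1.2, 0.6)  -- unchanged
```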

    Interaction between real and virtual humans during walking: perceptual evaluation of a simple device

    Get PDF
    Validating that a real user can correctly perceive the motion of a virtual human is a prerequisite for realistic interactions between real and virtual humans during navigation tasks in virtual reality. In this paper we focus on collision avoidance tasks. Previous work established that real humans are able to accurately estimate others' motion and to avoid collisions with anticipation. Our main contribution is a perceptual evaluation of a simple virtual reality system. The goal is to assess whether real humans are also able to accurately estimate a virtual human's motion before collision avoidance. Results show that, even through a simple system, users are able to correctly evaluate the situation of an interaction from a qualitative point of view. In particular, as in real interactions, users accurately decide whether or not they should give way to the virtual human. From a quantitative point of view, however, it is harder for users to determine whether they will collide with the virtual human. On one hand, deciding whether to give way is a two-choice problem. On the other hand, detecting a future collision requires determining whether certain visual variables lie within some interval. We discuss this problem in terms of the bearing angle.
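    The paper frames collision detection in terms of the bearing angle; as a hedged illustration of that classical cue (a roughly constant bearing with decreasing range suggests a collision course), the sketch below checks the drift of the bearing angle between two walkers on straight-line trajectories. The thresholds and trajectories are invented and not taken from the study.

```python
import numpy as np

def bearing_angle(p_self, v_self, p_other):
    """Angle between the walker's velocity direction and the line of
    sight to the other agent (the 'bearing angle')."""
    los = p_other - p_self
    return np.arctan2(los[1], los[0]) - np.arctan2(v_self[1], v_self[0])

def collision_course(p_self, v_self, p_other, v_other,
                     horizon=5.0, dt=0.1, drift_tol=0.05):
    """Flag a probable future collision using the 'constant bearing,
    decreasing range' cue: if the bearing angle barely drifts while the
    range shrinks, the two agents are converging.  Thresholds are
    illustrative only."""
    angles, ranges = [], []
    for k in range(int(horizon / dt)):
        t = k * dt
        a = bearing_angle(p_self + v_self * t, v_self, p_other + v_other * t)
        angles.append(np.arctan2(np.sin(a), np.cos(a)))  # wrap to [-pi, pi]
        ranges.append(np.linalg.norm((p_other + v_other * t)
                                     - (p_self + v_self * t)))
    drift = abs(angles[-1] - angles[0]) / horizon        # rad/s
    closing = ranges[-1] < ranges[0]
    return closing and drift < drift_tol

# Two walkers on crossing, collision-bound trajectories.
p1, v1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p2, v2 = np.array([5.0, -5.0]), np.array([0.0, 1.0])
print(collision_course(p1, v1, p2, v2))   # True: constant bearing, closing range
```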

    Contributions to shared control and coordination of single and multiple robots

    Get PDF
    The work presented in this habilitation deals with the interfacing of a human operator with one or several semi-autonomous robots, also known as the "shared control" problem. The first chapter deals with the possibility of providing visual/vestibular cues to a human operator for the remote control of mobile robots. The second chapter addresses the more classical problem of providing the operator with visual cues or haptic feedback for the control of one or several mobile robots (in particular quadrotor UAVs). The third chapter focuses on some of the algorithmic challenges encountered when developing multi-robot coordination techniques. The fourth chapter introduces a new mechanical design for an over-actuated quadrotor UAV, with the long-term goal of obtaining 6 degrees of freedom on a classical (but under-actuated) quadrotor platform. Finally, the fifth chapter presents a general framework for active vision in which, by optimizing the camera motion, the performance (in terms of convergence speed and final accuracy) of vision-based estimation processes can be optimized online.

    Design of variable-friction devices for shoe-floor contact

    Get PDF
    In rehabilitation training, high-fidelity simulation environments are needed for reproducing the effects of slippery surfaces, in which potential balance failure conditions can be reproduced on demand. Motivated by these requirements, this article considers the design of variable-friction devices for use in the context of human walking on surfaces in which the coefficient of friction can be controlled dynamically. Various designs are described, aiming at rendering low-friction shoe-floor contact, associated with slippery surfaces such as ice, as well as higher-friction values more typical of surfaces such as pebbles, sand, or snow. These designs include an array of omnidirectional rolling elements, a combination of low- and high-friction coverings whose contact pressure distribution is controlled, and modulation of low-frequency vibration normal to the surface. Our experimentation investigated the static coefficient of friction attainable with each of these designs. Rolling elements were found to be the most slippery, providing a coefficient of friction as low as 0.03, but with significant drawbacks from the perspective of our design objectives. A controlled pressure distribution of low- and high-friction coverings allowed for a minimum coefficient of friction of 0.06. The effects of vibration amplitude and frequency on sliding velocity were also explored. Increases in amplitude resulted in higher velocities, but vibration frequencies greater than 25 Hz reduced sliding velocities. To meet our design objectives, a novel approach involving a friction-variation mechanism, embedded in a shoe sole, is proposed
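    As a reminder of how the reported friction coefficients relate to slipping, the short check below applies the standard Coulomb condition (slip when the tangential load exceeds the coefficient of friction times the normal load). The loads and the 0.5 "dry floor" value are illustrative assumptions; only the 0.03 and 0.06 values come from the article.

```python
def slips(tangential_force, normal_force, mu):
    """Coulomb condition: the shoe slips when the tangential (shear)
    load exceeds the friction limit mu * normal load."""
    return tangential_force > mu * normal_force

# Illustrative loads for a walking step: ~700 N normal, ~140 N shear,
# i.e. a required coefficient of friction of roughly 0.2.
normal, shear = 700.0, 140.0
for mu in (0.03, 0.06, 0.5):  # rolling elements, pressure-controlled coverings, dry floor
    print(f"mu = {mu:.2f}: {'slips' if slips(shear, normal, mu) else 'grips'}")
```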