    Two-Handed Gesture Recognition

    Get PDF
    Nowadays, computer interaction is mostly done using dedicated devices, but gestures are an easy means of expression between humans that could be used to communicate with computers in a more natural manner. Most current research on hand gesture recognition for Human-Computer Interaction deals with one-handed gestures, yet two-handed gestures can provide more efficient and easier-to-use interfaces. This is particularly true of the two-handed gestures we perform in the physical world, such as gestures to manipulate objects. It would be very valuable to let the user interact with virtual objects in the same way that he/she interacts with physical ones. This paper presents a database of two-handed gestures for manipulating virtual objects on the screen (mostly rotations) and some recognition experiments using Hidden Markov Models (HMMs). The results obtained with this state-of-the-art algorithm are very encouraging. These gestures could improve the interaction performance between the user and virtual reality applications.
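
    As a rough illustration of the recognition approach named in this abstract, the hedged sketch below trains one Gaussian HMM per gesture class and classifies a new sequence by log-likelihood. It uses the hmmlearn library; the feature layout (2D hand positions) and the class structure are assumptions for illustration, not the authors' setup.

```python
# Minimal sketch of HMM-based gesture classification (not the paper's code).
# Assumes each gesture sample is a (T, 2) array of hand-position features.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_gesture_models(training_data):
    """Fit one Gaussian HMM per gesture class.

    training_data: dict mapping gesture name -> list of (T_i, 2) arrays.
    """
    models = {}
    for name, sequences in training_data.items():
        X = np.concatenate(sequences)           # stack all frames
        lengths = [len(s) for s in sequences]   # per-sequence lengths
        m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, sequence):
    """Return the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))
```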

    Real‐time interaction of virtual and physical objects in mixed reality applications

    Get PDF
    We present a real-time method for computing the mechanical interaction between real and virtual objects in an augmented reality environment. Using model order reduction methods we are able to estimate the physical behavior of deformable objects in real time, with the precision of a high-fidelity solver but working at the speed of a video sequence. We merge tools of machine learning, computer vision, and computer graphics in a single application to describe the behavior of deformable virtual objects, allowing the user to interact with them in a natural way. Three examples are provided to test the performance of the method. Ministerio de Ciencia e Innovación, Grant/Award Number: CICYT-DPI2017-85139-C2-1-
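
    For readers unfamiliar with model order reduction, the sketch below shows the common proper orthogonal decomposition (POD) recipe: build a basis from simulation snapshots, then Galerkin-project the system onto it so each step solves a tiny reduced problem. It is a generic illustration under assumed linear dynamics, not the paper's method.

```python
# Minimal sketch of projection-based model order reduction (POD).
import numpy as np

def pod_basis(snapshots, k):
    """Truncated SVD basis from a (n_dof, n_snapshots) snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]                    # (n_dof, k) reduced basis

def reduced_step(V, K, f):
    """Galerkin-project a linear solve K u = f onto the basis V."""
    K_r = V.T @ K @ V                  # (k, k) reduced stiffness
    f_r = V.T @ f                      # (k,) reduced load
    q = np.linalg.solve(K_r, f_r)      # cheap reduced solve
    return V @ q                       # lift back to full coordinates
```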

    Direct interaction with large displays through monocular computer vision

    Get PDF
    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction compared to traditional desktop monitors. The computer mouse remains the most common input tool for users to interact with these larger displays. Much effort has been made to make this interaction more natural and more intuitive for the user. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to the user and allows them to interact at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used for depth information recovery. This thesis aims to investigate the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the interaction area available, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing to real-world objects are explored. Studies were conducted to investigate the way humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategy used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated through various user studies. Results from this thesis suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can assist designers in developing more accurate and natural interactive systems that make use of humans' natural pointing behaviours.
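
    The virtual-touchscreen idea can be pictured as simple ray geometry: cast a ray from the user's eye through the hand, intersect it with a plane placed between user and display, and map the hit point to pixels. The sketch below is an assumed formulation for illustration, not the thesis' actual model.

```python
# Minimal sketch: eye-through-hand ray intersected with a virtual plane.
import numpy as np

def ray_plane(eye, hand, plane_point, plane_normal):
    """Intersect the eye->hand ray with the virtual touchscreen plane."""
    d = hand - eye
    denom = d @ plane_normal
    if abs(denom) < 1e-9:
        return None                          # ray parallel to plane
    t = ((plane_point - eye) @ plane_normal) / denom
    return eye + t * d if t > 0 else None    # hit point, if in front

def plane_to_pixels(hit, origin, u_axis, v_axis, size_px):
    """Map a plane hit point to display pixel coordinates."""
    rel = hit - origin
    u = rel @ u_axis / (u_axis @ u_axis)     # normalised plane coordinates
    v = rel @ v_axis / (v_axis @ v_axis)
    return int(u * size_px[0]), int(v * size_px[1])
```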

    Virtual Reality-Based Interface for Advanced Assisted Mobile Robot Teleoperation

    Full text link
    [EN] This work proposes a new interface for the teleoperation of mobile robots based on virtual reality that allows a natural and intuitive interaction and cooperation between the human and the robot, which is useful for many situations, such as inspection tasks, the mapping of complex environments, etc. Contrary to previous works, the proposed interface does not seek the realism of the virtual environment but provides all the minimum necessary elements that allow the user to carry out the teleoperation task in a more natural and intuitive way. The teleoperation is carried out in such a way that the human user and the mobile robot cooperate synergistically to properly accomplish the task: the user guides the robot through the environment in order to benefit from the intelligence and adaptability of the human, whereas the robot is able to automatically avoid collisions with the objects in the environment in order to benefit from its fast response. The latter is carried out using the well-known potential field-based navigation method. The efficacy of the proposed method is demonstrated through experimentation with the Turtlebot3 Burger mobile robot in both simulation and real-world scenarios. In addition, usability and presence questionnaires were also conducted with users of different ages and backgrounds to demonstrate the benefits of the proposed approach. In particular, the results of these questionnaires show that the proposed virtual reality-based interface is intuitive, ergonomic and easy to use. This research was funded by the Spanish Government (Grant PID2020-117421RB-C21 funded by MCIN/AEI/10.13039/501100011033) and by the Generalitat Valenciana (Grant GV/2021/181). Solanes, JE.; Muñoz García, A.; Gracia Calandin, LI.; Tornero Montserrat, J. (2022). Virtual Reality-Based Interface for Advanced Assisted Mobile Robot Teleoperation. Applied Sciences. 12(12):1-22. https://doi.org/10.3390/app12126071
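
    The collision-avoidance behaviour rests on the textbook potential-field method, which the sketch below illustrates: an attractive pull toward the goal plus repulsive pushes from nearby obstacles combine into a velocity command. Gains, influence distance and the speed cap are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of classic potential-field navigation in 2D.
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Return a 2D velocity command for the robot at `pos`."""
    force = k_att * (goal - pos)                 # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                           # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    speed = np.linalg.norm(force)
    # Cap the commanded speed (0.2 m/s here is an arbitrary example value).
    return force / speed * min(speed, 0.2) if speed > 0 else force
```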

    A Taxonomy of Freehand Grasping Patterns in Virtual Reality

    Get PDF
    Grasping is the most natural and primary interaction paradigm people perform every day; it allows us to pick up and manipulate the objects around us, whether drinking a cup of coffee or writing with a pen. Grasping has been highly explored in real environments, to understand and structure the way people grasp and interact with objects, by presenting categories, models and theories of grasping. Due to the complexity of the human hand, classifying grasping knowledge to provide meaningful insights is a challenging task, which has led researchers to develop grasp taxonomies that provide guidelines for emerging grasping work (such as in anthropology, robotics and hand surgery) in a systematic way. While this body of work exists for real grasping, the nuances of how grasping transfers to virtual environments are unexplored. The emerging development of robust hand-tracking sensors for virtual devices now allows the development of grasp models that enable VR to simulate real grasping interactions. However, present work has not yet explored the differences and nuances between virtual grasping and real object grasping, which means that virtual systems that build grasping models on real grasping knowledge might make assumptions, yet to be proven true or untrue, about the way users intuitively grasp and interact with virtual objects. To address this, this thesis presents the first user elicitation studies to explore grasping patterns directly in VR. The first study presents the main similarities and differences between real and virtual object grasping; the second study furthers this by exploring how virtual object shape influences grasping patterns; the third study focuses on visual thermal cues and how they influence grasp metrics; and the fourth study focuses on understanding other object characteristics, such as stability and complexity, and how they influence grasps in VR. To provide structured insights on grasping interactions in VR, the results are synthesized in the first VR Taxonomy of Grasp Types, developed following current methods for developing grasping and HCI taxonomies and re-iterated to present an updated and more complete taxonomy. Results show that users appear to mimic real grasping behaviour in VR; however, they also illustrate that users have difficulty estimating object size, and that a generally lower variability of grasp types is used. The taxonomy shows that only five grasps account for the majority of grasp data in VR, which can be used by computer systems aiming to achieve natural and intuitive interactions at lower computational cost. Further, findings show that virtual object characteristics such as shape, stability and complexity, as well as visual cues for temperature, influence grasp metrics such as aperture, category, type, location and dimension. These changes in grasping patterns, together with virtual object categorisation methods, can be used to inform design decisions when developing intuitive interactions and virtual objects and environments, thereby taking a step forward in achieving natural grasping interaction in VR.
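
    To make one of the grasp metrics above concrete, the sketch below computes aperture as the thumb-tip to index-tip distance from tracked joint positions, with a coarse, illustrative split into precision and power grips. The joint names and the threshold are assumptions, not the thesis' definitions.

```python
# Minimal sketch of a grasp aperture metric from tracked hand joints.
import numpy as np

def grasp_aperture(joints):
    """joints: dict mapping joint name -> (3,) position in metres."""
    return float(np.linalg.norm(joints["thumb_tip"] - joints["index_tip"]))

def classify_grip(aperture, threshold=0.06):
    """Coarse precision/power split by aperture (threshold is illustrative)."""
    return "precision" if aperture < threshold else "power"
```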

    Addressing the problem of Interaction in fully immersive Virtual Environments: from raw sensor data to effective devices

    Get PDF
    Immersion into Virtual Reality is a perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system with images, sound or other stimuli that provide an engrossing total environment. The use of technological devices such as stereoscopic cameras, head-mounted displays, tracking systems and haptic interfaces allows for user experiences providing a physical feeling of being in a realistic world, and the term “immersion” is a metaphoric use of the experience of submersion applied to representation, fiction or simulation. One of the main peculiarities of fully immersive virtual reality is that it enhances the simple passive viewing of a virtual environment with the ability to manipulate virtual objects inside it. This thesis project investigates such interfaces and metaphors for interaction and manipulation tasks. In particular, the research activity conducted allowed the design of a thimble-like interface that can be used to recognize, in real time, the orientation of the human hand and to infer a simplified but effective model of the hand's relative motion and gesture. Inside the virtual environment, users provided with the developed system will therefore be able to operate with natural hand gestures in order to interact with the scene; for example, they could perform positioning tasks by moving, rotating and resizing existing objects, or create new ones from scratch. This approach is particularly suitable when the user needs to operate in a natural way, performing smooth and precise movements. Possible industrial applications of the system include immersive design, in which the user performs Computer-Aided Design (CAD) while totally immersed in a virtual environment, and operator training, in which the user is trained on a 3D model to assemble or disassemble complex mechanical machinery following predefined sequences. The thesis has been organized around the following project plan:
    - Collection of the relevant state of the art
    - Evaluation of design choices and alternatives for the interaction hardware
    - Development of the necessary embedded firmware
    - Integration of the resulting devices in a complex interaction test-bed
    - Development of demonstrative applications implementing the device
    - Implementation of advanced haptic feedback
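
    As a hedged illustration of how a thimble-like device might recover hand orientation from inertial data, the sketch below implements a standard complementary filter that blends integrated gyroscope rates with accelerometer tilt. This is a generic technique chosen for illustration; the thesis' firmware may work differently.

```python
# Minimal sketch of a complementary filter for pitch/roll estimation.
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One filter step. gyro: (p, q, r) in rad/s; accel: (ax, ay, az) in g."""
    ax, ay, az = accel
    # Tilt angles implied by gravity alone (noisy but drift-free).
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll_acc = math.atan2(ay, az)
    # Blend: trust the gyro short-term, the accelerometer long-term.
    pitch = alpha * (pitch + gyro[1] * dt) + (1 - alpha) * pitch_acc
    roll = alpha * (roll + gyro[0] * dt) + (1 - alpha) * roll_acc
    return pitch, roll
```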

    Natural Interaction in Virtual Environments for the WEB Platform

    Get PDF
    Although many interaction devices have been developed to enhance the way we interact with virtual environments, Natural Interaction (NI) is still considered the most efficient. Human communication is naturally composed of gestures, expressions and movements, so the interaction process becomes more intuitive: it eliminates the cognitive effort of learning new commands to perform tasks. World Wide Web (WEB) applications are currently the most used platform for creating new solutions; however, their main form of interaction still occurs through traditional devices such as keyboard and mouse. This work therefore presents an experiment in Natural Interaction in virtual environments for the WEB platform, in an interactive environment using Web Graphics Library (WebGL) technology and a digital camera. Visual computing techniques are used to read and interpret users' movements in real time. The validation of this solution involved students from the computing courses of a higher-education institution. During the tests, students interacted through natural movements, manipulating objects in a Virtual Reality (VR) environment in a WEB browser. The following criteria were evaluated: gesture processing, client-side application performance, movement accuracy and usability.
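
    The camera side of such a pipeline can be illustrated with a few lines of OpenCV: frame differencing isolates the moving region, and its centroid becomes an input coordinate for the virtual scene. This Python sketch is an assumed stand-in for illustration; the original work runs in the browser with WebGL and is not reproduced here.

```python
# Minimal sketch: track the centroid of motion between webcam frames.
import cv2

cap = cv2.VideoCapture(0)                      # default webcam
ok, prev = cap.read()
if not ok:
    raise RuntimeError("camera not available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)             # motion = change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] > 0:                     # centroid of the moving region
        cx = moments["m10"] / moments["m00"]
        cy = moments["m01"] / moments["m00"]
        # (cx, cy) could be forwarded to the scene to drive a virtual object.
    prev = gray
```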

    An Introduction to 3D User Interface Design

    Get PDF
    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
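
    Of the three task categories listed, selection is the easiest to make concrete: ray-casting, a widely used selection technique in this literature, intersects a pointing ray with the scene and picks the nearest hit. The sketch below uses bounding spheres as an assumed, simplified scene representation.

```python
# Minimal sketch of ray-casting selection against bounding spheres.
import numpy as np

def pick(ray_origin, ray_dir, objects):
    """Return the nearest object hit by the pointing ray, or None.

    objects: list of (name, centre (3,), radius) bounding spheres.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    best, best_t = None, float("inf")
    for name, centre, radius in objects:
        oc = ray_origin - centre
        b = oc @ ray_dir
        c = oc @ oc - radius * radius
        disc = b * b - c                  # quadratic discriminant
        if disc < 0:
            continue                      # ray misses this sphere
        t = -b - np.sqrt(disc)            # nearest intersection distance
        if 0 < t < best_t:
            best, best_t = name, t
    return best
```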

    Maps, agents and dialogue for exploring a virtual world

    Get PDF
    In previous years we have been involved in several projects in which users (or visitors) had to find their way in information-rich virtual environments. 'Information-rich' means that the users do not know beforehand what is available in the environment or where to go in the environment to find the information; moreover, users or visitors do not necessarily know exactly what they are looking for. Information-rich also means that the information may change over time: a second visit to the same environment will require different behavior from the visitor in order to obtain information similar to that available during a previous visit. In this paper we report on two projects and discuss our attempts to generalize from the different approaches and application domains to obtain a library of methods and tools for designing and implementing intelligent agents that inhabit virtual environments and support the navigation of the user/visitor.
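
    One minimal way to picture an agent supporting navigation is shortest-path guidance over a map of named locations, as in the assumed sketch below; the agent could then narrate the returned route in dialogue. This only illustrates the guidance idea, not the projects' implementation.

```python
# Minimal sketch of a guide agent: BFS route over a map of locations.
from collections import deque

def guide(world_map, start, target):
    """Breadth-first search over an adjacency dict; returns a path of rooms."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path                   # route the agent can narrate
        for neighbour in world_map.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None                           # target unreachable
```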