    Demo of Gaze Controlled Flying

    Developing a control paradigm for unmanned aerial vehicles (UAVs) is a new challenge for HCI. The demo explores how to use gaze as input for locomotion in 3D: a low-cost drone is controlled by tracking the user's point of regard (gaze) on a live video stream from the UAV.
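
    As a rough illustration of such a control law, the sketch below maps a normalized gaze point on the video frame to drone velocities. The demo's actual mapping is not described, so the dead zone, speed scaling, and the gaze_to_velocity helper are all illustrative assumptions:

```python
def gaze_to_velocity(gx, gy, dead_zone=0.1, max_speed=50):
    """Map a normalized gaze point (0..1 across the video frame) to a
    yaw rate and vertical speed, with a central dead zone for hovering."""
    dx, dy = gx - 0.5, 0.5 - gy          # re-center: (0, 0) = frame middle
    yaw = 0 if abs(dx) < dead_zone else int(dx * 2 * max_speed)
    up_down = 0 if abs(dy) < dead_zone else int(dy * 2 * max_speed)
    return yaw, up_down

# Looking at the right edge of the feed yaws right at full rate:
print(gaze_to_velocity(1.0, 0.5))        # -> (50, 0)
```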

    Human-Robot Interaction Based on Gaze Gestures for the Drone Teleoperation

    Teleoperation is widely used to perform tasks in dangerous or unreachable environments by replacing humans with controlled agents, and human-robot interaction (HRI) is central to it. Conventional HRI input devices include the keyboard, mouse, and joystick. However, these are unsuitable for users with motor disabilities, and operating several hand-held input devices simultaneously also increases the mental workload of able-bodied users. Hence, this study presents HRI based on gaze tracking with an eye tracker. Object selection is of great importance and occurs at high frequency during HRI control, so this paper introduces gaze gestures as an object-selection strategy for drone teleoperation. To test and validate the performance of the gaze-gesture selection strategy, we evaluate both objective and subjective measures: drone control performance, comprising mean task completion time and mean error rate, serves as the objective measure, while the subjective measure is an analysis of participant perception. The results show that the gaze-gesture selection strategy has great potential as an additional HRI method for agent teleoperation.
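
    To make the idea concrete, here is a hedged sketch of one way a gaze gesture could be detected: as an ordered sweep of the gaze through coarse screen regions within a time limit. The paper's actual gesture vocabulary, region layout, and thresholds are not given here, so all of them are assumptions:

```python
import time

def region_of(gx, gy):
    """Quantize a normalized gaze point into a coarse screen region
    (vertical position is ignored in this sketch)."""
    if gx < 0.33:
        return "left"
    if gx > 0.66:
        return "right"
    return "center"

class GestureDetector:
    """Matches a stream of gaze samples against an ordered region pattern."""
    def __init__(self, pattern, timeout=1.5):
        self.pattern, self.timeout = pattern, timeout
        self.progress, self.started = 0, 0.0

    def feed(self, gx, gy):
        """Feed one gaze sample; return True when the pattern completes."""
        now = time.monotonic()
        if self.progress and now - self.started > self.timeout:
            self.progress = 0                      # too slow: start over
        if region_of(gx, gy) == self.pattern[self.progress]:
            if self.progress == 0:
                self.started = now
            self.progress += 1
            if self.progress == len(self.pattern):
                self.progress = 0
                return True
        return False

# Selecting a target by glancing right and back to center:
detector = GestureDetector(["center", "right", "center"])
```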

    Manoeuvring a drone (Tello Talent) using eye gaze and/or finger gestures

    The project combines hand and eye input to control a Tello Talent drone using computer vision, machine learning, and an eye-tracking device for gaze detection and interaction. Its main purposes are gaming, experimentation, and education for the coming generation; it is also very useful for people who cannot use their hands, since they can maneuver the drone with eye movements alone, which will hopefully bring them some fun. The idea is inspired by progress in innovative technologies such as machine learning, computer vision, and object detection, which offer a large field of applications across diverse domains; many researchers are improving and inventing new intelligent ways of controlling drones by combining computer vision, machine learning, artificial intelligence, and related techniques. The project can help anyone, even people with no prior knowledge of programming, computer vision, or eye-tracking theory: they learn the basics of drone operation, object detection, and programming, integrate the different hardware and software involved, and then play. As a final objective, they should be able to build a simple application that controls the drone using movements of the hands, the eyes, or both, while observing the operating conditions and safety requirements specified by the manufacturers of the drone and the eye-tracking device. The Tello Talent drone is built around a set of features, functions, and scripts that are already developed, embedded in the autopilot firmware, and accessible to users through an SDK protocol. The SDK serves as an easy guide for developing both simple and complex applications and lets the user program a variety of flight missions. Several experiments were run to determine which scenario best detects hand movements and extracts the key points in real time on a computer with low computing power. As a result, I found that Google's AI research group offers an open-source platform dedicated to this kind of application: MediaPipe, a customizable machine-learning solution for live streaming video. In this project, MediaPipe and the eye-tracking module are the fundamental tools for developing and realizing the application.
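
    A minimal sketch of the hand-gesture half of this pipeline, assuming the djitellopy package for the Tello SDK and MediaPipe Hands for landmark detection; the steering gains and the choice to follow the index fingertip are illustrative assumptions, and the eye-tracking side is omitted:

```python
import cv2
import mediapipe as mp
from djitellopy import Tello

tello = Tello()
tello.connect()
tello.streamon()
reader = tello.get_frame_read()
tello.takeoff()            # observe the manufacturer's safety requirements

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
try:
    while True:
        frame = reader.frame
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        lr = ud = 0
        if result.multi_hand_landmarks:
            tip = result.multi_hand_landmarks[0].landmark[8]  # index fingertip
            lr = int((tip.x - 0.5) * 100)   # drone strafes toward the finger
            ud = int((0.5 - tip.y) * 100)   # and climbs/descends with it
        tello.send_rc_control(lr, 0, ud, 0)
        cv2.imshow("tello", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    tello.land()
    tello.streamoff()
```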

    Effects of Character Guide in Immersive Virtual Reality Stories

    Bringing cinematic experiences from traditional film screens into Virtual Reality (VR) has become an increasingly popular form of entertainment in recent years. VR gives viewers an unprecedented film experience in which they can freely explore the environment and even interact with virtual props and characters. This kind of experience raises the audience's sense of presence in a different world and may even stimulate full immersion in story scenarios. However, unlike traditional film-making, where the audience passively follows the director's storytelling decisions, the greater freedom of VR can cause viewers to get lost midway through the series of events that builds up a story. Striking a balance between user interaction and narrative progression is therefore a major challenge for filmmakers.

    To organize the research space, we present a media review and a resulting framework that characterizes the primary differences among variations of film, media, games, and VR storytelling. This evaluation provided knowledge closely associated with story-progression strategies and gaze-redirection methods for interactive content in the commercial domain. Following the existing VR storytelling framework, we then approached the problem of guiding the audience through the major events of a story by introducing a virtual character as a travel companion who helps direct the viewer's focus to the target scenes. The research explored a new technique that overlays a separate virtual character on top of an existing 360-degree video so that the added character reacts to head-tracking data and indicates the core focal content of the story to the viewer. The motivation is to help directors use a virtual guiding character to increase the effectiveness of VR storytelling, ensuring that viewers fully understand the story by completing a sequence of events and possibly realize a rich literary experience.

    To assess the effectiveness of this technique, we performed a controlled experiment applying the method in three immersive narrative experiences, each with a control condition that was free from guidance. The experiment compared three variations of the character guide: 1) no guide; 2) a guide with an art style similar to the style of the video design; and 3) a character guide with a dissimilar style. All participants viewed the narrative experiences to test whether a similar art style led to better gaze behavior, with a higher likelihood of falling on the intended focus regions of the 360-degree Virtual Environment (VE). We concluded that adding a virtual character independent of the narrative had limited effects on users' gaze performance when watching an interactive story in VR. Furthermore, the implemented character's art style made little difference to users' gaze performance or to their level of viewing satisfaction; the primary reason may lie in limitations of the implementation design. In addition, the guiding body language designed for an animal character confused a number of participants viewing the stories.

    In the end, the character-guide approach still offers future directors and designers insights into how to draw viewers' attention to a target point within a narrative VE, including what can work well and what should be avoided.
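
    The guidance trigger implied above can be sketched as a comparison between the viewer's head yaw (from HMD tracking) and the yaw of the story's current focus region in the 360-degree video. The field-of-view threshold and the discrete pointing actions below are assumptions, since the implementation details are not specified:

```python
def yaw_error(head_yaw_deg, target_yaw_deg):
    """Smallest signed angle from head direction to target, in degrees."""
    return (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def guide_action(head_yaw_deg, target_yaw_deg, fov_deg=60.0):
    """Decide what the guide character should do this frame."""
    err = yaw_error(head_yaw_deg, target_yaw_deg)
    if abs(err) <= fov_deg / 2:
        return "idle"                     # target already in view
    return "point_left" if err < 0 else "point_right"

# Viewer facing forward while the focal scene is 120 degrees to the right:
print(guide_action(0.0, 120.0))           # -> "point_right"
```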

    Populating 3D Cities: a True Challenge

    In this paper, we describe how we can model crowds in real time using dynamic meshes, static meshes, and impostors. We explain techniques for introducing variety in crowds, including colors, shapes, textures, individual animation, individualized path planning, and simple and complex accessories. We also present a hybrid architecture that handles the path planning of thousands of pedestrians in real time while ensuring dynamic collision avoidance. Several behavioral aspects are presented, such as gaze control and group behaviour, as well as the specific technique of crowd patches.
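
    A hedged sketch of the representation-selection step behind the dynamic mesh / static mesh / impostor split, assuming a simple distance-based policy; the cutoff distances and batching scheme are illustrative, not the paper's values:

```python
import math

def crowd_lod(distance_m, near=15.0, far=60.0):
    """Pick a representation for one pedestrian by camera distance."""
    if distance_m < near:
        return "dynamic_mesh"   # full skeletal animation, most expensive
    if distance_m < far:
        return "static_mesh"    # pre-baked pose, cheaper to draw
    return "impostor"           # camera-facing textured quad, cheapest

def batch_by_lod(pedestrians, camera_pos):
    """Group pedestrians by representation so each LOD batch draws together."""
    batches = {"dynamic_mesh": [], "static_mesh": [], "impostor": []}
    for p in pedestrians:
        batches[crowd_lod(math.dist(camera_pos, p["pos"]))].append(p)
    return batches

crowd = [{"pos": (5, 0)}, {"pos": (30, 0)}, {"pos": (200, 0)}]
print({k: len(v) for k, v in batch_by_lod(crowd, (0, 0)).items()})
```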

    Exploitation of Novel Multiplayer Gesture-based Interaction and Virtual Puppetry for Digital Storytelling to Develop Children’s Narrative Skills

    In recent years, digital storytelling has demonstrated powerful pedagogical value by improving creativity, collaboration, and intimacy among young children. Saturated with digital media technologies in their daily lives, the young generation demands natural interactive learning environments that offer multiple modalities of feedback and meaningful immersive learning experiences. A virtual-puppetry-assisted storytelling system for young children, which uses depth motion sensing and gesture control as its human-computer interaction (HCI) method, has previously been shown to provide a natural interactive learning experience for a single player. In this paper, we design and develop a novel system that allows multiple players to narrate and, most importantly, to interact with other characters and interactive virtual items in the virtual environment. We conducted one user experiment with four young children for pedagogical evaluation and another with five postgraduate students for system evaluation. Our user study shows that this novel digital storytelling system has great potential to stimulate young children's learning abilities through collaboration tasks.
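
    To illustrate the multiplayer routing such a system implies, here is a hedged sketch in which each tracked player's skeleton is classified into a simple gesture that drives that player's own puppet; the skeleton dictionaries and the Puppet class stand in for a depth-sensor SDK and rendering layer that the paper does not name:

```python
def classify_gesture(skeleton):
    """Toy classifier: a hand raised above the head counts as a wave."""
    if skeleton["right_hand_y"] > skeleton["head_y"]:
        return "wave"
    return "rest"

class Puppet:
    """Placeholder for the virtual puppet bound to one player."""
    def __init__(self, name):
        self.name = name

    def play(self, action):
        print(f"{self.name} plays animation: {action}")

def update_puppets(skeletons, puppets):
    """Route each player's gesture to the puppet they control."""
    for player_id, skeleton in skeletons.items():
        puppets[player_id].play(classify_gesture(skeleton))

# Two players narrating together, each driving their own character:
update_puppets(
    {"p1": {"right_hand_y": 1.8, "head_y": 1.6},
     "p2": {"right_hand_y": 1.0, "head_y": 1.6}},
    {"p1": Puppet("fox"), "p2": Puppet("rabbit")},
)
```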

    The Cord Weekly (March 8, 1995)

    • …