4 research outputs found
Real-Time Head Gesture Recognition on Head-Mounted Displays using Cascaded Hidden Markov Models
Head gestures are a natural means of face-to-face communication between people, but their recognition in the context of virtual reality, and their use as an interface for interacting with virtual avatars and environments, have rarely been investigated. In the current study, we
present an approach for real-time head gesture recognition on head-mounted
displays using Cascaded Hidden Markov Models. We conducted two experiments to
evaluate our proposed approach. In experiment 1, we trained the Cascaded Hidden
Markov Models and assessed the offline classification performance using
collected head motion data. In experiment 2, we characterized the real-time
performance of the approach by estimating the latency to recognize a head
gesture with recorded real-time classification data. Our results show that the
proposed approach is effective at recognizing head gestures. The method can be integrated into a virtual reality system as a head gesture interface for interacting with virtual worlds.
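The core idea of such a cascaded recognizer can be illustrated with a small sketch: a first stage decides whether any gesture is occurring at all, and a second stage picks the most likely gesture by comparing forward-algorithm likelihoods across per-gesture HMMs. This is an illustrative reconstruction, not the authors' implementation; the discretization of head motion into symbols, the two-state nod/shake models, and all probabilities below are assumptions chosen for clarity.

```python
import math

def _logsumexp(xs):
    xs = list(xs)
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_likelihood(obs, start, trans, emit):
    """Standard HMM forward algorithm in log space for a discrete
    observation sequence; returns log P(obs | model)."""
    n = len(start)
    alpha = [math.log(start[i]) + math.log(emit[i][obs[0]]) for i in range(n)]
    for o in obs[1:]:
        alpha = [math.log(emit[j][o]) +
                 _logsumexp(alpha[i] + math.log(trans[i][j]) for i in range(n))
                 for j in range(n)]
    return _logsumexp(alpha)

# Head motion quantized into symbols (an assumed discretization):
# 0 = still, 1 = up, 2 = down, 3 = left, 4 = right.
NOD = {    # two states alternating between upward and downward motion
    "start": [0.5, 0.5],
    "trans": [[0.4, 0.6], [0.6, 0.4]],
    "emit":  [[0.05, 0.80, 0.05, 0.05, 0.05],
              [0.05, 0.05, 0.80, 0.05, 0.05]],
}
SHAKE = {  # two states alternating between leftward and rightward motion
    "start": [0.5, 0.5],
    "trans": [[0.4, 0.6], [0.6, 0.4]],
    "emit":  [[0.05, 0.05, 0.05, 0.80, 0.05],
              [0.05, 0.05, 0.05, 0.05, 0.80]],
}

def classify(obs, motion_threshold=0.5):
    """Two-stage (cascaded) recognition: stage 1 decides whether any
    gesture is present; stage 2 picks the most likely gesture HMM."""
    # Stage 1: a simplified gesture/non-gesture detector based on the
    # fraction of frames containing motion (a stand-in for a full HMM stage).
    if sum(1 for o in obs if o != 0) / len(obs) < motion_threshold:
        return "idle"
    # Stage 2: maximum-likelihood classification over the gesture HMMs.
    scores = {name: forward_log_likelihood(obs, m["start"], m["trans"], m["emit"])
              for name, m in (("nod", NOD), ("shake", SHAKE))}
    return max(scores, key=scores.get)
```

For example, an alternating up/down symbol sequence such as `[1, 2, 1, 2, 1, 2]` scores far higher under the nod model than under the shake model, so `classify` returns `"nod"`; a mostly-still sequence never reaches stage 2.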
Oculus Rift Application for Training Drone Pilots
The research described in this paper focuses on a virtual reality headset system that integrates the Oculus Rift VR headset with a low-cost Unmanned Aerial Vehicle (UAV) to allow for drone teleoperation and telepresence using the Robot Operating System (ROS). We developed a system that allows the pilot to fly an AR Drone through natural head movements translated to a set of flight commands. The system is designed to be easy to use for the purposes of training drone pilots. The user simply moves their head, and these movements are translated to the quadrotor, which then turns in that direction. Altitude control is implemented using a Wii Nunchuck joystick. The user receives a 2D video stream from the AR Drone, which is converted into a 3D image stream and presented on the Oculus Rift headset
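The head-movement-to-flight-command mapping described above can be sketched as a small pure-Python function. This is an assumed illustration, not the paper's code: the deadzone, gains, and limits are invented values, and in the real ROS system the resulting pair would populate a geometry_msgs/Twist message (angular.z for yaw rate, linear.x for forward speed) published to the drone driver.

```python
def head_to_command(yaw_deg, pitch_deg, deadzone=5.0,
                    max_yaw_rate=1.0, max_speed=0.5):
    """Map headset yaw/pitch (degrees) to a (yaw_rate, forward_speed)
    pair. A deadzone ignores small unintentional head drift, and both
    outputs are clamped to the vehicle's limits."""
    def shape(angle, limit, gain):
        if abs(angle) < deadzone:
            return 0.0            # within the deadzone: no command
        sign = 1.0 if angle > 0 else -1.0
        return max(-limit, min(limit, sign * gain * (abs(angle) - deadzone)))
    yaw_rate = shape(yaw_deg, max_yaw_rate, 0.02)   # turn toward where the pilot looks
    forward = shape(-pitch_deg, max_speed, 0.01)    # tilt head down to fly forward
    return yaw_rate, forward
```

With these assumed gains, looking 30 degrees to the right yields a yaw rate of 0.02 × (30 − 5) = 0.5, a 3-degree turn stays inside the deadzone and produces no command, and an extreme 100-degree turn is clamped to the 1.0 limit.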
An augmented reality interface for multi-robot tele-operation and control
This thesis presents a seamlessly controlled human multi-robot system comprised of semi-autonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. It uses advanced path planning algorithms to ensure obstacles are avoided and that operators are free for higher-level tasks. A sensor-data-fused AR view is displayed, which helps users pinpoint source information and supports the operator in pursuing the goals of the mission. The thesis reports a preliminary Human Factors evaluation of this system in which several interface conditions are tested for source detection tasks