Portallax: bringing 3D display capabilities to handhelds
We present Portallax, a clip-on technology to retrofit mobile devices with 3D display capabilities. Available technologies (e.g. Nintendo 3DS or LG Optimus) and clip-on solutions (e.g. 3DeeSlide and Grilli3D) force users to keep their head and the device in fixed positions. This is contradictory to the nature of a mobile scenario, and limits the usage of interaction techniques such as tilting the device to control a game. Portallax uses an actuated parallax barrier and face tracking to realign the barrier's position to the user's position. This allows us to provide stereo, motion parallax and perspective correction cues across a 60-degree range in front of the device. Our optimized design of the barrier minimizes colour distortion, maximizes resolution and produces bigger view-zones, which support ~81% of adults' interpupillary distances and allow eye tracking to be implemented with the front camera. We present a reference implementation, evaluate its key features and provide example applications illustrating the potential of Portallax.
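The realignment described above can be illustrated with basic parallax-barrier geometry: by similar triangles, steering the view zone toward a tracked head position requires a lateral barrier shift proportional to the barrier-to-pixel gap. This is a minimal sketch of that relationship; the function name and parameters are illustrative assumptions, not Portallax's actual control code.

```python
def barrier_shift(head_x_mm, head_dist_mm, barrier_gap_mm):
    """Lateral barrier shift (mm) to steer the view zone toward the
    tracked head position, under a simple pinhole approximation.

    head_x_mm:      head offset from the screen centre (from face tracking)
    head_dist_mm:   head-to-screen viewing distance
    barrier_gap_mm: gap between the barrier and the pixel plane
    """
    # Similar triangles: shift / gap == head_x / head_dist
    return barrier_gap_mm * head_x_mm / head_dist_mm
```

For example, a head offset of 100 mm at a 400 mm viewing distance with a 2 mm gap calls for a 0.5 mm barrier shift, which an actuator can apply continuously as the tracker updates.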
Effects of 3D Audio and Video in Video Games
Our study was carried out in order to improve our understanding of the relationship between 3D audio and video and user experience in video games. To determine the best way to measure these effects, we researched several methods of 3D video and 3D audio delivery. We used two different games to gauge the effectiveness of 3D video: Mario Kart 7 and Crysis 2. Due to a small sample size, we were unable to draw strong conclusions about many of the factors we expected 3D video and audio to affect, but our surveys did show an increase in enjoyment and perceived ability.
Evaluating The Benefits Of 3d Stereo In Modern Video Games
We present a study that investigates user performance benefits of 3D stereo in modern video games. Based on an analysis of several video games that are best suited for use with commercial 3D stereo drivers and vision systems, we chose five modern titles covering the racing, first person shooter, third person shooter, and sports game genres. For each game, quantitative and qualitative measures were taken to determine whether users performed better and learned faster in the experimental group (3D stereo display) than in the control group (2D display). A game experience pre-questionnaire was used to classify participants into beginner, intermediate, and advanced gameplay categories to ensure prior game experience did not bias the experiment. Our results indicate that even though participants preferred playing in 3D stereo, for the games we tested it does not provide any significant advantage in overall user performance. In addition, users' learning rates were comparable between the 3D stereo display and 2D display conditions.
Master of Science
Animated avatars are becoming increasingly prevalent in three-dimensional virtual environments due to modern motion tracking hardware and its falling cost. As this opens up new possibilities and ways of interaction within such virtual worlds, an important question that arises is how the presence of an avatar alters the perception and performance of an action when a user interacts with an object in the virtual environment through their avatar. This research attempts to answer this question by studying the effects of the presence of an animated self-avatar in an object manipulation task in a virtual environment. Two experiments were conducted as part of this research. In Experiment 1, the feasibility of an interaction system involving animated self-avatars to manipulate objects in a virtual environment was examined. It was observed that the presence of self-avatars had an effect on the performance of a subset of subjects. Male subjects with gaming experience performed similarly across both visual feedback conditions, while female subjects, who also had low gaming experience, performed better in the condition with avatar feedback than in the condition without it. In Experiment 2, we further analyzed the effect of self-avatar visual feedback by examining the effects of visual immersion in the virtual environment, task difficulty, and individual difference factors such as spatial ability and gaming experience. It was observed that difficult trials were completed significantly faster by subjects in the avatar feedback condition, while for the easy trials there was no significant difference in performance between the avatar and sphere feedback conditions. No significant interaction was observed between visual feedback condition and either immersiveness or individual difference factors.
Stereoscopic bimanual interaction for 3D visualization
Virtual Environments (VEs) have been widely used for several decades across research fields such as 3D visualization, education, training and games. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems.
To address these issues, this dissertation focuses on evaluating the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques for automatically adjusting stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
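The 7-DOF navigation described above (3-DOF translation, 3-DOF rotation, uniform scale) can be expressed as a single homogeneous transform. The sketch below shows one common way to compose such a transform; the function name and the yaw-pitch-roll axis convention are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def seven_dof_transform(tx, ty, tz, yaw, pitch, roll, scale):
    """Compose a 4x4 homogeneous matrix from 7 DOF:
    translation (tx, ty, tz), rotation (yaw, pitch, roll in radians),
    and a uniform scale factor."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # One common convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    M = np.eye(4)
    M[:3, :3] = scale * (Rz @ Ry @ Rx)  # rotation and uniform scale
    M[:3, 3] = [tx, ty, tz]             # translation
    return M
```

Folding scale into the rotation block is what lets a bimanual gesture update all seven parameters in one step, rather than switching between separate translate/rotate/zoom modes.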
Towards a Smart Drone Cinematographer for Filming Human Motion
Affordable consumer drones have made capturing aerial footage more convenient and accessible. However, shooting cinematic motion videos with a drone is challenging because it requires users to analyze dynamic scenarios while operating the controller. In this thesis, our task is to develop an autonomous drone cinematography system to capture cinematic videos of human motion. We understand the system's filming performance to be influenced by three key components: 1) the video quality metric, which measures the aesthetic quality -- the angle, the distance, the image composition -- of the captured video, 2) the visual feature, which encapsulates the visual elements that influence the filming style, and 3) camera planning, a decision-making model that predicts the next best movement. By analyzing these three components, we designed two autonomous drone cinematography systems using both heuristic-based and learning-based methods. For the first system, we designed an Autonomous CinemaTography system, "ACT", by proposing a viewpoint quality metric focused on the visibility of the subject's 3D human skeleton. We expanded the application of human motion analysis and simplified manual control by assisting viewpoint selection with a through-the-lens method. For the second system, we designed an imitation-based system that learns the artistic intention of camera operators by watching professional aerial videos. We designed a camera planner that analyzes the video contents and previous camera motion to predict future camera motion. Furthermore, we propose a planning framework which can imitate a filming style by "seeing" only a single demonstration video of that style. We call this "one-shot imitation filming." To the best of our knowledge, this is the first work that extends imitation learning to autonomous filming.
Experimental results in both simulation and field tests show significant improvements over existing techniques, and our approach helped inexperienced pilots capture cinematic videos.
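A viewpoint quality metric based on skeleton visibility, as the ACT system proposes, can be sketched as the fraction of 3D joints that project inside the camera image. This is a simplified stand-in under a standard pinhole camera model; the function name, parameters, and scoring rule are illustrative assumptions, not the thesis's actual metric.

```python
import numpy as np

def viewpoint_quality(joints_world, R, t, K, img_w, img_h):
    """Score a candidate viewpoint as the fraction of skeleton joints
    visible in the image (pinhole projection, no occlusion handling).

    joints_world: (N, 3) array of 3D joint positions in world frame
    R, t:         world-to-camera rotation (3x3) and translation (3,)
    K:            3x3 camera intrinsics matrix
    img_w, img_h: image dimensions in pixels
    """
    cam = (R @ joints_world.T).T + t            # world -> camera frame
    in_front = cam[:, 2] > 0                    # joints ahead of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-9, None)  # perspective divide
    in_image = (
        (uv[:, 0] >= 0) & (uv[:, 0] < img_w) &
        (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    )
    return float(np.mean(in_front & in_image))
```

A planner could evaluate this score over candidate drone poses and move toward the highest-scoring one; the real metric would also weight composition and occlusion, which this sketch omits.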