109 research outputs found

    Understanding 3D mid-air hand gestures with interactive surfaces and displays: a systematic literature review

    3D gesture-based systems are becoming ubiquitous, and many mid-air hand gestures exist for interacting with digital surfaces and displays. There is no well-defined gesture set for 3D mid-air hand gestures, which makes it difficult to develop applications with consistent gestures. To understand what gestures exist, we conducted the first comprehensive systematic literature review on mid-air hand gestures, following established research methods. The review identified 65 papers in which mid-air hand gestures supported selection, navigation, and manipulation tasks. We also classified the gestures according to a gesture classification scheme and identified how these gestures have been empirically evaluated. The results of the review provide a richer understanding of the mid-air hand gestures that have been designed, implemented, and evaluated in the literature, which can help developers design better user experiences for digital interactive surfaces and displays.
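
    To illustrate the kind of coding such a review implies, here is a minimal Python sketch that classifies a few gestures by task. The three task categories come from the abstract; the remaining fields and the example gestures are illustrative assumptions, not the paper's actual classification scheme.

        # Hypothetical coding of reviewed gestures; fields other than the three
        # tasks named in the abstract are assumptions for illustration only.
        from dataclasses import dataclass
        from enum import Enum

        class Task(Enum):
            SELECTION = "selection"
            NAVIGATION = "navigation"
            MANIPULATION = "manipulation"

        @dataclass
        class GestureEntry:
            name: str        # e.g. "air tap" (hypothetical)
            task: Task       # interaction task the gesture supports
            hands: int       # one- or two-handed
            evaluated: bool  # whether the source paper reports an empirical evaluation

        corpus = [
            GestureEntry("air tap", Task.SELECTION, hands=1, evaluated=True),
            GestureEntry("grab and drag", Task.MANIPULATION, hands=1, evaluated=False),
        ]

        # Group the corpus by task, mirroring how a review might tabulate gestures.
        by_task = {t: [g for g in corpus if g.task is t] for t in Task}
        print({t.value: len(gs) for t, gs in by_task.items()})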

    3D-Stereoscopic Immersive Analytics Projects at Monash University and University of Konstanz

    Immersive Analytics investigates how novel interaction and display technologies can support analytical reasoning and decision making. The Immersive Analytics initiative at Monash University started in early 2014. Over the last few years, a number of projects have been developed or extended in this context to meet the requirements of semi- or fully immersive stereoscopic environments. Different technologies are used for this purpose: CAVE2™ (a 330-degree large-scale visualization environment that can be used for educational and scientific group presentations, analyses, and discussions), stereoscopic Powerwalls (miniCAVEs, representing a segment of the CAVE2 and used for development and communication), Fishtanks, and/or HMDs (such as Oculus, VIVE, and mobile HMD approaches). Apart from CAVE2™, all systems are or will be employed at both Monash University and the University of Konstanz, especially to investigate collaborative Immersive Analytics. In addition, sensiLab extends most of the previous approaches by involving all senses: 3D visualization is combined with multi-sensory feedback, 3D printing, and robotics in a scientific-artistic-creative environment.

    Freehand Gestural Text Entry for Interactive TV

    Mobile graphics: SIGGRAPH Asia 2017 course

    Midair Gestural Techniques for Translation Tasks in Large-Display Interaction

    Midair gestural interaction has gained a lot of attention over the past decades, with numerous attempts to apply midair gestural interfaces to large displays (and TVs), interactive walls, and smart meeting rooms. These attempts, reviewed in numerous studies, used differing gestural techniques for the same action, making them inherently incomparable, which in turn makes it difficult to distill recommendations for the development of midair gestural interaction applications. We therefore took a closer look at one common action, translation, defined as dragging (or moving) an entity to a predefined target position while retaining the entity's size and rotation. We compared the performance and subjective experiences (participants = 30) of four midair gestural techniques (fist, palm, pinch, and sideways) in the repetitive translation of 2D objects over short and long distances on a large display. The results showed statistically significant differences in movement time and error rate favoring translation by palm over pinch and sideways at both distances. Further, the fist and sideways techniques performed well at short and long distances, respectively. We summarize the implications of the results for the design of midair gestural interfaces, which should be useful for interaction designers and gesture recognition researchers.
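
    The two dependent measures named above, movement time and error rate per technique, can be computed from simple trial logs. The Python sketch below is illustrative only; the log format, field names, and numbers are assumptions, not the study's data.

        # Hypothetical trial log: one record per translation attempt.
        from statistics import mean

        trials = [
            {"technique": "palm",  "start_ms": 0, "end_ms": 950,  "on_target": True},
            {"technique": "pinch", "start_ms": 0, "end_ms": 1400, "on_target": False},
            {"technique": "fist",  "start_ms": 0, "end_ms": 1100, "on_target": True},
        ]

        def summarize(technique):
            """Mean movement time (ms) and error rate for one gestural technique."""
            ts = [t for t in trials if t["technique"] == technique]
            movement_time = mean(t["end_ms"] - t["start_ms"] for t in ts)
            error_rate = sum(not t["on_target"] for t in ts) / len(ts)
            return movement_time, error_rate

        for tech in ("palm", "pinch", "fist"):
            mt, err = summarize(tech)
            print(f"{tech}: {mt:.0f} ms, {err:.0%} errors")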

    Design of a Scenario-Based Immersive Experience Room

    Usability of immersive virtual reality input devices

    This research presents a usability analysis of human interface devices within an immersive virtual reality environment. The analysis covers two different interface devices: a commercially available InterSense wand and a home-built pinch glove with a wireless receiver. Users were asked to carry out a series of simple tasks involving the placement of shaped blocks into corresponding holes within the immersive environment. Performance was evaluated in terms of speed, accuracy, and precision via the collection of completion times, errors made, and the precision of motion during the experiment.
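
    The three measures map to simple computations over logged trials. The Python sketch below shows one way to express them, with precision taken as the RMS deviation of the tracked hand path from the straight start-to-target line; the path metric and all values are assumptions for illustration, not the study's actual instrumentation.

        import math

        # Hypothetical sampled hand positions for one block-placement task.
        path = [(0.0, 0.0), (0.3, 0.05), (0.6, -0.02), (1.0, 0.0)]
        completion_time_s = 4.2  # speed: time to place the block
        errors = 1               # accuracy: wrong or missed placements

        def rms_deviation(points):
            """Precision: RMS perpendicular distance from the start-to-end line."""
            (x0, y0), (x1, y1) = points[0], points[-1]
            dx, dy = x1 - x0, y1 - y0
            length = math.hypot(dx, dy)
            dists = [abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in points]
            return math.sqrt(sum(d * d for d in dists) / len(dists))

        print(f"speed: {completion_time_s} s, accuracy: {errors} errors, "
              f"precision (RMS path deviation): {rms_deviation(path):.3f}")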

    Content creation for seamless augmented experiences with projection mapping

    This dissertation explores systems and methods for creating projection mapping content that seamlessly merges the virtual and the physical. Most virtual reality and augmented reality technologies rely on screens for display and interaction, where a mobile device or head-mounted display mediates the user's experience. In contrast, projection mapping uses off-the-shelf video projectors to augment the appearance of physical objects; there is no screen to mediate the experience. The physical world simply becomes the display. Projection mapping can provide users with a seamless augmented experience, where virtual and physical become indistinguishable in an apparently unmediated way.

    Projection mapping is an old concept, dating to Disney's 1969 Haunted Mansion. The core technical foundations were laid in 1999 with UNC's Office of the Future and Shader Lamps projects. Since then, projectors have become brighter and higher resolution while drastically decreasing in price, yet projection mapping has not crossed the chasm into mainstream use. The largest remaining challenge is that content creation is very difficult and time consuming. Content for projection mapping is still created via a tedious manual process: a 2D video file is warped onto a 3D physical object using existing tools (e.g., Adobe Photoshop) that are not made for defining animated, interactive effects on 3D object surfaces. With existing tools, content must be created for each specific display object and cannot be reused across experiences. For each object the artist wants to animate, the artist must manually create a custom texture for that specific object and warp it to the physical object. This limits projection mapped experiences to controlled environments and static scenes. If the artist wants to project onto a different object, they must start from scratch, creating custom content for that object. This manual content creation process is time consuming, expensive, and doesn't scale.

    This thesis explores new methods for creating projection mapping content, with the goal of making projection mapping easier, cheaper, and more scalable. We explore methods for adaptive projection mapping, which enables artists to create content once and have that content adapt to the color and geometry of the display surface, so it can be reused on any surface. The thesis comprises three proof-of-concept prototypes exploring new methods of content creation for projection mapping. IllumiRoom expands video game content beyond the television screen and into the physical world, using a standard video projector to surround a television with projected light; it works in any living room, and the projected content dynamically adapts to the color and geometry of the room. RoomAlive expands on this idea, using multiple projectors to cover an entire living room in input/output pixels and dynamically adapting gaming experiences to fill the room. Finally, Projectibles focuses on the physical side of projection mapping, optimizing the display surface color to increase the contrast and resolution of the overall experience and enabling artists to design the physical object along with the virtual content.

    The proof-of-concept prototypes presented in this thesis are aimed at the not-too-distant future. They are not theoretical concepts but fully working prototype systems that demonstrate the practicality of projection mapping for creating immersive experiences. It is the sincere hope of the author that these experiences quickly move out of the lab and into the real world.
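
    One building block behind adapting projected content to a surface's color is radiometric compensation: given the desired appearance and the surface's albedo, solve for the projector output that produces it. The NumPy sketch below assumes a linear projector response in linear RGB and ignores ambient light and inter-reflection; it is a minimal illustration, not the specific method used by IllumiRoom, RoomAlive, or Projectibles.

        import numpy as np

        def compensate(desired, albedo, eps=1e-3):
            """Solve perceived = albedo * projected for the projector image.

            desired, albedo: float arrays in linear RGB, values in [0, 1].
            """
            projected = desired / np.clip(albedo, eps, 1.0)
            # Clamp to the projector's gamut: dark or strongly colored surfaces
            # limit the achievable contrast.
            return np.clip(projected, 0.0, 1.0)

        desired = np.full((2, 2, 3), 0.5)             # target mid-gray patch
        albedo = np.tile([1.0, 0.8, 0.8], (2, 2, 1))  # slightly reddish wall
        print(compensate(desired, albedo))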