    Bare-handed 3D drawing in augmented reality

    Head-mounted augmented reality (AR) enables embodied in situ drawing in three dimensions (3D). We explore 3D drawing interactions based on uninstrumented, unencumbered (bare) hands that preserve the user’s ability to freely navigate and interact with the physical environment. We derive three alternative interaction techniques supporting bare-handed drawing in AR from the literature and by analysing several envisaged use cases. The three interaction techniques are evaluated in a controlled user study examining three distinct drawing tasks: planar drawing, path description, and 3D object reconstruction. The results indicate that continuous freehand drawing supports faster line creation than the control point-based alternatives, although with reduced accuracy. User preferences for the different techniques are mixed and vary considerably between the different tasks, highlighting the value of diverse and flexible interactions. The combined effectiveness of these three drawing techniques is illustrated in an example application of 3D AR drawing.
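
    As a rough illustration of the continuous freehand technique described in the abstract, the sketch below samples tracked fingertip positions into a polyline stroke. The get_fingertip_position() and is_pinching() callables are hypothetical stand-ins for whatever hand-tracking API the headset exposes; the paper's actual implementation is not specified at this level of detail.

    import math
    import time

    def capture_freehand_stroke(get_fingertip_position, is_pinching,
                                min_segment_length=0.005, poll_rate_hz=60.0):
        """Append 3D fingertip samples to a stroke while the user holds a pinch."""
        stroke = []
        while is_pinching():
            p = get_fingertip_position()            # (x, y, z) in metres, hypothetical tracker call
            if not stroke or math.dist(stroke[-1], p) >= min_segment_length:
                stroke.append(p)                    # skip near-duplicate samples
            time.sleep(1.0 / poll_rate_hz)          # sample at roughly poll_rate_hz
        return stroke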

    Interactive natural user interfaces

    For many years, science fiction entertainment has showcased holographic technology and futuristic user interfaces that have stimulated the world's imagination. Movies such as Star Wars and Minority Report portray characters interacting with free-floating 3D displays and manipulating virtual objects as though they were tangible. While these futuristic concepts are intriguing, it is difficult to locate a commercial, interactive holographic video solution in an everyday electronics store. As used in this work, the term holography refers to artificially created, free-floating objects, whereas the traditional term refers to the recording and reconstruction of 3D image data from 2D mediums. This research addresses the need for a feasible technological solution that allows users to work with projected, interactive and touch-sensitive 3D virtual environments. It aims to construct an interactive holographic user interface system by consolidating existing commodity hardware and interaction algorithms. In addition, this work studies the best design practices for human-centric factors related to 3D user interfaces. The problem of 3D user interfaces has been well researched. When portrayed in science fiction, futuristic user interfaces usually consist of a holographic display, interaction controls and feedback mechanisms. In reality, holographic displays are usually realized with volumetric or multi-parallax technology. In this work, a novel holographic display is presented which leverages a mini-projector to project a free-floating image onto a fog-like surface. The holographic user interface system consists of a display component to project the free-floating image; a tracking component to allow the user to interact with the 3D display via gestures; and a software component that drives the complete hardware system. After examining this research, readers will be well informed on how to build an intuitive, eye-catching holographic user interface system for various application arenas.
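
    The three-part architecture outlined above (display, tracking, software driver) could be wired together roughly as in the following sketch. All class and method names here are hypothetical and are not taken from the thesis.

    class FogDisplay:
        """Stand-in for the mini-projector aimed at the fog-like screen."""
        def render(self, markers):
            print(f"rendering {len(markers)} marker(s)")

    class GestureTracker:
        """Stand-in for the depth-sensor gesture tracker."""
        def __init__(self, events):
            self.events = list(events)
        def poll(self):
            return self.events.pop(0) if self.events else None   # e.g. ("tap", (x, y, z))

    def run_ui(display, tracker, frames=3):
        markers = []
        for _ in range(frames):
            event = tracker.poll()
            if event and event[0] == "tap":
                markers.append(event[1])    # place a marker where the user touched
            display.render(markers)

    run_ui(FogDisplay(), GestureTracker([("tap", (0.1, 0.2, 0.5))]))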

    The Tale of the Roman Theater of Philadelphia, Amman. Representative and experiential methodology of the theatrical space

    The project aims to present the 3D reconstruction of the Roman Theater of Amman, the ancient Philadelphia of the Palestinian Decapolis, through rigorous representative models. The outcomes will be part of a future exhibition providing site-specific installations and user-experience artifacts based on digital interaction and tactile models. The paper illustrates the multidisciplinary approach associated with using 3D virtual reconstruction and game engine tools to reflect on the practice of representing ancient monuments and on digital museology. Travelers of the 18th and 19th centuries recorded the fascination of discovery as an experience in their notebooks; at the same time, their written records can point contemporary visitors to an extensive cultural knowledge of places and buildings, the historia of Philadelphia. Investigations are shifting scientific models towards a dynamic cultural experience of heritage, including intangible heritage and stories, while new technological paradigms make it possible to duplicate art and heritage ever more rapidly. This shift highlights the role of representation for cultural studies and the humanities, experimenting with practices and tools to refine methodologies and producing models for interaction design, socialization, gaming, and the museum experience.

    Prototyping X-ray tomographic reconstruction pipelines with FleXbox

    Computed Tomography (CT) scanners for research applications are often designed to facilitate flexible acquisition geometries. Making full use of such CT scanners requires advanced reconstruction software that can (i) deal with a broad range of geometrical scanning settings, (ii) allow for customization of processing algorithms, and (iii) process large amounts of data. FleXbox is a Python-based tomographic reconstruction toolbox focused on these three functionalities. It is built to bridge the gap between low-level tomographic reconstruction packages (e.g. the ASTRA toolbox) and high-level distributed systems (e.g. Livermore Tomography Tools). FleXbox allows arbitrary source, detector and object trajectories to be modelled. Its modular architecture allows an optimal reconstruction approach to be designed for a single CT dataset. When multiple datasets of an object are acquired (either different spatial regions or different snapshots in time), they can be combined into a larger high-resolution volume or a time series of volumes. The software then allows the creation of a computational reconstruction pipeline that can run without user interaction and enables efficient computation on large-scale 3D volumes on a single workstation.
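
    For context, the following is a minimal example of the kind of low-level ASTRA toolbox usage that a higher-level toolbox such as FleXbox builds on, assuming a simple 2D parallel-beam setup; it is not the FleXbox API itself.

    import numpy as np
    import astra  # low-level reconstruction package mentioned above

    # Geometry: 256x256 volume, 180 parallel-beam projections over 180 degrees
    vol_geom = astra.create_vol_geom(256, 256)
    angles = np.linspace(0, np.pi, 180, endpoint=False)
    proj_geom = astra.create_proj_geom('parallel', 1.0, 384, angles)
    proj_id = astra.create_projector('linear', proj_geom, vol_geom)

    # Forward-project a toy phantom to obtain a sinogram
    phantom = np.zeros((256, 256), dtype=np.float32)
    phantom[96:160, 96:160] = 1.0
    sino_id, sinogram = astra.create_sino(phantom, proj_id)

    # Iterative SIRT reconstruction from the sinogram
    rec_id = astra.data2d.create('-vol', vol_geom)
    cfg = astra.astra_dict('SIRT')
    cfg['ReconstructionDataId'] = rec_id
    cfg['ProjectionDataId'] = sino_id
    cfg['ProjectorId'] = proj_id
    alg_id = astra.algorithm.create(cfg)
    astra.algorithm.run(alg_id, 100)
    reconstruction = astra.data2d.get(rec_id)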

    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
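
    A minimal sketch of the re-projection step mentioned above, assuming a standard pinhole camera model: a constraint stored at a 3D point on the reconstructed proxy is mapped into a novel view using that view's pose and intrinsics. Variable names are illustrative; the paper's dense fusion pipeline is considerably more involved.

    import numpy as np

    def reproject_constraint(point_world, K, R, t):
        """Project a 3D constraint location (world frame) to pixel coordinates."""
        p_cam = R @ point_world + t          # world -> camera frame
        if p_cam[2] <= 0:                    # behind the camera: not visible
            return None
        p_img = K @ (p_cam / p_cam[2])       # perspective division + intrinsics
        return p_img[:2]                     # (u, v) pixel position

    # Example with an identity pose and simple pinhole intrinsics
    K = np.array([[525.0, 0.0, 319.5],
                  [0.0, 525.0, 239.5],
                  [0.0, 0.0, 1.0]])
    print(reproject_constraint(np.array([0.1, -0.05, 1.2]), K, np.eye(3), np.zeros(3)))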

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments such as computer-assisted design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed and handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast amount of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by first performing state representation learning, prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational auto-encoder (VAE). Comment: 17 pages, 8 figures. Accepted at The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECMLPKDD 2019).
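
    The cooperative two-agent loop described above might be structured roughly as in the sketch below, assuming a pre-trained VAE decoder that maps a latent code to a 2D finger trajectory. The names (user_agent, decode, protocol_agent, env) are placeholders rather than the authors' code.

    def rollout(user_agent, decode, protocol_agent, env, steps=50):
        """user_agent proposes latent gestures; protocol_agent turns the decoded
        2D trajectories into 3D navigation actions applied to the environment."""
        state, total_reward = env.reset(), 0.0
        for _ in range(steps):
            z = user_agent(state)                  # latent code on the human-gesture manifold
            trajectory_2d = decode(z)              # (T, 2) touch positions from the VAE decoder
            action_3d = protocol_agent(state, trajectory_2d)
            state, reward, done = env.step(action_3d)
            total_reward += reward
            if done:
                break
        return total_reward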