148 research outputs found

    Novel Interaction Techniques for Mobile Augmented Reality Applications – A Systematic Literature Review

    This study reviews the research on interaction techniques and methods that could be applied in mobile augmented reality scenarios. The review focuses on the most recent advances and especially considers the use of head-mounted displays. In the review process, we have followed a systematic approach, which makes the review transparent, repeatable, and less prone to human error than if it were conducted in a more traditional manner. The main research subjects covered in the review are head orientation and gaze tracking, gestures and body part tracking, and multimodality, as far as these subjects are related to human-computer interaction. Beyond these, a number of other areas of interest are also discussed.

    CobotTouch: AR-based Interface with Fingertip-worn Tactile Display for Immersive Operation/Control of Collaborative Robots

    Complex robotic tasks require human collaboration to benefit from humans' high dexterity, yet frequent human-robot interaction is mentally demanding and time-consuming. Intuitive and easy-to-use robot control interfaces reduce the negative influence on workers, especially inexperienced users. In this paper, we present CobotTouch, a novel intuitive robot control interface with fingertip haptic feedback. The proposed interface consists of a graphical user interface projected onto the robotic arm, which controls the position of the robot end-effector through gesture recognition, and a wearable haptic interface that delivers tactile feedback to the user's fingertips. We evaluated the user's perception of the designed tactile patterns presented by the haptic interface and the intuitiveness of the proposed system for robot control in a use case. The results revealed a high average recognition rate of 75.25% for the tactile patterns. Average NASA Task Load Index (TLX) scores indicated small mental and temporal demands, pointing to a high level of intuitiveness of CobotTouch for interaction with collaborative robots.
    Comment: 12 pages, 11 figures, accepted paper at AsiaHaptics 202
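    As a purely hypothetical illustration of the control idea described above (a projected GUI whose recognized gestures reposition the end-effector), the following Python sketch maps assumed gesture labels to small position increments; none of the names, the step size, or the planar mapping come from the paper.

    ```python
    # Hypothetical sketch: discrete fingertip gestures recognized on a projected
    # GUI nudge the end-effector position. Gesture names, step size, and the
    # planar mapping are all assumptions, not the paper's actual pipeline.
    STEP_M = 0.01  # assumed step of 1 cm per recognized gesture

    GESTURE_TO_DELTA = {
        "swipe_left":  (-STEP_M, 0.0, 0.0),
        "swipe_right": (+STEP_M, 0.0, 0.0),
        "swipe_up":    (0.0, +STEP_M, 0.0),
        "swipe_down":  (0.0, -STEP_M, 0.0),
    }

    def next_position(position, gesture):
        """Return the commanded end-effector position after one gesture."""
        dx, dy, dz = GESTURE_TO_DELTA.get(gesture, (0.0, 0.0, 0.0))
        x, y, z = position
        return (x + dx, y + dy, z + dz)

    print(next_position((0.30, 0.00, 0.25), "swipe_up"))  # -> (0.3, 0.01, 0.25)
    ```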

    Spatial Augmented Reality Using Structured Light Illumination

    Spatial augmented reality is a particular kind of augmented reality technique that uses a projector to blend real objects with virtual content. Coincidentally, structured light illumination, a means of 3D shape measurement, also makes use of a projector as part of its system: the projector generates important cues for establishing the correspondence between the 2D image coordinate system and the 3D world coordinate system. It is therefore appealing to build a system that can carry out the functionalities of both spatial augmented reality and structured light illumination. In this dissertation, we present the hardware platforms we developed and their related applications in spatial augmented reality and structured light illumination. The first is a dual-projector structured light 3D scanning system in which two synchronized projectors operate simultaneously; it outperforms the traditional structured light 3D scanning system, which includes only one projector, in terms of the quality of 3D reconstructions. The second is a modified dual-projector structured light 3D scanning system aimed at detecting and resolving multi-path interference. The third is an augmented reality face paint system that detects a human face in a scene and paints the face with any desired colors by projection; additionally, the system incorporates a second camera to realize 3D position tracking by exploiting the principle of structured light illumination. Finally, a structured light 3D scanning system with its own built-in machine vision camera is presented as future work. So far, the standalone camera has been completed from a bare CMOS sensor. With this customized camera, we can achieve high dynamic range imaging and better synchronization between the camera and projector, but the full-blown system, which includes an HDMI transmitter, a structured light pattern generator, and synchronization logic, has yet to be completed for lack of a well-designed high-speed PCB.
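    The abstract above notes that the projector supplies the cues that establish the 2D-to-3D correspondence. One classic way to obtain that correspondence is temporal Gray-code structured light; the following NumPy sketch decodes a projector column index per camera pixel from an assumed stack of captured bit-pattern frames. It is a minimal sketch of the general technique, not the dissertation's dual-projector pipeline.

    ```python
    import numpy as np

    def decode_gray_code(images, threshold=None):
        """Recover a projector column index per camera pixel.

        images: (N, H, W) array; frame i was captured while the i-th Gray-code
                bit pattern (most significant bit first) was projected.
        """
        images = np.asarray(images, dtype=np.float64)
        if threshold is None:
            threshold = images.mean()                # crude global binarization
        bits = (images > threshold).astype(np.int64)  # (N, H, W) bit planes

        # Gray code -> binary: b[0] = g[0], b[i] = b[i-1] XOR g[i].
        binary = np.zeros_like(bits)
        binary[0] = bits[0]
        for i in range(1, len(bits)):
            binary[i] = binary[i - 1] ^ bits[i]

        # Pack the bit planes into one integer per pixel (MSB first).
        weights = 2 ** np.arange(len(binary) - 1, -1, -1)
        return np.tensordot(weights, binary, axes=1)  # (H, W) column map
    ```

    Given the decoded column map and a calibrated camera-projector pair, each pixel's 3D position then follows by triangulating the camera ray against the corresponding projector plane.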

    Augmenting spaces and creating interactive experiences using video camera networks

    This research addresses the problem of creating interactive experiences to encourage people to explore spaces. Besides the obvious spaces to visit, such as museums or art galleries, the spaces people visit can be, for example, a supermarket or a restaurant. As technology evolves, people become more demanding in the way they use it and expect better forms of interaction with the space that surrounds them. Interaction with the space allows information to be transmitted to visitors in a friendly way, leading visitors to explore it and gain knowledge. Systems that provide better experiences while exploring spaces demand hardware and software that is not within reach of every space owner, whether because of the cost or the inconvenience of an installation that can damage artefacts or the space environment. We propose a system, adaptable to different spaces, that uses a video camera network and a Wi-Fi network present at the space (or that can be installed) to support interactive experiences on the visitor's mobile device. The system is composed of an infrastructure (called vuSpot), a language grammar used to describe interactions at a space (called XploreDescription), a visual tool used to design interactive experiences (called XploreBuilder), and a tool used to create interactive experiences (called urSpace). By using XploreBuilder, a tool built on top of vuSpot, a user with little or no programming experience can define a space and design interactive experiences. This tool generates a description of the space and of the interactions at that space (complying with the XploreDescription grammar). These descriptions can be given to urSpace, another tool built on top of vuSpot, which creates the interactive experience application. With this system we explore new forms of interaction and use mobile devices and pico projectors to deliver additional information to users, leading to the creation of interactive experiences. The several components are presented, along with the results of the respective user tests, which were positive. Design and implementation become cheaper, faster, and more flexible and, since they do not depend on knowledge of a programming language, accessible to the general public.
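    The XploreDescription grammar itself is not shown in the abstract. Purely as an invented illustration of the kind of information a space-and-interaction description might carry (cameras, zones, and triggered actions), consider the following Python structure; every field name and value here is hypothetical.

    ```python
    # Invented example only: suggests what a space/interaction description in
    # the spirit of XploreDescription might encode. Not the actual grammar.
    space_description = {
        "space": "demo-gallery",
        "cameras": [
            {"id": "cam1", "covers": "entrance"},
            {"id": "cam2", "covers": "exhibit-room"},
        ],
        "zones": [
            # pixel region of cam2's view that corresponds to one exhibit
            {"name": "painting-A", "camera": "cam2", "region": [120, 80, 310, 260]},
        ],
        "interactions": [
            {
                "trigger": "visitor-enters-zone",
                "zone": "painting-A",
                "action": "send-content",
                "target": "visitor-mobile-device",
                "content": "painting-A-story.html",
            },
        ],
    }
    ```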

    Compact and kinetic projected augmented reality interface

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 143-150).
    For quite some time, researchers and designers in the field of human-computer interaction have strived to better integrate information interfaces into our physical environment. They envisioned a future where computing and interface components would be integrated into the physical environment, creating a seamless experience that uses all our senses. One possible approach to this problem employs projected augmented reality. Such systems project digital information and interfaces onto the physical world and are typically implemented using interactive projector-camera systems. This thesis work is centered on the design and implementation of a new form factor for computing, a system we call LuminAR. LuminAR is a compact and kinetic projected augmented reality interface embodied in familiar everyday objects, namely a light bulb and a task light. It allows users to dynamically augment physical surfaces and objects with superimposed digital information using gestural and multi-touch interfaces. This thesis documents LuminAR's design process, hardware and software implementation, and interaction techniques. The work is motivated through a set of applications that explore scenarios for interactive and kinetic projected augmented reality interfaces. It also opens the door for further explorations of kinetic interaction and promotes the adoption of projected augmented reality as a commonplace user interface modality. This thesis work was partially supported by a research grant from Intel Corporation. By Natan Linder, S.M.

    Focus-plus-context techniques for picoprojection-based interaction

    In this paper, we report on novel zooming interface methods that deploy a small handheld projector. Using mobile projections to visualize object- or environment-related information on real objects introduces new aspects for zooming interfaces. We investigate different approaches that focus on maintaining a level of context while exploring detailed information. In doing so, we propose methods that provide alternative contextual cues within a single projector and exploit the potential of zoom lenses to support a multilevel zooming approach. Furthermore, we examine the correlation between pixel density, distance to the target, and projection size. Alongside these techniques, we report on multiple user studies in which we quantified the projection limitations and validated various interactive visualization approaches. Thereby, we focused on solving issues related to pixel density, brightness, and contrast that affect the design of more effective, legible zooming interfaces for handheld projectors.
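    To make the pixel-density correlation mentioned above concrete: for a fixed-throw projector the image width grows roughly linearly with distance, so pixel density falls off as 1/distance. The following back-of-the-envelope Python sketch uses an assumed throw ratio and resolution, not values from the paper.

    ```python
    # Illustrative numbers only: a WVGA-class pico projector with an assumed
    # throw ratio of 1.5 (throw ratio = distance / image width).
    def projection_stats(distance_m, throw_ratio=1.5, h_resolution_px=854):
        """Return (image width in m, pixel density in px/cm) at a distance."""
        width_m = distance_m / throw_ratio
        density_px_per_cm = h_resolution_px / (width_m * 100.0)
        return width_m, density_px_per_cm

    for d in (0.5, 1.0, 2.0):
        w, rho = projection_stats(d)
        print(f"{d:.1f} m: image {w:.2f} m wide, about {rho:.0f} px/cm")
    ```

    Under these assumptions, doubling the distance doubles the image width and halves the pixel density, which is the trade-off between legible detail and available context that the paper's focus-plus-context techniques address.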

    SHOW ME WHAT YOU MEAN: Gestures and drawings on physical objects as means for remote collaboration and guidance

    This thesis presents findings based on the study of remote projected interaction and guidance on physical objects. First, the results are based on the study of literature and previous research in the fields of ubiquitous computing and environments, augmented reality, and remote collaboration and guidance. Second, the results are based on findings from testing projector technology for remote interaction and guidance with users with the help of a prototype. Previous studies indicate that guidance on physical objects is seen as valuable and that, in such interaction, the focus should shift to the actual object. This thesis contributes to previous research and suggests better integration of hand gestures and drawings into remote collaboration and guidance. The projected interaction model described in this thesis enhances the feeling of togetherness between remote users (expert and novice) and provides critical help in conversational grounding during remote collaboration and guidance with physical objects.

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses.
    The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces.
    In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
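    As a minimal sketch of the relational lookup-table idea described above: a precomputed record per camera pixel ties an observed IR blob to a point on the physical surface and to a semantic region of the registered virtual content. All names, types, and the hover threshold below are assumptions; the dissertation's tables also relate projector coordinates, and its bounded plane sweep is what estimates the hover distance.

    ```python
    # Hypothetical sketch of lookup-table-based touch/hover response.
    from dataclasses import dataclass

    @dataclass
    class SurfaceRecord:
        surface_xyz: tuple[float, float, float]  # 3D point on the physical surface
        region: str                              # semantic region of virtual content

    # Built offline from camera/projector/surface calibration (assumed content).
    lookup: dict[tuple[int, int], SurfaceRecord] = {
        (240, 320): SurfaceRecord((0.02, 0.11, 0.05), "left-cheek"),
    }

    TOUCH_MM = 8.0  # assumed contact/hover cutoff

    def respond(pixel, distance_mm):
        """Classify a detected IR blob and name the semantic region it hits."""
        record = lookup.get(pixel)
        if record is None:
            return None                      # blob is off the instrumented surface
        kind = "hover" if distance_mm > TOUCH_MM else "touch"
        return kind, record.region           # caller triggers animation/audio

    print(respond((240, 320), 2.0))  # -> ('touch', 'left-cheek')
    ```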

    State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have attracted considerable interest in recent years for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness, and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their processing, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to applications in industry, cultural heritage, medicine, and criminal investigation.