
    LuminAR: Portable robotic augmented reality interface design and prototype

    In this paper we introduce LuminAR: a prototype for a new portable and compact projector-camera system designed to use the traditional incandescent bulb interface as a power source, and a robotic desk lamp that carries it, enabling it with dynamic motion capabilities. We are exploring how the LuminAR system, embodied in the familiar form factor of a classic Anglepoise lamp, may evolve into a new class of robotic, digital information devices.

    The ASPECTA toolkit: Affordable Full Coverage Displays

    Full Coverage Displays (FCDs) cover the interior surface of an entire room with pixels. FCDs make possible many new kinds of immersive display experiences - but current technology for building FCDs is expensive and complex, and software support for developing full-coverage applications is limited. To address these problems, we introduce ASPECTA, a hardware configuration and software toolkit that provide a low-cost and easy-to-use solution for creating full coverage systems. We outline ASPECTA's (minimal) hardware requirements and describe the toolkit's architecture, development API, server implementation, and configuration tool; we also provide a full example of how the toolkit can be used. We performed two evaluations of the toolkit: a case study of a research system built with ASPECTA, and a laboratory study that tested the effectiveness of the API. Our evaluations, as well as multiple examples of ASPECTA in use, show how ASPECTA can simplify configuration and development while still dramatically reducing the cost for creators of applications that take advantage of full-coverage displays.

    Method And Apparatus For Optically Digitizing A Three-dimensional Object

    An apparatus and method for digitizing an object for creating a three-dimensional digital model of the object comprises a turntable for rotating the object about a rotation axis, at least first and second light sources positioned and oriented for directing a thin sheet of light toward the object along an illumination plane substantially parallel to and substantially intersecting with the rotation axis, a first detector positioned to one side of the illumination plane and oriented for detecting light reflected along a first detection plane from the object for creating a plurality of first side contours as the object rotates, a second detector positioned to a side of the illumination plane, opposite the one side, for detecting light reflected along a second detection plane from the object for creating a plurality of second side contours as the object rotates, a third detector for capturing illumination on-axis contours in the form of a vertical straight line to derive an instantaneous color of the object's surface as a function of the height of the object, and a combining and evaluating computer for combining the first side contours, the second side contours, and the illumination on-axis contours for generating a plurality of composite contours and for evaluating the composite contours for creating a three-dimensional digital model of the object.
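    The geometric core of such a turntable scanner can be sketched as follows. Because the light sheet lies in a plane containing the rotation axis, each detected contour point reduces to a (radius, height) pair, and rotating that profile by the current turntable angle reconstructs the surface point in the object's frame. This is a minimal illustration, not the patented apparatus; the `profile` data and function names are hypothetical.

```python
import math

def contour_to_points(profile, theta_deg):
    """Map one side contour, captured with the turntable at angle
    theta_deg, into 3-D coordinates in the object's frame.

    profile -- list of (r, z) pairs: radial distance from the
               rotation axis and height, as recovered from the
               light sheet.
    """
    theta = math.radians(theta_deg)
    return [(r * math.cos(theta), r * math.sin(theta), z)
            for r, z in profile]

# Sweep a toy cylindrical profile (radius 1) through a full turn,
# accumulating the composite point cloud.
cloud = []
for angle in range(0, 360, 10):
    cloud.extend(contour_to_points([(1.0, 0.0), (1.0, 0.5)], angle))
```

Merging the first- and second-side contours captured by the two detectors would amount to running the same mapping on both profiles before combining the clouds.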

    Laser Pointer Tracking in Projector-Augmented Architectural Environments

    We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registration of projectors, and sampling of surface parameters, such as geometry and reflectivity. After these steps, it can be used for tracking a laser spot on the surface as well as an LED marker in 3D space, using interplaying fisheye context and controllable detail cameras. The captured surface information can be used for masking out areas that are critical to laser-pointer tracking, and for guiding geometric and radiometric image correction techniques that enable a projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, projector-based AR as well as video see-through AR for visualizations with the domain-specific functionality of existing desktop tools for architectural planning, simulation and building surveying.
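    The detection step at the heart of laser-spot tracking can be reduced to finding a saturated peak in the camera frame. The sketch below is a deliberately simplified stand-in for the paper's pipeline (no colour filtering, temporal coherence, or masking of critical areas); the function name and threshold are assumptions.

```python
def find_laser_spot(frame, threshold=240):
    """Locate a bright laser spot in a grayscale frame given as a
    list of pixel rows. Returns the (row, col) of the brightest
    pixel above `threshold`, or None if no pixel qualifies.
    """
    best, best_val = None, threshold
    for r, row in enumerate(frame):
        for c, val in enumerate(row):
            if val > best_val:
                best, best_val = (r, c), val
    return best

frame = [[10] * 5 for _ in range(5)]
frame[2][3] = 255              # simulated laser spot
spot = find_laser_spot(frame)  # -> (2, 3)
```

A masked region, as described in the abstract, could simply be skipped in the inner loop before the peak comparison.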

    A Precise Controllable Projection System for Projected Virtual Characters and Its Calibration

    In this paper we describe a system for projecting virtual characters that are intended to share our environment. In order to project the characters' visual representations onto room surfaces, we use a controllable projector.
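    Steering such a projector toward a point on a room surface reduces, at its simplest, to converting the target's 3-D position in the projector's frame into pan and tilt angles. This is a hedged sketch of that one geometric step, not the paper's calibration procedure; the frame convention (x right, y up, z along the rest direction) and function name are assumptions.

```python
import math

def pan_tilt_to(target):
    """Pan/tilt angles (radians) that aim a steerable projector at
    a 3-D target point expressed in the projector's own frame.
    """
    x, y, z = target
    pan = math.atan2(x, z)                  # rotation about the vertical axis
    tilt = math.atan2(y, math.hypot(x, z))  # elevation above the pan plane
    return pan, tilt

pan, tilt = pan_tilt_to((1.0, 0.0, 1.0))    # 45 degrees pan, zero tilt
```

A full system additionally needs the projector-to-room calibration the paper focuses on, so that surface points can be expressed in the projector's frame in the first place.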

    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyzes different and innovative techniques for Human Robot Interaction, with a focus on interaction with flying robots. The first part gives a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User's Flying Organizer project (UFO project). This project aims to develop a flying robot able to project information into the environment, exploiting concepts of Spatial Augmented Reality.

    The Universal Media Book

    We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images, and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
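    Warping content onto a tracked planar page is a homography problem: a 3x3 projective transform maps content coordinates to the page's corners in the projector image. A minimal direct-linear-transform sketch from four exact correspondences is shown below; the real system matches many feature points and solves in a least-squares sense, and the corner values here are made up for illustration.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 homography H with H @ [x, y, 1] ~ [u, v, 1]
    from four point correspondences (direct linear transform,
    fixing H[2,2] = 1).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, point):
    """Apply homography H to a 2-D point."""
    u, v, w = H @ np.array([point[0], point[1], 1.0])
    return u / w, v / w

# Map the unit square of a content image onto a tilted page quad.
page = [(10, 10), (110, 20), (120, 130), (5, 120)]
H = homography_from_corners([(0, 0), (1, 0), (1, 1), (0, 1)], page)
```

With `H` in hand, every content pixel can be forward-warped through `warp` (or, in practice, the page image inverse-warped) before being sent to the projector.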

    Interactive ubiquitous displays based on steerable projection

    The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed in order to enable the presentation of visual content in the physical environment. In this dissertation, we introduce the concepts of the Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of different interaction methods for manipulating this Display Continuum and the Virtual Displays placed on it.

    Gaze, Posture and Gesture Recognition to Minimize Focus Shifts for Intelligent Operating Rooms in a Collaborative Support System

    This paper describes the design of intelligent, collaborative operating rooms based on highly intuitive, natural and multimodal interaction. Intelligent operating rooms minimize the surgeon's focus shifts by minimizing both the focus spatial offset (the distance moved by the surgeon's head or gaze to the new target) and the movement spatial offset (the distance the surgeon covers physically). These spatio-temporal measures have an impact on the surgeon's performance in the operating room. I describe how machine vision techniques are used to extract spatio-temporal measures and to interact with the system, and how computer graphics techniques can be used to display visual medical information effectively and rapidly. Design considerations are discussed and examples showing the feasibility of the different approaches are presented.
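    One plausible way to quantify the focus spatial offset described above is as the angular distance between the old and new gaze directions. The sketch below is an assumption about how such a metric could be computed, not the paper's actual definition; the function name and vector convention are hypothetical.

```python
import math

def focus_shift_deg(gaze_from, gaze_to):
    """Angular focus shift (degrees) between two gaze directions,
    given as 3-D vectors from the surgeon's head to the previous
    and new attention targets.
    """
    dot = sum(a * b for a, b in zip(gaze_from, gaze_to))
    na = math.sqrt(sum(a * a for a in gaze_from))
    nb = math.sqrt(sum(b * b for b in gaze_to))
    cos = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for float safety
    return math.degrees(math.acos(cos))

shift = focus_shift_deg((0, 0, 1), (1, 0, 1))  # ~45 degree shift
```

Summing such shifts over a procedure would give one scalar proxy for how much visual refocusing a given operating-room layout imposes.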