
    Designing a marker set for vertical tangible user interfaces

    Tangible User Interfaces (TUIs) extend the domain of reality-based human-computer interaction by providing users with the ability to manipulate digital data using physical objects that embody representational significance. Whilst various advancements have been made over the past years through the development and availability of TUI toolkits, these have mostly converged towards the deployment of tabletop TUI architectures. In this context, markers used in current toolkits can only be placed underneath the tangible objects to provide recognition. Although effective in various literature studies, the limitations and challenges of deploying tabletop architectures have significantly hindered the proliferation of TUI technology due to the limited audience reach such systems can provide. Furthermore, available marker sets restrict the placement and use of tangible objects: if placed on top of a tangible object, the marker interferes with the object's shape and texture, limiting the effect the TUI has on the end-user. To this end, this paper proposes the design and development of an innovative tangible marker set specifically designed for the development of vertical TUIs. The proposed marker set design was optimized through a genetic algorithm to ensure robustness in scale invariance, the capability of being successfully detected at distances of up to 3.5 meters, and a true occlusion resistance of up to 25%, where the marker is recognized and not tracked. Open-source versions of the marker set are provided under a research license at www.geoffslab.com/tangiboard_marker_set.
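    A minimal sketch of how a genetic algorithm can evolve a binary marker pattern is given below. The grid size, operators, and the fitness function (which only rewards bit balance and local contrast as crude stand-ins for detectability) are illustrative assumptions, not the optimization actually used for the proposed marker set.

```python
# Sketch: evolving a binary marker grid with a simple genetic algorithm.
# The fitness is a stand-in; the real criteria (scale invariance, 3.5 m
# detection range, 25% occlusion resistance) would need a full detection
# simulation, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
GRID = 6            # marker resolution (assumed)
POP, GENS = 60, 200

def fitness(marker: np.ndarray) -> float:
    balance = 1.0 - abs(marker.mean() - 0.5) * 2       # near 50% black/white
    # reward frequent black/white transitions (penalizes large uniform blocks)
    row_runs = np.abs(np.diff(marker, axis=0)).mean()
    col_runs = np.abs(np.diff(marker, axis=1)).mean()
    return balance + row_runs + col_runs

def mutate(marker, rate=0.05):
    flips = rng.random(marker.shape) < rate
    return np.where(flips, 1 - marker, marker)

def crossover(a, b):
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

pop = [rng.integers(0, 2, (GRID, GRID)) for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[: POP // 2]                        # keep the better half
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(pop, key=fitness))
```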

    Mobile Motion Capture

    As augmented reality becomes a major research interest in robotics for communicating data, it is increasingly important that its localization challenges be addressed. This project adds an alternative tracking and localization solution using the Google Project Tango device. Our goal was to replace the typical motion capture lab with a mobile system that has a theoretically infinite capture volume. We accomplished this using various image processing techniques and robotic software tools. Benchmark testing showed that our system could track to within 3.2 degrees in orientation and 4 cm in position. Finally, we implemented a robot-following application based on this system that also incorporates a pan-tilt turret for the camera, all mounted on a mobile robot.
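    The following sketch illustrates the kind of following behavior described above: a pan-tilt turret is steered toward a tracked target while the base maintains a standoff distance. The gains, frame conventions, and standoff distance are assumptions; the actual system builds on Tango-derived tracking and ROS tooling.

```python
# Sketch: proportional following of a tracked target with a pan-tilt camera.
import numpy as np

STANDOFF = 1.5        # desired following distance in meters (assumed)
K_LIN, K_ANG = 0.8, 1.2

def follow_step(target_xyz):
    """target_xyz: target position in the robot base frame (meters)."""
    x, y, z = target_xyz
    dist = np.hypot(x, y)
    pan = np.arctan2(y, x)               # rotate turret toward the target
    tilt = np.arctan2(z, dist)           # raise or lower the camera
    lin_vel = K_LIN * (dist - STANDOFF)  # close the gap to the standoff
    ang_vel = K_ANG * pan                # turn the base toward the target
    return pan, tilt, lin_vel, ang_vel

print(follow_step((2.5, 0.4, -0.1)))
```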

    Accurate 3D-reconstruction and -navigation for high-precision minimal-invasive interventions

    Current lateral skull base surgery is largely invasive, since it requires wide exposure and direct visualization of anatomical landmarks to avoid damaging critical structures. A multi-port approach aiming to reduce this invasiveness has recently been investigated: three canals are drilled from the skull surface to the surgical region of interest, the first for the instrument, the second for the endoscope, and the third for material removal or an additional instrument. The transition to minimally invasive approaches in lateral skull base surgery requires sub-millimeter accuracy and high outcome predictability, which imposes high requirements on image acquisition as well as on navigation. Computed tomography (CT) is a non-invasive imaging technique allowing the visualization of internal patient organs. Planning optimal drill channels based on patient-specific models requires highly accurate three-dimensional (3D) CT images. This thesis focuses on the reconstruction of high-quality CT volumes. Two conventional imaging systems are investigated: spiral CT scanners and C-arm cone-beam CT (CBCT) systems. Spiral CT scanners acquire volumes with typically anisotropic resolution, i.e. the voxel spacing in the slice-selection direction is larger than the in-plane spacing. A new super-resolution reconstruction approach is proposed to recover images with high isotropic resolution from two orthogonal low-resolution CT volumes. C-arm CBCT systems offer CT-like 3D imaging capabilities while being appropriate for interventional suites. A main drawback of these systems is the CT artifacts commonly encountered due to several limitations of the imaging system, such as mechanical inaccuracies. This thesis contributes new methods to enhance CBCT reconstruction quality by addressing two main reconstruction artifacts: misalignment artifacts caused by mechanical inaccuracies, and metal artifacts caused by the presence of metal objects in the scanned region. CBCT scanners are appropriate for intra-operative image-guided navigation; for instance, they can be used to control the drilling process based on intra-operatively acquired 2D fluoroscopic images. For successful navigation, an accurate estimate of the C-arm pose relative to the patient anatomy and the associated surgical plan is required. A new algorithm has been developed to fulfill this task with high precision. The performance of the introduced methods is demonstrated on simulated and real data.
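    As a rough illustration of combining two orthogonal anisotropic CT volumes, the sketch below simply resamples both onto a common isotropic grid and averages them. This naive fusion is only a baseline for intuition; it does not reproduce the super-resolution reconstruction proposed in the thesis, and the spacings and volume shapes are invented for the example.

```python
# Sketch: naive fusion of two orthogonal anisotropic volumes on an isotropic grid.
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(vol, spacing, target=1.0, order=1):
    """Resample a volume with per-axis spacing (mm) to isotropic voxels."""
    factors = [s / target for s in spacing]
    return zoom(vol, factors, order=order)

# two scans of the same anatomy, each coarse along a different axis (assumed)
axial = np.random.rand(40, 256, 256)       # spacing (3.0, 1.0, 1.0) mm
coronal = np.random.rand(256, 40, 256)     # spacing (1.0, 3.0, 1.0) mm

iso_a = to_isotropic(axial, (3.0, 1.0, 1.0))
iso_c = to_isotropic(coronal, (1.0, 3.0, 1.0))

# crop to a common shape before fusing (registration assumed already done)
shape = np.minimum(iso_a.shape, iso_c.shape)
fused = 0.5 * (iso_a[:shape[0], :shape[1], :shape[2]] +
               iso_c[:shape[0], :shape[1], :shape[2]])
print(fused.shape)
```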

    Progress toward multi‐robot reconnaissance and the MAGIC 2010 competition

    Tasks like search‐and‐rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human‐robot interfaces. This paper describes our 14‐robot team, which won the MAGIC 2010 competition. It was designed to perform urban reconnaissance missions. In the paper, we describe a variety of autonomous systems that require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, which is essential for autonomous planning and for giving humans situational awareness, required the development of fast loop‐closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. We will describe technical contributions throughout our system that played a significant role in its performance. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain. © 2012 Wiley Periodicals, Inc. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/93532/1/21426_ftp.pd
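    A minimal sketch of the decoupled idea described above: a central planner assigns exploration tasks to robots, which then execute them myopically. The greedy nearest-frontier rule and the planar cost model are simplifications assumed for illustration, not the team's actual allocator or map pipeline.

```python
# Sketch: central greedy assignment of frontier tasks to robots.
import math

def assign_tasks(robot_positions, frontiers):
    """Each robot gets its nearest unclaimed frontier (greedy, central)."""
    assignments = {}
    remaining = list(frontiers)
    for rid, (rx, ry) in robot_positions.items():
        if not remaining:
            break
        best = min(remaining, key=lambda f: math.hypot(f[0] - rx, f[1] - ry))
        assignments[rid] = best        # robot executes this task on its own
        remaining.remove(best)
    return assignments

robots = {"r1": (0.0, 0.0), "r2": (10.0, 2.0), "r3": (4.0, 8.0)}
frontiers = [(1.0, 1.0), (9.0, 3.0), (5.0, 9.0), (20.0, 20.0)]
print(assign_tasks(robots, frontiers))
```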

    Spatio-Temporal Registration in Augmented Reality

    The overarching goal of Augmented Reality (AR) is to provide users with the illusion that virtual and real objects coexist indistinguishably in the same space. An effective persistent illusion requires accurate registration between the real and the virtual objects, registration that is spatially and temporally coherent. However, visible misregistration can be caused by many inherent error sources, such as errors in calibration, tracking, and modeling, and system delay. This dissertation focuses on new methods that could be considered part of "the last mile" of spatio-temporal registration in AR: closed-loop spatial registration and low-latency temporal registration. (1) For spatial registration, the primary insight is that calibration, tracking, and modeling are means to an end; the ultimate goal is registration. In this spirit I present a novel pixel-wise closed-loop registration approach that can automatically minimize registration errors using a reference model comprising the real scene model and the desired virtual augmentations. Registration errors are minimized both in global world space, via camera pose refinement, and in local screen space, via pixel-wise adjustments. This approach is presented in the context of Video See-Through AR (VST-AR) and projector-based Spatial AR (SAR), where registration results are measurable using a commodity color camera. (2) For temporal registration, the primary insight is that the real-virtual relationships are evolving throughout the tracking, rendering, scanout, and display steps, and registration can be improved by leveraging fine-grained processing and display mechanisms. In this spirit I introduce a general end-to-end system pipeline with low latency, and propose an algorithm for minimizing latency in displays (DLP DMD projectors in particular). This approach is presented in the context of Optical See-Through AR (OST-AR), where system delay is the most detrimental source of error. I also discuss future steps that may further improve spatio-temporal registration. In particular, I discuss possibilities for using custom virtual or physical-virtual fiducials for closed-loop registration in SAR. The custom fiducials can be designed to elicit desirable optical signals that directly indicate any error in the relative pose between the physical and projected virtual objects.
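    The "registration is the goal" insight can be illustrated with a toy pose refinement that minimizes reprojection error between projected model points and their observed image locations. The dissertation's method operates pixel-wise on rendered versus captured imagery; this reprojection-based version is only a simplified stand-in, and the camera model and point set are invented for the example.

```python
# Sketch: refine a camera pose so projected model points match observations.
import numpy as np
from scipy.optimize import least_squares

def project(points, pose, f=800.0, c=(320.0, 240.0)):
    """Pinhole projection; pose = (rz, tx, ty, tz), rotation about Z only."""
    rz, tx, ty, tz = pose
    R = np.array([[np.cos(rz), -np.sin(rz), 0],
                  [np.sin(rz),  np.cos(rz), 0],
                  [0, 0, 1]])
    p = points @ R.T + np.array([tx, ty, tz])
    return np.column_stack((f * p[:, 0] / p[:, 2] + c[0],
                            f * p[:, 1] / p[:, 2] + c[1]))

model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                  [0.0, 0.1, 0.0], [0.1, 0.1, 0.05]])
true_pose = np.array([0.05, 0.02, -0.01, 1.0])
observed = project(model, true_pose)           # what the camera "sees"

def residuals(pose):
    return (project(model, pose) - observed).ravel()

initial = np.array([0.0, 0.0, 0.0, 0.9])       # drifting tracker estimate
refined = least_squares(residuals, initial).x
print(refined)                                  # close to true_pose
```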

    A Survey on Augmented Reality Challenges and Tracking

    This survey paper presents a classification of the challenges and tracking techniques in the field of augmented reality. The challenges are categorized into performance, alignment, interaction, mobility/portability, and visualization challenges. Augmented reality tracking techniques are mainly divided into sensor-based tracking, vision-based tracking, and hybrid tracking. Sensor-based tracking is further divided into optical, magnetic, acoustic, and inertial tracking, or any combination of these to form hybrid sensor tracking. Similarly, vision-based tracking is divided into marker-based tracking and markerless tracking. Each tracking technique has its advantages and limitations. Hybrid tracking provides robust and accurate tracking, but it involves financial and technical difficulties.
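    For reference, the survey's classification can be written down as a plain data structure; the grouping below simply mirrors the abstract and adds nothing to it.

```python
# The survey's taxonomy of AR challenges and tracking techniques.
AR_CHALLENGES = [
    "performance", "alignment", "interaction",
    "mobility/portability", "visualization",
]

AR_TRACKING = {
    "sensor-based": ["optical", "magnetic", "acoustic", "inertial",
                     "hybrid sensor combinations"],
    "vision-based": ["marker-based", "markerless"],
    "hybrid": ["sensor-based + vision-based"],
}

for family, techniques in AR_TRACKING.items():
    print(f"{family}: {', '.join(techniques)}")
```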

    Search Methods for Mobile Manipulator Performance Measurement

    Mobile manipulators are a potential solution to the increasing need for additional flexibility and mobility in industrial robotics applications. However, they tend to lack the accuracy and precision achieved by fixed manipulators, especially in scenarios where both the manipulator and the autonomous vehicle move simultaneously. This thesis analyzes the problem of dynamically evaluating the positioning error of mobile manipulators. In particular, it investigates the use of Bayesian methods to predict the position of the end-effector in the presence of uncertainty propagated from the mobile platform. Simulations and real-world experiments were carried out to test the proposed method against a deterministic approach, on two mobile manipulators (a proof-of-concept research platform and an industrial mobile manipulator) using ROS and Gazebo. The precision of the mobile manipulator is evaluated through its ability to intercept retroreflective markers using a photoelectric sensor attached to the end-effector. Compared to the deterministic search approach, we observed improved interception capability with comparable search times, thereby enabling effective performance measurement of the mobile manipulator.
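    A small sketch of propagating platform uncertainty to the end-effector is shown below: noisy planar base poses are sampled and a fixed tool offset is pushed through each, yielding a predicted end-effector distribution. The noise levels, planar model, and offset are illustrative assumptions rather than the thesis's Bayesian formulation.

```python
# Sketch: Monte Carlo propagation of base-pose uncertainty to the end-effector.
import numpy as np

rng = np.random.default_rng(1)
N = 5000

# nominal base pose (x, y, heading) and its uncertainty (assumed)
base_mean = np.array([2.0, 1.0, 0.3])
base_std = np.array([0.02, 0.02, 0.01])

tool_offset = np.array([0.6, 0.1])   # end-effector offset in the base frame

samples = base_mean + rng.normal(0.0, base_std, size=(N, 3))
cos_t, sin_t = np.cos(samples[:, 2]), np.sin(samples[:, 2])
ee_x = samples[:, 0] + cos_t * tool_offset[0] - sin_t * tool_offset[1]
ee_y = samples[:, 1] + sin_t * tool_offset[0] + cos_t * tool_offset[1]

ee = np.column_stack((ee_x, ee_y))
print("predicted end-effector position:", ee.mean(axis=0))
print("covariance:\n", np.cov(ee.T))
```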
