
    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems lack correct illumination modeling, in which virtual objects would be rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the process is mostly accomplished offline. In our approach, illumination information extracted from the physical scene is used to render the virtual objects interactively, resulting in more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of direct illumination (incident light) from the physical scene using computer vision techniques, through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto the virtual objects) using region capture of a 2D texture from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics using a shader language over multiple passes. Finally, we tested our work under multiple lighting conditions, evaluating accuracy by whether the shadows cast by the virtual objects remain consistent with those cast by real objects, at a reduced performance cost.
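
    The abstract does not give implementation details for the direct-illumination step, so the following is only a minimal sketch of one common approach: locating the brightest region of an equirectangular 360° frame with OpenCV and converting its pixel coordinates into a world-space light direction. The function name, blur kernel size, and the equirectangular mapping are illustrative assumptions, not the paper's actual method.

        import cv2
        import numpy as np

        def dominant_light_direction(equirect_bgr):
            """Estimate a world-space direction for the brightest light
            source visible in one equirectangular 360-degree frame."""
            gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY)
            # Blur first so a single saturated pixel cannot dominate.
            blurred = cv2.GaussianBlur(gray, (31, 31), 0)
            _, _, _, (x, y) = cv2.minMaxLoc(blurred)
            h, w = gray.shape
            # Equirectangular mapping: column -> longitude, row -> latitude.
            lon = (x / w) * 2.0 * np.pi - np.pi
            lat = np.pi / 2.0 - (y / h) * np.pi
            # Unit vector (Y up) pointing toward the light source.
            return np.array([np.cos(lat) * np.cos(lon),
                             np.sin(lat),
                             np.cos(lat) * np.sin(lon)])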

    A Collaborative Augmented Reality Framework Based on Distributed Visual SLAM

    Visual Simultaneous Localization and Mapping (SLAM) has been used for markerless tracking in augmented reality applications. Distributed SLAM helps multiple agents collaboratively explore and build a global map of the environment while estimating their locations in it. One of the main challenges in distributed SLAM is identifying local map overlaps between agents, especially when their initial relative positions are not known. We developed a collaborative AR framework with freely moving agents that have no knowledge of their initial relative positions. Each agent in our framework uses a camera as the only input device for its SLAM process. Furthermore, the framework identifies map overlaps between agents using an appearance-based method.
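
    As a rough illustration of appearance-based overlap detection, the sketch below tests whether two agents' keyframes see the same scene region by matching ORB features with Lowe's ratio test. The thresholds and the use of plain pairwise matching (rather than the framework's actual method, which the abstract does not detail) are assumptions.

        import cv2

        def frames_overlap(img_a, img_b, min_matches=40, ratio=0.75):
            """Appearance-based test of whether two agents' keyframes
            view the same part of the scene."""
            orb = cv2.ORB_create(nfeatures=1000)
            _, des_a = orb.detectAndCompute(img_a, None)
            _, des_b = orb.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                return False
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
            pairs = matcher.knnMatch(des_a, des_b, k=2)
            # Lowe's ratio test rejects ambiguous correspondences.
            good = [p for p in pairs
                    if len(p) == 2 and p[0].distance < ratio * p[1].distance]
            return len(good) >= min_matches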

    An Experimental Distributed Framework for Distributed Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is widely used in applications such as rescue, navigation, semantic mapping, augmented reality, and home entertainment. Most of these applications would perform better if multiple devices were used in a distributed setting. Distributed SLAM research would benefit from a framework in which the complexities of network communication are already handled. In this paper we introduce such a framework, built on the open-source Robot Operating System (ROS) and the VirtualBox virtualization software. Furthermore, we describe a way to measure the communication statistics of the distributed SLAM system.
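
    The abstract does not say how the communication statistics are gathered, but with rospy one plausible approach is to subscribe with the type-agnostic AnyMsg and count serialized bytes, much as the rostopic bw tool does. The topic name is hypothetical, and reading the private _buff attribute is an implementation detail of rospy's AnyMsg.

        import rospy

        class TopicBandwidthMonitor:
            """Count serialized bytes on one topic, in the spirit of the
            `rostopic bw` command-line tool."""

            def __init__(self, topic):
                self.bytes_received = 0
                self.start = rospy.get_time()
                # AnyMsg delivers messages without deserializing them.
                rospy.Subscriber(topic, rospy.AnyMsg, self._on_message)

            def _on_message(self, msg):
                self.bytes_received += len(msg._buff)  # raw serialized bytes

            def bytes_per_second(self):
                elapsed = rospy.get_time() - self.start
                return self.bytes_received / elapsed if elapsed > 0 else 0.0

        if __name__ == "__main__":
            rospy.init_node("bw_monitor")
            mon = TopicBandwidthMonitor("/camera/image_raw")  # hypothetical topic
            rospy.sleep(10.0)
            print("approx. bandwidth: %.1f bytes/s" % mon.bytes_per_second())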

    Distributed Monocular SLAM for Indoor Map Building

    Utilization and generation of indoor maps are critical elements of accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras have led to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, requiring a distributed computational framework. Each agent can generate its own local map, and these maps can then be combined into a map covering a larger area. By doing so, the agents can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making the framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. In this paper, we propose a system of multiple monocular agents, with unknown relative starting positions, that generates a semi-dense global map of the environment.
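
    Once an overlap between two local maps is identified and a relative transform recovered, combining the maps reduces to re-expressing one point cloud in the other's frame. The sketch below assumes the transform is already known and rigid; a real monocular system would also need to resolve the scale ambiguity (a Sim(3) rather than SE(3) alignment). Function and variable names are illustrative.

        import numpy as np

        def merge_local_maps(points_a, points_b, T_ab):
            """Fuse two agents' local point clouds into one global map.

            points_a, points_b: (N, 3) map points in each agent's own frame.
            T_ab: 4x4 homogeneous transform taking agent B's frame into
                  agent A's frame, recovered after an overlap is found.
            """
            homog_b = np.hstack([points_b, np.ones((len(points_b), 1))])
            points_b_in_a = (T_ab @ homog_b.T).T[:, :3]
            # Agent A's frame doubles as the global frame here.
            return np.vstack([points_a, points_b_in_a])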

    Distributed monocular visual SLAM as a basis for a collaborative augmented reality framework

    Visual Simultaneous Localization and Mapping (SLAM) has been used for markerless tracking in augmented reality applications. Distributed SLAM helps multiple agents collaboratively explore and build a global map of the environment while estimating their locations in it. One of the main challenges in distributed SLAM is identifying local map overlaps between agents, especially when their initial relative positions are not known. We developed a collaborative AR framework with freely moving agents that have no knowledge of their initial relative positions. Each agent in our framework uses a camera as the only input device for its SLAM process. Furthermore, the framework identifies map overlaps between agents using an appearance-based method. We also propose a quality measure to determine the best keypoint detector/descriptor combination for our framework.
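
    The abstract does not define the proposed quality measure, so the sketch below shows one plausible stand-in: scoring each detector/descriptor combination by the fraction of matches consistent with a RANSAC-estimated homography between two overlapping views. The candidate detectors, thresholds, and test-image paths are assumptions.

        import cv2
        import numpy as np

        def inlier_ratio(detector, img_a, img_b):
            """Score a detector/descriptor combination by the fraction of
            matches consistent with a RANSAC-estimated homography."""
            kp_a, des_a = detector.detectAndCompute(img_a, None)
            kp_b, des_b = detector.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                return 0.0
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_a, des_b)
            if len(matches) < 4:
                return 0.0
            pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
            pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
            _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
            return float(mask.sum()) / len(matches) if mask is not None else 0.0

        img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)  # overlapping pair
        # ORB and AKAZE both emit binary descriptors (Hamming distance).
        for name, det in [("ORB", cv2.ORB_create()), ("AKAZE", cv2.AKAZE_create())]:
            print(name, inlier_ratio(det, img_a, img_b))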

    e-DOTS: An Indoor Tracking Solution

    Poster abstract. Accurately tracking an object as it moves through a large indoor area is attractive due to its applicability to a wide range of domains. For example, a typical healthcare setup may benefit from tracking its assets, such as specialized equipment, in real time and thus optimize their usage. Existing techniques, such as GPS, that focus on outdoor tracking do not provide accurate location estimates within the confines of an indoor setup. Prevalent approaches to indoor tracking primarily rely on a homogeneous type of sensor when estimating an object's location. Such a homogeneous view is neither beneficial nor sufficient due to the specific characteristics of any single sensor type. This research aims to create a distributed tracking system composed of many different kinds of inexpensive, off-the-shelf sensors to address this challenge. Specifically, the proposed system, called the Enhanced Distributed Object Tracking System (e-DOTS), incorporates sensors such as web cameras, publicly available wireless access points, and inexpensive RFID tracking tags to achieve accurate tracking over a large indoor area in real time. As an object, in addition to moving in a known indoor setup, may move through an unknown confined area, e-DOTS needs to discover available sensors opportunistically, select a proper subset of them, and fuse their readings in real time to achieve an accurate estimate of the object's current position. A preliminary prototype of e-DOTS has been created and experimented with. The results of these validations are promising and suggest that e-DOTS can achieve its desired goals. Further research aims at incorporating different kinds of sensors, different fusion techniques (e.g., federated Kalman filtering), and various discovery mechanisms to improve tracking accuracy and the associated response time.
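
    Federated Kalman filtering is named only as future work, but a single centralized Kalman filter already illustrates how readings from heterogeneous sensors can be fused by weighting each with its own measurement noise. The constant-velocity model and all noise values below are illustrative choices, not e-DOTS internals.

        import numpy as np

        class PositionKalmanFilter:
            """Constant-velocity Kalman filter fusing 2-D position fixes
            from heterogeneous sensors, each with its own noise level."""

            def __init__(self, dt=0.1):
                self.x = np.zeros(4)               # state: [px, py, vx, vy]
                self.P = np.eye(4) * 10.0          # state covariance
                self.F = np.eye(4)                 # constant-velocity model
                self.F[0, 2] = self.F[1, 3] = dt
                self.Q = np.eye(4) * 0.01          # process noise (illustrative)
                self.H = np.eye(2, 4)              # we observe position only

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q

            def update(self, z, sensor_std):
                # Less accurate sensors (e.g. Wi-Fi vs. camera) get a larger
                # R, so their readings pull the estimate less strongly.
                R = np.eye(2) * sensor_std ** 2
                y = z - self.H @ self.x
                S = self.H @ self.P @ self.H.T + R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P

        kf = PositionKalmanFilter()
        kf.predict()
        kf.update(np.array([2.0, 3.1]), sensor_std=0.2)   # camera fix
        kf.update(np.array([2.4, 2.8]), sensor_std=1.5)   # coarse Wi-Fi fix
        print(kf.x[:2])                                   # fused position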

    Spatial calibration of an optical see-through head-mounted display

    We present a method for calibrating an optical see-through head-mounted display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and of features in the HMD display, we can exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre, and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates of the HMD geometry.
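
    The photogrammetric machinery the paper leans on is standard and exposed directly by OpenCV. The sketch below recovers camera intrinsics and a re-projection error from checkerboard captures; the board size, file names, and capture count are hypothetical, and calibrating the HMD itself additionally requires imaging features shown in the display, which is beyond this fragment.

        import cv2
        import numpy as np

        pattern = (9, 6)                      # inner corners of the board
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points, size = [], [], None
        for i in range(20):                   # hypothetical capture set
            img = cv2.imread("capture_%02d.png" % i, cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            size = img.shape[::-1]
            found, corners = cv2.findChessboardCorners(img, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # Recovers the intrinsics (focal length, optic centre) plus per-view
        # extrinsics; rms is the re-projection error reported as low above.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, size, None, None)
        print("re-projection error:", rms)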

    A distributed framework for monocular visual SLAM

    In distributed Simultaneous Localization and Mapping (SLAM), multiple agents generate a global map of the environment while each performs its own local SLAM operation. One of the main challenges is identifying overlapping maps, especially when the agents do not know their relative starting positions. In this paper we introduce a distributed framework that uses an appearance-based method to identify map overlaps. Our framework generates a global semi-dense map using multiple monocular visual SLAM agents, each localizing itself in this map.
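
    One common family of appearance-based overlap methods is bag-of-visual-words place recognition; whether this framework uses it is not stated, so the following is only an illustration of the idea: cluster ORB descriptors into a small vocabulary, describe each keyframe as a word histogram, and flag keyframe pairs with high histogram similarity as candidate overlaps. The vocabulary size and similarity measure are arbitrary choices.

        import cv2
        import numpy as np

        def build_vocabulary(images, k=64):
            """Cluster ORB descriptors from training frames into k words."""
            orb = cv2.ORB_create()
            descs = [orb.detectAndCompute(img, None)[1] for img in images]
            data = np.vstack([d for d in descs if d is not None]).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, _, centers = cv2.kmeans(data, k, None, criteria, 3,
                                       cv2.KMEANS_PP_CENTERS)
            return centers

        def bow_histogram(img, vocab):
            """Describe a keyframe as a normalized visual-word histogram."""
            _, des = cv2.ORB_create().detectAndCompute(img, None)
            if des is None:
                return np.zeros(len(vocab), np.float32)
            d = np.linalg.norm(des.astype(np.float32)[:, None] - vocab[None],
                               axis=2)
            hist = np.bincount(d.argmin(axis=1), minlength=len(vocab))
            return (hist / hist.sum()).astype(np.float32)

        def similarity(h1, h2):
            """Cosine similarity; values near 1 flag candidate overlaps."""
            return float(h1 @ h2 /
                         (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-9))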

    Poster: Infusing Trust in Indoor Tracking

    An indoor tracking system is inherently an asynchronous, distributed system that contains various types of events (e.g., detection, selection, and fusion). One of the key challenges in indoor tracking is the efficient selection and arrangement of sensor devices in the environment. Selecting the "right" subset of these sensors for tracking an object as it traverses an indoor environment is a necessary precondition for accurate indoor tracking. With the recent proliferation of mobile devices, specifically those with many onboard sensors, this challenge has increased in both complexity and scale. One can no longer assume that the sensor infrastructure is static; rather, indoor tracking systems must consider and properly plan for a wide variety of sensors, both static and mobile. In such a dynamic setup, sensors need to be selected using an opportunistic approach. This opportunistic tracking opens a new dimension of indoor tracking that was previously infeasible or impractical due to the logistic or financial constraints of most organizations. In this paper, we propose a selection technique that uses trust, as manifested by a quality-of-service (QoS) feature, accuracy, in a sensor selection function. We first outline how sensors are classified in a dynamic manner, then how accuracy can be discerned from this classification to identify the trust of a tracking sensor, and finally how this information improves the sensor selection process. We conclude with a discussion of results from an implementation on a prototype indoor tracking system, demonstrating the overall effectiveness of this selection technique.
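
    The selection function itself is not spelled out in the abstract; a minimal sketch, assuming trust is read directly off the accuracy QoS feature, might rank the in-range sensors and keep the top k. The Sensor fields, the example pool, and the cutoff are all illustrative.

        from dataclasses import dataclass

        @dataclass
        class Sensor:
            name: str
            kind: str          # e.g. "camera", "wifi", "rfid"
            accuracy: float    # observed accuracy in [0, 1] (the QoS feature)
            in_range: bool     # can this sensor currently see the object?

        def select_sensors(sensors, k=3):
            """Keep the k most trusted sensors able to observe the object,
            where trust is read directly off the accuracy QoS feature."""
            candidates = [s for s in sensors if s.in_range]
            return sorted(candidates, key=lambda s: s.accuracy, reverse=True)[:k]

        pool = [Sensor("cam-1", "camera", 0.92, True),
                Sensor("ap-7", "wifi", 0.64, True),
                Sensor("rfid-3", "rfid", 0.80, False),   # out of range: skipped
                Sensor("cam-2", "camera", 0.75, True)]
        print([s.name for s in select_sensors(pool, k=2)])  # ['cam-1', 'cam-2']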

    Smart Video Systems in Police Cars

    Poster abstract. The use of video cameras in police cars has been found to have significant value, and the number of installed systems has been increasing. In addition to recording the events of routine traffic stops for later use in legal settings, in-car video can be analyzed in real time or near real time to detect critical events and notify police headquarters for help. This poster presents methods for detecting critical events in such police-car videos. The specific critical events are a person running out of a stopped car and an officer falling down while approaching a stopped car. In both situations, the aim is to alert the control center immediately so that backup can be dispatched, especially in the latter case, when the officer may be incapacitated. To implement real-time video processing that can generate a quick response without employing complex, slow, and brittle video processing algorithms, we use a reduced spatiotemporal representation (a 1D projection profile) and a Hidden Markov Model to detect these events. The methods were tested on many video shots under various environmental and illumination conditions.
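
    A minimal sketch of the two ingredients named here, assuming OpenCV for the projection profiles and the third-party hmmlearn package for the HMMs: each frame is reduced to a 1-D column profile of inter-frame motion, and one Gaussian HMM per event class scores a clip, with the higher likelihood winning. File names, state counts, and the two-class setup are illustrative; a deployed system would also model a "no event" class.

        import cv2
        import numpy as np
        from hmmlearn import hmm   # assumed third-party dependency

        def projection_profiles(video_path):
            """Reduce each frame to a 1-D column profile of inter-frame
            motion, a compact spatiotemporal representation."""
            cap, prev, profiles = cv2.VideoCapture(video_path), None, []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if prev is not None:
                    motion = cv2.absdiff(gray, prev)
                    # Summing over rows yields one value per image column.
                    profiles.append(motion.sum(axis=0).astype(np.float64))
                prev = gray
            cap.release()
            return np.array(profiles)

        # One HMM per critical event; a clip is labeled by the model that
        # assigns it the higher likelihood. Training clips are hypothetical.
        running_model = hmm.GaussianHMM(n_components=4)
        falling_model = hmm.GaussianHMM(n_components=4)
        running_model.fit(projection_profiles("running_example.avi"))
        falling_model.fit(projection_profiles("falling_example.avi"))

        clip = projection_profiles("traffic_stop.avi")
        label = ("running" if running_model.score(clip) >
                 falling_model.score(clip) else "falling")
        print(label)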