28 research outputs found

    Comparing fiducial marker systems in the presence of occlusion

    © 2017 IEEE. A fiducial marker system is a system of unique 2D (planar) markers that are placed in an environment and detected automatically with a camera by means of a corresponding detection algorithm. Application areas of these markers include industrial systems, augmented reality, robot navigation, human-robot interaction, and others. A marker system designed for such different applications must be robust to factors such as viewing angles, occlusions, changing distances, etc. This paper compares three existing marker systems: ARTag, AprilTag, and CALTag. As a benchmark, we use their reliability and detection rate in the presence of occlusions of various types and intensities. The paper presents an experimental comparison of these markers. The marker detection was performed with a simple, inexpensive web camera.
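    The comparison hinges on detecting such markers in live camera frames. As a rough illustration (not the paper's code), the sketch below detects AprilTag 36h11 markers from a web camera using OpenCV's aruco module; it assumes OpenCV >= 4.7 with aruco support, and ARTag and CALTag would require their own detection libraries.

```python
import cv2

# Assumes OpenCV >= 4.7 with the aruco module; AprilTag 36h11 is one of the
# marker families compared in the paper (ARTag and CALTag need other libraries).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # simple, inexpensive web camera, as in the paper
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, rejected = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)  # overlay detections
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```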

    Comparing Fiducial Marker Systems Occlusion Resilience through a Robot Eye

    © 2017 IEEE. A fiducial marker system is a set of unique planar markers that are placed in an environment and automatically detected with a camera through marker-specific detection procedures. Their applications vary greatly; the most popular are industrial systems, augmented reality, and robot navigation. All these applications imply that a marker system must be robust to factors such as viewing angles, types of occlusion, and variations in distance and lighting conditions. Our paper compares the existing ARTag, AprilTag, and CALTag systems using a high-fidelity camera, which is the main vision sensor of the full-size Russian humanoid robot AR-601M. Our experimental comparison verified the reliability and detection rate of the three marker systems under occlusions of various types and intensities, and a marker system preferable for AR-601M robot applications was selected.
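    A benchmark of the kind described here can be approximated by artificially occluding a growing portion of a test image and recording whether the marker is still found. The helper below is a hypothetical sketch along those lines; it assumes a detector object with a detectMarkers method (e.g. the one in the previous sketch), and the chosen occlusion pattern (a band from the left edge) is only one of many possible occlusion types.

```python
import cv2


def detection_rate_under_occlusion(image, detector, fractions=(0.0, 0.1, 0.25, 0.5)):
    """Mask an increasing fraction of the image and record whether a marker
    is still detected. Hypothetical helper illustrating the kind of occlusion
    benchmark described in the abstract, not the authors' protocol."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    results = {}
    for f in fractions:
        occluded = gray.copy()
        # occlude a vertical band covering fraction f of the image width
        occluded[:, : int(w * f)] = 0
        corners, ids, _ = detector.detectMarkers(occluded)
        results[f] = ids is not None and len(ids) > 0
    return results
```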

    An Efficient and Robust Mobile Augmented Reality Application

    AR technology is perceived to have evolved from the foundations of Virtual Reality (VR) technology. The ultimate goal of AR is to provide better management of and ubiquitous access to information by using seamless techniques in which the interactive real world is combined with an interactive computer-generated world in one coherent environment. The direction of research in the field of AR has shifted from traditional desktop-based platforms to mobile devices such as smartphones. However, image recognition on smartphones imposes many restrictions and challenges in terms of efficiency and robustness, which are the general performance measures of image recognition. Smartphones have limited processing capabilities compared to the PC platform, hence the process of mobile AR application development and the choice of image recognition algorithms need to be emphasised. The processes of mobile AR application development include detection, description, and matching. All the processes and algorithms need to be carefully selected in order to create an efficient and robust mobile AR application. The algorithms used in this work for detection, description, and matching are AGAST, FREAK, and Hamming distance, respectively. The computation time and the robustness towards rotation, scale, and brightness changes are evaluated. The dataset used to evaluate the mobile AR application is the Mikolajczyk benchmark dataset. The results showed that the mobile AR application is efficient, with a computation time of 29.1 ms. The mobile AR application also achieved high accuracy in robustness towards scale, rotation, and brightness changes, namely 89.76%, 87.71%, and 83.87%, respectively. Hence, the combination of AGAST, FREAK, and Hamming distance is suitable for creating an efficient and robust mobile AR application.
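    A minimal sketch of the detect/describe/match pipeline named above (AGAST keypoints, FREAK descriptors, Hamming-distance matching) is shown below. It assumes OpenCV with the contrib modules installed (FREAK lives in xfeatures2d); the image file names are placeholders, and this is an illustration rather than the authors' implementation.

```python
import cv2

# Requires opencv-contrib-python (FREAK is in the xfeatures2d contrib module).
detector = cv2.AgastFeatureDetector_create()                 # AGAST corner detection
descriptor = cv2.xfeatures2d.FREAK_create()                  # FREAK binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Hamming-distance matching

# Placeholder file names standing in for a reference image and a camera frame.
img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)
kp1, des1 = descriptor.compute(img1, kp1)
kp2, des2 = descriptor.compute(img2, kp2)

matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance: {matches[0].distance if matches else 'n/a'}")
```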

    Particle filter-based camera tracker fusing marker- and feature point-based cues

    This paper presents a video-based camera tracker that combines marker-based and feature point-based cues in a particle filter framework. The framework relies on their complementary performance. Marker-based trackers can robustly recover camera position and orientation when a reference (marker) is available, but fail once the reference becomes unavailable. On the other hand, filter-based camera trackers using feature point cues can still provide predicted estimates given the previous state. However, these tend to drift and usually fail to recover when the reference reappears. Therefore, we propose a fusion in which the estimate of the filter is updated from the individual measurements of each cue. More precisely, the marker-based cue is selected when the reference is available, whereas the feature point-based cue is selected otherwise. Evaluations on real cases show that the fusion of these two approaches outperforms the individual tracking results.
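    The fusion idea can be illustrated with a toy particle filter that prefers the marker-based measurement whenever the marker is visible and otherwise falls back to the feature point-based cue. The sketch below is a simplification under assumed models (a 2D position state and a random-walk motion model), not the authors' tracker, which estimates full camera pose.

```python
import numpy as np

# Toy particle filter fusing two measurement cues (assumed 2D position state).
N = 500
particles = np.zeros((N, 2))            # particle states
weights = np.ones(N) / N                # uniform initial weights


def predict(particles, motion_noise=0.05):
    # propagate particles with a random-walk motion model (assumption)
    return particles + np.random.normal(0.0, motion_noise, particles.shape)


def update(particles, weights, measurement, meas_noise=0.1):
    # re-weight particles by the likelihood of the selected measurement
    d = np.linalg.norm(particles - measurement, axis=1)
    weights = weights * np.exp(-0.5 * (d / meas_noise) ** 2)
    weights += 1e-300                   # avoid total degeneracy
    return weights / weights.sum()


def resample(particles, weights):
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.ones(len(particles)) / len(particles)


def step(particles, weights, marker_meas, feature_meas):
    particles = predict(particles)
    # cue selection: use the marker-based measurement when the marker is visible,
    # otherwise fall back to the feature point-based measurement
    meas = marker_meas if marker_meas is not None else feature_meas
    if meas is not None:
        weights = update(particles, weights, meas)
        particles, weights = resample(particles, weights)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```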