
    Low-latency vision-based fiducial detection and localisation for object tracking

    Real-time vision systems are widely used in the construction and manufacturing industries. A significant proportion of such systems' computational resources is spent on fiducial identification and localisation for motion tracking of moving targets. The requirement is to localise a pattern in an image captured by the vision system precisely, accurately, and within minimal computation time. To this end, this paper presents a class of patterns and proposes a corresponding detection algorithm. The patterns are designed using circular patches of concentric circles to increase the probability of detection and reduce cases of false detection. In the detection algorithm, the image captured by the vision system is first scaled down for computationally efficient processing. The scaled image is then filtered to retain only the colour components that make up the outer circular patches of the proposed pattern. A blob detection algorithm is then applied to identify the inner circular patches, which are localised in the image using the colour information obtained. Finally, the localised pattern, together with the camera matrix and distortion coefficients of the vision system, is passed to a Perspective-n-Point (PnP) solver to estimate the marker's orientation and position in the global coordinate system. Our system shows a significant improvement in fiducial detection and identification performance and achieves the required latency of less than ten milliseconds. It can therefore be used for infrastructure monitoring in many applications that involve high-speed real-time vision systems.
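
    The described pipeline maps naturally onto off-the-shelf vision primitives. Below is a minimal sketch in Python with OpenCV; the colour bounds, scale factor, blob parameters, and marker geometry are illustrative assumptions, not the paper's actual values.

    import cv2
    import numpy as np

    def detect_marker(frame, camera_matrix, dist_coeffs, object_points):
        # Step 1: scale the image down for cheaper processing.
        scale = 0.5
        small = cv2.resize(frame, None, fx=scale, fy=scale)

        # Step 2: keep only the colour of the outer circular patches
        # (a hypothetical red band in HSV space).
        hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))

        # Step 3: blob detection to identify the inner circular patches.
        params = cv2.SimpleBlobDetector_Params()
        params.filterByColor = True
        params.blobColor = 255          # blobs are white in the binary mask
        params.filterByCircularity = True
        params.minCircularity = 0.7
        keypoints = cv2.SimpleBlobDetector_create(params).detect(mask)
        if len(keypoints) < 4:
            return None

        # Step 4: map blob centres back to full resolution and solve PnP.
        # Point ordering against object_points is assumed resolved by the
        # marker's colour coding; it is simplified away here.
        image_points = np.array([kp.pt for kp in keypoints[:4]],
                                dtype=np.float32) / scale
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None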

    Automatic camera pose initialization, using scale, rotation and luminance invariant natural feature tracking

    Solving the camera registration and tracking problem serves Augmented Reality by enhancing the user's cognitive perception of the real world and his/her situational awareness. By analyzing the five most representative tracking and feature detection techniques, we have concluded that the Camera Pose Initialization (CPI) problem, a relevant sub-problem of the overall camera tracking problem, is still far from being solved using straightforward and non-intrusive methods. The assessed techniques often rely on user input (e.g. mouse clicks) or auxiliary artifacts (e.g. fiducial markers) to solve the CPI problem. This paper presents a novel approach to real-time scale-, rotation- and luminance-invariant natural feature tracking that solves the CPI problem using fully automatic procedures. The technique applies to planar objects with arbitrary topologies and natural textures, and can be used in Augmented Reality. We also present a heuristic method for feature clustering, which has proven to be efficient and reliable. The presented work uses this novel feature detection technique as the baseline for a real-time and robust planar texture tracking algorithm that combines optical flow, backprojection and template matching techniques. The paper also presents performance and precision results for the proposed technique.
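
    As an illustration of fully automatic CPI on a planar textured object, the sketch below matches ORB features (scale- and rotation-invariant, used here as a stand-in for the paper's own luminance-invariant detector) against a reference image and recovers the pose with robust PnP; plane_w and plane_h are assumed physical dimensions of the planar object.

    import cv2
    import numpy as np

    def initialize_pose(reference_img, frame, camera_matrix, dist_coeffs,
                        plane_w, plane_h):
        # Detect and describe natural features in both images.
        orb = cv2.ORB_create(nfeatures=1000)
        kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
        kp_frm, des_frm = orb.detectAndCompute(frame, None)
        if des_ref is None or des_frm is None:
            return None

        # Match descriptors and keep the strongest correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_ref, des_frm),
                         key=lambda m: m.distance)[:100]
        if len(matches) < 8:
            return None

        # Planar object: reference keypoints become 3D points on the Z=0
        # plane, scaled to the object's physical size (plane_w x plane_h).
        h_px, w_px = reference_img.shape[:2]
        obj_pts = np.array([[kp_ref[m.queryIdx].pt[0] / w_px * plane_w,
                             kp_ref[m.queryIdx].pt[1] / h_px * plane_h, 0.0]
                            for m in matches], dtype=np.float32)
        img_pts = np.array([kp_frm[m.trainIdx].pt for m in matches],
                           dtype=np.float32)

        # RANSAC-based PnP rejects the remaining outlier matches.
        ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts,
                                               camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None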

    Mobile MoCap: Retroreflector Localization On-The-Go

    Motion capture (MoCap) by tracking retroreflectors achieves high-precision pose estimation and is frequently used in robotics. Unlike MoCap, fiducial marker-based tracking methods do not require a static camera setup to perform relative localization; however, popular fiducial-marker pose-estimation systems have lower localization accuracy than MoCap. As a solution, we propose Mobile MoCap, a system that employs inexpensive near-infrared cameras for precise relative localization in dynamic environments. We present a retroreflector feature detector that performs 6-DoF (six degrees-of-freedom) tracking and operates with minimal camera exposure times to reduce motion blur. To evaluate the two localization techniques in a mobile robot setup, we mount our Mobile MoCap system and a standard RGB camera onto a precision-controlled linear rail for retroreflector and fiducial marker tracking, respectively. We benchmark the two systems against each other while varying distance, marker viewing angle, and relative velocity. Our stereo-based Mobile MoCap approach obtains higher position and orientation accuracy than the fiducial approach. The code for Mobile MoCap is implemented in ROS 2 and made publicly available at https://github.com/RIVeR-Lab/mobile_mocap.
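
    A rough sketch of the stereo retroreflector-localization idea, assuming calibrated near-infrared cameras with known projection matrices: with short exposure times the retroreflectors are the only bright regions, so simple thresholding and connected components suffice to find them before triangulation. The threshold value and the naive left-to-right blob matching are simplifying assumptions, not the authors' detector.

    import cv2
    import numpy as np

    def triangulate_retroreflectors(img_left, img_right, P_left, P_right):
        # img_left / img_right: single-channel near-IR frames (uint8).
        def blob_centers(img):
            # Short exposure leaves retroreflectors as the only bright blobs.
            _, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
            _, _, _, centroids = cv2.connectedComponentsWithStats(binary)
            return centroids[1:]  # index 0 is the background component

        pts_l = blob_centers(img_left)
        pts_r = blob_centers(img_right)
        n = min(len(pts_l), len(pts_r))
        if n == 0:
            return np.empty((0, 3))

        # Naive correspondence by x-order; a real system would match along
        # epipolar lines and enforce the rigid marker geometry.
        pts_l = pts_l[np.argsort(pts_l[:, 0])][:n].T.astype(np.float32)
        pts_r = pts_r[np.argsort(pts_r[:, 0])][:n].T.astype(np.float32)

        # Triangulate homogeneous 3D points from the two projection matrices.
        pts_4d = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)
        return (pts_4d[:3] / pts_4d[3]).T  # N x 3, in the left-camera frame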

    Designing a marker set for vertical tangible user interfaces

    Tangible User Interfaces (TUIs) extend the domain of reality-based human-computer interaction by letting users manipulate digital data through physical objects that embody representational significance. Whilst various advancements have been registered over the past years through the development and availability of TUI toolkits, these have mostly converged towards tabletop TUI architectures. In this context, the markers used in current toolkits can only be placed underneath the tangible objects to enable recognition. Albeit effective in various studies, the limitations and challenges of deploying tabletop architectures have significantly hindered the proliferation of TUI technology due to the limited audience reach such systems provide. Furthermore, available marker sets restrict the placement and use of tangible objects: a marker placed on top of a tangible object interferes with the object's shape and texture, limiting the effect the TUI has on the end-user. To this end, this paper proposes the design and development of an innovative tangible marker set specifically aimed at the development of vertical TUIs. The proposed marker set design was optimized through a genetic algorithm to ensure scale invariance, successful detection at distances of up to 3.5 meters, and a true occlusion resistance of up to 25%, where the marker is recognized though not tracked. Open-source versions of the marker set are provided under a research license at www.geoffslab.com/tangiboard_marker_set.
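
    The genetic-algorithm optimization can be pictured as a standard evolve-and-select loop over candidate marker encodings. The sketch below is hypothetical: evaluate_marker is a placeholder for the paper's actual fitness measure (detection rate across scales, distances up to 3.5 m, and simulated occlusion), and the bit-string genome is an assumed encoding.

    import random

    def evaluate_marker(genome):
        # Placeholder fitness: a real implementation would render the marker
        # and measure detection across scales, distances, and occlusion.
        return sum(genome)

    def evolve_marker_set(population, generations=50, mutation_rate=0.05):
        for _ in range(generations):
            # Select the fitter half of the population as survivors.
            scored = sorted(population, key=evaluate_marker, reverse=True)
            survivors = scored[:len(scored) // 2]

            # Refill the population via single-point crossover and mutation.
            children = []
            while len(survivors) + len(children) < len(population):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(a))
                child = [bit ^ (random.random() < mutation_rate)
                         for bit in a[:cut] + b[cut:]]
                children.append(child)
            population = survivors + children
        return max(population, key=evaluate_marker)

    # Usage: evolve 20 random 64-bit marker genomes.
    best = evolve_marker_set(
        [[random.randint(0, 1) for _ in range(64)] for _ in range(20)])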