126 research outputs found

    QR-Tag: Angular Measurement and Tracking with a QR-Design Marker

    Directional information measurement has many applications in domains such as robotics, virtual and augmented reality, and industrial computer vision. Conventional methods either require pre-calibration or controlled environments. The state-of-the-art MoireTag approach exploits the Moiré effect and a QR-style design to continuously and precisely track angular shift, but it is still not a fully scannable QR code design. To overcome these challenges, we propose a novel snapshot method for discrete angular measurement and tracking with scannable QR-design patterns, generated by binary structures printed on both sides of a glass plate. The QR codes, produced by the parallax effect arising from the geometric alignment of the two layers, can be read with a phone camera and decoded directly into angular information. Simulation results show that the proposed non-contact object-tracking framework is computationally efficient and highly accurate.
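
    The abstract does not state the underlying geometry, but a minimal two-layer parallax model suggests how viewing angle maps to pattern shift; here the plate thickness t, refractive index n, and incidence angle \theta are hypothetical symbols, not values from the paper:

        % Apparent lateral shift d between patterns printed on the front and
        % back of a glass plate of thickness t and refractive index n,
        % viewed at incidence angle \theta (Snell refraction included):
        \[
          \sin\theta = n\,\sin\theta_r, \qquad
          d = t\,\tan\theta_r = \frac{t\,\sin\theta}{\sqrt{n^{2}-\sin^{2}\theta}}
        \]
        % Inverting d(\theta) recovers the viewing angle from the observed
        % alignment of the two binary layers.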

    Target Tracking Using Optical Markers for Remote Handling in ITER

    The thesis focuses on the development of a vision system for the remote handling systems of the International Thermonuclear Experimental Reactor (ITER). It presents and discusses a realistic solution for estimating the pose of key operational targets while taking into account the specific needs and restrictions of the application. The contributions to the state of the art are on two main fronts: 1) the development of optical markers that can withstand the extreme conditions of the environment; 2) the development of a robust marker detection and identification framework that can be effectively applied to different use cases. The markers' locations and labels are used to compute the pose. In the first part of the work, a retroreflective marker made of ITER-compliant materials, namely fused silica and stainless steel, is designed, and a methodology is proposed to optimize the markers' performance. Highly distinguishable markers are manufactured and tested. In the second part, a hybrid pipeline is proposed that detects uncoded markers in low-resolution images using classical methods and identifies them using a machine learning approach. The proposed methodology is shown to generalize effectively to different marker constellations and to successfully detect both retroreflective markers and laser engravings. Lastly, a methodology is developed to evaluate the end-to-end accuracy of the proposed solution using feedback provided by an industrial robotic arm. Results are evaluated in a realistic test setup for two significantly different use cases and show that marker-based tracking is a viable solution for the problem at hand, providing superior performance to earlier stereo-matching-based approaches. The developed solutions could be applied to other use cases and applications.
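
    As a sketch of the pose-computation step: given identified markers with known 3D positions on the target and their 2D image detections, the camera-to-target pose can be recovered with a standard PnP solver. The marker layout, pixel coordinates, and intrinsics below are hypothetical, not the thesis's values:

        # Minimal PnP pose-estimation sketch (not the thesis code).
        import numpy as np
        import cv2

        # Hypothetical marker constellation in the target frame (metres),
        # ordered by marker label.
        object_points = np.array([[0.00, 0.00, 0.0],
                                  [0.10, 0.00, 0.0],
                                  [0.10, 0.05, 0.0],
                                  [0.00, 0.05, 0.0]])

        # Matching 2D detections (pixels), in the same label order.
        image_points = np.array([[320.0, 240.0],
                                 [420.0, 238.0],
                                 [421.0, 290.0],
                                 [319.0, 292.0]])

        # Hypothetical pinhole intrinsics; a real system uses calibrated values.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0,   0.0,   1.0]])
        dist = np.zeros(5)  # assume lens distortion already corrected

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
        if ok:
            R, _ = cv2.Rodrigues(rvec)  # target rotation in the camera frame
            print("rotation:\n", R, "\ntranslation (m):", tvec.ravel())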

    Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle

    In recent years the use of unmanned air systems (UAS) has grown enormously. These small, often inexpensive platforms have been used to aid in tasks such as search and rescue, medical deliveries, and disaster relief. In many use cases UAS work alongside unmanned ground vehicles (UGVs) to complete autonomous tasks. For end-to-end autonomous cooperation, the UAS needs to take off from and land on the UGV autonomously. Current autonomous landing solutions often rely on fiducial markers to help localize the UGV relative to the UAS, an external ground computer to aid in computation, or gimbaled cameras on board the UAS. This thesis demonstrates a vision-based autonomous landing system that does not rely on fiducial markers, performs all computation on board the UAS, and uses a fixed, non-gimbaled camera. Algorithms are tailored to low size, weight, and power constraints: all compute and sensing components weigh less than 100 grams. The thesis extends current efforts by localizing the UGV relative to the UAS using neural-network object detection and the camera's intrinsic properties instead of commonplace fiducial markers. An object detection neural network detects the UGV within an image captured by the camera on board the UAS; a localization algorithm then uses the UGV's pixel position within the image to estimate the UGV's position relative to the UAS. This estimated position is passed to a command generator that sends setpoints to the on-board PX4 flight control unit (FCU). The autonomous landing system was developed and validated in a high-fidelity simulation environment before outdoor experiments were conducted.
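
    The pixel-to-relative-position step can be sketched with the pinhole model: back-project the detected pixel through the camera intrinsics and scale the ray by the UAS's height above the ground plane. The intrinsics and numbers below are hypothetical, assuming a downward-facing camera:

        # Minimal back-projection sketch (an assumption, not the thesis code).
        import numpy as np

        K = np.array([[900.0, 0.0, 640.0],   # hypothetical intrinsics: fx, fy
                      [0.0, 900.0, 360.0],   # in pixels, principal point at
                      [0.0,   0.0,   1.0]])  # the image centre

        def ugv_offset(u, v, height_m):
            """Relative (x, y) ground offset of a detected pixel."""
            ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-frame ray
            ray /= ray[2]                    # normalise so the ray has z = 1
            return ray[:2] * height_m        # intersect with the ground plane

        # Detection centred at pixel (800, 500) with the UAS 10 m up:
        print(ugv_offset(800.0, 500.0, 10.0))  # -> approx. [1.78, 1.56] m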

    Mobile MoCap: Retroreflector Localization On-The-Go

    Motion capture (MoCap) by tracking retroreflectors achieves high-precision pose estimation and is frequently used in robotics. Unlike MoCap, fiducial-marker-based tracking methods do not require a static camera setup to perform relative localization, but popular fiducial-marker pose-estimation systems have lower localization accuracy than MoCap. As a solution, we propose Mobile MoCap, a system that employs inexpensive near-infrared cameras for precise relative localization in dynamic environments. We present a retroreflector feature detector that performs 6-DoF (six degrees-of-freedom) tracking and operates with minimal camera exposure times to reduce motion blur. To evaluate different localization techniques in a mobile robot setup, we mount our Mobile MoCap system, as well as a standard RGB camera, onto a precision-controlled linear rail for retroreflective and fiducial marker tracking, respectively. We benchmark the two systems against each other while varying distance, marker viewing angle, and relative velocity. Our stereo-based Mobile MoCap approach achieves higher position and orientation accuracy than the fiducial approach. The code for Mobile MoCap is implemented in ROS 2 and is publicly available at https://github.com/RIVeR-Lab/mobile_mocap.
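
    For a rectified stereo pair, the core of the triangulation reduces to the classic disparity-to-depth relation; the focal length, baseline, and pixel values below are hypothetical (the actual pipeline is in the linked repository):

        # Minimal stereo-triangulation sketch (an assumption, not the paper's code).
        def triangulate_depth(u_left, u_right, focal_px, baseline_m):
            """Depth Z = f * B / d for a rectified stereo pair."""
            disparity = u_left - u_right   # pixel disparity along the baseline
            if disparity <= 0:
                raise ValueError("point must be in front of both cameras")
            return focal_px * baseline_m / disparity

        # Hypothetical numbers: f = 700 px, 12 cm baseline, 35 px disparity:
        print(triangulate_depth(400.0, 365.0, 700.0, 0.12))  # -> 2.4 m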

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments on mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences that use MAR devices to provide universal access to digital content. Although several MAR systems have been developed over the past 20 years, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first survey of existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.

    Visual servo algorithm of robot arm simulation for dynamic tracking and grasping application

    Health pandemics such as Covid-19 have drastically shifted world economics and boosted the development of automation technologies that allow industries to operate continuously without human intervention. This paper elaborates an approach for dynamically tracking and grasping moving objects with a robot arm. The robot arm has an eye-in-hand (EIH) configuration, with a camera installed on the arm's end effector. The working principle is based on recognizing augmented reality markers, i.e., ArUco markers, placed on the dynamically moving target object and tracking them continuously. The proposed system then updates the predicted location of the markers using a Kalman filter and performs grasping at the predicted location. The approach identifies the ArUco marker on the target object, estimates the object's location from previous states, and grasps at the exact predicted location. As the extracted information is updated, the vision system also implements a feedback control loop for stability and reliability. The approach is tested in simulation with the dynamic object moving at different speeds and directions. With the Kalman filter, the robot arm can track and grasp the dynamic object at a speed of 0.2 m/s with a 100% success rate, and at 0.3 m/s with an 80% success rate. In conclusion, the moving object's speed is directly proportional to the grasping time until it reaches the threshold speed at which the camera can still identify the ArUco markers. Future work is required to improve the dynamic visual servo algorithm with motion planning when obstacles are present in the robot's grasping path.
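
    The prediction step described above matches the standard constant-velocity Kalman filter; the sketch below is an assumption of that formulation, not the paper's implementation, and dt, Q, and R are hypothetical tuning values:

        # Minimal constant-velocity Kalman filter for a moving marker.
        import numpy as np

        dt = 0.05                            # hypothetical 20 Hz detection rate
        F = np.array([[1, 0, dt, 0],         # state transition: x += vx*dt, ...
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0],          # only position (x, y) is measured
                      [0, 1, 0, 0]], float)
        Q = np.eye(4) * 1e-4                 # process noise (hand-tuned)
        R = np.eye(2) * 1e-3                 # detector measurement noise

        x = np.zeros(4)                      # state: [x, y, vx, vy]
        P = np.eye(4)

        def kf_step(z):
            """One predict/correct cycle; z is the detected marker centre (m)."""
            global x, P
            x = F @ x                        # predict the state forward by dt
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R              # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ (z - H @ x)          # correct with the measurement
            P = (np.eye(4) - K @ H) @ P
            return x[:2] + x[2:] * dt        # grasp point one step ahead

        print(kf_step(np.array([0.50, 0.20])))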

    Development of a perception module for robotic manipulation tasks

    Robots performing manipulation tasks require the accurate location and orientation of objects in space. Previously, at the Robotics Laboratory of IOC-UPC, this data was generated artificially. To automate the process, a perception module has been developed that provides task and motion planners with the localization and pose estimation of objects used in robot manipulation tasks. The Robot Operating System provided a framework for incorporating vision from Microsoft Kinect V2 sensors and for presenting the obtained data for use in generating Planning Domain Definition Language files, which define a robot's environment. Localization and pose estimation were done using fiducial markers, and possible enhancements using deep learning methods were studied. Careful hardware calibration and system setup play a big role in perception accuracy, and while fiducial markers provide a simple and robust solution in laboratory conditions, real-world applications with varying lighting, viewing angles, and partial occlusions should rely on AI vision.
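
    As a sketch of the marker-based localization step, OpenCV's contrib ArUco module can detect markers and estimate each one's pose; the intrinsics, dictionary, and marker size below are hypothetical, and the classic (pre-4.7) cv2.aruco API is assumed:

        # Minimal fiducial-localization sketch (not the module's ROS code).
        import cv2
        import numpy as np

        K = np.array([[1060.0, 0.0, 960.0],  # hypothetical Kinect V2 colour
                      [0.0, 1060.0, 540.0],  # intrinsics; calibrate in practice
                      [0.0,    0.0,   1.0]])
        dist = np.zeros(5)
        MARKER_SIDE_M = 0.05                 # printed marker side length

        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

        def localize_markers(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
            if ids is None:
                return {}
            rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
                corners, MARKER_SIDE_M, K, dist)
            # Map marker id -> (rotation vector, translation in metres).
            return {int(i): (r.ravel(), t.ravel())
                    for i, r, t in zip(ids.ravel(), rvecs, tvecs)}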

    Industry 4.0 as a reference for modelling and implementing Plug & Produce components in an OPC UA industrial network

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Campus Joinville, Mechatronics Engineering. The reference architecture model for Industry 4.0 recommends the OPC UA communication standard, which establishes techniques for the virtual representation of components based on the abstraction and description of their functionalities. One of its use cases is so-called adaptable factories: factories that need no process reconfiguration when production components are introduced or modified. In this context, this work addresses the modelling and implementation of interoperable and autonomous Plug & Produce components as a solution for a self-reconfigurable production system. It reviews the norms and modelling techniques of OPC UA components and implements a demonstration system characterized by the detection of fiducial markers using camera devices, image-processing components, and the automated composition of the devices' functionalities, which are described semantically as skills. The experiment shows that such a system is interoperable and autonomous when executed on different servers; however, an analysis of network and operational resource usage shows that it still needs further development to fulfil the requirements of distributed systems. It is concluded that, despite the complexity resulting from the use of a low-level programming language, the skill-based semantic description approach is effective and extensible to self-adaptive production systems.
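
    As a sketch of describing a device functionality as an OPC UA skill, the snippet below exposes a hypothetical marker-detection skill as a server method; it assumes the python-opcua package (the thesis itself used a low-level language), and the endpoint, namespace URI, and node names are made up:

        # Minimal OPC UA "skill" sketch, not the thesis implementation.
        from opcua import Server, ua

        def detect_markers(parent, image_id):
            # Placeholder for the actual fiducial-detection component.
            print("detecting markers in image", image_id.Value)
            return [ua.Variant(True, ua.VariantType.Boolean)]

        server = Server()
        server.set_endpoint("opc.tcp://0.0.0.0:4840/freeopcua/server/")
        idx = server.register_namespace("http://example.org/skills")

        skills = server.get_objects_node().add_object(idx, "CameraSkills")
        skills.add_method(idx, "DetectMarkers", detect_markers,
                          [ua.VariantType.String],    # input: image identifier
                          [ua.VariantType.Boolean])   # output: success flag

        server.start()  # clients can now browse and invoke the skill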