
    Mobile Robotic Painting of Texture

    Robotic painting is well-established in controlled factory environments, but there is now potential for mobile robots to perform functional painting tasks in the everyday world. An obvious first target for such robots is painting a uniform single color. A step further is the painting of textured images. Texture involves a varying appearance and requires that paint be delivered accurately onto the physical surface to produce the desired effect. Robotic painting of texture is relevant for architecture and for themed environments. A key challenge is to take a desired image as input and generate the paint commands that reproduce the desired appearance as closely as possible, given the robot's capabilities. This paper describes a deep learning approach that takes an input ink map of a desired texture and infers the robotic paint commands to produce that texture. We analyze the trade-offs between the quality of the reconstructed appearance and ease of execution. Our method is general across different kinds of robotic paint delivery systems, but the emphasis here is on spray painting. More generally, the framework can be viewed as an approach for solving a specific class of inverse imaging problems.
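
    The abstract does not detail the network, but as a rough illustration of the input/output contract it describes, a convolutional encoder-decoder that maps an ink map to a dense spray-command map might look as follows. The architecture, layer sizes, and command encoding are all assumptions for the sketch, not the authors' design:

    # Hypothetical sketch: a convolutional encoder-decoder mapping a grayscale
    # ink map to a dense map of spray commands (e.g., a per-cell spray on/off
    # logit plus a flow-rate channel). Sizes and encoding are assumed.
    import torch
    import torch.nn as nn

    class InkToSprayNet(nn.Module):
        def __init__(self, cmd_channels: int = 2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, cmd_channels, 4, stride=2, padding=1),
            )

        def forward(self, ink_map: torch.Tensor) -> torch.Tensor:
            # ink_map: (B, 1, H, W) in [0, 1]; output: (B, cmd_channels, H, W)
            return self.decoder(self.encoder(ink_map))

    net = InkToSprayNet()
    commands = net(torch.rand(1, 1, 128, 128))  # dense spray-command map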

    Planar Object Tracking in the Wild: A Benchmark

    Planar object tracking is an actively studied problem in vision-based robotic applications. While several benchmarks have been constructed for evaluating state-of-the-art algorithms, there is a lack of video sequences captured in the wild rather than in constrained laboratory environments. In this paper, we present a carefully designed planar object tracking benchmark containing 210 videos of 30 planar objects sampled in natural environments. In particular, for each object we shoot seven videos involving various challenging factors, namely scale change, rotation, perspective distortion, motion blur, occlusion, out-of-view, and unconstrained. The ground truth is carefully annotated semi-manually to ensure quality. Moreover, eleven state-of-the-art algorithms are evaluated on the benchmark using two evaluation metrics, with detailed analysis provided for the evaluation results. We expect the proposed benchmark to benefit future studies on planar object tracking.
    Comment: Accepted by ICRA 201
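
    The two evaluation metrics are not named in the abstract. For planar targets, a common choice is the mean alignment error of the four tracked corners against ground truth, plus a precision score at a pixel threshold; the following is a minimal sketch under that assumption, not the benchmark's confirmed protocol:

    # Hypothetical sketch: mean corner alignment error between a tracked planar
    # target and ground truth, and a precision score at a pixel threshold.
    import numpy as np

    def alignment_error(pred_corners: np.ndarray, gt_corners: np.ndarray) -> float:
        # Both arrays have shape (4, 2): the four corners of the planar target.
        return float(np.mean(np.linalg.norm(pred_corners - gt_corners, axis=1)))

    def precision(errors: list, threshold: float = 5.0) -> float:
        # Fraction of frames whose alignment error falls below the threshold.
        return sum(e < threshold for e in errors) / len(errors)

    pred = np.array([[10.0, 10.0], [110.0, 12.0], [108.0, 112.0], [9.0, 110.0]])
    gt = np.array([[12.0, 11.0], [112.0, 10.0], [110.0, 110.0], [10.0, 111.0]])
    errs = [alignment_error(pred, gt)]  # one per frame in practice
    print(precision(errs, threshold=5.0))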

    Fast Graph-Based Object Segmentation for RGB-D Images

    Object segmentation is an important capability for robotic systems, in particular for grasping. We present a graph-based approach for the segmentation of simple objects from RGB-D images. We are interested in segmenting objects with a large variety in appearance, from textureless to strongly textured, for the task of robotic grasping. The algorithm does not rely on image features or machine learning. We propose a modified Canny edge detector for extracting robust edges by using depth information, and two simple cost functions for combining color and depth cues. The cost functions are used to build an undirected graph, which is partitioned using the concept of internal and external differences between graph regions. The partitioning is fast, with O(N log N) complexity. We also discuss ways to deal with missing depth information. We test the approach on different publicly available RGB-D object datasets, such as the Rutgers APC RGB-D dataset and the RGB-D Object Dataset, and compare the results with other existing methods.
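
    The internal/external-difference criterion referenced here is the one popularized by Felzenszwalb and Huttenlocher: edges are sorted by weight (the O(N log N) step), and two regions merge only if the connecting edge is no heavier than either region's internal difference plus a size-dependent tolerance. Below is a compact sketch of that partitioning core; the paper's actual color/depth cost functions are not reproduced, and the edge weights are assumed given:

    # Sketch of Felzenszwalb-Huttenlocher-style graph partitioning with
    # union-find. edges: list of (weight, u, v) over pixel indices.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n
            self.internal = [0.0] * n  # max edge weight inside each region

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, a, b, w):
            a, b = self.find(a), self.find(b)
            self.parent[b] = a
            self.size[a] += self.size[b]
            self.internal[a] = max(self.internal[a], self.internal[b], w)

    def segment(n_pixels, edges, k=300.0):
        # Sorting dominates the running time: O(N log N).
        uf = UnionFind(n_pixels)
        for w, u, v in sorted(edges):
            a, b = uf.find(u), uf.find(v)
            # Merge only if the connecting edge is no heavier than either
            # region's internal difference plus the tolerance k/|C|.
            if a != b and w <= min(uf.internal[a] + k / uf.size[a],
                                   uf.internal[b] + k / uf.size[b]):
                uf.union(a, b, w)
        return [uf.find(i) for i in range(n_pixels)]  # region label per pixel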

    Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement

    Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the differing motion of objects located at various distances. While motion parallax is most evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is present in the human eye: during a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement, but also on the distance of the object from the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax present in the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation greatly benefit from this cue.
    National Science Foundation (BIC-0432104, CCF-0130851)
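
    The underlying geometry can be made concrete. If the camera's optical center sits a distance r from the rotation axis, a gaze rotation of theta also translates the optical center by roughly r * theta, so a point at distance d picks up a residual angular shift of about r * theta / d beyond the pure rotation; inverting this relation yields distance. The following small worked example assumes that small-angle model, with illustrative constants that are not taken from the paper:

    # Hypothetical sketch of distance-from-rotational-parallax geometry.
    # Assumption: optical center offset r from the rotation axis, so
    # parallax ~ r * theta / d, hence d ~ r * theta / parallax.
    import math

    def distance_from_parallax(r: float, theta: float, parallax: float) -> float:
        # r: offset of optical center from rotation axis (m)
        # theta: camera rotation (rad); parallax: residual angular shift (rad)
        return r * theta / parallax

    # Example: a 1 cm offset, a 10-degree gaze shift, and a measured residual
    # shift of 0.01 degrees imply a point roughly 10 m away.
    d = distance_from_parallax(r=0.01, theta=math.radians(10.0),
                               parallax=math.radians(0.01))
    print(f"estimated distance: {d:.1f} m")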

    Simulation and Planning of a 3D Spray Painting Robotic System

    In this dissertation, a 3D spray painting robotic system is proposed. The system includes a realistic spray simulation with sufficient accuracy to mimic real spray painting, as well as an optimized path-generation algorithm capable of painting non-trivial 3D designs. The simulation takes 3D CAD models or 3D-scanned pieces as input and produces a realistic visual effect that allows qualitative analysis of the painted product. An evaluation metric is also presented that scores a painting trajectory based on thickness, uniformity, time, and paint waste.
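
    As a concrete illustration of the kind of simulation and scoring described, one could accumulate paint thickness on a surface grid under a Gaussian spray footprint and score a trajectory on coverage, uniformity, and time. The sketch below is an assumed stand-in for the general approach, not the dissertation's actual deposition model, and all constants are illustrative:

    # Hypothetical sketch: Gaussian spray deposition on a grid plus a simple
    # trajectory score combining mean thickness, uniformity, and time.
    import numpy as np

    def deposit(grid, x, y, rate=1.0, sigma=5.0):
        # Accumulate thickness from one spray pulse centered at (x, y).
        h, w = grid.shape
        yy, xx = np.mgrid[0:h, 0:w]
        grid += rate * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))

    def score(path, grid_shape=(100, 100), dt=0.05):
        grid = np.zeros(grid_shape)
        for x, y in path:
            deposit(grid, x, y)
        time = dt * len(path)
        # Higher mean coverage is better; variation and time are penalized.
        return grid.mean() - grid.std() - 0.01 * time

    # A simple boustrophedon (back-and-forth) path over the surface.
    path = [(x, y) for y in range(0, 100, 10)
            for x in (range(100) if (y // 10) % 2 == 0 else range(99, -1, -1))]
    print(score(path))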

    Deep robot sketching: an application of deep Q-learning networks for human-like sketching

    © 2023 The Authors. Published by Elsevier B.V. This research has been financed by ALMA, "Human Centric Algebraic Machine Learning", H2020 RIA under EU grant agreement 952091; ROBOASSET, "Sistemas robóticos inteligentes de diagnóstico y rehabilitación de terapias de miembro superior", PID2020-113508RBI00, financed by AEI/10.13039/501100011033; "RoboCity2030-DIHCM, Madrid Robotics Digital Innovation Hub", S2018/NMT-4331, financed by "Programas de Actividades I+D en la Comunidad de Madrid"; "iREHAB: AI-powered Robotic Personalized Rehabilitation", ISCIII-AES-2022/003041, financed by ISCIII and UE; and EU structural funds.
    The current success of Reinforcement Learning algorithms in complex environments has inspired many recent theoretical approaches in cognitive science. Artistic environments are studied within the cognitive science community as rich, natural, multi-sensory, multi-cultural environments. In this work, we propose introducing Reinforcement Learning to improve the control of artistic robot applications. The Deep Q-learning Network (DQN) is one of the most successful algorithms for implementing Reinforcement Learning in robotics. DQN methods generate complex control policies for the execution of complex robot applications in a wide range of environments. Current art painting robot applications use simple control laws that limit the adaptability of the frameworks to a set of simple environments. In this work, the introduction of DQN within an art painting robot application is proposed. The goal is to study how the introduction of a complex control policy impacts the performance of a basic art painting robot application. The main expected contribution of this work is to serve as a first baseline for future works introducing DQN methods into complex art painting robot frameworks. Experiments consist of real-world executions of human-drawn sketches using the DQN-generated policy and TEO, the humanoid robot. Results are compared in terms of similarity and obtained reward with respect to the reference inputs.
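
    As a hedged illustration of the DQN component described here, the sketch below shows a small Q-network with an epsilon-greedy policy and a replay-buffer update step. The state/action encoding for sketching (e.g., discrete pen moves over a canvas) and all sizes are assumptions, not the paper's implementation:

    # Hypothetical DQN sketch: Q-network, epsilon-greedy policy, replay update.
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 64, 8  # assumed: canvas features, discrete pen moves

    q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                          nn.Linear(128, N_ACTIONS))
    target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                               nn.Linear(128, N_ACTIONS))
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    # Replay buffer holds (state, action, reward, next_state, done) as tensors.
    replay = deque(maxlen=10_000)

    def act(state: torch.Tensor, epsilon: float) -> int:
        # Epsilon-greedy action selection over the discrete pen moves.
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(q_net(state).argmax())

    def train_step(batch_size=32, gamma=0.99):
        if len(replay) < batch_size:
            return
        s, a, r, s2, done = map(torch.stack,
                                zip(*random.sample(replay, batch_size)))
        q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Bootstrapped target from the slowly updated target network.
            target = r + gamma * target_net(s2).max(1).values * (1 - done)
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Example transition push; in practice these come from robot rollouts.
    replay.append((torch.rand(STATE_DIM), torch.tensor(3), torch.tensor(1.0),
                   torch.rand(STATE_DIM), torch.tensor(0.0)))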