
    Robotic Harvesting of Fruiting Vegetables: A Simulation Approach in V-REP, ROS and MATLAB

    In modern agriculture, there is high demand to move from tedious manual harvesting to continuously automated operation. This chapter reports on designing a simulation and control platform in V-REP, ROS, and MATLAB for experimenting with sensors and manipulators in robotic harvesting of sweet pepper. The objective was to provide a completely simulated environment for improving the visual servoing task through easy testing and debugging of control algorithms, with zero risk of damage to the real robot and actual equipment. A simulated workspace, including exact replicas of different robot manipulators, sensing mechanisms, and the sweet pepper plant and fruit system, was created in V-REP. Image-moment-based visual servoing with an eye-in-hand configuration was implemented in MATLAB and tested on four robotic platforms: a Fanuc LR Mate 200iD, NOVABOT, multiple linear actuators, and multiple SCARA arms. Data from simulation experiments were used as inputs to the control algorithm in MATLAB, whose outputs were sent back to the simulated workspace and to the actual robots. ROS was used for exchanging data between the simulated environment and the real workspace via its publish-and-subscribe architecture. The results provide a framework for experimenting with different sensing and acting scenarios, and verify the functionality of the simulator.
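The image-moment control law at the heart of such a visual servoing task can be sketched in a few lines. This is a minimal illustration of the general technique (binary mask, centroid moments, proportional gain), not the chapter's MATLAB implementation; the gain and target pixel are illustrative:

```python
def image_moments(mask):
    """Zeroth- and first-order image moments of a binary mask."""
    m00 = m10 = m01 = 0.0
    for v, row in enumerate(mask):
        for u, px in enumerate(row):
            if px:
                m00 += 1          # blob area
                m10 += u          # sum of column indices
                m01 += v          # sum of row indices
    return m00, m10, m01

def servo_command(mask, target, gain=0.5):
    """Proportional visual-servo law: drive the blob centroid toward a
    target pixel. Returns a (vx, vy) image-plane velocity command."""
    m00, m10, m01 = image_moments(mask)
    if m00 == 0:
        return 0.0, 0.0           # fruit not visible: stop
    cx, cy = m10 / m00, m01 / m00
    return gain * (target[0] - cx), gain * (target[1] - cy)
```

In an eye-in-hand setup the command would then be mapped through the camera-to-end-effector transform before being sent to the manipulator.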

    Indoor Navigation and Manipulation using a Segway RMP

    This project utilized a Segway RMP in an assistive-technology role, encompassing the navigation and manipulation aspects of robotics. First, background research was conducted to develop a blueprint for the robot. The hardware, software, and configuration of the RMP were updated, and a robotic arm was designed to extend the RMP's capabilities. The robot was programmed to accomplish autonomous multi-floor navigation using the ROS navigation stack, image detection, and a GUI. The robot can navigate through the hallways of the building, including taking the elevator. The robotic arm was designed to accomplish tasks such as pressing a button and picking an object up off a table. The Segway RMP is designed to be used and expanded upon as a robotics research platform.
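The multi-floor behavior described above amounts to sequencing a few high-level actions (navigate to the elevator, have the arm press a button, ride, then navigate to the goal). A minimal sketch of that sequencing, with function and waypoint names that are hypothetical rather than taken from the project's code:

```python
def plan_multifloor(start_floor, goal_floor, goal_room):
    """Sequence the high-level actions for a multi-floor navigation task.
    Button pressing is the step delegated to the robotic arm."""
    steps = []
    if start_floor != goal_floor:
        steps.append(("navigate", "elevator_floor%d" % start_floor))
        steps.append(("press_button", goal_floor))   # robotic arm task
        steps.append(("ride_elevator", goal_floor))
    steps.append(("navigate", goal_room))
    return steps
```

Each `navigate` step would be dispatched as a goal to the ROS navigation stack, while `press_button` invokes the arm controller.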

    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. Not only is labor input reduced, but production efficiency can also be improved, which contributes to the development of smart agriculture. This paper reviews the core technologies used by agricultural robots in non-structured environments. In addition, we review the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. This research shows that in a non-structured agricultural environment, by using cameras, light detection and ranging (LiDAR), ultrasonic sensors, and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be designed and developed to advance agricultural robots and to meet the delicate and complex requirements of agricultural products as operational objects, so that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. This paper concludes with a summary of the main existing technologies and challenges in the development of actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Intelligent collision avoidance system for industrial manipulators

    Double-degree master's thesis with UTFPR - Universidade Tecnológica Federal do Paraná. The new paradigm of Industry 4.0 demands collaboration between robots and humans. Humans and robots should be able to assist and collaborate with each other without additional safety barriers, unlike conventional manipulators. For this, the robot must be able to perceive the environment and plan (or re-plan) its motion on-the-fly, avoiding obstacles and people. This work proposes a system that captures the environment with a Kinect sensor, identifies the free space in the resulting point cloud, and executes manipulator trajectories through that free space. The simulated system performs path planning for a UR5 manipulator in pick-and-place tasks while avoiding surrounding objects, based on the Kinect point cloud; the results obtained in simulation made it possible to apply the system in real situations. The system is built on ROS, which supports robotic applications with a powerful set of libraries and tools; MoveIt! and RViz are examples of these tools, and with them it was possible to run simulations and obtain planning results. Results are reported through log files indicating whether the robot motion plan succeeded and how many manipulator poses were needed to create the final movement. This last step validates the proposed system using the RRT and PRM algorithms, which were chosen because they are among the most widely used in robot path planning.
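The RRT planner mentioned for validation can be illustrated with a minimal 2-D sketch. This is the textbook algorithm under simplifying assumptions (point robot, square workspace, caller-supplied collision test), not the MoveIt!-based implementation used in the work:

```python
import math
import random

def rrt(start, goal, collides, bounds=(0.0, 10.0), step=0.5,
        iters=4000, goal_tol=0.5, goal_bias=0.1, seed=1):
    """Minimal 2-D RRT: grow a tree from start toward random samples
    (with a small goal bias); return the path once a node reaches goal."""
    random.seed(seed)
    nodes = [start]
    parent = {0: None}
    lo, hi = bounds
    for _ in range(iters):
        # Sample the goal occasionally to bias growth toward it.
        sample = goal if random.random() < goal_bias else (
            random.uniform(lo, hi), random.uniform(lo, hi))
        # Extend the nearest tree node a fixed step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= goal_tol:
            # Walk parent pointers back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In the real system the `collides` predicate would be evaluated against the Kinect point cloud rather than a hand-written function, and MoveIt! additionally smooths and time-parameterizes the resulting path.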

    Design and implementation of robot skill programming and control

    Abstract. The skill-based approach has been proposed as a solution to the rising complexity of robot programming and control. Skills rely heavily on sensors, integrating sensor perceptions with robot actions, which enables the robot to adapt to changes and uncertainties in the real world and operate autonomously. The aim of this thesis was to design and implement a programming concept for skill-based control of industrial robots. The theoretical part introduces the industrial robot system and some basic concepts of robotics, followed by an overview of different robot programming and 3D machine vision methods. The last section of the theoretical part presents the structure of skill-based programs. The experimental part presents the structure of the skills required for a "grinding with localization" task. The task includes skills such as global localization with a 3D depth sensor, scanning the object with a 2D profile scanner, precise localization of the object, and two grinding skills: level-surface grinding and straight-seam grinding. The skills were programmed with an off-line programming tool and implemented in a robot cell composed of a standard industrial robot with grinding tools, 3D depth sensors, and 2D profile scanners. The results show that global localization can be carried out with consumer-class 3D depth sensors, and more accurate local localization with an industrial high-accuracy 2D profile scanner attached to the robot's flange. The grinding experiments focused on finding suitable structures for the skill programs and on understanding how the different parameters influence grinding quality.
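The core idea of skill-based programming, coupling a perception step with an action step and sequencing such units into a task, can be sketched generically. This is an illustrative structure under assumed names (`Skill`, `run_task`), not the thesis's off-line programming tool:

```python
class Skill:
    """A skill pairs a perception step with an action step; composing
    skills into sequences is the essence of skill-based control."""
    def __init__(self, name, perceive, act):
        self.name = name
        self.perceive = perceive   # world -> observation
        self.act = act             # (world, observation) -> new world

    def run(self, world):
        observation = self.perceive(world)
        return self.act(world, observation)

def run_task(skills, world):
    """Execute a task as an ordered skill sequence,
    e.g. global localization -> scan -> precise localization -> grind."""
    for skill in skills:
        world = skill.run(world)
    return world
```

In the "grinding with localization" task, the perceive callbacks would wrap the 3D depth sensor and 2D profile scanner, while the act callbacks command the robot's grinding motions.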

    Kinematics and Robot Design II (KaRD2019) and III (KaRD2020)

    This volume collects papers published in two Special Issues, "Kinematics and Robot Design II, KaRD2019" (https://www.mdpi.com/journal/robotics/special_issues/KRD2019) and "Kinematics and Robot Design III, KaRD2020" (https://www.mdpi.com/journal/robotics/special_issues/KaRD2020), the second and third issues of the KaRD Special Issue series hosted by the open-access journal Robotics. The KaRD series is an open environment where researchers present their work and discuss all topics focused on the many aspects that involve kinematics in the design of robotic/automatic systems. It aims to become an established reference for researchers in the field, like other serial international conferences and publications. Even though the KaRD series publishes one Special Issue per year, all received papers are peer-reviewed as soon as they are submitted and, if accepted, immediately published in MDPI Robotics. Kinematics is so intimately related to the design of robotic/automatic systems that the admitted topics of the KaRD series practically cover all the subjects normally present in well-established international conferences on "mechanisms and robotics". KaRD2019 and KaRD2020 together received 22 papers and, after the peer-review process, accepted 17. The accepted papers cover problems related to theoretical/computational kinematics, biomedical engineering, and other design/application aspects.

    Real-time Target Tracking and Following with UR5 Collaborative Robot Arm

    The rise of camera usage and availability creates opportunities for developing robotics and computer vision applications. In particular, recent developments in depth sensing (e.g., Microsoft Kinect) allow new methods for the Human-Robot Interaction (HRI) field, and collaborative robots (cobots) are being adopted in the manufacturing industry. This thesis focuses on HRI using the capabilities of the Microsoft Kinect, the Universal Robots UR5, and the Robot Operating System (ROS). In this study, the movement of a fingertip is perceived and the same movement is repeated on the robot side. Seamless cooperation, accurate trajectories, and safety during collaboration are the most important aspects of HRI. The study aims to recognize and track the fingertip accurately and transform its motion into UR5 motion, and to improve the motion performance of the UR5 and the interaction efficiency during collaboration. In the experimental part, a nearest-point approach is applied to the Kinect sensor's depth image (RGB-D). The approach is based on the Euclidean distance, which is robust across different environments. The Point Cloud Library (PCL) and its built-in filters are used for processing the depth data. After the depth data from the Microsoft Kinect have been processed, the displacement of the nearest point is transmitted to the robot via ROS. On the robot side, the MoveIt! motion planner is used for smooth trajectories. Once the data processing and motion code were implemented correctly, 84.18% total accuracy was achieved. After improvements in motion planning and data processing, total accuracy increased to 94.14%, and latency was reduced from 3-4 seconds to 0.14 seconds.
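The nearest-point idea, treating the closest valid depth pixel as the fingertip and streaming its displacement to the robot, can be sketched without PCL or ROS. This is a simplified illustration (plain nested lists for the depth image, an assumed pixel-to-metre scale), not the thesis's PCL-based pipeline:

```python
def nearest_point(depth, min_valid=1):
    """Return (row, col, depth) of the closest valid pixel in a depth
    image. Pixels below min_valid are invalid (Kinect reports 0 when
    no depth reading is available)."""
    best = None
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d >= min_valid and (best is None or d < best[2]):
                best = (v, u, d)
    return best

def fingertip_delta(prev, cur, scale=0.001):
    """Map the pixel displacement of the nearest point between two frames
    to a Cartesian offset in metres (scale is an assumed conversion)."""
    if prev is None or cur is None:
        return (0.0, 0.0)
    return (scale * (cur[1] - prev[1]), scale * (cur[0] - prev[0]))
```

In the actual system the offset would be published over ROS and consumed by MoveIt! to plan the corresponding UR5 end-effector motion.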