853 research outputs found

    DEVELOPMENT OF AN INDUSTRIAL ROBOTIC ARM EDUCATION KIT BASED ON OBJECT RECOGNITION AND ROBOT KINEMATICS FOR ENGINEERS

    Robotic vision makes industrial systems more advantageous in terms of practicality and flexibility. For this reason, it is essential to provide the training needed for the routine use of vision-based robotic systems on production lines. This article aims to design a low-cost, computer-vision-based industrial robotic arm education kit with an eye-to-hand configuration. The kit is based on classifying and stacking products placed at random locations in a short time, making them ready for industrial operations or logistics. In the development phase of the system, a motion simulation of the robotic arm was first performed; the experimental setup was then established, and the performance of the system was tested in experimental studies. The system, which operates with a high success rate, has been made available for use in education. For educational purposes, the kit supports theoretical lessons by reviewing object recognition (vision systems), forward and inverse kinematics, and trajectory planning (robot kinematics), and by running the system several times. Engineering students are thus expected to approach industry with greater awareness and to help advance it. The kit can also be used to train the relevant engineers in institutions where vision-based robotic systems are already in use. Keywords: Education Kit, Stereo Vision, Robotic Arm, Object Recognition and Classification, Pick-and-Place Task
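
    As a hedged illustration of the forward and inverse kinematics the kit reviews, the Python sketch below solves both for a simple two-link planar arm. The link lengths and the planar simplification are assumptions made here for illustration only; they do not describe the kit's actual arm.

        import math

        def forward_kinematics(theta1, theta2, l1=0.25, l2=0.20):
            # End-effector position (x, y) of a planar two-link arm; angles in radians.
            x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
            y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
            return x, y

        def inverse_kinematics(x, y, l1=0.25, l2=0.20):
            # One of the two joint-angle solutions reaching (x, y), assuming it is reachable.
            c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
            theta2 = math.acos(max(-1.0, min(1.0, c2)))
            theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                                   l1 + l2 * math.cos(theta2))
            return theta1, theta2

        # Round-trip check: a reachable target maps back to itself.
        t1, t2 = inverse_kinematics(0.30, 0.10)
        print(forward_kinematics(t1, t2))   # approximately (0.30, 0.10)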

    Intelligent 3D seam tracking and adaptable weld process control for robotic TIG welding

    Tungsten Inert Gas (TIG) welding is extensively used in aerospace applications due to its unique ability to produce higher-quality welds than other shielded arc welding types. However, most TIG welding is performed manually and has not achieved the level of automation that other welding techniques have. This is mostly attributed to a lack of process knowledge and of adaptability to complexities such as mismatches caused by part fit-up. Recent advances in automation have enabled the use of industrial robots for complex tasks that require intelligent decision making, predominantly through sensors. Applications such as TIG welding of aerospace components involve tight tolerances and need intelligent decision-making capability to accommodate any unexpected variation and to carry out welding of complex geometries. Such decision-making procedures must be based on feedback about the weld profile geometry. In this thesis, a real-time, position-based closed-loop system was developed with a six-axis industrial robot (KUKA KR 16) and a laser triangulation sensor (Micro-Epsilon scanCONTROL 2900-25). [Continues.]
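
    A position-based closed-loop correction of this kind can be sketched, in very simplified form, as a proportional controller that shifts the programmed torch path by a fraction of the lateral seam offset measured by the laser scanner. The sketch below is illustrative only: read_seam_offset_mm, apply_lateral_correction_mm and stop are hypothetical placeholders rather than the KUKA or Micro-Epsilon interfaces, and the gain, cycle time and clamp are assumed values.

        import time

        KP = 0.5           # proportional gain (assumed)
        CYCLE_S = 0.02     # 50 Hz control cycle (assumed)
        MAX_STEP_MM = 0.5  # clamp on the per-cycle correction (assumed)

        def track_seam(read_seam_offset_mm, apply_lateral_correction_mm, stop):
            # read_seam_offset_mm(): lateral deviation of the torch from the seam centre, in mm.
            # apply_lateral_correction_mm(d): shift the programmed path laterally by d mm.
            # stop(): returns True once welding of the current seam is finished.
            while not stop():
                offset = read_seam_offset_mm()
                correction = KP * offset
                correction = max(-MAX_STEP_MM, min(MAX_STEP_MM, correction))
                apply_lateral_correction_mm(correction)
                time.sleep(CYCLE_S)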

    A low cost 3D vision system for positioning welding mobile robots using a FPGA prototyping system

    The aim of this work is to explore solutions for artificial vision systems applied to autonomous welding robots. We take advantage of the UNSHADES-1 system, developed at the authors' institution (blinded for evaluation), which can be used to build a powerful 3D vision system. The system gathers and processes the three images in parallel, producing an accurate estimate of the position, which can be improved further if the relative positions of the cameras are well defined

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. The research I focus on lies in two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of performing teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that no wearable device is needed, providing minimal intrusiveness and accommodating the user's eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar settings. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view of the reference. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras and robotic arm. Given the purpose of the system, the calibration accuracy must be kept within the millimeter level. The follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica with commodity devices, for better alignment of the video frame. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light scanning based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies prove the performance of our proposed algorithm. In order to compensate for the lack of synchronization between the local station and the remote station, caused by latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a linear equation group with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts, such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image
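
    The 1-step-ahead predictive control mentioned above can be read, in one plausible simplified form, as a linear extrapolation of the operator's command damped by a smoothing coefficient between 0 and 1. The Python sketch below reflects that reading only; it is an assumption, not the author's exact formulation, and the coefficient value is chosen arbitrarily.

        def predict_next(current, previous, alpha=0.7):
            # One-step-ahead prediction of the next commanded position:
            # extrapolate the latest motion, damped by the smoothing
            # coefficient alpha in [0, 1] (value assumed for illustration).
            return current + alpha * (current - previous)

        # Example: commands arrive at 0.0 then 1.0; the robot is driven toward the
        # predicted value 1.7 to compensate for sensing/communication latency.
        print(predict_next(1.0, 0.0))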

    Robot guidance using machine vision techniques in industrial environments: A comparative review

    In the factory of the future, most operations will be carried out by autonomous robots that need visual feedback: to move around the working space while avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, and to complement the information provided by other sensors in order to improve their positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and are now also used for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. This paper therefore presents a comparative review of different machine vision techniques for robot guidance, analyzing the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can use it as background information for their future work

    An Automatic Laser Scanning System for Accurate 3D Reconstruction of Indoor Scenes
