174 research outputs found

    Multiple object detection of workpieces based on fusion of deep learning and image processing

    Get PDF
A workpiece detection method based on the fusion of deep learning and image processing is proposed. First, workpiece bounding boxes are located in the workpiece images by YOLOv3, whose parameters are compressed by an improved pruning strategy for convolutional neural network residual structures. Then, the workpiece images are cropped based on the bounding boxes with cropping biases. Finally, the contours and suitable gripping points of the workpieces are obtained through image processing. The experimental results show a mean Average Precision (mAP) of 98.60% for the original YOLOv3 and 99.38% after pruning 50.89% of its parameters, while the inference time is shortened by 31.13%. Image processing effectively corrects the bounding boxes obtained by deep learning and yields the workpiece contour and gripping-point information.
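
    The post-detection image-processing stage described above could look roughly like the following sketch, assuming an upstream detector (e.g. YOLOv3) has already returned a bounding box; the cropping bias, Otsu thresholding, and centroid-based gripping point are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def refine_detection(image, bbox, bias=10):
    """Crop around a detector bounding box with a bias margin, then extract
    the workpiece contour and a simple centroid-based gripping point.
    bbox = (x, y, w, h) as returned by an upstream detector (e.g. YOLOv3)."""
    x, y, w, h = bbox
    x0, y0 = max(x - bias, 0), max(y - bias, 0)
    x1 = min(x + w + bias, image.shape[1])
    y1 = min(y + h + bias, image.shape[0])
    roi = image[y0:y1, x0:x1]

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    contour = max(contours, key=cv2.contourArea)

    # Centroid of the largest contour as a naive gripping point (full-image coordinates).
    m = cv2.moments(contour)
    cx = int(m["m10"] / m["m00"]) + x0
    cy = int(m["m01"] / m["m00"]) + y0
    return contour, (cx, cy)
```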

    Vision-enhanced Peg-in-Hole for automotive body parts using semantic image segmentation and object detection

    Get PDF
Artificial Intelligence (AI) is an enabling technology in the context of Industry 4.0. In particular, the automotive sector is among those that can benefit most from the use of AI in conjunction with advanced vision techniques. The scope of this work is to integrate deep learning algorithms into an industrial scenario involving a robotic Peg-in-Hole task. More specifically, we focus on a scenario where a human operator manually positions a carbon fiber automotive part in the workspace of a 7 Degrees of Freedom (DOF) manipulator. To cope with the uncertainty in the relative position between the robot and the workpiece, we adopt a three-stage strategy. The first stage concerns the Three-Dimensional (3D) reconstruction of the workpiece using a registration algorithm based on the Iterative Closest Point (ICP) paradigm. This procedure is integrated with a semantic image segmentation neural network, which is in charge of removing the background of the scene to improve the registration. The adoption of this network reduces the registration time by about 28.8%. In the second stage, the reconstructed surface is compared with a Computer Aided Design (CAD) model of the workpiece to locate the holes and their axes. In this stage, the adoption of a Convolutional Neural Network (CNN) improves the holes' position estimation by about 57.3%. The third stage concerns the insertion of the peg, implementing a search phase to handle the remaining estimation errors. Also in this case, the use of the CNN reduces the search-phase duration by about 71.3%. Quantitative experiments, including a comparison with a previous approach that uses neither the segmentation network nor the CNN, have been conducted in a realistic scenario. The results show the effectiveness of the proposed approach and how the integration of AI techniques improves the success rate from 84.5% to 99.0%.
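
    A hedged sketch of the first stage described above, where a semantic segmentation mask prunes background points before ICP registration; Open3D and all variable names are assumptions introduced for illustration, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

def masked_icp(depth_points, seg_mask, cad_cloud, voxel=0.005, threshold=0.01):
    """Register a measured point cloud to a CAD reference after removing
    background points flagged by a semantic segmentation mask.
    depth_points: (N, 3) array of 3D points (one per valid pixel)
    seg_mask:     (N,) boolean array, True where the pixel belongs to the workpiece
    cad_cloud:    open3d.geometry.PointCloud sampled from the CAD model"""
    scene = o3d.geometry.PointCloud()
    scene.points = o3d.utility.Vector3dVector(depth_points[seg_mask])
    scene = scene.voxel_down_sample(voxel)  # smaller cloud -> faster registration

    result = o3d.pipelines.registration.registration_icp(
        scene, cad_cloud, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 pose of the workpiece w.r.t. the CAD frame
```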

    A MODEL VISION OF SORTING SYSTEM APPLICATION USING ROBOTIC MANIPULATOR

    Get PDF
Image processing attracts massive attention in today's world, as it opens possibilities for broad application in many fields of high technology. The real challenge is how to improve the existing sorting system in the Modular Processing System (MPS) laboratory, which consists of four integrated stations for distribution, testing, processing, and handling, with a new image processing feature. The existing sorting method uses a set of inductive, capacitive, and optical sensors to differentiate object color. This paper presents a mechatronic color sorting system based on image processing. Supported by OpenCV, the image processing procedure detects the circular objects in an image captured in real time by a webcam and then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator (Mitsubishi Movemaster RV-M1), which performs the pick-and-place mechanism. Extensive testing shows that this color-based object sorting system is 100% accurate under ideal conditions, i.e. adequate illumination and circular object shape and color. The circular objects tested for sorting are silver, red, and black. Under non-ideal conditions, such as an unspecified color, the accuracy drops to 80%.
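
    A minimal OpenCV sketch of the sensing step described above (circle detection followed by color extraction); the Hough parameters and the HSV thresholds for silver, red, and black are illustrative assumptions that would need tuning per setup.

```python
import cv2
import numpy as np

def detect_and_classify(frame):
    """Find circular objects in a webcam frame and label each one as
    'red', 'black' or 'silver' from the mean HSV value inside the circle."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=40, minRadius=15, maxRadius=80)
    results = []
    if circles is None:
        return results
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    for x, y, r in np.round(circles[0]).astype(int):
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)
        h, s, v = cv2.mean(hsv, mask=mask)[:3]
        # Illustrative thresholds: dark -> black, saturated red hue -> red, else silver.
        if v < 60:
            color = "black"
        elif s > 100 and (h < 10 or h > 170):
            color = "red"
        else:
            color = "silver"
        results.append(((int(x), int(y)), color))  # position + color for the sorting command
    return results
```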

    Collaborative and Cooperative Robotics Applications using Visual Perception

    Get PDF
The objective of this Thesis is to develop novel integrated strategies for collaborative and cooperative robotic applications. Industrial robots commonly operate in structured environments and in work cells separated from human operators. Nowadays, collaborative robots can share the workspace and collaborate with humans or other robots to perform complex tasks. These robots often operate in unstructured environments, so they need sensors and algorithms to obtain information about environment changes. Advanced vision and control techniques have been analyzed to evaluate their performance and their applicability to industrial tasks. Selected techniques have then been applied for the first time to an industrial context. A Peg-in-Hole task has been chosen as the first case study, since it has been extensively studied but still remains challenging: it requires accuracy both in the determination of the hole poses and in the robot positioning. Two solutions have been developed and tested. Experimental results have been discussed to highlight the advantages and disadvantages of each technique. Grasping partially known objects in unstructured environments is one of the most challenging issues in robotics. It is a complex task that requires addressing multiple subproblems, including object localization and grasp pose detection. Several vision techniques have also been analyzed for this class of problems, and one of them has been adapted for use in industrial scenarios. Moreover, as a second case study, a robot-to-robot object handover task in a partially structured environment and in the absence of explicit communication between the robots has been developed and validated. Finally, the two case studies have been integrated into two real industrial setups to demonstrate the applicability of the strategies to solving industrial problems.

    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    Get PDF
To achieve good economic efficiency and sustainability, production systems must be operated with high productivity over long periods of time. This poses major challenges for manufacturing companies, especially in times of increased volatility triggered, for example, by technological disruptions in mobility as well as political and societal change, because the requirements placed on the production system change constantly. The frequency of necessary adaptation decisions and subsequent optimization measures rises, so the need for ways to evaluate scenarios and possible system configurations grows. Material flow simulation is a powerful tool for this purpose, but its use is currently limited by its labor-intensive manual creation and its temporary, project-based deployment. Longer-term use across the entire life cycle is currently hindered by the labor-intensive maintenance of the simulation model, i.e. the manual adaptation of the model whenever the real system changes. The goal of this work is to develop and implement a concept, including the required methods, to automate the maintenance and adaptation of the simulation model to reality. To this end, the available real data are used, which are increasingly abundant thanks to trends such as Industry 4.0 and digitalization in general. The vision pursued in this work is a Digital Twin of the production system that, fed by this data input, represents the system realistically at any point in time and can be used for the realistic evaluation of scenarios. For this purpose, the required overall concept was designed and the mechanisms for the automatic validation and updating of the model were developed. The focus was, among other things, on the development of algorithms for detecting changes in the structure and processes of the production system, as well as on the investigation of the influence of the available data. The developed components were successfully applied to a real use case at Robert Bosch GmbH and increased the fidelity of the Digital Twin, which was successfully used for production planning and optimization. The potential of localization data for creating Digital Twins of production systems was demonstrated in the test environment of the learning factory of the wbk Institut für Produktionstechnik.
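
    One possible shape of the automated validation step described above is sketched below: real and simulated processing-time samples are compared per station, and deviations trigger a model update. The two-sample Kolmogorov-Smirnov test and the data layout are assumptions made for illustration, not the thesis's actual algorithms.

```python
import numpy as np
from scipy.stats import ks_2samp

def validate_stations(real_times, sim_times, alpha=0.05):
    """Compare real and simulated processing-time samples per station and
    flag the stations whose simulated behavior no longer matches reality.
    real_times / sim_times: dict mapping station id -> array of durations (s)."""
    outdated = []
    for station, real in real_times.items():
        sim = sim_times.get(station)
        if sim is None:
            outdated.append((station, "missing in model"))  # structural change
            continue
        stat, p_value = ks_2samp(real, sim)
        if p_value < alpha:  # distributions differ significantly -> update parameters
            outdated.append((station, f"drift (KS={stat:.2f}, p={p_value:.3f})"))
    return outdated
```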

    Selected Papers from the 5th International Electronic Conference on Sensors and Applications

    Get PDF
This Special Issue comprises selected papers from the proceedings of the 5th International Electronic Conference on Sensors and Applications, held on 15–30 November 2018 on sciforum.net, an online platform for hosting scholarly e-conferences and discussion groups. In this 5th edition of the electronic conference, contributors were invited to provide papers and presentations from the field of sensors and applications at large, resulting in a wide variety of excellent submissions and topic areas. Papers that attracted the most interest on the web or that provided a particularly innovative contribution were selected for publication in this collection. These peer-reviewed papers are published with the aim of rapid and wide dissemination of research results, developments, and applications. We hope this conference series will grow rapidly in the future and become recognized as a new venue by which to (electronically) present new developments related to the field of sensors and their applications.

    Trajectory planning of collaborative robotic contact-based applications

    Get PDF

    Extraction of 3D Machined Surface Features and Applications.

    Full text link
In modern production, the measurement of surface function becomes more and more important. Most previous work on surface functional characterization is focused on surface tribological properties (roughness domain) and covers only a small area of a large engineering surface. Characterizing a large engineering surface comprehensively and rapidly therefore presents significant challenges. This research focuses on extracting 3D surface features in the waviness domain and using these features to predict surface function and detect machining errors. First, an improved Gaussian filter is designed to accurately extract 3D surface waviness from a large surface height map measured by a large-field-of-view interferometer. This filtering technique enhances the performance of the standard Gaussian filter when applied to a surface with large form distortion and many sharp peaks, valleys, or noise. A 3D surface waviness feature of the machined workpiece is then defined and applied to assess severe tool wear. Secondly, a two-channel filter bank scheme is developed that applies a 2D wavelet to decompose a 3D surface into multiple-scale subsurfaces. 3D surface features extracted from the multiple-scale subsurfaces are then used to predict surface functions and detect machining faults. In the proposed surface decomposition process, two important issues are addressed: the elimination of border distortion and the transformation between the wavelet scale and its physical dimension. Applications of 2D wavelet decomposition to 3D surfaces are demonstrated using several automotive case studies, including abrupt tool breakage detection, chatter detection, cylinder head mating/sealing surface leak path detection, and transmission clutch piston surface non-clean-up detection. Finally, a novel and automated surface defect detection and classification system for flat machined surfaces is designed. The purpose of this work is to extract microscopic surface anomalies and assign each anomaly to a surface defect type commonly found on automotive machined surfaces. A “breadboard” version of the surface defect inspection system using multiple directional illuminations is constructed, and related image processing algorithms are developed to detect and identify five types of 2D or 3D surface defects (pore, 2D blemish, residue dirt, scratch, and gouge).
    Ph.D. Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78782/1/yiliao_1.pd
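
    A hedged sketch of the two filtering ideas above: a Gaussian low-pass to separate waviness from a measured height map, followed by a 2D wavelet decomposition of the residual into multiple-scale subsurfaces. The plain SciPy Gaussian filter and PyWavelets stand in for the improved filter and the filter bank developed in the thesis; the sigma, wavelet, and level values are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def waviness_and_scales(height_map, cutoff_sigma=25, wavelet="db4", levels=3):
    """Separate a measured 3D surface height map into a waviness component and
    multi-scale detail energies. cutoff_sigma is in pixels and setup-specific."""
    waviness = gaussian_filter(height_map, sigma=cutoff_sigma)  # long-wavelength component
    residual = height_map - waviness                            # roughness + local defects

    # Multi-scale decomposition of the residual with a 2D discrete wavelet transform.
    coeffs = pywt.wavedec2(residual, wavelet, level=levels)
    # coeffs[0] is the coarsest approximation; coeffs[1:] are (cH, cV, cD) detail tuples.
    detail_energy = [float(sum(np.sum(c ** 2) for c in d)) for d in coeffs[1:]]
    return waviness, detail_energy  # per-scale energies can feed fault detection
```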

    Industrial Segment Anything -- a Case Study in Aircraft Manufacturing, Intralogistics, Maintenance, Repair, and Overhaul

    Full text link
Deploying deep learning-based applications in specialized domains like the aircraft production industry typically suffers from the training-data availability problem: only a few datasets represent non-everyday objects, situations, and tasks. Recent advances in research on Vision Foundation Models (VFMs) have opened a new area of tasks and models with high generalization capabilities in non-semantic and semantic predictions. As recently demonstrated by the Segment Anything Project, exploiting the zero-shot capabilities of VFMs is a promising direction for tackling the boundaries set by data, context, and sensor variety, although investigating their application within specific domains remains subject to ongoing research. This paper contributes by surveying applications of the Segment Anything Model (SAM) in aircraft production-specific use cases. We include manufacturing, intralogistics, as well as maintenance, repair, and overhaul processes, which also represent a variety of neighboring industrial domains. Besides presenting the various use cases, we further discuss the injection of domain knowledge.
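
    A minimal sketch of prompting the Segment Anything Model (SAM) with a single point on an industrial image, using Meta's segment-anything package; the checkpoint path and prompt point are assumptions, and real use cases would typically add task-specific prompting and post-processing.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

def segment_part(image_bgr, prompt_xy, checkpoint="sam_vit_h.pth"):
    """Zero-shot segmentation of an industrial part from a single point prompt.
    'checkpoint' is a locally downloaded SAM weight file (path is an assumption)."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))  # SAM expects RGB

    masks, scores, _ = predictor.predict(
        point_coords=np.array([prompt_xy]),   # e.g. a click on the part of interest
        point_labels=np.array([1]),           # 1 = foreground prompt
        multimask_output=True)
    return masks[int(np.argmax(scores))]      # keep the highest-scoring mask
```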