
    Collaborative and Cooperative Robotics Applications using Visual Perception

    The objective of this Thesis is to develop novel integrated strategies for collaborative and cooperative robotic applications. Industrial robots commonly operate in structured environments, in work cells separated from human operators. Nowadays, collaborative robots can share the workspace and collaborate with humans or other robots to perform complex tasks. These robots often operate in unstructured environments and therefore need sensors and algorithms to perceive changes in the environment. Advanced vision and control techniques have been analyzed to evaluate their performance and their applicability to industrial tasks, and selected techniques have then been applied for the first time in an industrial context. A Peg-in-Hole task has been chosen as the first case study: it has been extensively studied but remains challenging, since it requires accuracy both in determining the hole poses and in positioning the robot. Two solutions have been developed and tested, and the experimental results are discussed to highlight the advantages and disadvantages of each technique. Grasping partially known objects in unstructured environments is one of the most challenging issues in robotics: it is a complex task that requires addressing multiple subproblems, including object localization and grasp pose detection. Vision techniques for this class of problems have also been analyzed, and one of them has been adapted for use in industrial scenarios. Moreover, as a second case study, a robot-to-robot object handover task in a partially structured environment, without explicit communication between the robots, has been developed and validated. Finally, the two case studies have been integrated into two real industrial setups to demonstrate the applicability of the strategies to solving industrial problems.
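The "advanced vision and control techniques" mentioned above commonly include image-based visual servoing (IBVS), where a control law drives the image-feature error to zero. The sketch below is one hedged illustration of that classic law for a single point feature under pure planar camera translation; the depth, gain and feature values are illustrative assumptions, not values from the thesis:

```python
# Minimal image-based visual servoing (IBVS) sketch for one point feature
# under camera translation parallel to the image plane.
# Z, LAM, DT and all feature coordinates are illustrative assumptions.

Z = 0.5    # assumed constant feature depth [m]
LAM = 2.0  # control gain (hypothetical)
DT = 0.02  # control period [s]

def ibvs_step(s, s_star):
    """One IBVS update: v = -lam * L^+ * e with L = -(1/Z) * I,
    hence L^+ = -Z * I and v = lam * Z * e."""
    e = [si - gi for si, gi in zip(s, s_star)]
    v = [LAM * Z * ei for ei in e]  # camera velocity command
    # Feature motion model: s_dot = L @ v = -(1/Z) * v
    s_next = [si - (1.0 / Z) * vi * DT for si, vi in zip(s, v)]
    return s_next, v

def servo(s, s_star, steps=400):
    """Run the closed loop until the feature error has decayed."""
    for _ in range(steps):
        s, _ = ibvs_step(s, s_star)
    return s
```

With this proportional law the feature error decays exponentially, which is why IBVS is attractive for tasks like peg-in-hole alignment where the goal is expressed directly in image coordinates.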

    High Resolution Vision-Based Servomechanism Using a Dynamic Target with Application to CNC Machines

    This dissertation introduces a novel three-dimensional vision-based servomechanism with application to real-time position control of manufacturing equipment, such as Computer Numerical Control (CNC) machine tools. The proposed system directly observes the multi-dimensional position of a point on the moving tool relative to a fixed ground, thus bypassing the inaccurate kinematic model normally used to convert axis sensor readings into an estimate of the tool position. A charge-coupled device (CCD) camera is used as the position transducer, which directly measures the current position error of the tool referenced to an absolute coordinate system. Owing to the direct-sensing nature of the transducer, no geometric error compensation is required. Two new signal processing algorithms, based on a recursive Newton-Raphson optimization routine, are developed to process the input data collected through digital imaging. The algorithms allow simultaneous high-precision position and orientation estimation from single readings. The desired displacement command of the tool in a planar environment is emulated, at one end of the kinematic chain, by an active element or active target pattern on a liquid-crystal display (LCD). At the other end of the kinematic chain, the digital camera observes the active target and provides the visual feedback used for position control of the tool. Implementation is carried out on an XYΞZ stage, which is positioned with high resolution. The introduction of the camera into the control loop yields a visual servo architecture, whose dynamic problems and stability are analyzed in depth for the case study of the single-camera, single-image-processing-thread configuration. Finally, two new command generation protocols are explained for full implementation of the proposed structure in real-time control applications. Command-issuing resolutions do not depend upon the size of the smallest element of the grid/display being imaged, but can instead be determined in accordance with the sensor's resolution.
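Simultaneous position-and-orientation estimation of the kind described can be sketched with a Gauss-Newton refinement (a least-squares variant of the Newton-Raphson routine named in the abstract): iterate on a planar pose (x, y, θ) until the predicted target points match the observed ones. The target pattern, initial guess and tolerances below are assumptions for illustration, not the dissertation's algorithm:

```python
import math

# Model points of a planar target pattern in its own frame -- an
# illustrative stand-in for the LCD "active target" grid.
MODEL = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def project(pose, pts):
    """Rigid planar transform of the model points for pose (x, y, theta)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in pts]

def residuals(pose, obs):
    """Stacked differences between predicted and observed points."""
    r = []
    for (u, v), (ou, ov) in zip(project(pose, MODEL), obs):
        r += [u - ou, v - ov]
    return r

def solve3(A, b):
    """Gaussian elimination for the 3x3 normal equations (no pivoting;
    adequate for the well-conditioned systems arising here)."""
    A = [row[:] for row in A]
    b = b[:]
    for i in range(3):
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def estimate_pose(obs, pose=(0.0, 0.0, 0.0), iters=20, eps=1e-6):
    """Gauss-Newton refinement of (x, y, theta) from observed points."""
    for _ in range(iters):
        r = residuals(pose, obs)
        # Numeric Jacobian, one column per pose parameter.
        cols = []
        for j in range(3):
            p = list(pose)
            p[j] += eps
            rp = residuals(tuple(p), obs)
            cols.append([(a - b_) / eps for a, b_ in zip(rp, r)])
        # Normal equations: (J^T J) dx = -J^T r
        JtJ = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j]))
                for j in range(3)] for i in range(3)]
        Jtr = [sum(c * ri for c, ri in zip(cols[i], r)) for i in range(3)]
        dx = solve3(JtJ, [-g for g in Jtr])
        pose = tuple(p + d for p, d in zip(pose, dx))
    return pose
```

On noise-free data the iteration converges to the generating pose in a handful of steps; real image measurements would make this a least-squares fit rather than an exact solve.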

    Towards Robotic Laboratory Automation Plug & Play: Survey and Concept Proposal on Teaching-free Robot Integration with the LAPP Digital Twin

    The Laboratory Automation Plug & Play (LAPP) framework is an overarching reference architecture concept for the integration of robots in life science laboratories. The plug & play nature lies in the fact that no manual configuration, including the teaching of the robots, is required. In this paper, a digital twin (DT) based concept is proposed that outlines the types of information that have to be provided for each relevant component of the system. In particular, for the devices interfacing with the robot, the robot positions have to be defined beforehand by the vendor in a device-attached coordinate system (CS). This CS has to be detectable by the vision system of the robot by means of optical markers placed on the front side of the device. With that, the robot is capable of tending the machine by performing pick-and-place transportation of standard sample carriers. This basic use case is the primary scope of the LAPP-DT framework; the hardware scope is limited at this stage to simple benchtop and mobile manipulators with parallel grippers. The paper first provides an overview of relevant literature and state-of-the-art solutions, then outlines the framework at the conceptual level, followed by the specification of the relevant DT parameters for the robot, the devices and the facility. Finally, appropriate technologies and strategies are identified for the implementation.
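The device-attached coordinate system idea reduces, at its core, to transform composition: marker detection yields the device CS pose in the robot base frame, and the vendor-defined positions expressed in the device CS are mapped into robot coordinates. A minimal planar sketch — the names, numbers and 2D simplification are hypothetical, not part of the LAPP specification:

```python
import math

def planar_transform(x, y, theta):
    """3x3 homogeneous transform for a planar pose (x, y, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0.0, 0.0, 1.0]]

def apply(T, p):
    """Map a point from the transform's source frame into its target frame."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Marker detection yields the device CS pose in the robot base frame
# (hypothetical numbers); the vendor ships the pick point in the device CS.
T_base_device = planar_transform(1.2, 0.4, math.pi / 2)
pick_in_device = (0.10, 0.05)
pick_in_base = apply(T_base_device, pick_in_device)
```

Because the pick point is authored once in the device frame, re-detecting the markers after the device is moved updates `T_base_device` and the robot target follows without any re-teaching — which is exactly the teaching-free property the framework aims for.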

    A framework for digitisation of manual manufacturing task knowledge using gaming interface technology

    Intense market competition and the global skill-supply crunch are hurting the manufacturing industry, which is heavily dependent on skilled labour. To remain competitive, companies must look for innovative ways to acquire manufacturing skills from their experts and transfer them to novices and, eventually, to machines. There is a lack of systematic processes, in both the manufacturing industry and research, for the cost-effective capture and transfer of human skills. The aim of this research is therefore to develop a framework for the digitisation of manual manufacturing task knowledge, a major constituent of which is human skill. The proposed digitisation framework is based on the theory of human-workpiece interactions developed in this research. The unique aspect of the framework is the use of consumer-grade gaming interface technology to capture and record manual manufacturing tasks in digital form, enabling the extraction, decoding and transfer of the manufacturing knowledge constituents associated with the task. The framework is implemented, tested and refined using five case studies: one toy assembly task, two real-life-like assembly tasks, one simulated assembly task and one real-life composite layup task. It is successfully validated based on the outcomes of the case studies and a benchmarking exercise conducted to evaluate its performance.
    This research contributes to knowledge in five main areas: (1) the theory of human-workpiece interactions for deciphering human behaviour in manual manufacturing tasks; (2) a cohesive and holistic framework for digitising manual manufacturing task knowledge, especially tacit knowledge such as human action and reaction skills; (3) the use of low-cost gaming interface technology to capture human actions and the effect of those actions on workpieces during a manufacturing task; (4) a new way to use hidden Markov modelling to produce digital skill models that represent human ability to perform complex tasks; and (5) the extraction and decoding of manufacturing knowledge constituents from the digital skill models.
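The hidden-Markov-model skill idea can be sketched with the standard forward algorithm: a skill model scores how well an observed action sequence matches the expected phases of a task. The states, symbols and probabilities below are toy assumptions for illustration, not the study's learned models:

```python
import math

# Toy discrete HMM "skill model": hidden states are hypothetical task
# phases (reach, align, insert); observations are quantized sensor symbols.
STATES = 3
PI = [0.8, 0.1, 0.1]          # initial phase distribution
A = [[0.7, 0.3, 0.0],         # phase transitions (no going backwards)
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]
B = [[0.8, 0.1, 0.1],         # P(symbol | phase)
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]

def forward_loglik(obs):
    """Log-likelihood of an observation sequence (scaled forward algorithm)."""
    alpha = [PI[i] * B[i][obs[0]] for i in range(STATES)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(STATES)) * B[j][o]
                 for j in range(STATES)]
        scale = sum(alpha)          # rescale each step to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik
```

A sequence that progresses through the phases in order scores a higher log-likelihood than one that runs them in reverse, which is the property that lets such a model grade how "skilfully" a recorded task execution matches the expert's pattern.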

    CAD2Real: Deep learning with domain randomization of CAD data for 3D pose estimation of electronic control unit housings

    Electronic control units (ECUs) are essential for many automobile components, e.g. the engine, the anti-lock braking system (ABS), steering and airbags. For some products, the 3D pose of each individual ECU needs to be determined during series production. Deep learning approaches cannot easily be applied to this problem because labeled training data is not available in sufficient quantity. We therefore train state-of-the-art artificial neural networks (ANNs) on purely synthetic training data, which is automatically created from a single CAD file. By randomizing parameters during the rendering of training images, we enable inference on RGB images of a real sample part. In contrast to classic image processing approaches, this data-driven approach poses only a few requirements regarding the measurement setup and transfers to related use cases with little development effort.
    Comment: Proc. 30. Workshop Computational Intelligence, Berlin, 202
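Domain randomization as described here amounts to sampling a fresh set of render parameters per synthetic image, so that the appearance of the real part falls inside the training distribution. A sketch with hypothetical parameter names and ranges (the paper's actual randomized parameters are not specified in the abstract):

```python
import random

def sample_render_params(rng):
    """One randomized render configuration for a synthetic training image.
    All names and ranges are illustrative assumptions."""
    return {
        "light_intensity": rng.uniform(0.2, 2.0),
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "cam_distance_m": rng.uniform(0.3, 0.8),
        "part_yaw_deg": rng.uniform(-180.0, 180.0),  # pose label source
        "background_id": rng.randrange(50),          # random backdrop texture
        "noise_sigma": rng.uniform(0.0, 0.02),       # sensor-noise augmentation
    }

def make_dataset(n, seed=0):
    """Deterministically generate n render configurations from one seed."""
    rng = random.Random(seed)
    return [sample_render_params(rng) for _ in range(n)]
```

Each sampled configuration would drive one render of the CAD model, with the sampled pose doubling as the ground-truth label — which is why a single CAD file suffices to produce arbitrarily many labeled images.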

    Computing gripping points in 2D parallel surfaces via polygon clipping


    Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021

    This Open Access proceedings volume presents a broad overview of the current research landscape of assembly, handling and industrial robotics. The objective of the MHI Colloquium is successful networking at both the academic and the management level. To that end, the colloquium focuses on high-level academic exchange in order to disseminate the obtained research results, identify synergy effects and trends, connect the actors in person and, in conclusion, strengthen the research field as well as the MHI community. In addition, there is the opportunity to become acquainted with the organizing institute. The primary audience is formed by the members of the scientific society for assembly, handling and industrial robotics (WGMHI).

