
    Visual Calibration, Identification and Control of 6-RSS Parallel Robots

    Parallel robots offer outstanding advantages over serial manipulators, including a high force-to-weight ratio, greater stiffness and theoretically higher accuracy, and have therefore been used increasingly in various applications. However, due to manufacturing tolerances and defects in the robot structure, the positioning accuracy of parallel robots is essentially equivalent to that of serial manipulators according to previous research on the accuracy analysis of the Stewart Platform [1], which makes it difficult to meet the precision requirements of many potential applications. In addition, the closed-chain mechanism, with its highly coupled dynamics, complicates the design of control systems for practical applications. A visual sensor is a good choice for providing non-contact measurement of the end-effector pose (position and orientation), being simple to operate and low in cost compared to other measurement methods such as the coordinate measurement machine (CMM) [2] and the laser tracker [3]. In this research, a series of solutions covering kinematic calibration, dynamic identification and visual servoing is proposed to improve the positioning and tracking performance of the parallel robot based on the visual sensor. The main contributions of this research comprise three parts. In the first part, a relative pose-based algorithm (RPBA) is proposed to solve the kinematic calibration problem of a six-revolute-spherical-spherical (6-RSS) parallel robot using an optical CMM sensor. Based on the relative poses between the candidate and initial configurations, a calibration algorithm is developed to determine the optimal error parameters of the robot kinematic model along with the external parameters introduced by the optical sensor. The experimental results demonstrate that the proposed RPBA with the optical CMM is an implementable and effective method for parallel robot calibration.
The second part focuses on the dynamic model identification of the 6-RSS parallel robot. A visual closed-loop output-error identification method based on the optical CMM sensor is proposed for the purpose of advanced model-based visual servoing control design of parallel robots. By using an outer-loop visual servoing controller to stabilize both the parallel robot and the simulated model, the visual closed-loop output-error identification method is developed, and the model parameters are identified using a nonlinear optimization technique. The effectiveness of the proposed identification algorithm is validated by experimental tests. In the last part, a dynamic sliding mode control (DSMC) scheme combined with the visual servoing method is proposed to improve the tracking performance of the 6-RSS parallel robot based on the optical CMM sensor. By employing a position-to-torque converter, the torque command generated by the DSMC can be applied to the position-controlled industrial robot. The stability of the proposed DSMC is proven using the Lyapunov theorem. Real-time experimental tests on a 6-RSS parallel robot demonstrate that the developed DSMC scheme is robust to modeling errors and uncertainties. Compared with classical kinematic-level controllers, the proposed DSMC exhibits superior tracking performance and robustness.
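The least-squares idea behind kinematic calibration can be sketched with a toy example. The snippet below uses a hypothetical 2-link planar arm (not the thesis' 6-RSS robot or the RPBA algorithm): since the end-effector position is linear in the link lengths, the true lengths can be recovered from noisy pose measurements by ordinary least squares.

```python
import numpy as np

# Toy analogue of kinematic calibration: recover link-length errors of a
# 2-link planar arm from measured end-effector positions. This is NOT the
# 6-RSS model or the RPBA algorithm -- just the same least-squares idea.

rng = np.random.default_rng(0)
l_true = np.array([0.52, 0.31])     # actual link lengths (with tolerance errors)
l_nominal = np.array([0.50, 0.30])  # lengths assumed by the nominal model

def fk(l, q):
    """Forward kinematics: end-effector (x, y) for joint angles q = (q1, q2)."""
    x = l[0] * np.cos(q[0]) + l[1] * np.cos(q[0] + q[1])
    y = l[0] * np.sin(q[0]) + l[1] * np.sin(q[0] + q[1])
    return np.array([x, y])

# Simulated sensor measurements at random configurations (with small noise).
qs = rng.uniform(-np.pi, np.pi, size=(50, 2))
meas = np.array([fk(l_true, q) for q in qs]) + rng.normal(0, 1e-4, (50, 2))

# Position is linear in the link lengths, so calibration reduces to ordinary
# least squares: stack the regressors [cos q1, cos(q1+q2); sin q1, sin(q1+q2)].
A = np.vstack([np.array([[np.cos(q[0]), np.cos(q[0] + q[1])],
                         [np.sin(q[0]), np.sin(q[0] + q[1])]]) for q in qs])
b = meas.ravel()
l_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(l_est, 4))  # close to the true lengths, not the nominal ones
```

In the thesis the error model is nonlinear in the parameters, so a nonlinear optimizer replaces the single `lstsq` call, but the structure of the problem (pose residuals minimized over kinematic error parameters) is the same.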

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range-map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition.

    Robust Position-based Visual Servoing of Industrial Robots

    Recently, researchers have tried to use dynamic pose correction methods to improve the accuracy of industrial robots. Dynamic path tracking aims at adjusting the end-effector's pose by using a photogrammetry sensor and an eye-to-hand PBVS scheme. This study aims to enhance the accuracy of industrial robots by designing a chattering-free digital sliding mode controller integrated with a novel adaptive robust Kalman filter (ARKF), validated in simulation on a Puma 560 model. The study includes Gaussian noise generation, pose estimation, the design of the adaptive robust Kalman filter, and the design of the chattering-free sliding mode controller. The designed control strategy has been validated and compared with other control strategies in MATLAB 2018a Simulink on a 64-bit PC. The main contributions of the work are summarized as follows. First, noise removal in the pose estimation is carried out by the novel ARKF. The proposed ARKF deals with experimental noise generated by the photogrammetry observation sensor C-track 780. It exploits the advantages of an adaptive estimation method for the state noise covariance (Q), least-squares identification for the measurement noise covariance (R) and a robust mechanism for the state-error covariance (P). The Gaussian noise generation is based on data collected from the C-track while the robot is stationary. A novel method for estimating the covariance matrix R that considers the effects of both velocity and pose is suggested. Next, a robust PBVS approach for industrial robots based on a fast discrete sliding mode controller (FDSMC) and the ARKF is proposed. The FDSMC takes advantage of a nonlinear reaching law, which results in faster and more accurate trajectory tracking than standard DSMC. Substituting the switching function with a continuous nonlinear reaching law leads to a continuous output, thus eliminating chattering.
Additionally, the sliding-surface dynamics are taken to be nonlinear, which increases the convergence speed and accuracy. Finally, analysis techniques for various types of sliding mode controllers have been used for comparison, and kinematic and dynamic models with revolute joints are built for the Puma 560 for simulation validation. Based on the computed performance indicators, it is shown that, after tuning the controller parameters, the chattering-free FDSMC integrated with the ARKF can substantially reduce the effect of uncertainties in the robot dynamic model and improve the tracking accuracy of the 6 degree-of-freedom (DOF) robot.
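The predict/update recursion at the heart of any Kalman-filter-based pose denoising can be illustrated in one dimension. This is a minimal sketch with fixed, assumed-known Q and R, whereas the ARKF described above adapts them online; the numbers are invented for illustration.

```python
import numpy as np

# Minimal 1-D Kalman filter smoothing noisy pose measurements. Q and R are
# fixed here, whereas the ARKF in the work above adapts them online -- this
# only illustrates the underlying predict/update recursion.

rng = np.random.default_rng(1)
true_pose = 2.0                            # stationary end-effector pose (1 DOF)
z = true_pose + rng.normal(0, 0.05, 200)   # noisy sensor readings

Q, R = 1e-6, 0.05 ** 2  # process / measurement noise covariances (assumed known)
x, P = 0.0, 1.0         # initial state estimate and its covariance

for zk in z:
    P = P + Q                  # predict (static model: the pose stays put)
    K = P / (P + R)            # Kalman gain
    x = x + K * (zk - x)       # update with the innovation
    P = (1 - K) * P

print(round(x, 3))  # close to 2.0, far better than any single raw reading
```

After 200 updates the state covariance P has shrunk well below the raw measurement variance R, which is exactly the noise-removal effect the ARKF provides for the pose estimates.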

    ToF cameras for active vision in robotics

    ToF cameras are now a mature technology that is widely being adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those requiring capture of the whole scene and those centered on an object. It will be demonstrated that it is in this last group of applications, in which the robot has to locate and possibly manipulate an object, that the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works, highlighting for each which of the distinctive ToF characteristics have been most essential. Even if at low resolution, the acquisition of 3D images at frame rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify with real images some of the stated advantages. This work was supported by the EU project GARNICS FP7-247947, by the Spanish Ministry of Science and Innovation under project PAU+ DPI2011-27510, and by the Catalan Research Commission through SGR-00155.
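The background/foreground segmentation that depth-at-frame-rate enables is essentially a per-pixel threshold test. The sketch below fabricates a synthetic depth map in place of a real ToF frame; the array shape, distances and threshold are arbitrary example values.

```python
import numpy as np

# Quick background/foreground segmentation on a ToF depth map: everything
# nearer than a depth threshold is foreground. The "depth image" here is
# synthetic; a real ToF camera would supply one like it at frame rate.

depth = np.full((120, 160), 3.0)      # background wall at 3 m
depth[40:80, 60:100] = 1.2            # a box-shaped object at 1.2 m

threshold = 2.0                       # metres: tuned per scene
foreground = depth < threshold        # boolean mask, one test per pixel

print(int(foreground.sum()))          # foreground pixel count: 40 * 40 = 1600
```

Because the test is a single vectorized comparison, it comfortably runs at camera frame rate, which is why this simple scheme is so widely used before more expensive object-modeling steps.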

    A framework for flexible integration in robotics and its applications for calibration and error compensation

    Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing cost. Many existing robot systems augmented with guidance from a large-volume metrology system have proved to meet the high dimensional-accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly, dedicated systems, which might not be ideal for aerospace manufacturing with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems. To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology for composing distributed software components which can be integrated dynamically at runtime. This allows the automation devices (robots, metrology, actuators etc.) controlled by these software components to be assembled on demand for various assembly applications. To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, thus allowing one expensive metrology system to be shared by a number of robots. Robot calibration is employed in the first stage to remove the majority of the robot inaccuracy; the metrology then corrects the residual errors. In this work, a new calibration model is developed for serial robots with a parallelogram linkage that takes into account both geometric errors and the joint deflections induced by link masses and the weight of the end-effector. Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is adopted to create a distributed control system that implements calibration and error compensation for a large industrial robot with a parallelogram linkage.
The control system is formed by hot-plugging the control applications of the robot and the metrology system used together. Experimental results show that the developed error model improved the positional accuracy of the loaded robot from several millimetres to less than one millimetre and halved the time previously required to correct the errors using the metrology alone. The experiments also demonstrate the capability of sharing one metrology system among more than one robot.
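The two-stage idea can be sketched numerically: a calibrated model removes most of the systematic error first, and an intermittent metrology measurement then corrects what remains. All the magnitudes below are invented to mirror the millimetre-to-sub-millimetre improvement reported above, not taken from the thesis.

```python
import numpy as np

# Numeric sketch of two-stage error compensation: stage 1 applies a model
# prediction of the systematic error, stage 2 applies a one-off metrology
# correction of the residual. Values are illustrative only.

rng = np.random.default_rng(2)
target = np.array([1.0, 0.5, 0.2])            # commanded pose (m)

# Stage 0: uncompensated robot -- millimetre-level systematic error.
systematic = np.array([3e-3, -2e-3, 1e-3])    # geometric + deflection errors
reached = target + systematic + rng.normal(0, 5e-5, 3)

# Stage 1: the calibration model predicts most of the systematic error.
predicted = 0.9 * systematic                  # assume the model captures ~90%
after_calib = reached - predicted

# Stage 2: metrology measures the residual and the robot corrects once more.
residual = after_calib - target               # what the tracker would report
after_metrology = after_calib - residual

print(np.linalg.norm(reached - target),         # a few millimetres
      np.linalg.norm(after_calib - target),     # sub-millimetre
      np.linalg.norm(after_metrology - target))
```

The point of the split is economic: stage 1 needs no external sensor at all, so the expensive metrology system is only needed briefly in stage 2 and can be time-shared across several robots.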

    A framework for digitisation of manual manufacturing task knowledge using gaming interface technology

    Intense market competition and the global skill-supply crunch are hurting the manufacturing industry, which is heavily dependent on skilled labour. To remain competitive, companies must look for innovative ways to acquire manufacturing skills from their experts and transfer them to novices, and eventually to machines. Both the manufacturing industry and research lack systematic processes for the cost-effective capture and transfer of human skills. The aim of this research is therefore to develop a framework for the digitisation of manual manufacturing task knowledge, a major constituent of which is human skill. The proposed digitisation framework is based on the theory of human-workpiece interactions developed in this research. The unique aspect of the framework is the use of consumer-grade gaming interface technology to capture and record manual manufacturing tasks in digital form, enabling the extraction, decoding and transfer of the manufacturing knowledge constituents associated with the task. The framework is implemented, tested and refined using five case studies: one toy assembly task, two real-life-like assembly tasks, one simulated assembly task and one real-life composite layup task. It is successfully validated based on the outcomes of the case studies and a benchmarking exercise conducted to evaluate its performance.
This research contributes to knowledge in five main areas, namely: (1) the theory of human-workpiece interactions for deciphering human behaviour in manual manufacturing tasks; (2) a cohesive and holistic framework to digitise manual manufacturing task knowledge, especially tacit knowledge such as human action and reaction skills; (3) the use of low-cost gaming interface technology to capture human actions and the effect of those actions on workpieces during a manufacturing task; (4) a new way to use hidden Markov modelling to produce digital skill models that represent the human ability to perform complex tasks; and (5) the extraction and decoding of manufacturing knowledge constituents from the digital skill models.
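A digital skill model built on hidden Markov modelling ultimately maps observed motions to hidden task phases. The sketch below is a minimal Viterbi decoder for a two-state HMM; the phase names, observation symbols and probabilities are all invented for illustration and do not come from the thesis.

```python
import numpy as np

# Minimal Viterbi decoder for a two-state HMM, sketching how a digital skill
# model could map a sequence of observed motions to hidden task phases.
# States, symbols and probabilities here are invented examples.

states = ["reach", "insert"]             # hypothetical task phases
obs_symbols = {"move": 0, "push": 1}

start = np.array([0.8, 0.2])
trans = np.array([[0.7, 0.3],            # P(next phase | current phase)
                  [0.2, 0.8]])
emit = np.array([[0.9, 0.1],             # P(observation | phase)
                 [0.2, 0.8]])

def viterbi(observations):
    obs = [obs_symbols[o] for o in observations]
    n = len(obs)
    delta = np.zeros((n, 2))             # best path probabilities
    back = np.zeros((n, 2), dtype=int)   # backpointers
    delta[0] = start * emit[:, obs[0]]
    for t in range(1, n):
        for j in range(2):
            scores = delta[t - 1] * trans[:, j]
            back[t, j] = scores.argmax()
            delta[t, j] = scores.max() * emit[j, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):        # follow backpointers to the start
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(["move", "move", "push", "push"]))
# ['reach', 'reach', 'insert', 'insert']
```

In a real skill model the observations would be quantized sensor readings from the gaming interface, and the learned state sequence would segment a recorded task into its constituent phases.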

    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Interacting with the environment using our hands is one of the abilities that distinguishes humans from other species. This aptitude is reflected in the crucial role played by object manipulation in the world that we have shaped for ourselves. With a view to bringing robots out of factories to support people in everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, on proprioception for awareness of the relative position of their own body in space, and on the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and it has drawn increasing research attention in recent years. In this thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline we designed, based on superquadric functions, meets this need, since it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object.
Retrieving object information with touch sensors alone is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key point of autonomous manipulation reported in this thesis is bi-manual coordination: the execution of more advanced manipulation tasks may require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring one hand to lift the object and the other to place it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans actually do, in that humans develop their manipulation skills by learning through experience and trial and error. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications. For this reason, this thesis also reports on the six-month visit to the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.
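The superquadric primitive used for object modeling is defined by its inside-outside function: values below 1 mean the point lies inside the surface, 1 means on it, and above 1 outside. The sketch below evaluates the standard formula; the sizes and shape exponents are arbitrary example values, not fitted to any real object.

```python
# Inside-outside function of a superquadric, the primitive used for object
# modelling in grasping pipelines. F < 1: point inside, F == 1: on the
# surface, F > 1: outside. Sizes/exponents are arbitrary examples.

def superquadric_F(p, a=(0.05, 0.03, 0.08), e=(0.5, 0.8)):
    """p: (x, y, z) in the superquadric frame; a: semi-axes; e: shape exponents."""
    x, y, z = (abs(c) for c in p)        # the formula uses |coordinates|
    e1, e2 = e
    term_xy = (x / a[0]) ** (2 / e2) + (y / a[1]) ** (2 / e2)
    return term_xy ** (e2 / e1) + (z / a[2]) ** (2 / e1)

print(superquadric_F((0.0, 0.0, 0.08)))   # point on the top pole -> 1.0
print(superquadric_F((0.0, 0.0, 0.0)))    # centre -> 0.0 (inside)
print(superquadric_F((0.1, 0.0, 0.0)))    # outside along x -> greater than 1
```

Fitting a superquadric to a partial point cloud amounts to choosing the semi-axes, exponents and pose that bring F as close to 1 as possible over all observed points, after which grasp poses can be computed analytically on the recovered surface.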

    Lokales Lernen für visuell kontrollierte Roboter (Local Learning for Visually Controlled Robots)

    In this thesis a new supervised function approximation technique called the Hierarchical Network of Locally Arranged Models is proposed to aid the development of learning-based visual robotic systems. Within a coherent framework, the new approach offers various means of creating modular solutions to learning problems. It is possible to build up heterogeneous hierarchies in which different subnetworks rely on different information sources. Modularity is realized by an automatic division of the input space of the target function into local regions where non-redundant models perform the demanded mapping into the output space. The goal is to replace one complex global model with a set of simple local ones; for example, a non-linear function can be approximated by a number of simple linear models. The advantage of locality is the reduction of complexity: simple local models can be established more robustly and analyzed more easily. Global validity is ensured by local specialization. The presented approach relies essentially on two new contributions: means of defining the so-called domains of the local models (i.e. the regions of their validity) and algorithms for splitting up the input space in order to achieve good approximation quality. The suggested domain models have different degrees of flexibility, so the local regions can take various shapes. Two learning algorithms are developed: the offline version works on a fixed training set acquired before the application of the network, while the online version is useful if the network should be continually refined during operation. Both algorithms follow the strategy of placing more local models in those regions of the input space where good approximation of the target function is harder to achieve.
Furthermore, mechanisms are proposed that unify domains in order to simplify the created networks, that define the degree of cooperation and competition between the different local models, and that automatically detect data outliers to secure the application of a network. The value of the new approach is validated on public benchmark tests in which several competitors are outperformed. The second major topic of this thesis is the application of the new machine learning technique in an adaptive robot vision system. The task solved is to train a robot arm to play a shape-sorter puzzle in which blocks have to be inserted into holes. To do so, different software modules are developed that realize interleaving perception-action cycles driving the robot based on visual feedback. A visual servoing algorithm is presented that offers a simple principle for learning robot movements; it is based on the acquisition of training samples that represent observations of correct robot moves. The new approach to machine learning, and specifically its features that are uncommon among supervised learning techniques, proves useful for realizing this robot vision system. The possibility of combining different information sources in a hierarchy of local models helps to introduce application-specific knowledge into the trained models. The outlier detection mechanism triggers error feedback within the system, and the online learning algorithm makes the robot system robust against changes in its environment.
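The "many simple local models instead of one complex global model" idea can be demonstrated in miniature: approximate sin(x) with linear fits on local regions of the input space. The fixed uniform split below stands in for the learned domains of the thesis, which are far more flexible.

```python
import numpy as np

# Toy version of local-model approximation: a set of simple linear fits over
# local input regions replaces one global model. The uniform split is a
# stand-in for the learned, variably-shaped domains described above.

rng = np.random.default_rng(3)
x = rng.uniform(0, 2 * np.pi, 400)
y = np.sin(x)

n_regions = 8
edges = np.linspace(0, 2 * np.pi, n_regions + 1)

models = []                        # one (slope, intercept) per local domain
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (x >= lo) & (x < hi)
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    models.append((slope, intercept))

def predict(xq):
    """Route the query to its local domain, then apply that linear model."""
    i = min(int(np.searchsorted(edges, xq, side="right")) - 1, n_regions - 1)
    slope, intercept = models[i]
    return slope * xq + intercept

grid = np.linspace(0, 2 * np.pi, 200)
err_local = max(abs(predict(v) - np.sin(v)) for v in grid)
slope_g, icept_g = np.polyfit(x, y, 1)                 # one global linear fit
err_global = max(abs(slope_g * v + icept_g - np.sin(v)) for v in grid)
print(err_local < err_global)   # local models beat the single global fit
```

Each local model is trivially analyzable (a slope and an intercept), which is exactly the reduction-of-complexity argument made above; the thesis adds learned domain shapes, hierarchies and online refinement on top of this basic scheme.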

    Integrating Vision and Physical Interaction for Discovery, Segmentation and Grasping of Unknown Objects

    In this work, image processing methods and the ability of humanoid robots to physically interact with their environment are used in close interplay to identify unknown objects, to separate them from the background and from other objects, and ultimately to grasp them. In the course of this interactive exploration, object properties such as appearance and shape are also determined.

    Mobile Robots Navigation

    Mobile robot navigation comprises several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories.
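As a concrete instance of the path-planning activity, breadth-first search on an occupancy grid finds a shortest obstacle-free route. The grid, start and goal below are invented for illustration.

```python
from collections import deque

# Minimal grid path planner: breadth-first search finds a shortest
# 4-connected route on an occupancy grid, avoiding obstacle cells.

grid = ["....#",
        "..#.#",
        "..#..",
        "#.#..",
        "....."]           # '#' = obstacle, '.' = free cell

def plan(start, goal):
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}            # doubles as the visited set
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # reconstruct the path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == '.' and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                        # goal unreachable

path = plan((0, 0), (4, 4))
print(len(path) - 1)   # number of moves on a shortest route -> 8
```

BFS guarantees a shortest path on a uniform-cost grid; the chapters on path planning in the book cover richer variants (weighted costs, continuous spaces, dynamic replanning) built on the same search skeleton.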