9 research outputs found

    Improving vision based pose estimation using LSTM neural networks

    Localization and estimation of bending and twisting loads using neural networks

    Developing Learning Algorithms for Enhancing Industrial Machine Vision Systems and Improving Task Accuracy of Robotic Manipulators

    Vision-based learning techniques have become increasingly important in recent years for the development of highly accurate and robust algorithms in various fields of industry. Among the most important applications of machine vision and learning in industry are structural health monitoring (SHM) and industrial robotics. SHM has become a critical technology for monitoring the structural integrity of composite materials used in the aerospace industry. Since airframes operate under continuous external loads, they are exposed to large deformations that may adversely affect their structural integrity; critical components such as wings should therefore be continuously monitored to ensure a long service life. In this thesis, a real-time SHM system is developed for airframe structures to localize and estimate the magnitude of the loads causing deflections of the wings. To this end, a framework based on artificial neural networks (ANN) is developed in which features extracted from a depth camera are utilized. The localization of the load is treated as a multinomial logistic classification problem and the load magnitude estimation as a logistic regression problem. The neural networks trained for classification and regression are preceded by an autoencoder (AE), through which data at a much smaller scale are extracted from the depth images. The effectiveness of the proposed method is validated by an experimental study performed on a composite UAV wing subject to concentrated and distributed loads, and the results obtained by the proposed method are superior to those of a method based on Castigliano's theorem.
    Industrial robots, in turn, are poised to replace CNC machines in the near future owing to their lower price, high degree of automation and larger working space; however, their relatively low accuracy hinders their wide deployment in the manufacturing industry. Laser trackers are known to significantly increase their accuracy for manipulation tasks, but their high cost is a major obstacle to their use, so more affordable solutions such as machine vision systems can become a valuable addition to the robotics industry. In this thesis, an eye-to-hand camera-based pose estimation system is developed for robotic machining, and the accuracy of the pose estimated through the Levenberg-Marquardt (LM) algorithm is improved using three supervised learning approaches that can enhance the estimated pose during both no-load trajectory tracking and machining. The first proposed method is based on a Long Short-Term Memory (LSTM) neural network, and the other two are based on sparse regression and are named Sparse Identification of Nonlinear Statics (SINS) and Sparse Nonlinear Finite Impulse Response (SNFIR). Both the LSTM and SNFIR algorithms take the dynamics into account during robotic machining by utilizing the torque information available from the sensors at each joint to improve the estimated pose, whereas the SINS algorithm improves the estimated pose through nonlinear static functions during no-load trajectory tracking.
    The proposed methods are validated by an experimental study performed using a KUKA KR240 R2900 ultra robot following sixteen distinct trajectories based on ISO 9283, in addition to two distinct machining processes: milling a NAS 979 part, during which the orientation of the cutting tool was fixed, and free-form milling, during which the orientation of the cutting tool changed continuously. Additionally, a target object to be tracked by the camera was designed with fiducial markers to guarantee trackability within ±90° in all directions; the design of these fiducial markers guarantees the detection of at least two distinct non-parallel markers from any view, thus preventing pose estimation ambiguities. Moreover, in order to reduce the human errors arising from the construction of the camera target and the placement of the markers on it, this work proposes a method for optimizing the positions of the corners of the fiducial markers in the object frame using a laser tracker. The proposed methods were compared with an Extended Kalman Filter (EKF), and the experimental results show that the proposed approaches significantly improve the pose estimation accuracy and precision of the vision-based system during robotic machining while proving much more effective than the EKF approach. Moreover, the proposed methods based on sparse regression provide parsimonious models and better results than the proposed LSTM-based approach.
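    To make the sparse-regression idea above concrete, the sketch below shows one way an SNFIR-style corrector could be set up: lagged LM pose estimates and joint torques are expanded with polynomial terms, and a sparsity-promoting regression predicts the residual with respect to a laser-tracker reference. This is a minimal illustration under assumed choices (feature construction, lag length, scikit-learn's Lasso), not the thesis implementation.

```python
# Illustrative sketch only (not the thesis implementation) of a sparse
# nonlinear FIR-style pose corrector: lagged LM poses and joint torques are
# expanded with polynomial terms, and a Lasso regression (assumed here for
# sparsity) predicts the residual between the LM pose and the reference.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

def build_features(pose_lm, torques, n_lags=3, degree=2):
    """Stack the last n_lags poses and torque vectors, then add polynomial terms."""
    rows = []
    for k in range(n_lags, len(pose_lm)):
        lagged = np.hstack([np.hstack([pose_lm[k - i], torques[k - i]])
                            for i in range(n_lags)])
        rows.append(lagged)
    return PolynomialFeatures(degree).fit_transform(np.asarray(rows))

def fit_corrector(pose_lm, torques, pose_ref, n_lags=3):
    """pose_lm, pose_ref: (N, 6) pose arrays; torques: (N, 6) joint torques."""
    X = build_features(pose_lm, torques, n_lags)
    y = (pose_ref - pose_lm)[n_lags:]            # residual the model must predict
    model = Lasso(alpha=1e-3, max_iter=50_000)   # sparsity -> parsimonious model
    model.fit(X, y)
    return model

def correct(model, pose_lm, torques, n_lags=3):
    """Add the predicted residual to the LM estimates."""
    X = build_features(pose_lm, torques, n_lags)
    return pose_lm[n_lags:] + model.predict(X)
```

    Because the Lasso drives most coefficients to zero, the fitted corrector stays parsimonious, in line with the sparse-regression methods described above.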

    Increasing trajectory tracking accuracy of industrial robots using SINDYc

    In this work, a feedforward control approach based on SINDYc (Sparse Identification of Nonlinear Dynamics with Control) is proposed for increasing the trajectory tracking accuracy of industrial robots. First, the dynamic relationship between the desired and the actual trajectory is sparsely identified using polynomial basis functions. Then, a new trajectory is created from the desired trajectory using a feedforward controller based on the inverse of the sparsely identified dynamic model. The effectiveness of the proposed approach is evaluated by a simulation study in which four different KUKA robots were tasked with following 16 distinct trajectories based on the ISO 9283 standard. The obtained results show that the proposed method successfully models the dynamic relationship between the desired and the actual trajectory with accuracies above 98.09% across all of the robots. Moreover, the developed feedforward controller improves the trajectory tracking accuracy of the robots by at least 91.1% for position tracking and 94.5% for orientation tracking, while providing parsimonious models.
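    As a rough illustration of the approach above, the sketch below identifies a sparse polynomial map from the commanded to the executed trajectory using sequential thresholded least squares (the regression at the core of SINDy) and then pre-distorts the command by iteratively inverting that map. It is a simplified, per-sample version of the idea; the basis degree, threshold and fixed-point inversion scheme are assumptions, not the paper's implementation.

```python
# Illustrative sketch only (not the paper's implementation): a sparse
# polynomial map from commanded to executed trajectory samples is identified
# with sequential thresholded least squares, then the command is pre-distorted
# by a simple fixed-point inversion of that map.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def stlsq(Theta, Y, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: yields a sparse coefficient matrix."""
    Xi = np.linalg.lstsq(Theta, Y, rcond=None)[0]
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0
        for j in range(Y.shape[1]):              # refit each output on its support
            active = np.abs(Xi[:, j]) >= threshold
            if active.any():
                Xi[active, j] = np.linalg.lstsq(Theta[:, active], Y[:, j],
                                                rcond=None)[0]
    return Xi

poly = PolynomialFeatures(degree=3)              # polynomial basis functions

def identify(desired, actual):
    """Fit the sparse map from commanded (desired) to executed (actual) samples."""
    Theta = poly.fit_transform(desired)
    return stlsq(Theta, actual)

def feedforward(desired, Xi, n_iter=20, gain=0.5):
    """Pre-distort the command so the model-predicted output matches the target.
    identify() must be called first so that `poly` is fitted."""
    cmd = np.array(desired, dtype=float)
    for _ in range(n_iter):
        predicted = poly.transform(cmd) @ Xi
        cmd += gain * (desired - predicted)      # fixed-point correction step
    return cmd
```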

    Realtime localization and estimation of loads on aircraft wings from depth images

    This paper deals with the development of a real-time structural health monitoring system for airframe structures to localize and estimate the magnitude of the loads causing deflections of critical components, such as wings. To this end, a framework based on artificial neural networks is developed in which features extracted from a depth camera are utilized. The localization of the load is treated as a multinomial logistic classification problem and the load magnitude estimation as a logistic regression problem. The neural networks trained for classification and regression are preceded by an autoencoder, through which maximally informative data at a much smaller scale are extracted from the depth features. The effectiveness of the proposed method is validated by an experimental study performed on a composite unmanned aerial vehicle (UAV) wing subject to concentrated and distributed loads, and the results obtained by the proposed method are superior to those of a method based on Castigliano's theorem.
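    A minimal sketch of the pipeline described above is given below, written with PyTorch purely for illustration: a small fully connected autoencoder compresses flattened depth-image features, and its latent code feeds a softmax classifier for load localization and a regressor for load magnitude. The layer sizes, latent dimension and training details are assumptions, not the paper's architecture.

```python
# Illustrative sketch only (not the paper's architecture): an autoencoder that
# compresses flattened depth features, plus classification and regression
# heads that operate on the latent code.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_in, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                     nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def build_heads(n_latent, n_zones):
    """Classifier over candidate load locations and regressor for load magnitude."""
    classifier = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                               nn.Linear(64, n_zones))    # logits for CrossEntropyLoss
    regressor = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                              nn.Linear(64, 1))           # estimated load magnitude
    return classifier, regressor

# Training outline: (1) minimize nn.MSELoss between ae(x) and x on depth features;
# (2) train the classifier with nn.CrossEntropyLoss on load-location labels and
#     the regressor with nn.MSELoss on measured load magnitudes, both fed with
#     ae.encoder(x).
ae = AutoEncoder(n_in=64 * 64)     # e.g. 64x64 depth patches, flattened (assumed size)
clf, reg = build_heads(n_latent=32, n_zones=5)            # n_zones is an assumed count
```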

    Development of a vision based pose estimation system for robotic machining and improving its accuracy using LSTM neural networks and sparse regression

    In this work, an eye-to-hand camera-based pose estimation system is developed for robotic machining, and the accuracy of the estimated pose is improved using two different approaches, namely Long Short-Term Memory (LSTM) neural networks and sparse regression. To improve the accuracy obtained from the Levenberg–Marquardt (LM) based pose estimation algorithm, two distinct supervised data-driven approaches are proposed that can take the dynamics into account during robotic machining by utilizing the torque information available from the sensors at each joint. The first is an LSTM neural network and the second is a method based on sparse regression. The proposed methods are validated by an experimental study performed using a KUKA KR240 R2900 ultra robot during two machining processes: milling a NAS 979 part, during which the orientation of the cutting tool was fixed, and free-form milling, during which the orientation of the cutting tool changed continuously. A target object to be tracked by the camera was designed with fiducial markers to guarantee trackability within ±90° in all directions; the design of these fiducial markers guarantees the detection of at least two distinct non-parallel markers from any view, thus preventing pose estimation ambiguities. Moreover, in order to reduce the errors due to the construction of the camera target and the placement of the markers on it, this work proposes a method for optimizing the positions of the corners of the fiducial markers in the object frame using a laser tracker. The proposed methods were compared with an Extended Kalman Filter (EKF), and the experimental results show that both proposed approaches significantly improve the pose estimation accuracy and precision of the vision-based system during robotic machining while proving much more effective than the EKF approach. The attainable absolute position errors were, on average, 5.47 mm, 2.9 mm and 2.05 mm for NAS 979 machining and 5.35 mm, 2.17 mm and 0.86 mm for free-form machining when using the EKF, the proposed LSTM network and the proposed sparse regression approach, respectively. Moreover, the proposed sparse regression based method provides parsimonious models and better results than the proposed LSTM-based approach.
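    To illustrate how a learned corrector of this kind can be wired up, the sketch below (written with PyTorch purely for illustration) maps a short window of LM pose estimates and joint torques to a pose correction that is trained against a laser-tracker reference. The window length, hidden size and variable names are assumptions, not the paper's model.

```python
# Illustrative sketch only (not the paper's model): an LSTM that consumes a
# window of LM pose estimates and joint torques and predicts a correction for
# the pose at the end of the window.
import torch
import torch.nn as nn

class PoseCorrector(nn.Module):
    def __init__(self, n_pose=6, n_joints=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_pose + n_joints,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pose)     # predicted pose residual

    def forward(self, pose_seq, torque_seq):
        # pose_seq: (batch, window, 6) LM estimates; torque_seq: (batch, window, 6)
        x = torch.cat([pose_seq, torque_seq], dim=-1)
        out, _ = self.lstm(x)
        correction = self.head(out[:, -1])        # last time step of the window
        return pose_seq[:, -1] + correction       # corrected pose at the window end

# Training outline: minimize nn.MSELoss between the corrected pose and the
# laser-tracker pose over windows collected while the robot machines or tracks
# trajectories.
model = PoseCorrector()
```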