
    Cross-Section Bead Image Prediction in Laser Keyhole Welding of AISI 1020 Steel Using Deep Learning Architectures

    A deep learning model was applied to predict a cross-sectional bead image from laser welding process parameters. The proposed model consists of two successive generators. The first generator produces a weld bead segmentation map from the laser intensity and interaction time, which is subsequently translated into an optical microscopic (OM) image by the second generator. Both generators exhibit an encoder–decoder structure based on a convolutional neural network (CNN). In the second generator, a conditional generative adversarial network (cGAN) was additionally employed with multiscale discriminators and residual blocks, considering the size of the OM image. For the training dataset, laser welding experiments with AISI 1020 steel were conducted over a large process window using a 2 kW fiber laser, and a total of 39 process conditions were used for training. High-resolution OM images were successfully generated, and the predicted bead shapes were reasonably accurate (R²: 89.0% for penetration depth, 93.6% for weld bead area).
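    The abstract does not include code, but the two-stage pipeline it describes (process parameters → segmentation map → OM image) can be illustrated with a minimal PyTorch sketch. All layer sizes, channel counts, and the parameter encoding below are assumptions for illustration, not the authors' implementation; the paper's second stage is additionally trained adversarially as a cGAN, which is omitted here.

    ```python
    # Minimal sketch of a two-stage encoder-decoder generator pipeline.
    # All architecture choices here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class EncoderDecoder(nn.Module):
        """A small CNN encoder-decoder, standing in for each generator stage."""
        def __init__(self, in_ch, out_ch, base=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Stage 1: laser intensity and interaction time, broadcast as 2-channel
    # maps, are mapped to a 1-channel weld bead segmentation map.
    g1 = EncoderDecoder(in_ch=2, out_ch=1)
    # Stage 2: the segmentation map is translated into a 3-channel OM-like
    # image (trained as a cGAN with multiscale discriminators in the paper).
    g2 = EncoderDecoder(in_ch=1, out_ch=3)

    params = torch.rand(1, 2, 64, 64)   # hypothetical (intensity, time) encoding
    seg_map = g1(params)                # predicted bead segmentation map
    om_image = g2(seg_map)              # predicted optical microscopic image
    print(seg_map.shape, om_image.shape)
    ```

    Chaining the two generators this way lets the first stage learn the geometry of the bead while the second stage handles appearance, which is why the adversarial loss is only needed in the second stage.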

    Recognizing Intent in Collaborative Manipulation

    Collaborative manipulation is inherently multimodal, with haptic communication playing a central role. When performed by humans, it involves back-and-forth force exchanges between the participants, through which they resolve possible conflicts and determine their roles. Much of the existing work on collaborative human-robot manipulation assumes that the robot follows the human. But for a robot to match the performance of a human partner, it needs to be able to take the initiative and lead when appropriate. To achieve such human-like performance, the robot needs the ability to (1) determine the intent of the human, (2) clearly express its own intent, and (3) choose its actions so that the dyad reaches consensus. This work proposes a framework for recognizing human intent in collaborative manipulation tasks using force exchanges. Grounded in a dataset collected during a human study, we introduce a set of features that can be computed from the measured signals and report the results of a classifier trained on our collected human-human interaction data. Two metrics are used to evaluate the intent recognizer: overall accuracy and the ability to correctly identify transitions. The proposed recognizer is robust to variations in the partner's actions and to the confounding effects of variability in grasp forces and the dynamics of walking. The results demonstrate that the proposed recognizer is well suited for implementation in a physical interaction control scheme.
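    The abstract describes a pipeline of features computed from measured force signals feeding a trained classifier. As a rough sketch of that idea (the feature set, window size, classifier type, and labels below are all assumptions, not the paper's design), one might compute windowed statistics over a force trace and fit an off-the-shelf classifier:

    ```python
    # Illustrative sketch: windowed features from a force signal -> classifier.
    # Feature choices and the classifier are assumptions for illustration only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(force, win=50):
        """Slide a fixed window over a 1-D force signal and compute
        simple statistics (mean, spread, average rate of change)."""
        feats = []
        for start in range(0, len(force) - win, win):
            w = force[start:start + win]
            feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
        return np.asarray(feats)

    # Placeholder data: a synthetic force trace with per-window intent labels
    # (e.g., 0 = follow, 1 = lead). Real labels would come from the human study.
    rng = np.random.default_rng(0)
    force_trace = rng.normal(size=5000)
    X = window_features(force_trace)
    y = rng.integers(0, 2, size=len(X))  # hypothetical labels, illustration only

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```

    In practice the two evaluation metrics named above would be computed on held-out trials: overall per-window accuracy, plus a check of whether predicted label changes line up with the true follow/lead transitions.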