8 research outputs found

    Training convolutional neural networks in virtual reality for grasp detection from 3D images

    No full text
    Master's thesis in Cybernetics and Signal Processing. The focus of this project has been on training convolutional neural networks for grasp detection with synthetic data. Convolutional neural networks have had great success on a wide variety of computer vision tasks, but they require large amounts of labelled training data, which is currently non-existent for grasp detection tasks. In this thesis, a novel approach for generating large amounts of synthetic data for grasp detection is proposed. By working solely with depth images, realistic-looking data can be generated with 3D models in a virtual environment. It is proposed to use simulated physics to ensure that the generated depth images capture objects in natural poses. Additionally, the use of heuristics for choosing the best grip vectors for the objects in relation to their environment is proposed, to serve as the labels for the generated depth images. A virtual environment for synthetic depth image generation was created and a convolutional neural network was trained on the generated data. The results show that neural networks can find good grasps from the synthetic depth images for three different types of objects in cluttered scenes. A novel way of creating real-world data sets for grasping using a head-mounted display and tracked hand controllers is also proposed. The results show that this may enable easy and fast labelling of real data, which can be performed without training by non-technical people.
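The idea of labelling synthetic depth images with a heuristic "best grip vector" can be sketched as follows. This is an illustrative toy heuristic, not the thesis's actual method: it scores candidate grasp pixels by how much they stand proud of their local neighbourhood, so the gripper has clearance to close.

```python
import numpy as np

def heuristic_grasp_label(depth, candidates, clearance_radius=5):
    """Pick the best candidate grasp point on a synthetic depth image.

    Toy heuristic (hypothetical, for illustration only): prefer points
    that are closer to the camera than their surroundings, i.e. the
    gripper has room to close around them.
    depth      -- (H, W) array of depth values (smaller = closer)
    candidates -- list of (row, col) pixel positions
    """
    h, w = depth.shape
    best, best_score = None, -np.inf
    for r, c in candidates:
        r0, r1 = max(0, r - clearance_radius), min(h, r + clearance_radius + 1)
        c0, c1 = max(0, c - clearance_radius), min(w, c + clearance_radius + 1)
        neighbourhood = depth[r0:r1, c0:c1]
        # Clearance: how much closer the point is than the local median.
        score = np.median(neighbourhood) - depth[r, c]
        if score > best_score:
            best, best_score = (r, c), score
    return best
```

In a full pipeline, the winning grasp point would be stored as the label for the rendered depth image.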

    Grasping virtual fish: A step towards deep learning from demonstration in virtual reality

    No full text
    We present an approach to robotic deep learning from demonstration in virtual reality, which combines a deep 3D convolutional neural network for grasp detection from 3D point clouds with domain randomization to generate a large training data set. The use of virtual reality (VR) enables robot learning from demonstration in a virtual environment. In this environment, a human user can easily and intuitively demonstrate examples of how to grasp an object, such as a fish. From a few dozen of these demonstrations, we use domain randomization to generate a large synthetic training data set consisting of 76 000 example grasps of fish. After training the network on this data set, the network is able to guide a gripper to grasp virtual fish with good success rates. Our domain randomization approach is a step towards an efficient way to perform robotic deep learning from demonstration in virtual reality.
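Expanding a few dozen demonstrations into tens of thousands of training examples via domain randomization can be sketched as below. The transformations and parameter ranges here are illustrative assumptions, not the paper's actual pipeline: each variant applies a random rotation, a random translation, and per-point noise to both the object point cloud and the demonstrated grasp point.

```python
import numpy as np

def randomize_demo(points, grasp, rng, n_variants=10):
    """Expand one demonstrated grasp into many synthetic examples.

    points -- (N, 3) object point cloud from the demonstration
    grasp  -- (3,) demonstrated grasp position
    rng    -- numpy random Generator
    Returns a list of (randomized_points, randomized_grasp) pairs.
    """
    variants = []
    for _ in range(n_variants):
        theta = rng.uniform(0, 2 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # yaw
        t = rng.uniform(-0.1, 0.1, size=3)            # random offset (m)
        noise = rng.normal(0.0, 0.002, points.shape)  # sensor-like jitter
        variants.append((points @ R.T + t + noise, grasp @ R.T + t))
    return variants
```

Applying a few thousand such variants per demonstration is how a few dozen demonstrations can grow into a data set on the scale reported in the abstract.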

    Identification of Individual Salmon: Biometrics Phase 1 (SalmID)

    Get PDF
    In the salmon industry, flexibility is essential for delivering a diverse range of products to consumers in an efficient and profitable way. At the same time, the raw material (the fish) should be utilized in the best possible way to yield as many high-quality products for the consumer as possible. To run a flexible production in which each individual fish can receive tailored processing, good tracking and identification of the individuals is essential. Through several projects at SINTEF, work has been done on quality analysis based on external features, and more recently also on indications of internal quality based on inspection of the abdominal cavity. To address flexible handling, SINTEF has in this pre-project worked on how to find and use a unique marker on each individual without physical intervention. Previously, the possibility of marking with RFID (radio tags) or other physical marking systems has been considered, but the industry has viewed this with scepticism. A physical marking not only requires machines in direct contact with the fish, but also leaves behind a physical object that must be handled later in the process, an object that could potentially damage the fish and thereby degrade its quality. This pre-project is based on the idea of a system that uses machine vision to identify a unique pattern that all salmon have, a pattern based on the fish's "freckles". Salmon have a distinct spot pattern on the head and body. The project will focus on establishing a method for using this as a unique marker for individual fish.
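The core idea in the SalmID abstract, treating the dark "freckles" as a point pattern, can be sketched with plain thresholding and flood fill. This is a minimal illustrative sketch, not the project's actual method: spots are found as dark connected components, and the signature is simply the sorted list of spot centroids.

```python
import numpy as np

def spot_signature(gray, threshold=0.3, min_size=3):
    """Extract a simple spot-pattern signature from a grayscale image.

    Illustrative sketch: spots are 4-connected regions of pixels darker
    than `threshold` (no external vision library needed). Regions smaller
    than `min_size` pixels are treated as noise and discarded.
    Returns sorted (row, col) centroids, usable as a crude fingerprint.
    """
    dark = gray < threshold
    h, w = dark.shape
    seen = np.zeros_like(dark, dtype=bool)
    centroids = []
    for r in range(h):
        for c in range(w):
            if dark[r, c] and not seen[r, c]:
                stack, pixels = [(r, c)], []
                seen[r, c] = True
                while stack:  # iterative 4-connected flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and dark[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_size:
                    ys, xs = zip(*pixels)
                    centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return sorted(centroids)
```

Matching two individuals would then reduce to comparing their spot-centroid patterns, e.g. under a best-fit alignment.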

    Teaching a Robot to Grasp Real Fish by Imitation Learning from a Human Supervisor in Virtual Reality

    No full text
    We teach a real robot to grasp real fish by training a virtual robot exclusively in virtual reality. Our approach implements robot imitation learning from a human supervisor in virtual reality. A deep 3D convolutional neural network computes grasps from a 3D occupancy grid obtained from depth imaging at multiple viewpoints. In virtual reality, a human supervisor can easily and intuitively demonstrate examples of how to grasp an object, such as a fish. From a few dozen of these demonstrations, we use domain randomization to generate a large synthetic training data set consisting of 100 000 example grasps of fish. Using this data set for training, the network is able to guide a real robot and gripper to grasp real fish with good success rates. The newly proposed domain randomization approach constitutes a first step towards efficiently performing robot imitation learning from a human supervisor in virtual reality in a way that transfers well to the real world.
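The network's input representation, a 3D occupancy grid built from depth imaging at multiple viewpoints, can be sketched as a simple voxelization of the merged point cloud. Grid size and workspace bounds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def points_to_occupancy(points, grid_shape=(32, 32, 32),
                        bounds=((-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5))):
    """Voxelize a merged multi-view point cloud into a binary occupancy grid.

    points     -- (N, 3) array of 3D points (e.g. fused from several
                  depth-camera viewpoints)
    grid_shape -- voxel resolution of the grid (assumed, illustrative)
    bounds     -- (min, max) workspace extent per axis, in metres
    Each point inside the bounds marks its voxel as occupied (1).
    """
    grid = np.zeros(grid_shape, dtype=np.uint8)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    scale = np.array(grid_shape) / (hi - lo)
    idx = np.floor((points - lo) * scale).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    i = idx[inside]
    grid[i[:, 0], i[:, 1], i[:, 2]] = 1
    return grid
```

The resulting grid is what a 3D convolutional network would consume, analogous to how a 2D CNN consumes an image.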

    Bin Picking of Reflective Steel Parts Using a Dual-Resolution Convolutional Neural Network Trained in a Simulated Environment

    Get PDF
    We consider the case of robotic bin picking of reflective steel parts, using a structured-light 3D camera as a depth imaging device. In this paper, we present a new method for bin picking based on a dual-resolution convolutional neural network trained entirely in a simulated environment. The dual-resolution network consists of a high-resolution focus network to compute the grasp and a low-resolution context network to avoid local collisions. The reflectivity of the steel parts results in depth images with a large amount of missing data. To take this into account, the neural network is trained by domain randomization on a large set of synthetic depth images that simulate the missing-data problems of the real depth images. We demonstrate both in simulation and in a real-world test that our method can perform bin picking of reflective steel parts.
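The two ideas in the abstract, the dual-resolution input and the simulated missing data, can be sketched as below. Crop size, downsampling stride, and the patch-dropout scheme are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def dual_resolution_inputs(depth, center, focus=32, context_stride=4):
    """Prepare the two inputs of a dual-resolution grasp network.

    A high-resolution crop around the candidate grasp feeds the focus
    branch; the whole scene, downsampled, feeds the context branch that
    checks for local collisions. Sizes here are assumptions.
    """
    r, c = center
    half = focus // 2
    # Pad so crops near the image border keep a fixed size.
    padded = np.pad(depth, half, mode="edge")
    focus_patch = padded[r:r + focus, c:c + focus]  # centred on (r, c)
    context = depth[::context_stride, ::context_stride]
    return focus_patch, context

def simulate_missing_data(depth, rng, n_patches=20, patch=5):
    """Randomly zero out patches of a synthetic depth image.

    Mimics the dropouts that reflective steel parts cause in real
    structured-light depth images; 0 encodes "no sensor return".
    """
    out = depth.copy()
    h, w = depth.shape
    for _ in range(n_patches):
        r = rng.integers(0, h - patch)
        c = rng.integers(0, w - patch)
        out[r:r + patch, c:c + patch] = 0.0
    return out
```

During training, each rendered depth image would pass through `simulate_missing_data` before being split into the two network inputs, so the network never sees a "clean" image the real sensor could not produce.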
