
    Cutting Pose Prediction from Point Clouds


    Comparative analysis of 3D depth cameras in industrial bin picking solution

    Machine vision is a crucial component of a successful bin picking solution. During the past few years, there have been major advancements in depth sensing technologies, which have consequently received a lot of attention, especially in bin picking applications. With reduced costs and greater accessibility, the use of machine vision has increased rapidly. Automated bin picking poses a technical challenge that is present in numerous industrial processes. Robots need to perceive their surroundings, and machine vision attempts to solve this by providing eyes to the machine. The motivation behind solving this challenge is the increased productivity enabled by automated bin picking. The main goal of this thesis is to address the challenges of bin picking by comparing the performance of different 3D depth cameras through illustrative case studies and experimental research. The depth cameras are exposed to different ambient conditions and object properties, and the performance of the different 3D imaging technologies is evaluated and compared. The performance of a commercial bin picking solution is also researched through illustrative case studies to evaluate the accuracy, reliability, and flexibility of the solution. A feasibility study is also conducted, and the capabilities of the bin picking solution are demonstrated in two industrial applications. This research work focuses on three depth sensing technologies: structured light, stereo vision, and time-of-flight. The main categories for evaluation are ambient light tolerance, reflective surfaces, and how well the depth cameras can detect simple and complex geometric features. The comparison between the depth cameras is limited to opaque objects, ranging from shiny metal blanks to matte connector components and porous surface textures.
The performance of each depth camera is evaluated, and the advantages and disadvantages of each technology are discussed. The results of this thesis showed that while all of the technologies are capable of performing in a bin picking solution, structured light performed best against the evaluation criteria of this thesis. The accuracy evaluation of the bin picking solution also illustrated some of the many challenges of bin picking, and how the true accuracy of the solution is not dictated purely by the resolution of the vision sensor. Finally, the thesis concludes with a discussion of the results and suggestions for future work.
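The camera comparison described in this abstract rests on measurable criteria such as how much of a scene each camera actually captures and how close the returned depths are to the truth. As a minimal illustrative sketch (not the thesis's actual evaluation protocol; the noise and dropout figures are assumptions), depth maps from different cameras could be scored against a reference scan like this:

```python
import numpy as np

def depth_error_stats(measured, reference, valid_min=0.0):
    """Compare a camera's depth map against a reference scan.

    Both inputs are HxW depth arrays in metres. Pixels where the
    camera returned no data (<= valid_min) are excluded from the
    error, and the share of usable pixels is reported as fill rate.
    """
    valid = measured > valid_min
    err = measured[valid] - reference[valid]
    return {
        "fill_rate": float(valid.mean()),              # share of pixels with data
        "rmse": float(np.sqrt(np.mean(err ** 2))),     # random + systematic error
        "bias": float(err.mean()),                     # systematic offset only
    }

# Toy scene: a flat plate 0.5 m away, imaged by a camera with
# 2 mm depth noise and roughly 10% missing returns
rng = np.random.default_rng(0)
reference = np.full((120, 160), 0.5)
measured = reference + rng.normal(0, 0.002, reference.shape)
measured[rng.random(reference.shape) < 0.1] = 0.0     # dropped pixels

stats = depth_error_stats(measured, reference)
```

Running the same scoring over scenes with varied lighting and surface finishes would yield per-technology numbers comparable in spirit to the evaluation categories named above.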

    Automated Assembly Using 3D and 2D Cameras

    2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting, or viewpoint angle can reduce the accuracy of a method, possibly even to the degree that it becomes erroneous, while for 3D vision systems the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to <1 and <1. This was demonstrated in a real industrial assembly task where high accuracy is required.
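The coarse-to-fine idea in this abstract, where a rough estimate from a 3D camera is corrected by a calibrated eye-in-hand 2D camera, can be sketched numerically. This is a toy simulation under assumed noise figures (3 mm point cloud noise, a few millimetres of 3D calibration bias, 0.1 mm 2D measurement noise), not the paper's implementation, and it only corrects the x/y position rather than the full pose:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth object position (metres) and a synthetic segmented
# point cloud of it, as a commodity 3D camera might deliver
true_pos = np.array([0.30, -0.10, 0.50])
cloud = true_pos + rng.normal(0, 0.003, (500, 3))   # 3 mm sensor noise
cam3d_bias = np.array([0.004, -0.003, 0.002])       # assumed calibration offset

# Step 1: coarse pose from the 3D camera (cloud centroid, which
# averages out random noise but inherits the calibration bias)
coarse = cloud.mean(axis=0) + cam3d_bias

# Step 2: the robot positions the eye-in-hand 2D camera over the coarse
# estimate; the calibrated 2D view measures the residual x/y offset far
# more precisely and corrects the estimate
residual_xy = (true_pos[:2] - coarse[:2]) + rng.normal(0, 0.0001, 2)
refined = coarse.copy()
refined[:2] += residual_xy

coarse_err = np.linalg.norm(coarse[:2] - true_pos[:2])    # millimetre scale
refined_err = np.linalg.norm(refined[:2] - true_pos[:2])  # sub-millimetre
```

The split mirrors the abstract's argument: the 3D camera only needs to be good enough to bring the 2D camera into view of the object, after which the final accuracy is set by the 2D measurement rather than the 3D sensor.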

