8 research outputs found

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. Since the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison of both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a way as possible, and so that they can be adapted and applied to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    UGV Navigation in ROS using LIDAR 3D

    This work takes a step toward achieving robust Unmanned Ground Vehicles (UGVs) that can drive in urban environments. More specifically, it focuses on controlling a four-wheeled vehicle in ROS using mainly the input provided by a 3D LIDAR. Simulations were carried out in ad-hoc scenarios designed and run in GAZEBO. The visual information provided by the sensors is processed with the PCL library. From this processing, the parameters needed to manage the UGV are obtained, and its guidance can be carried out through a PID controller. Máster Universitario en Ingeniería Industrial (M141).
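    The abstract above does not give the controller details; the following is a minimal, hypothetical sketch of the kind of PID loop such a system could use for guidance. The gains, the time step and the lateral_error signal (which in this work would come from the LIDAR data processed with PCL) are illustrative assumptions, not values from the thesis.

        # Minimal PID sketch (illustrative; gains and the error signal are assumed).
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error):
                # Accumulate the integral term and approximate the derivative.
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        if __name__ == "__main__":
            # The lateral error would come from the point-cloud processing; the output
            # would be sent to the vehicle, e.g. as a steering command in ROS.
            pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.1)
            lateral_error = 0.8  # metres off the desired path (made-up value)
            for _ in range(5):
                steering = pid.step(lateral_error)
                lateral_error *= 0.6  # pretend the vehicle converges toward the path
                print(f"steering command: {steering:.3f}")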

    Reflective gas cell structure for spectroscopic carbon dioxide sensor

    This thesis describes an optical gas sensing system suitable for monitoring the presence of carbon dioxide (CO2). An optical gas sensor using low-cost, compact mid-infrared components has been developed and tested. The emitter and detector are compact, inexpensive and have low power consumption compared with the devices typically used in gas spectroscopy. Simulators such as ZEMAX®12 and SpectralCalc.com are primarily used in this work. Firstly, the research focuses on the simulation and optimization of a low-cost gas cell using ZEMAX®12. Several gas cell structures were designed and analyzed, namely Single-Input-Single-Output (SISO), 2-Multi-Input-Single-Output (2-MISO), 4-Multi-Input-Single-Output (4-MISO) and 8-Multi-Input-Single-Output (8-MISO). Of these structures, SISO achieves the highest power efficiency, at 28.028%. However, sensitivity analysis showed that 4-MISO yields the highest sensitivity, at -0.2879 %⁻¹ and -0.2895 %⁻¹ for concentration ranges from 1.5% to 1.8% and from 1.1% to 2.0%, respectively. Secondly, the optomechanical design of the optimized 4-MISO cell was analyzed and fabricated from low-cost, robust material. Experimental work was then carried out, and the sensor's output was acquired and recorded using a data acquisition card and a LabVIEW program. Experimental results show that the newly developed 4-MISO sensor has a sensitivity similar to the simulated gas sensor for carbon dioxide concentrations from 1.5% to 1.8%, with an overall sensitivity of -0.2916 %⁻¹. However, the deviation in sensitivity between the measured and simulated concentration ranges was calculated to be 0.0037 %⁻¹. Finally, the developed low-cost sensor has shown the capability of detecting CO2 gas concentration with a high accuracy of 0.6357% and a response time of less than 1 second. The optimized gas sensor can be applied in various applications such as indoor air quality monitoring, automotive, horticulture, and heating, ventilation and air conditioning systems.
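    The abstract does not define its sensitivity figure explicitly; one convention consistent with the %⁻¹ unit is the normalised change in detected intensity per unit change in gas concentration, which follows from the Beer-Lambert law. The relations below are a sketch under that assumption; the symbols $\alpha$ (CO2 absorption coefficient at the emitter wavelength), $L$ (optical path length through the gas cell) and the normalisation by $I_0$ are not taken from the thesis.

        I(C) = I_0 \, e^{-\alpha C L},
        \qquad
        S \approx \frac{I(C_2) - I(C_1)}{I_0 \, (C_2 - C_1)}

    Here $I_0$ is the detected intensity without absorbing gas and $C$ is the CO2 concentration; a negative $S$, as in the reported values near -0.29 %⁻¹, simply reflects the drop in detected power as the concentration rises.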

    Traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data

    Scene perception and traversability analysis are real challenges for autonomous driving systems. In the context of off-road autonomy, there are additional challenges due to unstructured environments and the presence of various vegetation types. Autonomous Ground Vehicles (AGVs) need to be able to identify obstacles and load-bearing surfaces in the terrain to ensure safe navigation (McDaniel et al. 2012). The presence of vegetation in off-road autonomy applications presents unique challenges for scene understanding: 1) understory vegetation makes it difficult to detect obstacles or to identify load-bearing surfaces; and 2) trees are usually regarded as obstacles even though only their trunks pose a collision risk in navigation. The overarching goal of this dissertation was to study traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data. More specifically, to address the aforementioned challenges, this dissertation studied the impact of understory vegetation density on the solid-obstacle detection performance of off-road autonomous systems. By leveraging a physics-based autonomous driving simulator, a classification-based machine learning framework was proposed for obstacle detection from point cloud data captured by LIDAR. Features were extracted using a cumulative approach, meaning that the information related to each feature was updated at every time frame in which new LIDAR data was collected. It was concluded that an increase in the density of understory vegetation adversely affects the ability of the classifier to correctly detect solid obstacles. Additionally, a regression-based framework was proposed for estimating understory vegetation density for safe path planning, in which the traversability risk level was regarded as a function of the estimated density: the denser the predicted vegetation in an area, the higher the risk of collision if the AGV traversed that area. Finally, for the trees in the terrain, the dissertation investigated statistical features that can be used in machine learning algorithms to differentiate trees from solid obstacles in forested off-road scenes. Using the proposed features, the classification algorithm was able to generate high-precision results for differentiating trees from solid obstacles. Such differentiation can result in more optimized path planning in off-road applications.
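    The abstract does not list the exact features used; the following is a minimal sketch, under stated assumptions, of the cumulative idea described above: simple per-grid-cell statistics (point count, mean height, maximum height) are updated each time a new LIDAR frame arrives, and the accumulated vectors could then be fed to a classifier. The cell size, the chosen statistics and the random test data are illustrative only.

        import numpy as np

        class CumulativeCellFeatures:
            """Accumulate simple per-cell statistics over successive LIDAR frames."""

            def __init__(self, cell_size=0.5):
                self.cell_size = cell_size
                self.cells = {}  # (ix, iy) -> running statistics for that ground cell

            def update(self, points):
                # points: (N, 3) array of x, y, z coordinates from one LIDAR frame.
                for x, y, z in points:
                    key = (int(np.floor(x / self.cell_size)),
                           int(np.floor(y / self.cell_size)))
                    c = self.cells.setdefault(key, {"n": 0, "sum_z": 0.0, "max_z": -np.inf})
                    c["n"] += 1            # cumulative point count
                    c["sum_z"] += z        # running sum for the mean height
                    c["max_z"] = max(c["max_z"], z)

            def features(self, key):
                c = self.cells[key]
                return np.array([c["n"], c["sum_z"] / c["n"], c["max_z"]])

        if __name__ == "__main__":
            acc = CumulativeCellFeatures()
            rng = np.random.default_rng(0)
            for _ in range(3):  # three simulated LIDAR frames
                acc.update(rng.uniform([-1, -1, 0], [1, 1, 2], size=(100, 3)))
            key = next(iter(acc.cells))
            print("features for cell", key, ":", acc.features(key))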

    Comparative analysis of 3D-depth cameras in industrial bin picking solution

    Machine vision is a crucial component of a successful bin picking solution. During the past few years, there have been large advancements in depth sensing technologies, which has led to them receiving a lot of attention, especially in bin picking applications. With reduced costs and greater accessibility, the use of machine vision has increased rapidly. Automated bin picking poses a technical challenge that is present in numerous industrial processes. Robots need perception of their surroundings, and machine vision attempts to provide this by giving eyes to the machine. The motivation behind solving this challenge is the increased productivity enabled by automated bin picking. The main goal of this thesis is to address the challenges of bin picking by comparing the performance of different 3D depth cameras through illustrative case studies and experimental research. The depth cameras are exposed to different ambient conditions and object properties, and the performance of the different 3D imaging technologies is evaluated and compared. The performance of a commercial bin picking solution is also researched through illustrative case studies to evaluate the accuracy, reliability, and flexibility of the solution. A feasibility study is also conducted, and the capabilities of the bin picking solution are demonstrated in two industrial applications. This research focuses on three depth sensing technologies: structured light, stereo vision, and time-of-flight. The main evaluation categories are ambient light tolerance, reflective surfaces, and how well the depth cameras can detect simple and complex geometric features. The comparison between the depth cameras is limited to opaque objects, ranging from shiny metal blanks to matte connector components and porous surface textures. The performance of each depth camera is evaluated, and the advantages and disadvantages of each technology are discussed. The results of this thesis showed that, while all of the technologies are capable of performing in a bin picking solution, structured light performed best against the evaluation criteria of this thesis. The results of the bin picking solution accuracy evaluation also illustrated some of the many challenges of bin picking, and how the true accuracy of the solution is not dictated purely by the resolution of the vision sensor. Finally, the results and suggestions for future work are discussed.
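    The abstract above does not spell out how camera accuracy was quantified; one simple check that is often used when comparing depth cameras is to fit a plane to the points returned from a flat reference target and report the residual error. The sketch below illustrates only that generic idea; the plane-fit procedure and all values are assumptions, not the thesis's method.

        import numpy as np

        def plane_fit_metrics(points):
            """Fit z = a*x + b*y + c to points from a flat target and report residuals.

            points: (N, 3) array of valid depth-camera returns (invalid pixels removed).
            """
            A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
            coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
            residuals = points[:, 2] - A @ coeffs
            return {"rms_mm": 1000.0 * np.sqrt(np.mean(residuals ** 2)),
                    "max_dev_mm": 1000.0 * np.max(np.abs(residuals))}

        if __name__ == "__main__":
            # Synthetic example: a slightly tilted plane at ~0.8 m with ~1 mm of noise.
            rng = np.random.default_rng(1)
            xy = rng.uniform(-0.2, 0.2, size=(5000, 2))
            z = 0.8 + 0.02 * xy[:, 0] + rng.normal(0.0, 0.001, size=5000)
            print(plane_fit_metrics(np.column_stack([xy, z])))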

    Evaluation of SLAM algorithms in realistic sensor test conditions

    Autonomous robotic systems rely on Simultaneous Localisation and Mapping (SLAM) algorithms that use ranging or other sensory data as input to create a map of the environment. Numerous algorithms have been developed and demonstrated, many of which utilise data from high-precision ranging instruments. Small Unmanned Aircraft Systems (UAS) have significant restrictions on the size and weight of the sensors they can carry, and lightweight ranging sensors tend to be subject to greater error than their larger counterparts. The effect of these errors on the mapping capabilities of SLAM algorithms depends on the combination of algorithm and sensor. To quantitatively determine the quality of the map, a map quality metric is needed. This thesis presents an evaluation of the mapping performance of a variety of SLAM algorithms that are freely available in the Robot Operating System (ROS), in conjunction with ranging data from various ranging sensors suitable for use onboard small UAS. To compare the quality of the generated maps, an existing metric was initially employed; however, deficiencies noted in this metric led to the development of two new metrics. A discussion of both the existing and new map quality metrics, and the advantages and disadvantages of each, is presented as part of this thesis. To evaluate the performance of algorithm/sensor combinations, ranging data was collected from various sensors in a known environment. Both the sensor poses and the ground-truth map were obtained using a highly accurate motion capture system. The measured sensor poses were then corrupted with noise and drift to simulate the odometry measurements required by the SLAM algorithms. Of the SLAM algorithms tested, Gmapping was found to produce high-quality maps with wide-field-of-regard range sensors in the presence of odometry noise and drift. KartoSLAM produced maps similar to Gmapping's (with wide-field-of-regard sensors), though it did not cope as well with odometry errors. Hector Mapping tends to excel at creating maps with wide-field-of-regard ranging sensors.
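    The abstract does not state how the noise and drift were generated; one straightforward way to simulate odometry from motion-capture poses, shown below purely as an illustration, is to add zero-mean Gaussian noise per pose plus a bias that accumulates over time. The 2D pose model and all noise magnitudes are assumptions, not the values used in the thesis.

        import numpy as np

        def corrupt_poses(poses, noise_std=(0.01, 0.01, 0.002),
                          drift_rate=(0.002, 0.0, 0.0005), seed=0):
            """Turn ground-truth 2D poses (x, y, yaw) into simulated odometry.

            Each pose gets zero-mean Gaussian noise plus a linearly accumulating
            bias that mimics drift. All magnitudes here are illustrative.
            """
            rng = np.random.default_rng(seed)
            poses = np.asarray(poses, dtype=float)
            noise = rng.normal(0.0, noise_std, size=poses.shape)
            drift = np.cumsum(np.tile(drift_rate, (len(poses), 1)), axis=0)
            return poses + noise + drift

        if __name__ == "__main__":
            truth = np.column_stack([np.linspace(0.0, 5.0, 50),  # x along a straight path
                                     np.zeros(50),               # y
                                     np.zeros(50)])              # yaw
            odom = corrupt_poses(truth)
            print("final ground truth:", truth[-1])
            print("final simulated odometry:", odom[-1])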

    A study on performance enhancement of stereo-vision-based three-dimensional range sensor LSIs

    Doctoral dissertation, Kyushu Institute of Technology. Degree number: 情工博甲第302号. Date of degree conferral: March 25, 2015 (Heisei 27). Chapter 1: Introduction | Chapter 2: Range detection techniques using three-dimensional sensors | Chapter 3: Higher integration of the three-dimensional range sensor LSI | Chapter 4: A three-dimensional range sensor LSI with a correlation-signal sharpening function | Chapter 5: A three-dimensional range sensor LSI with a wide-dynamic-range image sensor | Chapter 6: A three-dimensional range sensor LSI with improved range detection accuracy | Chapter 7: Conclusion. Kyushu Institute of Technology, Heisei 26 (2014).