
    Stereo and ToF Data Fusion by Learning from Synthetic Data

    Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper we present a novel framework for data fusion where the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused by enforcing the local consistency of depth data, taking into account the estimated confidence information. The deep network is trained on a synthetic dataset, and we show how the classifier is able to generalize to different data, obtaining reliable estimations not only on synthetic data but also on real-world scenes. Experimental results show that the proposed approach increases the accuracy of the depth estimation on both synthetic and real data and that it is able to outperform state-of-the-art methods.
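    A minimal sketch may make confidence-weighted fusion concrete. The `fuse_depth` function and toy data below are illustrative assumptions only; the paper's actual method estimates the confidences with a CNN and additionally enforces local consistency, neither of which is reproduced here.

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, c_tof, c_stereo):
    """Fuse two depth maps per pixel, weighting each source by its confidence."""
    w = c_tof + c_stereo
    # Guard against division by zero where both sources are unreliable.
    w = np.where(w > 0, w, 1.0)
    return (c_tof * d_tof + c_stereo * d_stereo) / w

# Toy example: two 2x2 depth maps, with the ToF source trusted more.
d_tof = np.array([[1.0, 2.0], [3.0, 4.0]])
d_stereo = np.array([[1.2, 2.2], [3.2, 4.2]])
c_tof = np.full((2, 2), 0.8)
c_stereo = np.full((2, 2), 0.2)
fused = fuse_depth(d_tof, d_stereo, c_tof, c_stereo)
```

    With these weights each fused pixel lands closer to the ToF estimate, which is the intended behaviour when the ToF confidence dominates.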

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which is a generic basis for evaluating range cameras such as Kinect. The experiments have been designed with the goal of capturing the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adapted to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    Multi-modal video analysis for early fire detection

    This dissertation investigates several aspects of an intelligent video-based fire detection system. The first part focuses on the multimodal processing of visual, infrared, and time-of-flight video images, which improves purely visual detection. To keep the processing cost as low as possible, with a view to real-time detection, a set of 'low-cost' fire characteristics that uniquely describe fire and flames was selected for each type of sensor. By combining the different types of information, the number of missed detections and false alarms can be reduced, resulting in a significant improvement of video-based fire detection. To combine the multimodal detection results, however, the multimodal images must be registered (i.e., aligned). The second part of this dissertation therefore focuses on this fusion of multimodal data and presents a new silhouette-based registration method. The third and final part proposes methods for video-based fire analysis and, at a later stage, fire modeling. Each of the proposed techniques for multimodal detection and multi-view localization has been extensively tested in practice, including successful tests for the early detection of car fires in underground parking garages.
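    The combination step that suppresses false alarms can be sketched as decision-level fusion. The `fused_alarm` helper below is a hypothetical majority vote over per-sensor detections, not the dissertation's actual fusion method:

```python
def fused_alarm(visual: bool, infrared: bool, tof: bool, min_votes: int = 2) -> bool:
    """Raise a fire alarm only when at least `min_votes` of the three
    per-sensor detectors agree, suppressing single-sensor false alarms."""
    return sum([visual, infrared, tof]) >= min_votes

# A flicker seen only by the visual camera does not trigger an alarm,
# whereas agreement between the visual and infrared detectors does.
alarm_single = fused_alarm(True, False, False)
alarm_pair = fused_alarm(True, True, False)
```

    Requiring agreement between heterogeneous sensors is exactly why multimodal fusion reduces false alarms relative to purely visual detection.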

    Development and assessment of a multi-sensor platform for precision phenotyping of small grain cereals under field conditions

    The growing world population, changing food habits, especially the increased meat consumption in newly industrialized countries, the growing demand for energy, and climate change pose major challenges for tomorrow's agriculture. Agricultural output has to be increased by 70% by 2050 to achieve food and energy security for the future, and 90% of this increase must be achieved by increasing yields on existing agricultural land. Achieving this increase in yield is one of the biggest challenges for global agriculture and requires, among other things, the efficient breeding of new, higher-yielding varieties adapted to the predicted climate change. To achieve this goal, new methods need to be established in plant breeding which include efficient genotyping and phenotyping approaches for crops. Enormous progress has been achieved in the field of genotyping, which enables a better understanding of the molecular basis of complex traits. However, phenotyping must be considered equally important, as genomic approaches rely on high-quality phenotypic data and as efficient phenotyping enables the identification of superior lines in breeding programs. In contrast to the rapid development of genotyping approaches, phenotyping methods in plant breeding have changed little in recent decades, which is also referred to as the phenotyping bottleneck. Due to this discrepancy between available phenotypic and genotypic information, a significant potential for crop improvement remains unexploited. The aim of this work was the development and evaluation of a precision phenotyping platform for the non-invasive measurement of crops under field conditions. The developed platform consists of a tractor with 80 cm ground clearance, a carrier trailer, and a sensor module attached to the carrier trailer.
The innovative sensors for plant phenotyping, consisting of several 3D Time-of-Flight cameras, laser distance sensors, light curtains, and a spectral imaging camera in the near-infrared (NIR) range, as well as the entire system technology for data acquisition, were fully integrated into the sensor module. To operate the system, software with a graphical user interface was developed that enables the recording of raw sensor data with time and location information, which is the basis of a subsequent sensor and data fusion for trait determination. Data analysis software with a graphical user interface was developed in Matlab. This software applies all created sensor models and algorithms to the raw sensor data for parameter extraction, enables the flexible integration of new algorithms into the data analysis pipeline, offers the opportunity to generate and calibrate new sensor fusion models, and allows for trait determination. The developed platform facilitates the simultaneous measurement of several plant parameters with a throughput of over 2,000 plots per day. Based on data from the years 2011 and 2012, extensive calibrations were developed for the traits plant height, dry matter content, and biomass yield, employing triticale as a model species. For this purpose, 600 plots were grown each year and recorded twice with the platform, followed by phenotyping with state-of-the-art methods for reference value generation. The experiments of each year were subdivided into three measurements at different time points to incorporate information from three different developmental stages of the plants into the calibrations. To validate the raw data quality and the robustness of the data collection and reduction process, the technical repeatability of all developed data analysis algorithms was determined. In addition, the accuracy of the generated calibrations was assessed as the correlation between determined and observed phenotypic values.
The calibration of plant height based on light curtain data achieved a technical repeatability (Rw²) of 0.99 and a correlation coefficient (Rc²) of 0.97; the calibration of dry matter content based on spectral imaging data achieved an Rw² of 0.98 and an Rc² of 0.97. The generation and analysis of dry biomass calibrations revealed that a significant improvement in measurement accuracy can be achieved by a fusion of different sensors and data evaluations: the calibration of dry biomass based on data from the light curtains, laser distance sensors, 3D Time-of-Flight cameras, and spectral imaging achieved an Rw² of 0.99 and an Rc² of 0.92. These excellent results illustrate the suitability of the developed platform, the integrated sensors, and the data analysis software for the non-invasive measurement of small grain cereals under field conditions. The high utility of the platform for plant breeding as well as for genomic studies was illustrated by the measurement of a large population with a total of 647 doubled haploid triticale lines derived from four families that were grown in four environments. The phenotypic data were determined based on platform measurements and showed a very high heritability for dry biomass yield. The combination of these phenotypic data with a genomic approach enabled the identification of quantitative trait loci (QTL), i.e., chromosomal regions affecting this trait. Furthermore, the repeated measurements revealed that the accumulation of biomass is controlled by temporally changing genetic regulation.
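    Both reported quality measures can be computed as squared Pearson correlations: technical repeatability between two repeated platform passes over the same plots, and calibration accuracy between calibrated platform values and destructively measured reference values. The sketch below uses invented toy numbers, not the dissertation's data:

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation between two measurement series."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Hypothetical plot-level values: two repeated platform passes (for
# repeatability) and platform predictions vs. reference measurements.
pass_1 = np.array([92.0, 105.0, 88.0, 110.0, 97.0])
pass_2 = np.array([93.0, 104.0, 89.0, 109.0, 98.0])
predicted = np.array([12.1, 14.0, 11.5, 15.2, 13.0])
reference = np.array([12.0, 14.3, 11.2, 15.0, 13.4])

rw2 = r_squared(pass_1, pass_2)        # technical repeatability (Rw²)
rc2 = r_squared(predicted, reference)  # calibration accuracy (Rc²)
```

    An Rw² near 1 indicates that the platform reproduces its own measurements almost exactly; Rc² then quantifies how well the calibrated values track the reference phenotyping.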
Taken together, the very high robustness of the system, the excellent calibration results, and the high heritability of the phenotypic data determined from platform measurements demonstrate the utility of the precision phenotyping platform for plant breeding and its enormous potential to widen the phenotyping bottleneck.

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
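    The "de-facto standard formulation" mentioned above is maximum a posteriori (MAP) estimation over a factor graph; a common textbook statement of it (paraphrased here rather than quoted from the survey) is:

```latex
\mathcal{X}^{\star}
  = \operatorname*{arg\,max}_{\mathcal{X}} \, p(\mathcal{X}\mid\mathcal{Z})
  = \operatorname*{arg\,max}_{\mathcal{X}} \, p(\mathcal{X})\prod_{k=1}^{m} p(z_{k}\mid\mathcal{X}_{k})
  \;\xrightarrow{\text{Gaussian noise}}\;
  \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k=1}^{m}
    \bigl\lVert h_{k}(\mathcal{X}_{k}) - z_{k}\bigr\rVert_{\Omega_{k}}^{2}
```

    where \(\mathcal{X}\) collects the robot poses and landmarks, \(z_{k}\) are the measurements with measurement models \(h_{k}\) and information matrices \(\Omega_{k}\); under Gaussian noise the MAP problem reduces to nonlinear least squares over the factor graph.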

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors discussed in this paper.

    A systematic review of perception system and simulators for autonomous vehicles research

    This paper presents a systematic review of perception systems and simulators for autonomous vehicles (AVs). This work is divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, operating principles, and electromagnetic spectrum used by the most common sensors in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.). Furthermore, their strengths and weaknesses are shown, and the quantification of their features using spider charts allows the proper selection of different sensors depending on 11 features. In the second part, the main elements to be taken into account in the simulation of the perception system of an AV are presented. For this purpose, the paper describes simulators for model-based development, the main game engines that can be used for simulation, simulators from the robotics field, and lastly simulators used specifically for AVs. Finally, the current state of regulations being applied in different countries around the world on issues concerning the implementation of autonomous vehicles is presented.
    This work was partially supported by DGT (ref. SPIP2017-02286) and GenoVision (ref. BFU2017-88300-C2-2-R) Spanish Government projects, and the "Research Programme for Groups of Scientific Excellence in the Region of Murcia" of the Seneca Foundation (Agency for Science and Technology in the Region of Murcia – 19895/GERM/15).

    Thermal-Kinect Fusion Scanning System for Bodyshape Inpainting and Estimation under Clothing

    In today's interactive world, 3D body scanning is necessary in fields such as virtual avatar creation, the apparel industry, and physical health assessment. The 3D scanners used in this process are very costly and also require the subject to be nearly naked or to wear special tight-fitting clothes. A cost-effective 3D body scanning system that can estimate body parameters under clothing would be the best solution in this regard. In our experiment, we build such a body scanning system by fusing a Kinect depth sensor and a thermal camera. Kinect can sense the depth of the subject and create a 3D point cloud from it; the thermal camera can sense the body heat of a person under clothing. Fusing these two sensors' images produces a thermally mapped 3D point cloud of the subject, from which body parameters can be estimated even under various clothes. Moreover, this fusion system is also cost-effective. In our experiment, we introduce a new pipeline for working with our fusion scanning system, and estimate and recover body shape under clothing. We capture Thermal-Kinect fusion images of the subjects in different clothing and produce both full and partial 3D point clouds. To recover the missing parts from our low-resolution scan, we fit a parametric human model to our images and perform Boolean operations with our scan data. Further, we measure our final 3D point cloud scan to estimate the body parameters and compare them with the ground truth. We achieve a minimum average error rate of 0.75 cm compared to other approaches.
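    The core fusion step, attaching a thermal reading to each Kinect point, can be sketched as a pinhole projection into the thermal image. The function below is a simplified assumption: it takes the point cloud as already expressed in the thermal camera's frame, whereas a real pipeline also needs the extrinsic Kinect-to-thermal calibration:

```python
import numpy as np

def map_thermal_to_cloud(points, thermal, K):
    """Attach a thermal reading to each 3D point by projecting it into
    the thermal image with pinhole intrinsics K (points in camera frame)."""
    uv = (K @ points.T).T          # homogeneous pixel coords [fx*X+cx*Z, fy*Y+cy*Z, Z]
    uv = uv[:, :2] / uv[:, 2:3]    # perspective divide -> (u, v)
    u = np.clip(uv[:, 0].astype(int), 0, thermal.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, thermal.shape[0] - 1)
    temps = thermal[v, u]          # sample the thermal image at each pixel
    return np.hstack([points, temps[:, None]])  # rows: (x, y, z, temperature)

# Toy data: two points one metre from the camera, a 4x4 thermal image,
# and simple made-up intrinsics.
K = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0]])
thermal = np.arange(16, dtype=float).reshape(4, 4)
cloud = map_thermal_to_cloud(pts, thermal, K)
```

    Each output row carries the point's geometry plus the temperature sampled at its projection, which is the thermally mapped point cloud the abstract describes.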