
    Towards Object-Centric Scene Understanding

    Visual perception for autonomous agents continues to attract community attention due to the disruptive technologies and the wide applicability of such solutions. Autonomous Driving (AD), a major application in this domain, promises to revolutionize our approach to mobility while bringing critical advantages in limiting accident fatalities. Fueled by recent advances in Deep Learning (DL), more computer vision tasks are being addressed using a learning paradigm. Deep Neural Networks (DNNs) have consistently succeeded in pushing performance to unprecedented levels and demonstrating the ability of such approaches to generalize to an increasing number of difficult problems, such as 3D vision tasks. In this thesis, we address two main challenges arising from current approaches: the computational complexity of multi-task pipelines and the increasing need for manual annotations. On the one hand, AD systems need to perceive the surrounding environment at different levels of detail and, subsequently, take timely actions. This multitasking further limits the time available for each perception task. On the other hand, the need for such systems to generalize universally to massively diverse situations requires the use of large-scale datasets covering long-tailed cases. Such a requirement renders traditional supervised approaches, despite the data readily available in the AD domain, unsustainable in terms of annotation cost, especially for 3D tasks. Driven by the nature of the AD environment, whose complexity (unlike indoor scenes) is dominated by the presence of other scene elements (mainly cars and pedestrians), we focus on the above-mentioned challenges in object-centric tasks. We then situate our contributions appropriately in the fast-paced literature, supporting our claims with extensive experimental analysis that leverages up-to-date state-of-the-art results and community-adopted benchmarks.

    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, assisting Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge, there is no dedicated systematic review in this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, which offers detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results of various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website https://github.com/ShiPC-AI/LPR-Survey. Comment: 26 pages, 13 figures, 5 tables.

    Shape-based IMU/Camera Tightly Coupled Object-level SLAM using Rao-Blackwellized Particle Filtering

    Simultaneous Localization and Mapping (SLAM) is a decades-old problem. Classical solutions to this problem rely on entities such as feature points that cannot facilitate interactions between a robot and its environment (e.g., grabbing objects). Recent advances in deep learning have paved the way to accurately detect objects in images under various illumination conditions and occlusions. This led to the emergence of object-level solutions to the SLAM problem. Current object-level methods depend on an initial solution from classical approaches and assume that errors are Gaussian. This research develops a standalone solution to object-level SLAM that integrates data from a monocular camera and an IMU (available in low-end devices) using a Rao-Blackwellized Particle Filter (RBPF). The RBPF does not assume a Gaussian error distribution; thus, it can handle a variety of scenarios (such as when a symmetrical object with pose ambiguities is encountered). The developed method utilizes shape instead of texture; therefore, texture-less objects can be incorporated into the solution. For particle weighting, a new method is developed that utilizes the Intersection over Union (IoU) of the observed and projected object boundaries and does not require point-to-point correspondence; thus, it is not prone to false data correspondences. Landmark initialization is another important challenge for object-level SLAM. In state-of-the-art delayed initialization, trajectory estimation relies only on the motion model provided by IMU mechanization (during initialization), leading to large errors. In this thesis, two novel undelayed initializations are developed: one relies only on a monocular camera and an IMU, and the other utilizes an ultrasonic rangefinder as well. The developed object-level SLAM is tested using wheeled robots and handheld devices, and a position error of 4.1 to 13.1 cm (0.005 to 0.028 of the total path length) has been obtained through extensive experiments using only a single object. These experiments are conducted in different indoor environments under different conditions (e.g., illumination). Further, it is shown that undelayed initialization using an ultrasonic sensor can reduce the algorithm's runtime by half.
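
    The IoU-based particle weighting described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the thesis implementation: each particle hypothesizes an object pose, an assumed renderer callback (`project_shape`) projects the object's shape into the image as a binary mask, and the particle weight is the IoU between the projected and observed masks, so no point-to-point correspondences are required.

```python
import numpy as np

def mask_iou(projected_mask: np.ndarray, observed_mask: np.ndarray) -> float:
    """IoU between two binary masks (H x W arrays of 0/1)."""
    intersection = np.logical_and(projected_mask, observed_mask).sum()
    union = np.logical_or(projected_mask, observed_mask).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

def weight_particles(particles, observed_mask, project_shape):
    """Weight each pose hypothesis by the IoU of its projected object boundary
    against the observed segmentation mask (no point correspondences needed).

    `particles` is a list of pose hypotheses; `project_shape(pose)` is an
    assumed renderer returning a binary mask of the object under that pose.
    """
    weights = np.array([mask_iou(project_shape(p), observed_mask) for p in particles])
    total = weights.sum()
    # Normalize; fall back to uniform weights if every IoU is zero.
    return weights / total if total > 0 else np.full(len(particles), 1.0 / len(particles))
```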

    Visual localisation of electricity pylons for power line inspection

    Inspection of power infrastructure is a regular maintenance event. To date, the inspection process has mostly been done manually, but there is growing interest in automating it. Automation of the inspection process will require an accurate means of localising power infrastructure components. In this research, we studied the visual localisation of a pylon. The pylon is the most prominent component of the power infrastructure and can provide context for the inspection of the other components. Point-based descriptors tend to perform poorly on textureless objects such as pylons; we therefore explored localisation using convolutional neural networks and geometric constraints. The crossings of the pylon, or vertices, are salient points on the pylon that aid recognition and pose estimation. We successfully used a convolutional neural network to detect the vertices. A model-based technique, geometric hashing, was used to establish correspondence between the stored pylon model and the scene object, and we showed its effectiveness as a voting technique for pose estimation from a single image. In a localisation framework, the method serves as the initialisation of the tracking process. We incorporated an extended Kalman filter for subsequent incremental tracking of the camera relative to the pylon, and we also demonstrated alternative tracking using heatmap details from the vertex detection. We successfully demonstrated the proposed algorithms and evaluated their effectiveness using a model pylon built in the laboratory, and we revalidated the results on a real-world outdoor electricity pylon. Our experiments illustrate that model-based techniques can be deployed as part of the navigation aspect of a robot.
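
    As a rough illustration of the geometric-hashing step, the sketch below builds a hash table from a 2D model of pylon vertices using ordered point pairs as bases, then votes over bases for a set of detected scene vertices. The quantisation cell size and the similarity-invariant frame are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def to_basis(p, q, x):
    """Express point x in the similarity-invariant frame defined by basis pair (p, q)."""
    origin = (p + q) / 2.0
    ex = q - p
    scale = np.linalg.norm(ex)
    ex = ex / scale
    ey = np.array([-ex[1], ex[0]])          # perpendicular axis
    d = (x - origin) / scale
    return np.array([d @ ex, d @ ey])

def build_table(model_pts, cell=0.1):
    """Hash invariant coordinates of all model vertices for every ordered basis pair."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model_pts)), 2):
        p, q = model_pts[i], model_pts[j]
        for k, x in enumerate(model_pts):
            if k in (i, j):
                continue
            key = tuple(np.round(to_basis(p, q, x) / cell).astype(int))
            table[key].append((i, j))
    return table

def vote(table, scene_pts, cell=0.1):
    """Vote for the model basis most consistent with the detected scene vertices."""
    votes = defaultdict(int)
    for i, j in permutations(range(len(scene_pts)), 2):
        p, q = scene_pts[i], scene_pts[j]
        for k, x in enumerate(scene_pts):
            if k in (i, j):
                continue
            key = tuple(np.round(to_basis(p, q, x) / cell).astype(int))
            for basis in table.get(key, []):
                votes[((i, j), basis)] += 1
    # Highest-voted pairing of a scene basis with a model basis (or None if no hits).
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```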

    Localization of People in GNSS-Denied Environments Using Neural-Inertial Prediction and Kalman Filter Correction

    This thesis presents a method based on neural networks and Kalman filters for estimating the position of a person carrying a mobile device (i.e., a cell phone or tablet) that can communicate with static UWB sensors or is carried in an environment with known landmark positions. The device collects and shares inertial measurement unit (IMU) data from sensors such as accelerometers, gyroscopes, and magnetometers, along with UWB and landmark information. The collected data, in combination with other necessary initial-condition information, is input into a pre-trained deep neural network (DNN), which predicts the movement of the person. The prediction is then updated periodically, whenever outside measurements are available, to produce a more accurate result. The update uses a Kalman filter that relies on empirical and statistical models of DNN prediction and sensor noise. The approach thus combines artificial intelligence and filtering techniques into a complete system that converts raw data into trajectories of people. Initial tests were completed indoors, where known landmark locations were compared with predicted positions. In a second set of experiments, GNSS location signals were combined with position estimation for correction. The final result shows correction of the neural-network prediction with data from UWB sensors at known locations. Prediction and correction trajectories are shown and compared with the ground truth for applicable environments. The results show that the proposed system is accurate and reliable for predicting the trajectory of a person and can be used in future applications that require localization of people in scenarios where GNSS is degraded or unavailable, such as indoors, in forests, or underground.
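
    A minimal sketch of the predict/correct loop described above follows. The class, the `dnn_predict_displacement` hook, the constant-position measurement model, and the noise values are illustrative assumptions rather than the thesis parameters: the DNN supplies a 2D displacement from an IMU window, and a UWB or landmark position fix corrects the state whenever one arrives.

```python
import numpy as np

class NeuralInertialKF:
    """Kalman filter over 2D position; the 'motion model' is a DNN displacement."""

    def __init__(self, q_pred=0.05, r_uwb=0.10):
        self.x = np.zeros(2)            # position estimate [x, y] in metres
        self.P = np.eye(2)              # state covariance
        self.Q = np.eye(2) * q_pred**2  # assumed DNN prediction noise
        self.R = np.eye(2) * r_uwb**2   # assumed UWB/landmark measurement noise

    def predict(self, dnn_displacement):
        """Propagate with the displacement predicted by the network from an IMU window."""
        self.x = self.x + dnn_displacement
        self.P = self.P + self.Q

    def correct(self, z_position):
        """Standard KF update with an absolute position fix (UWB or known landmark)."""
        H = np.eye(2)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z_position - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P

# Usage sketch: call kf.predict(dnn_predict_displacement(imu_window)) for every IMU
# window, and kf.correct(uwb_fix) whenever an outside measurement becomes available.
```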

    Scene representation and matching for visual localization in hybrid camera scenarios

    Scene representation and matching are crucial steps in a variety of tasks ranging from 3D reconstruction to virtual/augmented/mixed reality applications and robotics. While approaches exist that tackle these tasks, they mostly overlook the issue of efficiency in the scene representation, which is fundamental in resource-constrained systems and for increasing computing speed. They also normally assume projective cameras, so performance on systems based on other camera geometries remains suboptimal. This dissertation contributes a new, efficient scene representation method that dramatically reduces the number of 3D points. The approach sets up an optimization problem for the automated selection of the most relevant points to retain. This leads to a constrained quadratic program, which is solved optimally with a newly introduced variant of the sequential minimal optimization method. In addition, a new initialization approach is introduced for fast convergence of the method. Extensive experimentation on public benchmark datasets demonstrates that the approach produces a compressed scene representation quickly while delivering accurate pose estimates. The dissertation also contributes new methods for scene matching that go beyond the use of projective cameras. Alternative camera geometries, such as fisheye cameras, produce images with very high distortion, making current image feature detectors and descriptors, designed for projective cameras, less effective. New deep-learning-based methods are introduced to address this problem, in which feature detectors and descriptors overcome distortion effects and perform feature matching more effectively between pairs of fisheye images, as well as between hybrid pairs of fisheye and perspective images. Due to the limited availability of fisheye-perspective image datasets, three datasets were collected for training and testing the methods. The results demonstrate increased detection and matching rates that outperform the current state-of-the-art methods.
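
    The point-selection step lends itself to a compact sketch. The formulation below is only one plausible covering-style instance of a constrained quadratic program, written with the generic `cvxpy` solver rather than the dissertation's dedicated sequential-minimal-optimization variant; the visibility matrix, the coverage threshold, and the rounding heuristic are assumptions for illustration.

```python
import cvxpy as cp
import numpy as np

def select_points(visibility, min_cover=10):
    """Relaxed selection of map points for a compressed scene representation.

    visibility: (n_images x n_points) binary matrix, 1 if point j is observed in
    image i (each image is assumed to see at least `min_cover` points). x[j] in
    [0, 1] is a relaxed indicator that point j is kept, subject to every image
    still seeing at least `min_cover` selected points.
    """
    n_points = visibility.shape[1]
    x = cp.Variable(n_points)
    # Quadratic objective: keep as little selection "mass" as possible (few points).
    objective = cp.Minimize(cp.sum_squares(x))
    constraints = [visibility @ x >= min_cover, x >= 0, x <= 1]
    cp.Problem(objective, constraints).solve()
    # Round the relaxation: retain the highest-scoring points.
    return np.argsort(-x.value)[: int(np.ceil(x.value.sum()))]
```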

    Key functions in BIM-based AR platforms

    The integration of Augmented Reality and Building Information Modelling is a promising area of research; however, fragmentation in the literature hinders the development of mature BIM-based AR platforms. This paper aims to minimise that fragmentation by identifying the key functions that represent the essential capabilities of BIM-AR platforms. A systematic literature review is employed to identify, categorise, and discuss the key functions. The paper identifies six key functions: positioning (P), interaction (I), visualisation (V), collaboration (C), automation (A), and integration (T). These key functions form the foundation of an evaluation framework that can assist practitioners, developers, and researchers in assessing the requirements of a targeted application area, so that they are better informed about the appropriate devices, software, and techniques to use. Finally, this paper emphasises the importance of industrial-academic collaboration in BIM-AR research and suggests prospects for automation through the application of artificial intelligence.

    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Along the signal-processing chain from radar detections to vehicle control, this work discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. Radar segmentation of the (static) environment is achieved with a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph SLAM SERALOC. Based on the semantic radar SLAM map, an exemplary autonomous parking functionality is implemented in a real test vehicle: along a recorded reference path, the function parks solely on the basis of radar perception, with previously unattained positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labelled radar point clouds is generated over a route of 2507.35 m. No comparable datasets of this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation network RadarNet achieves 28.97% mIoU over six classes. In addition, an automated radar labelling framework, SeRaLF, is presented, which supports radar labelling multimodally using reference cameras and LiDAR. For coherent mapping, a radar-signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end adapted specifically to radar, with radar-odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby improved, and the first semantic radar graph SLAM for arbitrary static environments is realized. Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph SLAM is evaluated by means of a purely radar-based autonomous parking functionality. Averaged over 42 autonomous parking manoeuvres (average speed 3.73 km/h, average manoeuvre length 172.75 m), a median absolute pose error of 0.235 m and an end-pose error of 0.2443 m are achieved, surpassing comparable radar localization results by ≈ 50%. The map accuracy for changed, newly mapped locations over a mapping distance of 165 m yields ≈ 56% map consistency at a deviation of 0.163 m. A given trajectory planner and controller approach was used for the autonomous parking.
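
    A rough sketch of the activation-map pre-filter idea follows; the grid size, decay factor, and threshold are hypothetical and this is not the thesis implementation. Radar detections vote into a 2D grid over successive scans, and only points falling into sufficiently often re-activated cells are kept, which suppresses noise and dynamic multipath returns before mapping.

```python
import numpy as np

class ActivationFilter:
    """Keep radar points that repeatedly re-activate the same grid cell."""

    def __init__(self, size_m=100.0, res_m=0.5, decay=0.9, threshold=3.0):
        n = int(size_m / res_m)
        self.grid = np.zeros((n, n))   # activation map over a map-aligned area
        self.res = res_m
        self.offset = size_m / 2.0
        self.decay = decay
        self.threshold = threshold

    def _cells(self, points_xy):
        idx = ((points_xy + self.offset) / self.res).astype(int)
        return np.clip(idx, 0, self.grid.shape[0] - 1)

    def update(self, points_xy):
        """points_xy: (N, 2) radar detections in a map-aligned frame."""
        self.grid *= self.decay                          # fade old activations
        cells = self._cells(points_xy)
        np.add.at(self.grid, (cells[:, 0], cells[:, 1]), 1.0)
        keep = self.grid[cells[:, 0], cells[:, 1]] >= self.threshold
        return points_xy[keep]                           # static, well-supported points
```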

    DEUX: Active Exploration for Learning Unsupervised Depth Perception

    Depth perception models are typically trained on non-interactive datasets with predefined camera trajectories. However, this often introduces systematic biases into the learning process that are correlated with the specific camera paths chosen during data acquisition. In this paper, we investigate how data collection affects learning depth completion, from a robot navigation perspective, by leveraging interactive 3D environments. First, we evaluate four depth completion models trained on data collected using conventional navigation techniques. Our key insight is that existing exploration paradigms do not necessarily provide task-specific data points to achieve competent unsupervised depth completion learning. We then find that data collected with respect to photometric reconstruction has a direct positive influence on model performance. As a result, we develop an active, task-informed, depth uncertainty-based motion planning approach for learning depth completion, which we call DEpth Uncertainty-guided eXploration (DEUX). Training with data collected by our approach improves depth completion by more than 18% on average across the four depth completion models, compared to existing exploration methods, on the MP3D test set. We show that our approach further improves zero-shot generalization, while offering new insights into integrating robot learning-based depth estimation.
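
    The exploration criterion can be illustrated with a short hypothetical sketch; the uncertainty head, the candidate generator, and the `render_view` hook are assumptions rather than the DEUX implementation. Candidate next viewpoints are scored by the mean predicted depth uncertainty of the view they would produce, and the robot moves to the most uncertain one, so training data is collected where the depth-completion model is least confident.

```python
import numpy as np

def score_viewpoint(depth_model, rgb, sparse_depth):
    """Mean per-pixel uncertainty of the model's depth prediction for one view.

    `depth_model` is assumed to return (dense_depth, per-pixel variance)."""
    _, variance = depth_model(rgb, sparse_depth)
    return float(np.mean(variance))

def pick_next_viewpoint(depth_model, candidates, render_view):
    """Choose the candidate pose whose (simulated) view the model is least sure about.

    `candidates` is a list of poses; `render_view(pose)` is a hypothetical hook into
    the interactive 3D environment returning (rgb, sparse_depth) for that pose."""
    scores = [score_viewpoint(depth_model, *render_view(pose)) for pose in candidates]
    return candidates[int(np.argmax(scores))]
```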

    Visual Place Recognition: A Tutorial

    Localization is an essential capability for mobile robots. A rapidly growing field of research in this area is Visual Place Recognition (VPR), the ability to recognize previously seen places in the world based solely on images. This work is the first tutorial paper on visual place recognition. It unifies the terminology of VPR and complements prior research in two important directions: 1) it provides a systematic introduction for newcomers to the field, covering topics such as the formulation of the VPR problem, a general-purpose algorithmic pipeline, an evaluation methodology for VPR approaches, and the major challenges for VPR and how they may be addressed; 2) as a contribution for researchers acquainted with the VPR problem, it examines the intricacies of different VPR problem types regarding input, data processing, and output. The tutorial also discusses the subtleties behind the evaluation of VPR algorithms, e.g., the evaluation of a VPR system that has to find all matching database images per query, as opposed to just a single match. Practical code examples in Python illustrate to prospective practitioners and researchers how VPR is implemented and evaluated. Comment: IEEE Robotics & Automation Magazine (RAM).
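
    In the spirit of the tutorial's Python examples, the snippet below is a minimal, self-contained sketch (not taken from the paper's code) of the single-best-match evaluation the abstract mentions: each query descriptor is matched to its nearest database descriptor, and recall@1 is the fraction of queries whose retrieved place lies within the ground-truth tolerance.

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, tol_m=25.0):
    """Single-best-match VPR evaluation.

    query_desc: (Q, D) L2-normalised query descriptors
    db_desc:    (N, D) L2-normalised database descriptors
    query_pos / db_pos: (Q, 2) and (N, 2) ground-truth positions in metres
    """
    sims = query_desc @ db_desc.T            # cosine similarity matrix (Q x N)
    best = np.argmax(sims, axis=1)           # nearest database image per query
    dists = np.linalg.norm(query_pos - db_pos[best], axis=1)
    return float(np.mean(dists <= tol_m))    # fraction of correct top-1 retrievals
```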
