    Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots

    Full text link
    High-quality datasets can speed up breakthroughs and reveal potential developing directions in SLAM research. To support the research on corner cases of visual SLAM systems, this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. The dataset was collected by a ground robot with multiple sensors including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR. All of these sensors were well-calibrated and synchronized, and their data were recorded simultaneously. To evaluate the performance of cutting-edge SLAM systems, we tested them on our dataset and demonstrated that these systems are prone to drift and fail on specific sequences. We will release the full dataset and relevant materials upon paper publication to benefit the research community. For more information, visit our project website at https://github.com/sjtuyinjie/Ground-Challenge
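    The abstract reports that cutting-edge SLAM systems drift or fail on these sequences; a standard way to quantify such drift against ground truth is the Absolute Trajectory Error (ATE). The sketch below is a minimal, generic implementation, not code from the paper: the rigid alignment via the Kabsch/Umeyama method and all names are illustrative assumptions.

```python
import numpy as np

def absolute_trajectory_error(gt_xyz, est_xyz):
    """RMS ATE after rigidly aligning the estimated trajectory to ground truth.

    gt_xyz, est_xyz: (N, 3) arrays of time-associated positions.
    """
    gt_c = gt_xyz - gt_xyz.mean(axis=0)
    est_c = est_xyz - est_xyz.mean(axis=0)

    # Optimal rotation from the cross-covariance via SVD (Kabsch method)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against reflections
        S[2, 2] = -1
    R = (U @ S @ Vt).T                 # rotates the estimate into the GT frame
    t = gt_xyz.mean(axis=0) - R @ est_xyz.mean(axis=0)

    aligned = (R @ est_xyz.T).T + t
    err = np.linalg.norm(aligned - gt_xyz, axis=1)
    return np.sqrt(np.mean(err ** 2))
```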

    Sensor Fusion for Localization of Automated Guided Vehicles

    Get PDF
    Automated Guided Vehicles (AGVs) need to localize themselves reliably in order to perform their tasks efficiently. To that end, they rely on noisy sensor measurements that potentially provide erroneous location estimates if they are used directly. To prevent this issue, measurements from different kinds of sensors are generally used together. This thesis presents a Kalman Filter based sensor fusion approach that is able to function with asynchronous measurements from laser scanners, odometry and Inertial Measurement Units (IMUs). The method uses general kinematic equations for state prediction that work with any type of vehicle kinematics and utilizes state augmentation to estimate gyroscope and accelerometer biases. The developed algorithm was tested with an open source multisensor navigation dataset and real-time experiments with an AGV. In both sets of experiments, scenarios in which the laser scanner was fully available, partially available, or not available were compared. It was found that using sensor fusion resulted in a smaller deviation from the actual trajectory compared to using only a laser scanner. Furthermore, in each experiment, using sensor fusion decreased the localization error in the time periods where the laser was unavailable, although the amount of improvement depended on the duration of unavailability and the motion characteristics.
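    As a rough illustration of the fusion scheme described above (generic kinematic prediction, asynchronous measurement updates, and bias estimation via state augmentation), the sketch below shows a minimal extended Kalman filter. The state layout, noise values, and measurement models are assumptions chosen for illustration, not the thesis implementation.

```python
import numpy as np

# State: [x, y, yaw, v, yaw_rate, gyro_bias]; the bias term illustrates
# state augmentation. Out-of-order measurements are ignored in this sketch.
class AsyncEKF:
    def __init__(self):
        self.x = np.zeros(6)                 # state estimate
        self.P = np.eye(6) * 0.1             # state covariance
        self.Q = np.diag([0.01, 0.01, 0.005, 0.1, 0.05, 1e-6])  # process noise
        self.t = None                        # time of last prediction

    def predict(self, t_now):
        """Generic constant-velocity / constant-turn-rate prediction to t_now."""
        if self.t is None:
            self.t = t_now
            return
        dt = t_now - self.t
        x, y, yaw, v, w, b = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + w * dt, v, w, b])
        F = np.eye(6)
        F[0, 2] = -v * np.sin(yaw) * dt
        F[0, 3] = np.cos(yaw) * dt
        F[1, 2] = v * np.cos(yaw) * dt
        F[1, 3] = np.sin(yaw) * dt
        F[2, 4] = dt
        self.P = F @ self.P @ F.T + self.Q * dt
        self.t = t_now

    def update(self, t_now, z, H, R):
        """Generic linearised update, called whenever any sensor reports.

        Example gyro model: z = yaw_rate + gyro_bias -> H = [[0,0,0,0,1,1]].
        """
        self.predict(t_now)                  # asynchronous: predict to the stamp
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```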

    Self-Calibration of Mobile Multi-Sensor Systems Using Geometric 3D Features

    Get PDF
    A mobile multi-sensor system enables the efficient spatial capture of objects and of the environment. Calibrating the mobile multi-sensor system is a necessary preprocessing step for sensor data fusion and for accurate spatial measurements. In conventional approaches, experts calibrate the mobile multi-sensor system before use in elaborate procedures by recording a calibration object of known shape. In contrast to such object-based calibrations, a self-calibration is more practical, saves time, and determines the sought parameters with greater timeliness. This work presents a new method for the self-calibration of mobile multi-sensor systems, referred to as feature-based self-calibration. Feature-based self-calibration is a data-driven, universal approach suitable for any combination of a pose sensor and a depth sensor. Its fundamental assumption is that the sought parameters are best determined when the captured point cloud has the highest possible quality. The cost function used to assess this quality is based on geometric 3D features, which in turn are computed from the local neighbourhood of each point. In addition to a detailed analysis of different aspects of the self-calibration, such as the influence of the system poses on the result, the suitability of different geometric 3D features for self-calibration, and the convergence radius of the method, the feature-based self-calibration is evaluated on one synthetic and three real datasets. These datasets were recorded with different sensors and in different environments. The experiments demonstrate the broad applicability of the feature-based self-calibration with respect to both sensors and environments. The results are always compared with a suitable object-based calibration from the literature and with a further, re-implemented self-calibration. Compared with these methods, the feature-based self-calibration achieves better or at least comparable accuracies on all datasets. Its accuracy and precision correspond to the current state of research. For the dataset with the highest sensor accuracies, for example, the parameters of the relative translation between the rigid body of a motion capture system and a laser scanner are determined with an accuracy of approximately 1 cm, even though the range measurement accuracy of this laser scanner is only 3 cm.
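    A minimal sketch of the core idea, assuming an eigenvalue-based geometric 3D feature ("surface variation") as the quality measure: per-scan points are assembled into one cloud under candidate extrinsic parameters, and the cost is lower when the cloud is sharper. The feature choice, neighbourhood size, and function names are illustrative assumptions, not the thesis code.

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_cost(points, k=20):
    """Mean surface variation (smallest eigenvalue fraction) over each point's
    k-nearest-neighbour covariance; smeared clouds score higher (worse)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    cost = 0.0
    for nb in idx:
        cov = np.cov(points[nb].T)
        evals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)  # ascending
        cost += evals[0] / evals.sum()        # lambda_min / (l1 + l2 + l3)
    return cost / len(points)

def assemble_cloud(scans, poses, T_sensor):
    """Map per-scan points into the world frame with candidate extrinsics
    T_sensor (4x4, sensor -> body) and the recorded poses (body -> world)."""
    world = []
    for pts, T_wb in zip(scans, poses):
        hom = np.hstack([pts, np.ones((len(pts), 1))])
        world.append((T_wb @ T_sensor @ hom.T).T[:, :3])
    return np.vstack(world)

# Self-calibration then amounts to searching over T_sensor for the minimum of
# feature_cost(assemble_cloud(scans, poses, T_sensor)).
```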

    Geometric, Semantic, and System-Level Scene Understanding for Improved Construction and Operation of the Built Environment

    Full text link
    Recent advances in robotics and enabling fields such as computer vision, deep learning, and low-latency data passing offer significant potential for developing efficient and low-cost solutions for improved construction and operation of the built environment. Examples of such potential solutions include the introduction of automation in environment monitoring, infrastructure inspections, asset management, and building performance analyses. In an effort to advance the fundamental computational building blocks for such applications, this dissertation explored three categories of scene understanding capabilities: 1) Localization and mapping for geometric scene understanding that enables a mobile agent (e.g., robot) to locate itself in an environment, map the geometry of the environment, and navigate through it; 2) Object recognition for semantic scene understanding that allows for automatic asset information extraction for asset tracking and resource management; 3) Distributed coupling analysis for system-level scene understanding that allows for discovery of interdependencies between different built-environment processes for system-level performance analyses and response-planning. First, this dissertation advanced Simultaneous Localization and Mapping (SLAM) techniques for convenient and low-cost locating capabilities compared with previous work. To provide a versatile Real-Time Location System (RTLS), an occupancy grid mapping enhanced visual SLAM (vSLAM) was developed to support path planning and continuous navigation that cannot be implemented directly on vSLAM’s original feature map. The system’s localization accuracy was experimentally evaluated with a set of visual landmarks. The achieved marker position measurement accuracy ranges from 0.039m to 0.186m, proving the method’s feasibility and applicability in providing real-time localization for a wide range of applications. In addition, a Self-Adaptive Feature Transform (SAFT) was proposed to improve such an RTLS’s robustness in challenging environments. As an example implementation, the SAFT descriptor was implemented with a learning-based descriptor and integrated into a vSLAM for experimentation. The evaluation results on two public datasets proved the feasibility and effectiveness of SAFT in improving the matching performance of learning-based descriptors for locating applications. Second, this dissertation explored vision-based 1D barcode marker extraction for automated object recognition and asset tracking that is more convenient and efficient than the traditional methods of using barcode or asset scanners. As an example application in inventory management, a 1D barcode extraction framework was designed to extract 1D barcodes from video scan of a built environment. The performance of the framework was evaluated with video scan data collected from an active logistics warehouse near Detroit Metropolitan Airport (DTW), demonstrating its applicability in automating inventory tracking and management applications. Finally, this dissertation explored distributed coupling analysis for understanding interdependencies between processes affecting the built environment and its occupants, allowing for accurate performance and response analyses compared with previous research. In this research, a Lightweight Communications and Marshalling (LCM)-based distributed coupling analysis framework and a message wrapper were designed. 
This proposed framework and message wrapper were tested with analysis models from wind engineering and structural engineering, where they demonstrated the ability to link analysis models from different domains and to reveal key interdependencies between the involved built-environment processes.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155042/1/lichaox_1.pd
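    As an illustration of the occupancy-grid layer the dissertation adds on top of the vSLAM feature map to support path planning, the sketch below shows a generic log-odds occupancy-grid update driven by poses and range returns. The cell size, log-odds increments, and ray-stepping scheme are assumptions for illustration, not the dissertation's parameters.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (illustrative values)
RES = 0.05                      # metres per grid cell

def update_grid(grid, origin, pose_xy, pose_yaw, ranges, angles, max_range=5.0):
    """Mark cells along each beam as free and the beam endpoint as occupied."""
    for r, a in zip(ranges, angles):
        r = min(r, max_range)
        end = pose_xy + r * np.array([np.cos(pose_yaw + a), np.sin(pose_yaw + a)])
        n_steps = max(int(r / RES), 1)
        for s in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            cell = tuple(((pose_xy + s * (end - pose_xy) - origin) / RES).astype(int))
            grid[cell] += L_FREE                 # traversed cells: evidence of free space
        if r < max_range:
            hit = tuple(((end - origin) / RES).astype(int))
            grid[hit] += L_OCC                   # beam endpoint: evidence of an obstacle
    return grid

# Occupancy probability of a cell: p = 1 / (1 + exp(-grid[cell])); a planner
# can then treat cells above a probability threshold as obstacles.
```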

    Parallel Tracking and Mapping for Manipulation Applications with Golem Krang

    Get PDF
    This project implements a simultaneous localization and mapping (SLAM) system and an image semantic segmentation method on a mobile manipulation platform. The SLAM system is aimed at navigation among obstacles in unknown environments. The object detection method will be integrated for future manipulation tasks such as grasping. This work will be demonstrated on a real robotic hardware system in the lab.

    Towards vision based robots for monitoring built environments

    Get PDF
    In construction, projects are typically behind schedule and over budget, largely due to the difficulty of progress monitoring. Once a structure (e.g. a bridge) is built, inspection becomes an important yet dangerous and costly job. We can provide a solution to both problems if we can simplify or automate visual data collection, monitoring, and analysis. In this work, we focus specifically on improving autonomous image collection, building 3D models from the images, and recognizing materials for progress monitoring using the images and 3D models. Image capture can be done manually, but the process is tedious and better suited for autonomous robots. Robots follow a set trajectory to collect data of a site, but it is unclear if 3D reconstruction will be successful using the images captured by following this trajectory. We introduce a simulator that synthesizes feature tracks for 3D reconstruction to predict if images collected from a planned path will result in a successful 3D reconstruction. This can save time, money, and frustration because robot paths can be altered prior to the real image capture. When executing a planned trajectory, the robot needs to understand and navigate the environment autonomously. Robot navigation algorithms struggle in environments with few distinct features. We introduce a new fiducial marker that can be added to these scenes to increase the number of distinct features and a new detection algorithm that detects the marker with negligible computational overhead. Adding markers prior to data collection does not guarantee that the algorithms for 3D model generation will succeed. In fact, out of the box, these algorithms do not take advantage of the unique characteristics of markers. Thus, we introduce an improved structure from motion approach that takes advantage of marker detections when they are present. We also create a dataset of challenging indoor image collections with markers placed throughout and show that previous methods often fail to produce accurate 3D models. However, our approach produces complete, accurate 3D models for all of these new image collections. Recognizing materials on construction sites is useful for monitoring usage and tracking construction progress. However, it is difficult to recognize materials in real world scenes because shape and appearance vary considerably. Our solution is to introduce the first dataset of material patches that include both image data and 3D geometry. We then show that both independent and joint modeling of geometry are useful alongside image features to improve material recognition. Lastly, we use our material recognition with material priors from building plans to accurately identify progress on construction sites
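    One simple way to realize the idea of predicting reconstruction success from synthesized feature tracks is to check how well each synthetic track would be constrained by the planned camera poses, e.g. by its number of observations and its triangulation angle. The sketch below is an illustrative heuristic, not the simulator from the thesis; all thresholds and names are assumptions.

```python
import numpy as np

def track_health(tracks, poses, min_obs=3, min_angle_deg=2.0):
    """Fraction of synthetic tracks that are 'well constrained' by a planned path.

    tracks: list of (point_xyz, [indices of views observing the point])
    poses:  list of planned camera centres, each a length-3 sequence
    """
    good = 0
    for xyz, views in tracks:
        if len(views) < min_obs:
            continue
        xyz = np.asarray(xyz, dtype=float)
        centres = [np.asarray(poses[v], dtype=float) for v in views]
        best = 0.0
        for i in range(len(centres)):
            for j in range(i + 1, len(centres)):
                a, b = centres[i] - xyz, centres[j] - xyz
                cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
                best = max(best, np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
        good += best >= min_angle_deg        # enough baseline to triangulate
    return good / max(len(tracks), 1)

# A low score suggests the planned trajectory should be altered before the
# real (and expensive) image capture is attempted.
```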

    Combining independent visualization and tracking systems for augmented reality

    Get PDF
    The basic requirement for the successful deployment of a mobile augmented reality application is a reliable tracking system with high accuracy. Recently, a helmet-based inside-out tracking system which meets this demand has been proposed for self-localization in buildings. To realize an augmented reality application based on this tracking system, a display has to be added for visualization purposes. Therefore, the relative pose of this visualization platform with respect to the helmet has to be tracked. In the case of hand-held visualization platforms like smartphones or tablets, this can be achieved by means of image-based tracking methods like marker-based or model-based tracking. In this paper, we present two marker-based methods for tracking the relative pose between the helmet-based tracking system and a tablet-based visualization system. Both methods were implemented and comparatively evaluated in terms of tracking accuracy. Our results show that mobile inside-out tracking systems without integrated displays can easily be supplemented with a hand-held tablet as visualization device for augmented reality purposes
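    A common way to implement the marker-based relative-pose tracking described above is a perspective-n-point (PnP) solve on the detected marker corners, for instance with OpenCV. The sketch below shows this generic approach; it is not necessarily either of the two methods evaluated in the paper, and the corner ordering and frame names are assumptions.

```python
import numpy as np
import cv2

def marker_relative_pose(marker_corners_px, marker_size, K, dist):
    """Pose of a square marker in the camera frame from its four detected
    corner pixels (assumed order: top-left, top-right, bottom-right, bottom-left)."""
    s = marker_size / 2.0
    obj = np.array([[-s,  s, 0], [ s,  s, 0],
                    [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, marker_corners_px.astype(np.float32), K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T          # T_camera_marker

# If the marker is rigidly attached to the helmet, the tablet camera's pose in
# the helmet frame follows as T_helmet_camera = T_helmet_marker @ inv(T_camera_marker).
```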