    NEW SOURCE OF GEOSPATIAL DATA: CROWDSENSING BY ASSISTED AND AUTONOMOUS VEHICLE TECHNOLOGIES

    The ongoing proliferation of remote sensing technologies in the consumer market has been rapidly reshaping geospatial data acquisition and, subsequently, data processing and information dissemination. Smartphones have clearly established themselves as the primary generators of crowdsourced data, providing an incredible volume of remotely sensed data with fairly good georeferencing. Besides their potential to map the environment of their users, smartphones provide information for monitoring the dynamic content of the object space. For example, real-time traffic monitoring is one of the best-known and most widely used real-time crowdsensing applications, in which the smartphones in vehicles jointly contribute to an unprecedentedly accurate traffic flow estimate. We are now witnessing another milestone, as driverless vehicle technologies become another major source of crowdsensed data. Due to safety concerns, the requirements for sensing are higher: the vehicles must sense other vehicles and the road infrastructure under any conditions, not just in daylight and favorable weather, and at high speed. Furthermore, the sensing is based on redundant and complementary sensor streams to achieve the robust object space reconstruction needed to avoid collisions and maintain normal travel patterns. At this point, the remotely sensed data in assisted and autonomous vehicles are discarded, or only partially recorded for R&D purposes. In the long run, however, as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies mature, recording these data will become commonplace and will provide an excellent source of geospatial information for road mapping, traffic monitoring, etc. This paper reviews the key characteristics of crowdsourced vehicle data based on experimental data, and then the processing aspects, including the Data Science and Deep Learning components.
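
    To make the traffic-monitoring example concrete, the sketch below aggregates map-matched smartphone speed probes into per-road-segment speed estimates. It is a minimal illustration in Python under assumed inputs (the report format, the segment IDs, and the median aggregator are assumptions, not the paper's method):

        from collections import defaultdict
        from statistics import median

        def estimate_segment_speeds(probe_reports):
            """Aggregate per-vehicle speed probes into per-segment estimates.

            probe_reports: iterable of (segment_id, speed_mps) tuples, e.g.
            one report per smartphone every few seconds, already map-matched
            to a road segment. Returns {segment_id: median speed in m/s};
            the median damps outliers such as vehicles stopped at a light.
            """
            by_segment = defaultdict(list)
            for segment_id, speed in probe_reports:
                by_segment[segment_id].append(speed)
            return {seg: median(v) for seg, v in by_segment.items()}

        # Three probes on segment "A12" (one stopped vehicle) and one on "B03".
        reports = [("A12", 13.2), ("A12", 12.7), ("A12", 0.0), ("B03", 24.5)]
        print(estimate_segment_speeds(reports))  # {'A12': 12.7, 'B03': 24.5}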

    Experimental evaluation of a UWB-based cooperative positioning system for pedestrians in GNSS-denied environment

    Cooperative positioning (CP) utilises information sharing among multiple nodes to enable positioning in Global Navigation Satellite System (GNSS)-denied environments. This paper reports the performance of a CP system for pedestrians using Ultra-Wide Band (UWB) technology in GNSS-denied environments. The data set was collected as part of a benchmarking measurement campaign carried out at the Ohio State University in October 2017. Pedestrians were equipped with a variety of sensors, including two different UWB systems, on a specially designed helmet serving as a mobile multi-sensor platform for CP. Different users walked in stop-and-go mode along trajectories with predefined checkpoints and under various challenging environments. In the developed CP network, both Peer-to-Infrastructure (P2I) and Peer-to-Peer (P2P) measurements are used to position the pedestrians. The proposed system achieves decimetre-level accuracies (on average, around 20 cm) in the complete absence of GNSS signals, provided that measurements from infrastructure nodes are available and the network geometry is good. When these conditions are not met, the results show that the average accuracy degrades to the metre level. Further, it is experimentally demonstrated that including P2P cooperative range observations further enhances the positioning accuracy; in extreme cases, when only one infrastructure measurement is available, P2P CP may reduce positioning errors by up to 95%. The complete test setup, the development methodology, and the data collection are discussed in this paper. The next version of the system will include additional observations, such as Wi-Fi, camera, and other signals of opportunity.
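
    The core estimation step behind range-based CP can be sketched as a small nonlinear least-squares problem. The snippet below solves a 2D position from UWB ranges to known infrastructure (P2I) anchors with a Gauss-Newton iteration; the anchor coordinates and ranges are assumed values, and this is an illustrative reconstruction rather than the paper's estimator. P2P observations would add analogous range equations that couple two unknown pedestrian positions.

        import numpy as np

        def trilaterate(anchors, ranges, iters=20):
            """Estimate a 2D position from ranges to known anchors (Gauss-Newton)."""
            anchors = np.asarray(anchors, float)
            ranges = np.asarray(ranges, float)
            x = anchors.mean(axis=0)                 # initial guess: anchor centroid
            for _ in range(iters):
                diff = x - anchors                   # vectors from anchors to estimate
                pred = np.linalg.norm(diff, axis=1)  # predicted ranges
                J = diff / pred[:, None]             # Jacobian d(range)/d(position)
                dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
                x = x + dx
                if np.linalg.norm(dx) < 1e-6:
                    break
            return x

        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # assumed P2I node positions (m)
        ranges = [7.07, 7.07, 7.07]                       # measured UWB ranges (m)
        print(trilaterate(anchors, ranges))               # approx. [5.0, 5.0]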

    SEMANTIC LABELING OF STRUCTURAL ELEMENTS IN BUILDINGS BY FUSING RGB AND DEPTH IMAGES IN AN ENCODER-DECODER CNN FRAMEWORK

    In the last decade, we have observed an increasing demand for indoor scene modeling in various applications, such as mobility inside buildings, emergency and rescue operations, and maintenance. Automatically distinguishing between structural elements of buildings, such as walls, ceilings, floors, windows and doors, and typical objects in buildings, such as chairs, tables and shelves, is particularly important for many tasks, such as 3D building modeling or navigation. This information can generally be retrieved through semantic labeling. In the past few years, convolutional neural networks (CNN) have become the preferred method for semantic labeling, and there is ongoing research on fusing RGB and depth images in CNN frameworks. For pixel-level labeling, encoder-decoder CNN frameworks have been shown to be the most effective. In this study, we adopt an encoder-decoder CNN architecture to label structural elements in buildings and investigate the influence of depth information on the detection of typical objects in buildings. For this purpose, we introduce an approach that combines the depth map with the RGB image by converting the original image to HSV color space and substituting the V channel with the depth information (D), and then uses the resulting HSD image in the CNN architecture; this channel substitution is sketched below. As a further variation of this approach, we also transform the HSD images back to RGB color space and use them within the CNN. This allows us to use a CNN designed for three-channel image input and to directly compare our results with RGB-based labeling within the same network. We perform our tests using the Stanford 2D-3D-Semantics Dataset (2D-3D-S), a widely used indoor dataset, and also compare our approach with results obtained from a four-channel input created by stacking RGB and depth (RGBD). Our investigation shows that fusing RGB and depth improves semantic labeling results, particularly for structural elements of buildings. On the 2D-3D-S dataset, we achieve up to 92.1% global accuracy, compared to 90.9% using RGB only and 93.6% using RGBD. Moreover, the Intersection over Union scores improve when depth is used, indicating better labeling results at object boundaries.
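
    The channel substitution described above can be sketched in a few lines with OpenCV; the file names are hypothetical and the min-max scaling of the depth channel is an assumption, as the paper's exact preprocessing may differ:

        import cv2
        import numpy as np

        def rgb_depth_to_hsd(bgr, depth):
            """Replace the V channel of an HSV image with scaled depth (HSD)."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            d8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            hsv[..., 2] = d8                # substitute Value (V) with Depth (D)
            return hsv                      # three channels, same shape as input

        bgr = cv2.imread("room.png")                                # hypothetical RGB frame
        depth = cv2.imread("room_depth.png", cv2.IMREAD_UNCHANGED)  # aligned depth map
        hsd = rgb_depth_to_hsd(bgr, depth)
        # Variant from the abstract: convert HSD back through the HSV->RGB
        # mapping so an unmodified three-channel CNN input pipeline can use it.
        hsd_as_bgr = cv2.cvtColor(hsd, cv2.COLOR_HSV2BGR)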

    Die Replantation von Augen. VII. Hetero- und Dysplastik [The Replantation of Eyes. VII. Hetero- and Dysplasty]

    Object Tracking with LiDAR: Monitoring Taxiing and Landing Aircraft

    Mobile light detection and ranging (LiDAR) sensors used in car navigation and robotics, such as Velodyne's VLP-16 and HDL-32E, allow for sensing the surroundings of the platform with high temporal resolution to detect obstacles, track objects, and support path planning. This study investigates the feasibility of using LiDAR sensors to track taxiing or landing aircraft close to the ground in order to improve airport safety. A prototype system was developed and installed at an airfield to capture point clouds to monitor aircraft operations. One of the challenges of accurate object tracking with the Velodyne sensors is their relatively small vertical field of view (30°, 41.3°) and coarse angular resolution (1.33°, 2°), resulting in a small number of points on the tracked object. The point density decreases with the object–sensor distance, and is already sparse at a moderate range of 30–40 m. The paper introduces our model-based tracking algorithms, including volume minimization and cube trajectories, to address the optimal estimation of object motion and tracking from sparse point clouds; the idea behind volume minimization is sketched below. Using a network of sensors, multiple tests were conducted at an airport to assess the performance of the demonstration system and the algorithms developed. The investigation focused on monitoring small aircraft moving on runways and taxiways, and the results indicate that velocity and positioning accuracies of better than 0.7 m/s and 17 cm, respectively, were achieved. Overall, based on our findings, this technology is promising not only for aircraft monitoring but also for other airport applications.
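
    The volume-minimization idea can be illustrated in 2D: sweep candidate heading angles and keep the rotation whose axis-aligned bounding box of the rotated points is smallest, which jointly estimates the object's extent and heading from sparse returns. This is an illustrative reconstruction of the concept under synthetic data, not the paper's exact algorithm:

        import numpy as np

        def min_area_heading(points_xy, step_deg=0.5):
            """Heading (rad) and area of the tightest oriented bounding box,
            found by brute-force sweep; box orientation is 90-degree periodic."""
            pts = np.asarray(points_xy, float)
            best_angle, best_area = 0.0, np.inf
            for deg in np.arange(0.0, 90.0, step_deg):
                a = np.radians(deg)
                rot = np.array([[np.cos(a), -np.sin(a)],
                                [np.sin(a),  np.cos(a)]])
                r = pts @ rot.T                      # rotate the sparse cloud
                w, h = r.max(axis=0) - r.min(axis=0) # axis-aligned box extents
                if w * h < best_area:
                    best_angle, best_area = a, w * h
            return best_angle, best_area

        # Sparse hits on an aircraft-like 10 m x 2 m footprint rotated by 30 degrees.
        rng = np.random.default_rng(0)
        body = rng.uniform([-5, -1], [5, 1], size=(25, 2))
        c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
        cloud = body @ np.array([[c, -s], [s, c]]).T
        angle, area = min_area_heading(cloud)
        print(np.degrees(angle), area)   # ~60 deg (i.e. -30 mod 90), area near 20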