
    Modelling and Visualization of the Surface Resulting from the Milling Process


    3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation

    Global registration of heterogeneous ground and aerial mapping data is a challenging task. This is especially difficult in disaster response scenarios, where we have no prior information on the environment and cannot assume the regular structure of man-made environments or meaningful semantic cues. In this work we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches are realizations of different selections for: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimation: RANSAC or FGR. Additionally, we compare the results against standard approaches such as applying ICP after a good prior transformation has been given. The evaluation criteria include the distance a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results have the potential to help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots.
    Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017)
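
    A minimal sketch of one such pipeline (FPFH descriptors matched with RANSAC, then refined with ICP) is given below, using the open-source Open3D library. The file names, voxel size and thresholds are illustrative assumptions rather than values from the paper, and the API calls follow recent Open3D releases:

        import open3d as o3d

        VOXEL = 0.5  # metres; illustrative downsampling resolution, not from the paper

        def preprocess(pcd, voxel):
            # Downsample, estimate normals, and compute FPFH descriptors.
            down = pcd.voxel_down_sample(voxel)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
            return down, fpfh

        # Hypothetical input files: a UGV LiDAR map and a UAV photogrammetric map.
        source = o3d.io.read_point_cloud("ugv_lidar_map.pcd")
        target = o3d.io.read_point_cloud("uav_vision_map.pcd")
        src_down, src_fpfh = preprocess(source, VOXEL)
        tgt_down, tgt_fpfh = preprocess(target, VOXEL)

        # Global registration: RANSAC over FPFH feature correspondences.
        dist = VOXEL * 1.5
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src_down, tgt_down, src_fpfh, tgt_fpfh, True, dist,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
            [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
             o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
            o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

        # Local refinement: point-to-plane ICP seeded with the global estimate,
        # mirroring the "ICP after a good prior transformation" baseline.
        refined = o3d.pipelines.registration.registration_icp(
            src_down, tgt_down, dist, result.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        print(refined.transformation)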

    3D Cameras: 3D Computer Vision of Wide Scope

    Of all the human senses, vision gathers the most information we receive. Evolution has optimized our visual system to negotiate three-dimensional, even cluttered, environments. For perceiving 3D information, the human brain relies on three important principles: stereo vision, motion parallax, and a-priori knowledge about the perspective appearance of objects as a function of their distance. These tasks have posed a challenge to computer vision for decades. Today the most common techniques for 3D sensing are based on CCD or CMOS cameras, laser scanners, or 3D time-of-flight cameras. Even though evolution has shown a predominance of passive stereo vision systems, three problems remain for 3D perception compared with the two active vision systems mentioned above. First, the computation is expensive, since correspondences between two images taken from different viewpoints have to be found. Second, distances to structureless surfaces cannot be measured if the perspective projection of the object is larger than the camera's field of view; this is often called the aperture problem. Finally, a passive visual sensor has to cope with shadowing effects and changes in illumination over time. That is why for mapping purposes mostly active vision systems like laser scanners are used, e.g. [Thrun et al., 2000], [Wulf & Wagner, 2003], [Surmann et al., 2003]. However, these approaches are usually not applicable to tasks involving environment dynamics. Due to this restriction, 3D cameras [CSEM SA, 2007], [PMDTec, 2007] have attracted attention since their invention nearly a decade ago. Their distance measurements are also based on a time-of-flight principle, but with an important difference: instead of sampling laser beams serially to acquire distance data point-wise, the entire scene is measured in parallel with a modulated light wavefront. This principle allows for higher frame rates and thus enables the consideration of environment dynamics. The first part of this chapter discusses the physical principles of 3D sensors commonly used in the robotics community for typical problems like mapping and navigation. The second part concentrates on 3D cameras, their assets, drawbacks and perspectives. Based on these examining parts, some solutions are discussed for common problems occurring in dynamic environments with changing lighting conditions. Finally, the last part of this chapter shows how 3D cameras can be applied to mapping, object localization and feature tracking tasks.
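
    As a brief illustration of the time-of-flight principle sketched above, the following snippet recovers per-pixel distance from the standard four-phase ("four-bucket") demodulation used by continuous-wave 3D cameras. The 20 MHz modulation frequency is an assumed, typical value, not one taken from the chapter:

        import numpy as np

        C = 299_792_458.0   # speed of light in m/s
        F_MOD = 20e6        # assumed modulation frequency; 20 MHz is typical for such cameras

        def tof_distance(a0, a1, a2, a3):
            # Four-phase demodulation: the sensor samples the correlation between
            # emitted and received modulated light at offsets of 0, 90, 180 and 270 deg.
            phase = np.arctan2(a3 - a1, a0 - a2)    # phase shift in [-pi, pi]
            phase = np.mod(phase, 2 * np.pi)        # map to [0, 2*pi)
            return C * phase / (4 * np.pi * F_MOD)  # per-pixel distance in metres

        # Note: distances wrap around at the unambiguous range
        # C / (2 * F_MOD), i.e. about 7.5 m at 20 MHz.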

    Markov-modulated marked Poisson processes for modelling disease dynamics based on medical claims data

    We explore Markov-modulated marked Poisson processes (MMMPPs) as a natural framework for modelling patients' disease dynamics over time based on medical claims data. In claims data, observations not only occur at random points in time but are also informative, i.e. driven by unobserved disease levels, as poor health conditions usually lead to more frequent interactions with the healthcare system. Therefore, we model the observation process as a Markov-modulated Poisson process, where the rate of healthcare interactions is governed by a continuous-time Markov chain. Its states serve as proxies for the patients' latent disease levels and further determine the distribution of additional data collected at each observation time, the so-called marks. Overall, MMMPPs jointly model observations and their informative time points by comprising two state-dependent processes: the observation process (corresponding to the event times) and the mark process (corresponding to event-specific information), both of which depend on the underlying states. The approach is illustrated using claims data from patients diagnosed with chronic obstructive pulmonary disease (COPD) by modelling their drug use and the interval lengths between consecutive physician consultations. The results indicate that MMMPPs are able to detect distinct patterns of healthcare utilisation related to disease processes and reveal inter-individual differences in the state-switching dynamics.
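
    A minimal simulation sketch of such a process is given below: a two-state continuous-time Markov chain modulates both the rate of healthcare interactions and the distribution of the marks. All parameter values are illustrative assumptions, not estimates from the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative two-state MMMPP: state 0 = "stable", state 1 = "poor health".
        Q = np.array([[-0.2, 0.2],          # CTMC generator: state-switching rates
                      [0.5, -0.5]])
        event_rate = np.array([0.5, 3.0])   # healthcare contacts per unit time, per state
        mark_mean = np.array([1.0, 4.0])    # state-dependent mean of the marks (e.g. drug use)

        def simulate_mmmpp(t_end=100.0):
            t, state = 0.0, 0
            events = []                     # (event time, mark) pairs
            while t < t_end:
                dwell = rng.exponential(1.0 / -Q[state, state])  # sojourn time in state
                # Within a sojourn, events follow a homogeneous Poisson process.
                t_ev = t + rng.exponential(1.0 / event_rate[state])
                while t_ev < min(t + dwell, t_end):
                    events.append((t_ev, rng.normal(mark_mean[state], 1.0)))
                    t_ev += rng.exponential(1.0 / event_rate[state])
                t += dwell
                state = 1 - state           # with two states, the switch target is fixed
            return events

        print(len(simulate_mmmpp()), "observed healthcare interactions")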

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors, predicting where the mobile robot can find buildings and potentially drivable ground.
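
    As background to the probabilistic map mentioned above: such occupancy estimates are commonly maintained as a log-odds grid with an additive Bayesian update. The sketch below shows this standard scheme; the sensor-model probabilities are chosen purely for illustration and are not taken from the paper:

        import numpy as np

        def logodds(p):
            return np.log(p / (1.0 - p))

        # Assumed inverse sensor model: how strongly a laser hit / miss shifts belief.
        L_OCC, L_FREE, L_PRIOR = logodds(0.7), logodds(0.3), logodds(0.5)

        grid = np.full((200, 200), L_PRIOR)   # log-odds occupancy grid, 200 x 200 cells

        def update_cell(grid, i, j, hit):
            # Standard additive log-odds update; clamping avoids saturation.
            grid[i, j] += (L_OCC if hit else L_FREE) - L_PRIOR
            grid[i, j] = np.clip(grid[i, j], -10.0, 10.0)

        def probability(grid):
            # Convert back to occupancy probabilities, e.g. for fusion with aerial imagery.
            return 1.0 - 1.0 / (1.0 + np.exp(grid))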

    Deployment of Aerial Robots during the Flood Disaster in Erftstadt / Blessem in July 2021

    Climate change is leading to more and more extreme weather events such as heavy rainfall and flooding. This technical report addresses the question of how rescue commanders can be provided with current information better and faster during flood disasters using Unmanned Aerial Vehicles (UAVs), in this case during the flood of July 2021 in Central Europe, more specifically in Erftstadt / Blessem. The UAVs were used on the one hand for live observation and regular inspections of the flood edge, and on the other hand for systematic data acquisition in order to compute 3D models using Structure from Motion and Multi-View Stereo. The 3D models, embedded in a GIS application, serve as a planning basis for systematic exploration and as decision support for the deployment of additional smaller UAVs as well as rescue forces. The systematic data acquisition by the UAVs by means of autonomous meander flights provides high-resolution images, which are computed into a georeferenced 3D model of the surrounding area within 15 minutes in a specially equipped robotic command vehicle (RobLW). From the comparison of high-resolution elevation profiles extracted from the 3D models on successive days, changes in the water level become visible. This information enables the emergency management to plan further inspections of the buildings and to search for missing persons on site.
    Comment: 6 pages
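
    The elevation comparison described above can be sketched as a simple differencing of two georeferenced digital elevation models (DEMs) from successive days. The function below is a hypothetical illustration operating on arrays already resampled onto a common grid; a real pipeline would read the SfM/MVS rasters with a GIS library:

        import numpy as np

        def water_level_change(dem_day1, dem_day2, threshold=0.1):
            # Both DEMs are 2-D arrays of elevations (metres) on the same
            # georeferenced grid; NaN marks cells without reconstruction.
            diff = dem_day2 - dem_day1
            valid = ~np.isnan(diff)
            rising = valid & (diff > threshold)      # water (or debris) gained height
            falling = valid & (diff < -threshold)    # water receded
            return diff, rising, falling

        # Example with synthetic data standing in for two days of UAV-derived DEMs.
        day1 = np.random.default_rng(0).normal(100.0, 0.02, (500, 500))
        day2 = day1.copy()
        day2[:250, :] += 0.4                         # simulated 40 cm rise in one area
        diff, rising, falling = water_level_change(day1, day2)
        print(f"{rising.mean():.1%} of cells rose by more than 10 cm")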

    NLOS mitigation techniques in GNSS receivers based on Level Crossing Rates (LCR) of correlation outputs

    Global Navigation Satellite Systems (GNSS) provide navigation services with highly precise position estimates. Originally driven by military applications, satellite-based positioning has since gained widespread interest for civilian tasks as well. As GNSS performance has improved over the years, state-of-the-art GNSS navigation now extends to indoor positioning and autonomous movement. The accuracy, which essentially has to be high, can be degraded by multipath effects (e.g. diffraction, reflection, refraction or scattering). A means of detecting multipath, and of excluding the affected signals from the position solution, is therefore essential. A non-direct signal, namely a Non-Line-of-Sight (NLOS) signal, can lead to low positioning accuracy. This thesis therefore deals with NLOS detection using the Level Crossing Rate (LCR), a measure that has been used in wireless communication systems such as Wi-Fi. The thesis is divided into two parts: a literature review, followed by a simulation of the developed detection technique. The necessary background is covered in the literature part. In the simulation section, several tests are carried out using MATLAB simulations. To generate a realistic GNSS signal, a dynamic Galileo Composite Binary Offset Carrier (CBOC) signal was produced.
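
    A minimal sketch of the level-crossing-rate statistic applied to correlator outputs is given below. The synthetic signals and the choice of the mean as crossing level are illustrative assumptions; they merely show why strong multipath fading tends to raise the LCR:

        import numpy as np

        def level_crossing_rate(x, level, fs):
            # Count upward crossings of `level` and normalise by signal duration.
            above = x > level
            up_crossings = np.count_nonzero(~above[:-1] & above[1:])
            return up_crossings * fs / len(x)        # crossings per second

        # Synthetic correlator magnitudes: LOS tracking is steady, while NLOS
        # fluctuates strongly due to multipath fading, which raises its LCR.
        rng = np.random.default_rng(1)
        fs = 50.0                                    # correlator outputs per second
        t = np.arange(0, 60, 1 / fs)
        los = 1.0 + 0.05 * rng.standard_normal(t.size)
        nlos = 0.6 + 0.3 * np.abs(rng.standard_normal(t.size))  # crude fading stand-in
        for name, sig in [("LOS", los), ("NLOS", nlos)]:
            print(name, level_crossing_rate(sig, np.mean(sig), fs), "crossings/s")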