2,450 research outputs found

    Automated 3D model generation for urban environments [online]

    In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for bird's-eye views. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine the approximate component of relative motion along the direction of travel of the acquisition vehicle via scan matching; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is globally corrected with Monte-Carlo Localization techniques, using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. To obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
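
    As a rough illustration of one step described above, the sketch below shows how per-scan relative motion estimates from the horizontal scanner might be chained into an initial vehicle path before the Monte-Carlo global correction. This is a minimal Python sketch under assumed conventions (2D poses as (x, y, theta), motions expressed in the current vehicle frame); the function names and pose representation are illustrative, not the thesis implementation.

        import numpy as np

        def compose_pose(pose, delta):
            """Compose a global 2D pose (x, y, theta) with a relative motion
            (dx, dy, dtheta) expressed in the frame of the current pose."""
            x, y, theta = pose
            dx, dy, dtheta = delta
            c, s = np.cos(theta), np.sin(theta)
            return (x + c * dx - s * dy,
                    y + s * dx + c * dy,
                    theta + dtheta)

        def concatenate_path(relative_motions, start=(0.0, 0.0, 0.0)):
            """Chain per-scan relative motion estimates (e.g. from scan
            matching of the horizontal scanner) into an initial path."""
            path = [start]
            for delta in relative_motions:
                path.append(compose_pose(path[-1], delta))
            return np.array(path)

        # Toy example: three scan-matching increments, mostly forward
        # motion with a slight left turn.
        increments = [(1.0, 0.0, 0.02), (1.0, 0.05, 0.03), (0.9, 0.0, 0.01)]
        print(concatenate_path(increments))

    Because such concatenated estimates drift, the Monte-Carlo Localization step against an aerial map described in the abstract is what makes the path globally consistent; that correction is not sketched here.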

    Urban navigation of a mobile platform

    This master's thesis presents a method for 3D navigation of a robotic platform in urban environments. For autonomous navigation, the robot must know the locations of all obstacles in the environment, so an obstacle detection algorithm is developed and tested using a LiDAR and a camera as sensors, comparing the heights of the data points. The detection focuses on objects the robot could collide with in urban environments, including negative obstacles such as holes or stairs. The navigation and detection algorithms are integrated in ROS (Robot Operating System). The simulation and experimental results show the effectiveness of the algorithm at detecting those obstacles: it is successful with the LiDAR as a sensor in urban environments, but not sufficiently robust with the camera when navigating outdoors under strong sunlight.
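
    The abstract describes flagging obstacles by comparing the heights of sensed points. The following minimal Python sketch illustrates that idea under assumed conventions (a known ground plane at z = 0 in the robot frame and fixed height thresholds); the thresholds and function names are hypothetical, not taken from the thesis.

        import numpy as np

        def detect_obstacles(points, ground_z=0.0, pos_thresh=0.15, neg_thresh=-0.15):
            """Flag points as positive obstacles (higher than the ground plane
            by more than pos_thresh) or negative obstacles such as holes or
            stairs (lower than the ground plane by more than |neg_thresh|).
            points is an (N, 3) array in the robot frame."""
            heights = points[:, 2] - ground_z
            return heights > pos_thresh, heights < neg_thresh

        # Toy cloud: flat ground, a curb-height object, and a hole.
        cloud = np.array([
            [1.0,  0.0,  0.01],   # ground
            [2.0,  0.5,  0.30],   # positive obstacle
            [3.0, -0.2, -0.40],   # negative obstacle (hole)
        ])
        positive, negative = detect_obstacles(cloud)
        print(positive, negative)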

    Weighted simplicial complex reconstruction from mobile laser scanning using sensor topology

    We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds acquired by Mobile Laser Scanning (MLS). Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points, weighted according to its distance to the sensor, and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create and filter triangles for each triplet of mutually connected edges, according to their local planarity. We compare our results to an unweighted simplicial complex reconstruction. Comment: 8 pages, 11 figures, CFPT 2018. arXiv admin note: substantial text overlap with arXiv:1802.0748
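
    A minimal sketch of the sensor-topology idea: points that are adjacent in the scanner's acquisition grid (consecutive beams within a scan line, or the same beam across consecutive lines) become candidate edges, and an edge is kept only if its length stays below a threshold that grows with distance to the sensor. The grid layout, threshold form, and parameter values here are assumptions for illustration, not the paper's exact weighting or filtering.

        import numpy as np

        def sensor_topology_edges(scan_grid, origin, base=0.10, alpha=0.05):
            """Build candidate edges between sensor-adjacent points and keep an
            edge only if its length is below base + alpha * range, i.e. a
            range-weighted acceptance threshold.
            scan_grid: (n_lines, n_beams, 3) array of 3D points.
            origin:    (3,) sensor position (assumed fixed for simplicity)."""
            edges = []
            n_lines, n_beams, _ = scan_grid.shape

            def try_edge(a, b):
                p, q = scan_grid[a], scan_grid[b]
                length = np.linalg.norm(p - q)
                rng = min(np.linalg.norm(p - origin), np.linalg.norm(q - origin))
                if length < base + alpha * rng:
                    edges.append((a, b, float(length)))

            for i in range(n_lines):
                for j in range(n_beams):
                    if j + 1 < n_beams:
                        try_edge((i, j), (i, j + 1))   # along a scan line
                    if i + 1 < n_lines:
                        try_edge((i, j), (i + 1, j))   # across scan lines
            return edges

        # Toy 2 x 3 grid of closely spaced points, sensor at the origin.
        grid = np.array([[[1.00, 0.00, 0.0], [1.00, 0.05, 0.0], [1.00, 0.10, 0.0]],
                         [[1.05, 0.00, 0.0], [1.05, 0.05, 0.0], [1.05, 0.10, 0.0]]])
        print(sensor_topology_edges(grid, origin=np.zeros(3)))

    Triangles would then be formed from triplets of mutually connected edges and filtered by local planarity, as the abstract describes.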

    Unmanned Ground Robots for Rescue Tasks

    This chapter describes two unmanned ground vehicles that can help search and rescue teams in their difficult but life-saving tasks. These robotic assets have been developed within the framework of the European project ICARUS. The large unmanned ground vehicle is intended to be a mobile base station. It is equipped with a powerful manipulator arm and can be used for debris removal, shoring operations, and remote structural operations (cutting, welding, hammering, etc.) on very rough terrain. The smaller unmanned ground vehicle is also equipped with an array of sensors, enabling it to search for victims inside semi-destroyed buildings. Working together with each other and with human search and rescue workers, these robotic assets form a powerful team, increasing the effectiveness of search and rescue operations, as proven by operational validation tests in collaboration with end users.

    Sensor-Based Topological Coverage And Mapping Algorithms For Resource-Constrained Robot Swarms

    Coverage is widely known in the field of sensor networks as the task of deploying sensors so that the union of their footprints completely covers an environment. Related to coverage is the task of exploration, which includes guiding mobile robots, equipped with sensors, to map an unknown environment (mapping) or clear a known environment (the searching and pursuit-evasion problem) with their sensors. This is an essential task for robot swarms in many robotic applications, including environmental monitoring, sensor deployment, mine clearing, search-and-rescue, and intrusion detection. Utilizing a large team of robots not only improves the completion time of such tasks, but also improves the scalability of the applications while increasing robustness to system failures. Despite extensive research on coverage, mapping, and exploration problems, many challenges remain, especially in swarms where robots have limited computational and sensing capabilities. The majority of approaches to the coverage problem rely on metric information, such as the poses of the robots and the positions of obstacles. These geometric approaches are not suitable for large-scale swarms due to their high computational complexity and sensitivity to noise. This dissertation focuses on algorithms that, using tools from algebraic topology and bearing-based control, solve coverage-related problems with a swarm of resource-constrained robots. First, this dissertation presents an algorithm for deploying mobile robots to attain hole-less sensor coverage of an unknown environment, where each robot is only capable of measuring the bearing angles to the other robots within its sensing region and to the obstacles that it touches. Next, using the same sensing model, a topological map of an environment can be obtained using graph-based search techniques even when there are too few robots to attain full coverage of the environment. We then introduce the landmark complex representation and present an exploration algorithm that is complete when the landmarks are sufficiently dense and that scales well with any swarm size. Finally, we derive a multi-pursuer, multi-evader planning algorithm that detects all possible evaders and clears complex environments.
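
    To make the landmark complex idea more concrete, the following Python sketch builds simplices (vertices, edges, triangles) from sets of landmarks that are observed together from single robot locations. This is a generic nerve-style construction offered only as an illustration under that assumption; the dissertation's actual construction, sensing model, and exploration strategy may differ.

        from itertools import combinations

        def landmark_complex(observations, max_dim=2):
            """Build a landmark complex from co-observation sets.
            observations: iterable of sets, each containing the landmarks a
            robot observes from one location. Every subset of a co-observed
            set with at most max_dim + 1 vertices becomes a simplex."""
            simplices = set()
            for seen in observations:
                seen = sorted(seen)
                for k in range(1, min(len(seen), max_dim + 1) + 1):
                    simplices.update(combinations(seen, k))
            return simplices

        # Toy example: three observation sets from a small swarm.
        obs = [{"A", "B"}, {"B", "C", "D"}, {"A", "D"}]
        for simplex in sorted(landmark_complex(obs), key=lambda s: (len(s), s)):
            print(simplex)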

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, which minimizes intrusiveness and accommodates the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and related domains. Teleoperation in these environments is compromised by the keyhole effect, which results from a limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves a motion sensor, projector, cameras, and a robotic arm. Given the purpose of the system, the calibration accuracy must be within the millimeter level. The follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica with commodity devices, for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light scanning based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency introduced during data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1, and this predictive control algorithm can be further formulated as the optimization of a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results have demonstrated that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
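
    The abstract states that latency between human commands and robot motion is handled by a one-step-ahead predictive control formulated with a smooth coefficient between 0 and 1. The Python sketch below shows one plausible reading of that idea, linearly extrapolating the next command from the last increment with a coefficient alpha in [0, 1]; it is an assumption-laden illustration, not the thesis's exact formulation or its cost-function optimization.

        def one_step_ahead_prediction(history, alpha=0.6):
            """Predict the next operator command to mask sensing and
            communication latency. alpha in [0, 1]: 0 simply repeats the
            last command, 1 fully extrapolates the last increment."""
            if len(history) < 2:
                return history[-1]
            last, prev = history[-1], history[-2]
            return last + alpha * (last - prev)

        # Toy joint-angle command stream from the local station (degrees).
        commands = [10.0, 12.0, 13.5]
        print(one_step_ahead_prediction(commands, alpha=0.6))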

    Advanced Knowledge Application in Practice

    The integration and interdependency of the world economy are leading towards the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Widespread research activity is therefore necessary if one is to remain successful in the market. This book is the result of research and development activities carried out by a number of researchers worldwide, covering concrete fields of research.