112 research outputs found

    Filtering of Artifacts and Pavement Segmentation from Mobile LiDAR Data

    No full text
    This paper presents an automatic method for filtering and segmenting 3D point clouds acquired from mobile LiDAR systems. Our approach exploits 3D information by using range images and several morphological operators. First, artifacts are detected in order to filter the point clouds; the detection is based on a Top-Hat of a hole-filling algorithm. Second, ground segmentation extracts the contour between pavements and roads, using a quasi-flat zone algorithm and a region adjacency graph representation. Edges are evaluated with the local height difference along the corresponding boundary. Finally, edges with a value compatible with the pavement/road height difference (about 14 cm) are selected. Preliminary results demonstrate the ability of this approach to automatically filter artifacts and segment pavements from 3D data.
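As a rough illustration of the artifact-detection step, a morphological white top-hat (an image minus its grayscale opening) flags small structures that protrude above their neighbourhood in an elevation image. The sketch below is illustrative only; the function name, parameters, and thresholds are assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy import ndimage

def detect_artifacts(elevation_image, size=5, height_thresh=0.5):
    """Flag pixels that protrude above their local neighbourhood.

    The white top-hat isolates structures narrower than `size` pixels
    that rise above the surrounding surface -- a typical signature of
    artifacts (pedestrians, poles, vehicles) in an elevation image
    derived from mobile LiDAR.  Both parameters are illustrative
    defaults, not values from the paper.
    """
    tophat = ndimage.white_tophat(elevation_image, size=size)
    return tophat > height_thresh

# Synthetic example: flat ground with a single 2 m spike.
img = np.zeros((20, 20))
img[8, 8] = 2.0
mask = detect_artifacts(img)
```

On this toy input, only the spike pixel survives the threshold; on real data the mask would feed the subsequent point-cloud filtering.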

    3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems

    Get PDF
    The aim of this thesis is the development of new concepts for environmental 3D reconstruction in automotive surround-view systems, where information about the vehicle's surroundings is displayed to the driver to assist in parking and low-speed manoeuvring. The proposed driving assistance system represents a multi-disciplinary challenge combining techniques from both computer vision and computer graphics. This work comprises all necessary steps, from sensor setup and image acquisition up to 3D rendering, in order to provide a comprehensive visualization for the driver. Visual information is acquired by means of standard surround-view cameras with fisheye optics covering large fields of view around the ego vehicle. Stereo vision techniques are applied to these cameras in order to recover 3D information that is finally used as input for image-based rendering. New camera setups are proposed that improve the 3D reconstruction around the whole vehicle according to different criteria. A prototypic realization was carried out that gives a qualitative measure of the results achieved and proves the feasibility of the proposed concept.

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    Get PDF
    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driver towards future mobility, and obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at the technology and existing systems for obstacle detection. Specifically, we look at the performance of LiDAR, RADAR, vision cameras, ultrasonic sensors, and IR sensors, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy; hence, further effort is needed.

    Robust Extrinsic Self-Calibration of Camera and Solid State LiDAR

    Full text link
    This letter proposes an extrinsic calibration approach for a pair consisting of a monocular camera and a prism-spinning solid-state LiDAR. The unique characteristics of the point cloud resulting from the flower-like scanning pattern are first disclosed as vacant points, a type of outlier between foreground targets and background objects. Unlike existing methods that use only depth-continuous measurements, we use depth-discontinuous measurements to retain more valid features and efficiently remove vacant points. The larger number of detected 3D corners thus contains more robust a priori information than usual, which, together with the 2D corners detected by overlapping cameras and constrained by the proposed circularity and rectangularity rules, produces accurate extrinsic estimates. The algorithm is evaluated in real field experiments using both qualitative and quantitative performance criteria and is found to be superior to existing algorithms. The code is available on GitHub.
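For intuition about the final estimation step: once corresponding corner points are available in two frames, the rigid extrinsic transform between them can be recovered in closed form. The sketch below uses the SVD-based Kabsch/Procrustes solution on 3D-3D correspondences; it is a simplified stand-in for the letter's 2D-3D formulation, and every name in it is illustrative:

```python
import numpy as np

def estimate_extrinsics(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst : (N, 3) arrays of corresponding 3D points (e.g. corners in
    the LiDAR frame and the same corners expressed in the camera frame).
    Solved in closed form via SVD (Kabsch), with a reflection guard so
    that R is a proper rotation.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

With noise-free correspondences this recovers the exact transform; with noisy corner detections it gives the least-squares estimate, which is why accumulating more valid 3D corners improves robustness.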

    Dataset of Panoramic Images for People Tracking in Service Robotics

    Get PDF
    In this thesis, we provide a framework for constructing a guide robot for use in hospitals. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when directing the individual to their preferred position in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate our robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video and their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we want to contribute to the continuing work to enhance the precision and dependability of these tracking systems, which is essential for the creation of efficient guiding robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing staff time spent guiding patients through the facility.

    Robot-assisted measurement in data-sparse regions

    Get PDF
    This work investigated the use of low-cost robots, small unmanned aerial vehicles (UAVs) and small unmanned surface vehicles (USVs), to assist researchers in environmental data collection in the Arkavathy River Basin in Karnataka, India. In the late 20th century, river flows in the Arkavathy began to decline severely, and Bangalore's dependence on the basin for local water supply shifted, while the causes of the drying remain unknown. Due to the lack of available data for the region, it is difficult for water management agencies to address the issue of declining surface flows; by collecting critical hydrologic data accurately and efficiently through the use of robots, where data are not otherwise available or accessible, local water resources can more easily be managed for the greater Bangalore region. Three case study sites within the Arkavathy basin, including two irrigation tanks and one urban lake, were selected, where unmanned aerial vehicles and unmanned surface vehicles collected data in the form of aerial imagery and bathymetric measurements. The data were further processed into 3D textured surface models and exported as digital elevation models (DEMs) for post-processing in GIS. From the DEMs, topographic and bathymetric maps were created, and storage volumes and surface areas were calculated by relating water surface levels to tank bathymetry. The results are stage-storage and stage-surface-area relationships for each case study site. These relationships provide valuable information relating to groundwater recharge and streamflow generation. Sensitivity analysis showed that the topographic surface data used in the stage-storage and stage-surface-area curves were validated to within ±0.35 meters. By providing these relationships and curves, researchers can further understand hydrologic processes in the Arkavathy River Basin and inform local water management policies. From these case studies, three formative observations were made, relating to i) interpretation of the data fusion process using information collected from both UAV and USV systems; ii) observations on human-robot interaction for the USV; and iii) field observations on deployment and retrieval in water environments with low accessibility. This work is of interest to hydrologists and geoscientists, who can use this methodology to assist in data collection and enhance their understanding of environmental processes.
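The stage-storage and stage-surface-area relationships described above can be computed directly from a bathymetric DEM by integrating water depth over the grid. A minimal sketch (the function name and inputs are illustrative, not the study's code):

```python
import numpy as np

def stage_curves(dem, cell_area, stages):
    """Stage-storage and stage-surface-area curves from a DEM.

    dem       : 2D array of bed elevations (m)
    cell_area : horizontal area of one DEM cell (m^2)
    stages    : iterable of water-surface elevations (m)
    Returns (volumes in m^3, surface areas in m^2), one entry per stage.
    """
    volumes, areas = [], []
    for h in stages:
        depth = np.clip(h - dem, 0.0, None)          # water depth per cell
        volumes.append(depth.sum() * cell_area)      # integrate depth -> volume
        areas.append(np.count_nonzero(depth) * cell_area)  # wetted area
    return np.array(volumes), np.array(areas)

# Tiny example: a 2x2 "tank" with bed elevations in metres.
dem = np.array([[0.0, 1.0],
                [1.0, 2.0]])
vols, areas = stage_curves(dem, cell_area=1.0, stages=[1.0, 2.0])
```

Evaluating the curves over a dense range of stages yields exactly the stage-storage and stage-surface-area relationships reported per site.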

    State-of-the-Art Review on Wearable Obstacle Detection Systems Developed for Assistive Technologies and Footwear

    Get PDF
    Walking independently is essential to maintaining our quality of life, but safe locomotion depends on perceiving hazards in the everyday environment. To address this problem, there is an increasing focus on developing assistive technologies that can alert the user to the risk of destabilizing foot contact with either the ground or obstacles, which can lead to a fall. Shoe-mounted sensor systems designed to monitor foot-obstacle interaction are being employed to identify tripping risk and provide corrective feedback. Advances in smart wearable technologies, integrating motion sensors with machine learning algorithms, have led to developments in shoe-mounted obstacle detection. The focus of this review is gait-assisting wearable sensors and hazard detection for pedestrians. This literature represents a research front that is critically important in paving the way towards practical, low-cost, wearable devices that can make walking safer and reduce the increasing financial and human costs of fall injuries.

    Measuring the interior of in-use sewage pipes using 3D vision

    Get PDF
    Sewage pipes may be renovated using tailored linings; however, the interior diameter of the pipes must be measured prior to renovation. This paper investigates the use of 3D vision sensors for measuring the interior diameter of sewage pipes, removing the need for human entry into the pipes. The 3D sensors reside in a waterproof box that is lowered into the well. A RANSAC-based method is used to estimate cylinders from the acquired point clouds of the pipe, and the diameter of these cylinders is used as a measure of the interior pipe diameter. The method is tested on 74 real-world sewage pipes with diameters between 150 and 1100 mm. The diameter of 68 pipes is measured within a tolerance of ±20 mm, whereas 8 pipes are above this tolerance. It was found that the faulty estimates can be detected in the field using a combination of human-in-the-loop qualitative measures and quantitative data-driven measures.
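The RANSAC idea can be illustrated on a single pipe cross-section: repeatedly fit a circle to three random points and keep the model with the most inliers, so that debris returns and sensor outliers inside the pipe do not bias the diameter estimate. This is a simplified 2D stand-in for the paper's 3D cylinder fitting, with illustrative parameter values:

```python
import numpy as np

def circle_from_3(p1, p2, p3):
    """Circumcircle of three 2D points -> (center, radius), or None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    c = np.array([ux, uy])
    return c, np.linalg.norm(p1 - c)

def ransac_circle(points, iters=200, tol=0.005, rng=None):
    """Fit a circle robustly: sample 3 points, count inliers within `tol`
    of the circle, keep the best model.  `iters` and `tol` are illustrative."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        fit = circle_from_3(*sample)
        if fit is None:
            continue
        c, r = fit
        resid = np.abs(np.linalg.norm(points - c, axis=1) - r)
        n = int((resid < tol).sum())
        if n > best_inliers:
            best, best_inliers = (c, r), n
    return best
```

Twice the fitted radius gives the diameter estimate for that slice; the paper's method does the analogous fit on full 3D cylinders.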

    Outdoor navigation of mobile robots

    Get PDF
    AGVs in the manufacturing industry currently constitute the largest application area for mobile robots. Other applications have been gradually emerging, including various transport tasks in demanding environments such as mines or harbours. Most of the new potential applications require a free-ranging navigation system, which means that the path of a robot is no longer bound to follow a buried inductive cable. Moreover, changing the route of a robot or taking a new working area into use must be as effective as possible. These requirements set new challenges for the navigation systems of mobile robots. One of the basic methods of building a free-ranging navigation system is to combine dead reckoning navigation with the detection of beacons at known locations. This approach is the backbone of the navigation systems in this study. The study describes research and development work in the area of mobile robotics, including applications in forestry, agriculture, mining, and transportation in a factory yard. The focus is on describing navigation sensors and methods for position and heading estimation by fusing dead reckoning and beacon detection information. A Kalman filter is typically used here for sensor fusion. Both the case of artificial beacons and that of natural beacons are covered. Artificial beacons used in the research and development projects include specially designed flat objects detected using a camera, the GPS satellite positioning system, and passive transponders buried in the ground along the route of a robot. The walls of a mine tunnel have been used as natural beacons; in this case, special attention has been paid to map building and to using the map for positioning. The main contribution of the study is in describing the structure of a working navigation system, including positioning and position control.
The navigation system for the mining application, in particular, contains some unique features that provide an easy-to-use procedure for taking new production areas into use and make it possible to drive a heavy mining machine autonomously at a speed comparable to that of an experienced human driver.
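The fusion of dead reckoning with beacon detections described above can be sketched as a minimal linear Kalman filter over 2D position: odometry increments drive the prediction and inflate the covariance, while an absolute beacon-derived position fix contracts it again. All matrices and values below are illustrative, not the study's actual filter:

```python
import numpy as np

def kf_predict(x, P, u, Q):
    """Dead-reckoning step: advance the position estimate by the odometry
    increment u; uncertainty grows by the process noise Q."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Beacon step: fuse an absolute position fix z (measurement model H = I).
    The Kalman gain blends prediction and measurement by their covariances."""
    K = P @ np.linalg.inv(P + R)        # Kalman gain
    x_new = x + K @ (z - x)             # correct the state towards the fix
    P_new = (np.eye(len(x)) - K) @ P    # covariance shrinks after the update
    return x_new, P_new

# Illustrative run: five 1 m odometry steps east, then one beacon fix.
x, P = np.zeros(2), np.eye(2) * 0.01
for _ in range(5):
    x, P = kf_predict(x, P, np.array([1.0, 0.0]), np.eye(2) * 0.1)
x_fix, P_fix = kf_update(x, P, np.array([5.2, -0.1]), np.eye(2) * 0.05)
```

Between beacon sightings the covariance grows with each dead-reckoning step; each detected beacon (artificial or natural) resets it, which is what keeps long autonomous drives bounded in error.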

    Sensors for autonomous navigation and hazard avoidance on a planetary micro-rover

    Get PDF
    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1993. Includes bibliographical references (p. 263-264). By William N. Kaliardos.