Road terrain type classification based on laser measurement system data
For road vehicles, knowledge of the terrain type is useful for improving passenger safety and comfort. Conventional methods are susceptible to variations in vehicle speed; in this paper we present a method that uses Laser Measurement System (LMS) data for speed-independent road type classification. Experiments were carried out with an instrumented road vehicle (CRUISE), manually driven over a variety of road terrain types, namely asphalt, concrete, grass and gravel roads, at different speeds. A downward-looking LMS is used to capture the terrain data. The range data captures the structural differences, while the remission values are used to observe anomalies in surface reflectance properties. Both measurements are combined in a Support Vector Machine classifier to achieve an average accuracy of 95% across the different road types.
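The combination of range (structure) and remission (reflectance) features in an SVM classifier can be sketched as follows. The per-class feature statistics, the two summary features, and the SVM parameters below are illustrative assumptions for a runnable toy, not values from the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-scan feature vectors: a roughness statistic of
# the LMS range profile (structure) and a mean remission value (reflectance).
def fake_features(n, roughness, remission_mean):
    range_std = rng.normal(roughness, 0.02, (n, 1))       # surface roughness proxy
    remission = rng.normal(remission_mean, 0.05, (n, 1))  # mean surface reflectance
    return np.hstack([range_std, remission])

X = np.vstack([
    fake_features(50, 0.05, 0.6),  # asphalt: smooth, moderate reflectance
    fake_features(50, 0.04, 0.8),  # concrete: smooth, bright
    fake_features(50, 0.30, 0.4),  # grass: rough, dark
    fake_features(50, 0.15, 0.5),  # gravel: moderately rough
])
y = np.repeat(["asphalt", "concrete", "grass", "gravel"], 50)

# RBF-kernel SVM on the combined range + remission features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The point of the sketch is that neither feature alone separates all four classes (asphalt and concrete differ mainly in reflectance, grass and gravel mainly in roughness), while the SVM on the combined features does.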
Editorial: Special issue on ground robots operating in dynamic, unstructured and large-scale outdoor environments
Real-world outdoor applications of ground robots have, to date, been limited primarily to remote inspection of suspected explosive devices and, with less success, to the broader domain of remote survey and inspection in hazardous environments. Such robots have almost exclusively been tele-operated. Also notable as examples of outdoor ground robots are the planetary rovers, currently deployed with great success on the surface of Mars. But with the rapid development of autonomous (driverless) cars, and the emergence of robotic vehicles in agriculture, it is likely that there will be significant growth in both the numbers and the scope of commercial ground robots in outdoor environments in the near future. For this special issue we called for papers presenting land robot systems deployed in the field under similarly realistic challenges. We sought papers focusing on any aspect of robotic systems, from vehicle design to overall system architecture and control, via terrain mapping, localization, mission planning and execution, with an emphasis on systems that fulfil a specific real-world task. We specified that robot or system innovations must be supported by extensive field results, and that field tests must be conducted under realistic and challenging conditions with respect to the terrain type, the scenario to be achieved, and/or the conditions within which the scenarios must be achieved.
Autonomous mobility scooters as assistive tools for the elderly
The aim of this research is to investigate the development of an autonomous navigation system that could be used as an assistive tool for elderly and disabled people in their activities of daily living. The navigation environment is an urban environment and the platform is a Mobility Scooter (MoS). To achieve this aim, a differentially steered MoS was modified to receive motion commands from a computer and outfitted with onboard sensors, including a Global Positioning System (GPS) receiver and two 2D planar laser range sensors. Perception methods were developed to detect the presence of an outdoor pedestrian walkway by processing the range data produced by the laser sensors to identify features that are typically found around walkways, such as curbs, low vegetation, walls and barriers. A method that uses GPS localisation information to plan and navigate a route in an outdoor urban environment was also developed. Extensive experimental work was conducted to test the accuracy, repeatability and usefulness of the sensory devices. The developed perception methodologies were evaluated in real-world environments, while the navigation algorithms were predominantly tested in virtual environments. A navigation system that plans a route in an urban environment and follows it using behaviours arranged in a hierarchy is presented and shown to be able to safely navigate an MoS along an outdoor pedestrian path.
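One of the walkway cues mentioned above, a curb edge in the transverse laser height profile, can be sketched as a simple step detector. The thresholds, smoothing window and profile below are invented for illustration; the thesis's actual feature extraction is more involved:

```python
import numpy as np

def detect_curb(heights, min_step=0.05, window=3):
    """Return indices where the smoothed transverse height profile changes
    by more than min_step metres over `window` samples -- a simple
    curb-edge cue (illustrative thresholds, not the thesis's parameters)."""
    smoothed = np.convolve(heights, np.ones(window) / window, mode="same")
    steps = np.abs(smoothed[window:] - smoothed[:-window])
    return np.flatnonzero(steps > min_step)

# Synthetic transverse profile: flat walkway at 10 cm, then a curb drop to road level.
profile = np.concatenate([np.full(20, 0.10), np.full(20, 0.0)])
edges = detect_curb(profile)
print(edges)  # indices clustered around the walkway/road transition
```

Smoothing before differencing keeps single-sample range noise from triggering the detector, at the cost of blurring the step over a few samples.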
3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments
Lifelong navigation of mobile robots is the ability to operate reliably over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have been the factors constraining the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology that provides rich 3D information about the environment, and with increased computational power, we can make growing use of semantic environmental information in navigation-related tasks. A navigation system has many subsystems that must operate in real time while competing for computational resources, such as the perception, localization and path planning systems. The main thesis proposed in this work is that we can utilize 3D information from the environment to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world, 3D-perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments. The discussion of these systems covers methods of 3D point cloud based object detection for finding the objects of semantic interest in the given navigation tasks, as well as the use of 3D information for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust, long-term autonomous operation.
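A small ingredient of such 3D perception pipelines, separating ground points from obstacle points in a cloud, can be sketched like this. The height-band heuristic is a toy stand-in for the plane-fit/RANSAC-style ground segmentation real systems use, and all data is synthetic:

```python
import numpy as np

def split_ground_obstacles(points, height_thresh=0.05):
    """Label points near the lowest height band as ground and the rest as
    obstacles. A toy stand-in for plane-fit/RANSAC ground segmentation;
    assumes the sensor is roughly level."""
    z0 = np.percentile(points[:, 2], 5)          # robust floor-height estimate
    ground = points[:, 2] < z0 + height_thresh   # within the thresh band of the floor
    return points[ground], points[~ground]

rng = np.random.default_rng(1)
# Synthetic cloud: a noisy flat floor plus a box-shaped obstacle above it.
floor = np.c_[rng.uniform(0, 5, (200, 2)), rng.normal(0.0, 0.01, 200)]
box = np.c_[rng.uniform(2, 2.5, (50, 2)), rng.uniform(0.3, 0.8, 50)]
cloud = np.vstack([floor, box])
ground, obstacles = split_ground_obstacles(cloud)
print(len(ground), len(obstacles))
```

Once obstacle points are isolated, they can be clustered into objects or projected into a costmap for dynamic obstacle avoidance.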
Discriminating Crop, Weeds and Soil Surface with a Terrestrial LIDAR Sensor
In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor were evaluated for vegetation sensing using distance and reflection measurements, aiming to detect and discriminate maize plants and weeds from the soil surface. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using exclusively one index, the height profile; the current system uses a combination of the two indices. The experiment was carried out in a maize field at growth stage 12–14, at 16 different locations selected to represent the widest possible density of the weeds Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), actual plant heights were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured height and actual plant height (R² = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor. This permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
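The binary regression step, regressing vegetation presence/absence on the two LIDAR readings, might look like this on synthetic data. The class means and spreads are invented stand-ins for the field measurements in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic stand-ins for the two LIDAR indices used in the study:
# height above ground (m) and reflection intensity per sampled point.
n = 200
soil_h = rng.normal(0.00, 0.01, n); soil_r = rng.normal(0.30, 0.05, n)
veg_h  = rng.normal(0.15, 0.05, n); veg_r  = rng.normal(0.60, 0.05, n)

X = np.c_[np.r_[soil_h, veg_h], np.r_[soil_r, veg_r]]
y = np.r_[np.zeros(n), np.ones(n)]  # 0 = bare soil, 1 = vegetation present

# Logistic regression of presence/absence on height and reflection.
model = LogisticRegression().fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```

With two well-separated indices the linear decision boundary of logistic regression is already sufficient, which matches the high vegetation-vs-ground accuracy reported in the abstract.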
Vegetation detection and terrain classification for autonomous navigation
This thesis introduces seven novel contributions for two perception tasks, vegetation detection and terrain classification, which are at the core of any control system for efficient autonomous navigation in outdoor environments. Regarding vegetation detection, we first describe a vegetation-index-based method (1), which relies on the absorption and reflectance properties of vegetation with respect to visible and near-infrared light. Second, a 2D/3D feature fusion (2), which imitates the human visual system in interpreting vegetation, is investigated. Alternatively, an integrated vision system (3) is proposed that combines visual perception-based and multi-spectral methods in a single device. An in-depth study of the colour and texture features of vegetation has been carried out, leading to robust and fast vegetation detection through an adaptive learning algorithm (4). In addition, a double check of passable vegetation detection (5) is realised, relying on the compressibility of vegetation: the less resistance vegetation offers, the more traversable it is. Regarding terrain classification, we introduce a structure-based method (6) to capture the world scene by inferring its 3D structure through a local point statistics analysis of LiDAR data. Finally, a classification-based method (7), which combines LiDAR data and visual information to reconstruct 3D scenes, is presented; here, object representation is described in more detail, enabling more object types to be classified.
Based on the success of the proposed perceptual inference methods in these environmental sensing tasks, we hope that this thesis will serve as a starting point for the further development of highly reliable perceptual inference methods.
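The vegetation-index method in contribution (1) is in the spirit of NDVI-style indices, which exploit exactly the visible/near-infrared contrast the abstract describes. A minimal sketch, with band values and threshold chosen purely for illustration:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index: vegetation absorbs red light
    and strongly reflects near-infrared, so NDVI is high over plants."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 scene: left column vegetation (high NIR, low red), right column soil.
nir_band = np.array([[0.8, 0.3], [0.8, 0.3]])
red_band = np.array([[0.1, 0.25], [0.1, 0.25]])
mask = ndvi(nir_band, red_band) > 0.3  # common, but scene-dependent, threshold
print(mask)
```

The normalisation makes the index largely insensitive to overall illumination, which is one reason such indices are attractive for outdoor perception.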