Vegetation detection and terrain classification for autonomous navigation
This thesis introduces seven novel contributions to two perception tasks, vegetation detection and terrain classification, which are at the core of any control system for efficient autonomous navigation in outdoor environments. Regarding vegetation detection, we first describe a vegetation index-based method (1), which relies on the absorption and reflectance properties of vegetation with respect to visible and near-infrared light, respectively. Second, a 2D/3D feature fusion method (2), which imitates the human visual system in interpreting vegetation, is investigated. Alternatively, an integrated vision system (3) is proposed that combines visual perception-based and multi-spectral methods in a single device. An in-depth study of the colour and texture features of vegetation has been carried out, leading to robust and fast vegetation detection through an adaptive learning algorithm (4). In addition, a double-check of passable vegetation detection (5) is realised, relying on the compressibility of vegetation: the less resistance vegetation offers, the more traversable it is. Regarding terrain classification, we introduce a structure-based method (6) that captures the world scene by inferring its 3D structure through a local point statistic analysis of LiDAR data. Finally, a classification-based method (7), which combines LiDAR data and visual information to reconstruct 3D scenes, is presented; it describes object representations in more detail, enabling more object types to be classified.
Based on the success of the proposed perceptual inference methods in these environmental sensing tasks, we hope that this thesis will serve as a starting point for the further development of highly reliable perceptual inference methods.
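The index-based method (1) is described above only at a high level. As an illustration of the general idea, the widely used NDVI combines red and near-infrared reflectance in exactly this way; the threshold value and function name below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def vegetation_index(nir, red):
    """Normalised Difference Vegetation Index (NDVI).

    Healthy vegetation absorbs most visible (red) light and strongly
    reflects near-infrared light, so NDVI approaches +1 over plants.
    `nir` and `red` are arrays of per-pixel reflectance values.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

# Classify pixels as vegetation with a simple (illustrative) threshold.
ndvi = vegetation_index([0.6, 0.2], [0.1, 0.3])
is_vegetation = ndvi > 0.3
```

A fixed threshold is only a baseline; the adaptive learning algorithm (4) mentioned above exists precisely because lighting and species variation make a single cut-off fragile.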
Real-time spatial modeling to detect and track resources on construction sites
For more than 10 years the U.S. construction industry has experienced over 1,000
fatalities annually. Many fatalities may have been prevented had the individuals and
equipment involved been more aware of and alert to the physical state of the environment
around them. Awareness may be improved by automatic 3D (three-dimensional) sensing
and modeling of the job site environment in real-time. Existing 3D modeling approaches
based on range scanning techniques are capable of modeling static objects only, and thus
cannot model in real-time dynamic objects in an environment comprised of moving
humans, equipment, and materials. Emerging prototype 3D video range cameras offer
another alternative by facilitating affordable, wide field of view, automated static and
dynamic object detection and tracking at frame rates better than 1Hz (real-time).
This dissertation presents empirical work and a methodology to rapidly create a
spatial model of construction sites, and in particular to detect, model, and track the position, dimension, direction, and velocity of static and moving project resources in real-time, based on range data obtained from a three-dimensional video range camera in a
static or moving position. Existing construction site 3D modeling approaches based on
optical range sensing technologies (laser scanners, rangefinders, etc.) and 3D modeling
approaches (dense, sparse, etc.) that offered potential solutions for this research are
reviewed. The choice of an emerging sensing tool and preliminary experiments with this
prototype sensing technology are discussed. These findings led to the development of a
range data processing algorithm based on three-dimensional occupancy grids which is
demonstrated in detail. Testing and validation of the proposed algorithms have been
conducted to quantify the performance of sensor and algorithm through extensive
experimentation involving static and moving objects. Experiments in indoor laboratory
and outdoor construction environments have been conducted with construction resources
such as humans, equipment, materials, or structures to verify the accuracy of the
occupancy grid modeling approach. Results show that modeling objects and measuring
their position, dimension, direction, and speed had an accuracy level compatible with the
requirements of active safety features for construction. Results demonstrate that video
rate 3D data acquisition and analysis of construction environments can support effective
detection, tracking, and convex hull modeling of objects. Exploiting rapidly generated
three-dimensional models for improved visualization, communications, and process
control has inherent value, broad application, and potential impact, e.g. as-built vs. as-planned comparison, condition assessment, maintenance, operations, and construction
activities control. In combination with effective management practices, this sensing
approach has the potential to assist equipment operators to avoid incidents that result in
human injury, death, or collateral damage on construction sites.
Civil, Architectural, and Environmental Engineering
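The occupancy-grid processing described above can be sketched in simplified form. The grid dimensions, resolution, and names below are illustrative assumptions, not the dissertation's implementation; the idea is simply to bin range-camera points into voxels and count returns:

```python
import numpy as np

class OccupancyGrid3D:
    """Minimal 3D occupancy grid: counts range returns per voxel.

    `origin` is the world coordinate of voxel (0, 0, 0) and `resolution`
    is the voxel edge length in metres. All parameters are illustrative.
    """
    def __init__(self, shape=(100, 100, 50), resolution=0.2,
                 origin=(0.0, 0.0, 0.0)):
        self.hits = np.zeros(shape, dtype=np.int32)
        self.resolution = resolution
        self.origin = np.asarray(origin, dtype=float)

    def insert_points(self, points):
        """Accumulate an (N, 3) array of range-camera points in metres."""
        idx = np.floor((points - self.origin) / self.resolution).astype(int)
        # Discard points that fall outside the grid bounds.
        valid = np.all((idx >= 0) & (idx < self.hits.shape), axis=1)
        np.add.at(self.hits, tuple(idx[valid].T), 1)

    def occupied(self, min_hits=1):
        """Boolean mask of voxels with at least `min_hits` returns."""
        return self.hits >= min_hits

grid = OccupancyGrid3D()
grid.insert_points(np.array([[1.0, 1.0, 1.0], [1.05, 1.0, 1.0]]))
```

Requiring more than one hit per voxel (`min_hits`) is one simple way to suppress the range noise that frame-rate 3D cameras exhibit; tracking then reduces to comparing occupied voxel sets between frames.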
Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review
Motion perception is a critical capability determining many aspects of insects' lives, including avoiding predators, foraging, and so forth. A good number of motion detectors have been identified in insects' visual pathways. Computational modelling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, shaped through millions of years of evolution, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insects' visual systems. These motion perception models, or neural networks, comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and the hardware realisation of these bio-inspired motion perception models.
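The direction selectivity discussed above can be illustrated with the classic Hassenstein-Reichardt elementary motion detector, a forerunner of the DSN models the review covers. This is a hedged sketch of the textbook correlator, not an implementation from the review itself:

```python
import numpy as np

def reichardt_output(frames, delay=1):
    """Hassenstein-Reichardt elementary motion detector (EMD).

    Correlates each photoreceptor's delayed signal with its neighbour's
    current signal; the difference of the two mirror-symmetric arms gives
    a direction-selective response (positive for rightward motion here).
    `frames` is a (time, space) array of luminance samples.
    """
    f = np.asarray(frames, dtype=float)
    cur, past = f[delay:], f[:-delay]
    # Left arm: delayed left input x current right input; right arm mirrored.
    rightward = past[:, :-1] * cur[:, 1:]
    leftward = past[:, 1:] * cur[:, :-1]
    return (rightward - leftward).sum()

# A bright spot stepping rightward one pixel per frame excites the detector.
stimulus = np.eye(5)
response = reichardt_output(stimulus)
```

Flipping the stimulus left-right reverses the sign of the response, which is the hallmark of direction selectivity that the more elaborate LGMD, DSN, and STMD models build upon.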
PATH PLANNING STRATEGIES FOR VISIBILITY ENHANCEMENT WITH UNMANNED AERIAL VEHICLES IN CLUTTERED ENVIRONMENTS
Ph.D., Doctor of Philosophy
Real aperture synthetically organised radar
EThOS - Electronic Theses Online Service, United Kingdom
Design and field trial measurement results for a portable and low cost VHF / UHF channel sounder platform for IoT propagation research
Propagation research is vital for informing the design of reliable VHF and UHF communications systems for the Internet of Things (IoT). In this paper, a cost-effective and highly portable system is proposed and then used to obtain propagation measurements in city and suburban scenarios at 71 MHz and 869.525 MHz. The system calculates the received power, power delay profile and channel frequency response. The portable sounding receiver uses readily available parts: an RTL-SDR (covering 27 MHz–1.7 GHz) and a Raspberry Pi with touchscreen. The Pi implements all the channel sounding signal processing algorithms in Python, in near real-time. Extracted propagation data and models are presented, from example city and suburban field trials incorporating pedestrian and car use - …
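The power delay profile mentioned above is conventionally estimated by sliding correlation against the known sounding sequence. The sketch below shows that principle on a toy two-path channel; the sequence length, delays, and amplitudes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def power_delay_profile(tx_seq, rx_samples):
    """Estimate a power delay profile (PDP) by sliding correlation.

    Cross-correlating the received samples with the known sounding
    sequence concentrates each propagation path into a peak at its
    delay; the squared magnitude of the correlation is the PDP.
    """
    tx = np.asarray(tx_seq, dtype=complex)
    rx = np.asarray(rx_samples, dtype=complex)
    # Keep only non-negative lags: index 0 is zero delay.
    corr = np.correlate(rx, tx, mode="full")[len(tx) - 1:]
    return np.abs(corr) ** 2

# Toy example: a pseudo-random sounding sequence through a two-path channel
# with a half-amplitude echo arriving 4 samples late.
rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=255)  # BPSK-like sounding sequence
rx = (np.concatenate([tx, np.zeros(10)])
      + 0.5 * np.concatenate([np.zeros(4), tx, np.zeros(6)]))
pdp = power_delay_profile(tx, rx)
```

The PDP shows a dominant peak at zero delay and a weaker one at the echo delay; on real hardware the same processing runs on the sampled RTL-SDR stream, and the channel frequency response follows from a Fourier transform of the estimated impulse response.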