Autonomous flight and remote site landing guidance research for helicopters
Automated low-altitude flight and landing in remote areas within a civilian environment are investigated, where initial cost, ongoing maintenance costs, and system productivity are important considerations. The approach taken has: (1) utilized technologies developed for military applications that are directly transferable to a civilian mission; (2) exploited and developed technology areas where new methods or concepts are required; and (3) undertaken research with the potential to lead to the innovative methods or concepts required to achieve a manual and fully automatic remote-area low-altitude flight and landing capability. The project has resulted in the definition of a system operational concept that includes a sensor subsystem, a sensor fusion/feature extraction capability, and a guidance and control law concept. These subsystem concepts have been developed to sufficient depth to enable further exploration within the NASA simulation environment and to support programs leading to flight test.
An Effective Multi-Cue Positioning System for Agricultural Robotics
The self-localization capability is a crucial component for Unmanned Ground
Vehicles (UGVs) in farming applications. Approaches based solely on visual cues
or on low-cost GPS are prone to failure in such scenarios. In this paper,
we present a robust and accurate 3D global pose estimation framework, designed
to take full advantage of heterogeneous sensory data. By modeling the pose
estimation problem as a pose graph optimization, our approach simultaneously
mitigates the cumulative drift introduced by motion estimation systems (wheel
odometry, visual odometry, ...), and the noise introduced by raw GPS readings.
Along with a suitable motion model, our system also integrates two additional
types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random
Field assumption. We demonstrate how using these additional cues substantially
reduces the error along the altitude axis and, moreover, how this benefit
spreads to the other components of the state. We report exhaustive experiments
combining several sensor setups, showing accuracy improvements ranging from 37%
to 76% with respect to the exclusive use of a GPS sensor. We show that our
approach provides accurate results even if the GPS unexpectedly changes
positioning mode. The code of our system, along with the acquired datasets, is
released with this paper.
Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
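A minimal sketch of the core idea above, fusing relative odometry edges and absolute GPS priors in a single least-squares pose graph. Poses are 2D positions with toy measurements here; the paper's system uses full 3D poses plus the DEM and MRF constraints. All names and weights are illustrative assumptions:

```python
# Pose graph fusing odometry edges (relative) with GPS priors (absolute).
import numpy as np
from scipy.optimize import least_squares

odom = [np.array([1.0, 0.1]), np.array([1.0, -0.1])]      # relative motion measurements
gps = {0: np.array([0.0, 0.0]), 2: np.array([2.1, 0.0])}  # noisy absolute fixes
w_odom, w_gps = 10.0, 1.0  # information weights: odometry trusted more locally

def residuals(x):
    poses = x.reshape(-1, 2)
    res = []
    for i, d in enumerate(odom):          # odometry edges: p[i+1] - p[i] ~ d
        res.append(w_odom * (poses[i + 1] - poses[i] - d))
    for i, z in gps.items():              # GPS priors: p[i] ~ z
        res.append(w_gps * (poses[i] - z))
    return np.concatenate(res)

x0 = np.zeros(3 * 2)                      # three poses, initialized at the origin
sol = least_squares(residuals, x0)
print(sol.x.reshape(-1, 2))               # drift-corrected trajectory
```

Weighting odometry edges more heavily than the GPS priors is what lets the optimizer smooth raw GPS noise while the absolute priors bound the cumulative odometric drift.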
Task-Driven Estimation and Control via Information Bottlenecks
Our goal is to develop a principled and general algorithmic framework for
task-driven estimation and control for robotic systems. State-of-the-art
approaches for controlling robotic systems typically rely heavily on accurately
estimating the full state of the robot (e.g., a running robot might estimate
joint angles and velocities, torso state, and position relative to a goal).
However, full state representations are often excessively rich for the specific
task at hand and can lead to significant computational inefficiency and
brittleness to errors in state estimation. In contrast, we present an approach
that eschews such rich representations and seeks to create task-driven
representations. The key technical insight is to leverage the theory of
information bottlenecks to formalize the notion of a "task-driven
representation" in terms of information theoretic quantities that measure the
minimality of a representation. We propose novel iterative algorithms for
automatically synthesizing (offline) a task-driven representation (given in
terms of a set of task-relevant variables (TRVs)) and a performant control
policy that is a function of the TRVs. We present online algorithms for
estimating the TRVs in order to apply the control policy. We demonstrate that
our approach results in significant robustness to unmodeled measurement
uncertainty both theoretically and via thorough simulation experiments
including a spring-loaded inverted pendulum running to a goal location.
Comment: 9 pages, 4 figures, abridged version accepted to ICRA2019; incorporates changes in the final conference submission
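For readers unfamiliar with the information bottleneck the abstract leverages, the generic form of the objective is sketched below. This is the textbook trade-off, not necessarily the paper's exact task-driven formulation:

```latex
% Generic information-bottleneck objective: compress X into T while
% preserving what T says about the task-relevant quantity Y.
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
% X: full state/measurements; T: compressed representation (here, the TRVs);
% Y: task-relevant outcome; \beta \ge 0 trades compression against relevance.
```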
Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior
This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic
causal model for predicting the behavior generated by modern percept-driven
robot plans. PHAMs represent aspects of robot behavior that cannot be
represented by most action models used in AI planning: the temporal structure
of continuous control processes, their non-deterministic effects, several modes
of their interference, and the achievement of triggering conditions in
closed-loop robot plans.
The main contributions of this article are: (1) PHAMs, a model of concurrent
percept-driven behavior, its formalization, and proofs that the model generates
probably, qualitatively accurate predictions; and (2) a resource-efficient
inference method for PHAMs based on sampling projections from probabilistic
action models and state descriptions. We show how PHAMs can be applied to
planning the course of action of an autonomous robot office courier based on
analytical and experimental results.
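An illustrative sketch of the sampling-based projection style of inference described above: rather than enumerating every outcome of a continuous, non-deterministic control process, draw trajectories from a probabilistic action model and estimate outcome probabilities. The dynamics, trigger, and numbers are hypothetical stand-ins, not the article's model:

```python
# Monte Carlo projection of a closed-loop action with noisy effects.
import random

def project_delivery(n_samples=1000, trigger_radius=0.5):
    successes = 0
    for _ in range(n_samples):
        pos = 0.0
        for _ in range(20):                       # closed-loop control steps
            pos += 1.0 + random.gauss(0.0, 0.3)   # non-deterministic effect
            if abs(pos - 18.0) < trigger_radius:  # triggering condition fires
                successes += 1
                break
    return successes / n_samples

print(f"estimated success probability: {project_delivery():.2f}")
```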
How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV
This work explores the feasibility of steering a drone with a (recurrent)
neural network, based on input from a forward looking camera, in the context of
a high-level navigation task. We set up a generic framework for training a
network to perform navigation tasks based on imitation learning. It can be
applied to both aerial and land vehicles. As a proof of concept we apply it to
a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a
room containing a number of obstacles. So far only feedforward neural networks
(FNNs) have been used to train UAV control. To cope with more complex tasks, we
propose the use of recurrent neural networks (RNNs) instead and successfully
train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision-based
control is a sequential prediction problem, known for its highly
correlated input data. The correlation makes training a network hard,
especially an RNN. To overcome this issue, we investigate an alternative
sampling method during training, namely window-wise truncated backpropagation
through time (WW-TBPTT). Further, end-to-end training requires a lot of data
which often is not available. Therefore, we compare the performance of
retraining only the Fully Connected (FC) and LSTM control layers with networks
which are trained end-to-end. Performing the relatively simple task of crossing
a room already reveals important guidelines and good practices for training
neural control networks. Different visualizations help to explain the behavior
learned.
Comment: 12 pages, 30 figures
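A hedged sketch of window-wise truncated backpropagation through time as the abstract describes it: rather than unrolling the LSTM over an entire, highly correlated flight sequence, sample fixed-length windows at random positions and backpropagate only within each window. The model, shapes, and the per-window reset of the hidden state are illustrative simplifications, not the paper's exact training setup:

```python
# WW-TBPTT-style training loop: randomly positioned windows decorrelate batches.
import torch
import torch.nn as nn

seq = torch.randn(500, 64)       # one flight: 500 steps of 64-d image features
targets = torch.randn(500, 4)    # 4-d control command per step
lstm, head = nn.LSTM(64, 128), nn.Linear(128, 4)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)

window = 20
for step in range(100):
    i = torch.randint(0, len(seq) - window, (1,)).item()  # random window start
    x = seq[i:i + window].unsqueeze(1)                    # (window, batch=1, feat)
    out, _ = lstm(x)                                      # fresh hidden state per window
    loss = nn.functional.mse_loss(head(out.squeeze(1)), targets[i:i + window])
    opt.zero_grad(); loss.backward(); opt.step()
```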
Simultaneous localization and map-building using active vision
An active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
Published version
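A minimal sketch of the uncertainty-based measurement selection mentioned above, under the assumption of an EKF-style SLAM filter: steer the active head toward the mapped feature whose predicted measurement is currently most uncertain, since measuring it promises the largest information gain. All matrices are toy values:

```python
# Pick the feature with the largest predicted innovation covariance.
import numpy as np

def innovation_covariance(P, H, R):
    """Predicted measurement uncertainty S = H P H^T + R."""
    return H @ P @ H.T + R

P = np.diag([0.2, 0.2, 0.05, 0.5, 0.01, 0.9])  # joint state covariance (toy)
R = 0.01 * np.eye(2)                            # stereo measurement noise
# One 2x6 measurement Jacobian per candidate feature (toy values):
feature_jacobians = [np.random.default_rng(k).standard_normal((2, 6)) for k in range(3)]

scores = [np.trace(innovation_covariance(P, H, R)) for H in feature_jacobians]
best = int(np.argmax(scores))
print(f"steer head to feature {best} (innovation trace {scores[best]:.3f})")
```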
A layered control architecture for mobile robot navigation
A thesis submitted to the University Research Degree Committee in fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY in Robotics.

This thesis addresses the problem of how to control autonomous mobile robot navigation in indoor environments in the face of sensor noise, imprecise information, uncertainty, and limited response time. The thesis argues that effective control of autonomous mobile robots can be achieved by organising low-level and higher-level control activities into a layered architecture. The low-level reactive control allows the robot to respond to contingencies quickly; the higher-level control allows the robot to make longer-term decisions and to arrange appropriate action sequences for task execution.

The thesis describes the design and implementation of a two-layer control architecture: a task-template-based sequencing layer and a fuzzy-behaviour-based low-level control layer. The sequencing layer works at the higher level of abstraction, interpreting a task plan and mediating and monitoring the controlling activities, while the low level performs fast computation in response to dynamic changes in the real world and carries out robust control under uncertainty. The organisation and fusion of fuzzy behaviours are described extensively for the construction of the low-level control system. A learning methodology is also developed to systematically learn fuzzy behaviours and the behaviour-selection network, thereby resolving the difficulties in configuring the low-level control layer.

The two-layer control system has been implemented and used to control a simulated mobile robot performing two tasks in simulated indoor environments. The effectiveness of the layered control and learning methodology is demonstrated through traces of controlling activities at the two levels. The results also support a general design methodology: the high level should guide the robot's actions, while the low level takes care of detailed control in the face of sensor noise and environment uncertainty in real time.
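An illustrative sketch of the kind of fuzzy behaviour fusion the low-level layer is built from: each behaviour proposes a command together with an activation degree, and commands are combined by activation-weighted averaging, a common defuzzification scheme. The behaviours and numbers are hypothetical, not the thesis's actual rule bases:

```python
# Activation-weighted fusion of two competing fuzzy behaviours.
def goal_seek(heading_err):
    return (-0.8 * heading_err, 1.0)                      # (command, activation)

def avoid(obstacle_dist):
    return (0.6, max(0.0, 1.0 - obstacle_dist / 2.0))     # activation grows as obstacle nears

def fuse(proposals):
    num = sum(cmd * act for cmd, act in proposals)
    den = sum(act for _, act in proposals)
    return num / den if den > 0 else 0.0

proposals = [goal_seek(0.4), avoid(0.8)]
print(f"fused steering command: {fuse(proposals):.2f} rad/s")
```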