High accuracy navigation in unknown environment using adaptive control
Aiming to reduce cycle time and improve tracking accuracy, a modified adaptive controller was developed which adapts autonomously to changing dynamic parameters. The platform is a robot with a vision-based sensory system; goal and obstacle angles are computed relative to the robot's orientation by image-processing software. The autonomous robots are programmed to navigate in unknown, unstructured environments containing multiple obstacles whose positions can change at any time. The approach is based on dynamic attractor and repulsive forces: differential equations produce vector fields that control the speed and direction of the robot. The new strategy was compared experimentally with an existing PID method and proved more effective in terms of behaviour and time response, and the calibration parameters required by PID control become unnecessary. The experiments were carried out on robots built as Middle Size League football players for RoboCup, testing pursuit of targets such as the ball, the goal, or any absolute position. Results showed high tracking accuracy and rapid response to moving targets; the dynamic control system achieves a good balance between fast movements and smooth behaviour.
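A minimal sketch of the attractor/repeller heading dynamics described in this abstract is given below. The gains, angular ranges, and integration step are illustrative assumptions, not the paper's values; the target bearing acts as an attractor on the heading and each obstacle bearing as a range-limited repeller.

```python
import numpy as np

def heading_rate(phi, psi_target, psi_obstacles,
                 k_target=2.0, k_obs=4.0, sigma=0.4):
    """Attractor/repeller dynamics for the robot heading phi (rad).

    psi_target attracts the heading; each bearing in psi_obstacles repels it,
    with strength decaying as a Gaussian in angular distance (illustrative
    gains, not the paper's calibration).
    """
    # Attractive term: pulls phi toward the target bearing.
    d_phi = -k_target * np.sin(phi - psi_target)
    # Repulsive terms: push phi away from each obstacle bearing.
    for psi_o in psi_obstacles:
        diff = phi - psi_o
        d_phi += k_obs * diff * np.exp(-diff**2 / (2 * sigma**2))
    return d_phi

# Simple Euler integration of the heading dynamics.
phi, dt = 0.0, 0.02
for _ in range(500):
    phi += dt * heading_rate(phi, psi_target=1.0, psi_obstacles=[0.5])
print(f"final heading: {phi:.3f} rad")
```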
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Reinforcement Learning for UAV Attitude Control
Autopilot systems are typically composed of an "inner loop" providing stability and control, while an "outer loop" is responsible for mission-level objectives, e.g. way-point navigation. Autopilot systems for UAVs are predominantly implemented using Proportional-Integral-Derivative (PID) control, which has demonstrated exceptional performance in stable environments; however, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control systems are an active area of research addressing the limitations of PID control, most recently through the use of reinforcement learning (RL), which has had success in other applications such as robotics. Previous work, however, has focused primarily on using RL in the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with the state-of-the-art RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO), and Proximal Policy Optimization (PPO). To investigate these unknowns we first developed an open-source high-fidelity simulation environment for training a quadrotor attitude-control flight controller through RL. We then use this environment to compare the RL controllers' performance to that of a PID controller, to identify whether RL is appropriate for high-precision, time-critical flight control.
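The comparison described above could be sketched roughly as follows. The environment id "QuadAttitude-v0", its observation/action conventions, and the PID gains are assumptions rather than the paper's actual simulation interface; the PPO calls follow the stable-baselines3 API.

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

# Hypothetical Gym-style attitude-control environment: observation is the
# angular-rate error, action is the motor command vector.
env = gym.make("QuadAttitude-v0")

# Train the inner-loop attitude policy with PPO.
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=200_000)

def pid_baseline(error, state, kp=0.6, ki=0.05, kd=0.02, dt=0.001):
    """Per-axis PID on the rate error; gains are illustrative, not tuned."""
    state["i"] += error * dt
    d = (error - state["e_prev"]) / dt
    state["e_prev"] = error
    return kp * error + ki * state["i"] + kd * d

# Roll out the learned policy; the PID baseline would be stepped through the
# same rate-setpoint episodes (via pid_baseline) to compare tracking error
# and response time.
obs, _ = env.reset(seed=0)
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```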
Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability.
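A minimal sketch of the multi-store idea behind the reference-view update is shown below: features observed repeatedly are promoted toward long-term memory, while unseen features decay and are eventually forgotten. The scores, thresholds, and data structure are illustrative assumptions; the paper's actual rules and spherical feature representation are not reproduced.

```python
def update_reference_view(stored, observed_ids,
                          promote_at=3, forget_below=-2):
    """stored: dict feature_id -> rehearsal score (illustrative values)."""
    for fid in observed_ids:
        stored[fid] = stored.get(fid, 0) + 1      # rehearse / add to short-term
    for fid in list(stored):
        if fid not in observed_ids:
            stored[fid] -= 1                      # decay unobserved features
            if stored[fid] < forget_below:
                del stored[fid]                   # forget stale features
    # Features with a high enough score count as long-term (stable) landmarks.
    return {fid for fid, score in stored.items() if score >= promote_at}
```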
Motion Planning for Optimal Information Gathering in Opportunistic Navigation Systems
Motion planning for optimal information gathering in an opportunistic navigation (OpNav) environment is considered. An OpNav environment can be thought of as a radio frequency signal landscape within which a receiver locates itself in space and time by extracting information from ambient signals of opportunity (SOPs). The receiver is assumed to draw only pseudorange-type observations from the SOPs, and such observations are fused through an estimator to produce an estimate of the receiver’s own states. Since not all SOP states in the OpNav environment may be known a priori, the receiver must estimate the unknown SOP states of interest simultaneously with its own states. In this work, the following problem is studied. A receiver with no a priori knowledge about its own states is dropped in an unknown, yet observable, OpNav environment. Assuming that the receiver can prescribe its own trajectory, what motion planning strategy should the receiver adopt in order to build a high-fidelity map of the OpNav signal landscape, while simultaneously localizing itself within this map in space and time? To answer this question, first, the minimum conditions under which the OpNav environment is fully observable are established, and the need for receiver maneuvering to achieve full observability is highlighted. Then, motivated by the fact that not all trajectories a receiver may take in the environment are equally beneficial from an information gathering point of view, a strategy for planning the motion of the receiver is proposed. The strategy is formulated in a coupled estimation and optimal control framework of a gradually identified system, where optimality is defined through various information-theoretic measures. Simulation results are presented to illustrate the improvements gained from adopting the proposed strategy over random and pre-defined receiver trajectories.
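A rough sketch of information-driven maneuver selection in this spirit is given below: from a set of candidate one-step maneuvers, pick the one whose predicted pseudorange measurements most increase the log-determinant of the position information matrix. The dynamics, measurement model (clock states omitted), noise level, and candidate set are simplified stand-ins, not the paper's formulation.

```python
import numpy as np

def pseudorange_jacobian(rx_pos, sop_positions):
    """Rows are unit line-of-sight vectors from each SOP to the receiver
    (receiver and SOP clock-bias columns omitted for brevity)."""
    rows = [(rx_pos - p) / np.linalg.norm(rx_pos - p) for p in sop_positions]
    return np.vstack(rows)

def choose_maneuver(rx_pos, candidates, sop_positions, P_prior, sigma=5.0):
    """Greedy one-step planner: maximize the posterior log-det information
    about the receiver position (illustrative information measure)."""
    best, best_gain = None, -np.inf
    info_prior = np.linalg.inv(P_prior)
    for dp in candidates:
        H = pseudorange_jacobian(rx_pos + dp, sop_positions)
        info_post = info_prior + H.T @ H / sigma**2   # EKF-style info update
        gain = np.linalg.slogdet(info_post)[1]
        if gain > best_gain:
            best, best_gain = dp, gain
    return best

# Example: two SOPs and four candidate one-step maneuvers.
sops = [np.array([100.0, 0.0]), np.array([0.0, 80.0])]
cands = [np.array(v, dtype=float) for v in ([5, 0], [-5, 0], [0, 5], [0, -5])]
print(choose_maneuver(np.zeros(2), cands, sops, P_prior=np.eye(2) * 100.0))
```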
A deep reinforcement learning based homeostatic system for unmanned position control
Deep Reinforcement Learning (DRL) has been proven capable of learning optimal control policies by minimising the error in dynamic systems. However, in many real-world operations the exact behaviour of the environment is unknown: random changes cause the system to reach different states for the same action. Applying DRL in such unpredictable environments is therefore difficult, as the states of the world cannot be known under non-stationary transition and reward functions. In this paper, a mechanism to encapsulate the randomness of the environment is suggested using a novel bio-inspired homeostatic approach based on a hybrid of the Receptor Density Algorithm (an artificial-immune-system-based anomaly detection technique) and a plastic spiking neuronal model. DRL is then introduced to run in conjunction with this hybrid model. The system is tested on a vehicle that autonomously re-positions itself in an unpredictable environment. Our results show that the DRL-based process control raised the accuracy of the hybrid model by 32%.
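A very rough sketch of the overall idea follows: an anomaly score computed from recent observations stands in for the Receptor Density / spiking-neuron hybrid and is appended to the state the DRL agent sees, so the policy can react to unpredictable changes. The wrapper, the running z-score, and the classic four-tuple step signature are illustrative assumptions, not the paper's model.

```python
import numpy as np

class HomeostaticWrapper:
    """Augments a Gym-style env's observation with an anomaly score."""

    def __init__(self, env, window=50):
        self.env, self.window, self.history = env, window, []

    def _anomaly(self, obs):
        # Mean absolute z-score of the current observation against a
        # sliding window of recent observations (stand-in for the hybrid
        # anomaly detector described in the abstract).
        self.history.append(obs)
        self.history = self.history[-self.window:]
        mu = np.mean(self.history, axis=0)
        sd = np.std(self.history, axis=0) + 1e-6
        return float(np.mean(np.abs((obs - mu) / sd)))

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return np.append(obs, self._anomaly(obs)), reward, done, info
```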