Motion planning algorithms for a group of mobile agents
Building autonomous mobile agents has been a major research effort for decades,
with cooperative mobile robotics receiving considerable attention in recent years. Motion
planning is a critical problem in deploying autonomous agents. In this research we
have developed two novel global motion planning schemes for a group of mobile agents
that eliminate some of the disadvantages of currently available methods.

The first is the homotopy method, in which planning is done in polynomial space.
The position of each mobile agent in the local frame is mapped to a complex number,
and a time-varying polynomial encodes the current positions of all agents: the degree
of the polynomial equals the number of agents, and its roots are the agents' positions
in the local frame at a given time. The polynomial is constructed by finding a
time-parameterized path from the initial polynomial to the goal polynomial (which
represent the initial and goal positions of the agents in the local frame) that avoids
the discriminant variety, the set of polynomials with multiple roots, in polynomial
space. This is equivalent to requiring that no two agents collide on the way from the
initial to the goal configuration.

The second is the homogeneous deformation method, based on continuum theory for
the motion of deformable bodies. A swarm of vehicles is considered at rest in an
initial configuration, with no restrictions on the initial shape or on the locations
of the vehicles within that shape. A motion plan is developed to move the swarm from
the initial configuration to a new configuration such that no vehicles collide at any
time instant. This is achieved via a linear map between the initial and desired final
configurations that is invertible at all times. Both proposed methods are
computationally attractive, and they facilitate motion coordination between groups of
mobile agents with limited or no sensing and communication.
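The core collision test of the homotopy method can be sketched numerically. This is a minimal illustration, not the thesis's actual path-construction algorithm: the agent positions and the straight-line path in coefficient space are hypothetical choices, and we simply verify that the interpolated family of polynomials never acquires a repeated root, i.e., never touches the discriminant variety.

```python
import numpy as np

def min_root_separation(coeffs):
    """Smallest pairwise distance between the roots of a polynomial."""
    r = np.roots(coeffs)
    return min(abs(r[i] - r[j])
               for i in range(len(r)) for j in range(i + 1, len(r)))

z0 = [-1.0, 1.0]          # initial agent positions, encoded as complex numbers
z1 = [-2.0, 2.0]          # goal agent positions
p0 = np.poly(z0)          # monic polynomial with roots z0: x^2 - 1
p1 = np.poly(z1)          # monic polynomial with roots z1: x^2 - 4

for t in np.linspace(0.0, 1.0, 21):
    pt = (1 - t) * p0 + t * p1          # a path in polynomial (coefficient) space
    # staying off the discriminant variety <=> roots stay distinct
    # <=> no two agents ever occupy the same point
    assert min_root_separation(pt) > 1.9
```

For this particular pair of configurations the interpolated polynomial is x^2 - (1 + 3t), whose roots stay at least distance 2 apart, so the straight-line path happens to be collision-free; in general the path must be constructed to steer around the discriminant variety.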
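The homogeneous deformation idea can be sketched the same way. The particular map and the linear-in-time interpolation below are hypothetical illustrations, not the thesis's construction: the swarm moves under a time-varying affine map x(t) = A(t) x0 + b(t), and as long as A(t) stays invertible, distinct agents are carried to distinct points, so no two vehicles can collide.

```python
import numpy as np

# initial swarm positions (one row per vehicle); hypothetical example data
X0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A1 = np.array([[2.0, 0.5], [0.0, 1.5]])   # desired final linear deformation
b1 = np.array([3.0, 1.0])                 # desired final translation

for t in np.linspace(0.0, 1.0, 21):
    A = (1 - t) * np.eye(2) + t * A1      # interpolated linear map
    b = t * b1
    # invertibility at every instant is the collision-freedom condition
    assert abs(np.linalg.det(A)) > 1e-9
    Xt = X0 @ A.T + b                     # swarm configuration at time t
    # distinct initial positions remain distinct under an invertible map
    dmin = min(np.linalg.norm(Xt[i] - Xt[j])
               for i in range(4) for j in range(i + 1, 4))
    assert dmin > 0.0
```

Because A1 here is triangular with positive diagonal, det A(t) = (1 + t)(1 + 0.5t) never vanishes; for a general target deformation the interpolation must be chosen so the determinant keeps a constant sign.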
Cybernetic automata: An approach for the realization of economical cognition for multi-robot systems
The multi-agent robotics paradigm has attracted much attention due to the
variety of pertinent applications that are well-served by the use of a multiplicity of
agents (including space robotics, search and rescue, and mobile sensor networks). The
use of this paradigm for most applications, however, demands economical, lightweight
agent designs, for reasons including longer operational life, lower economic cost,
and faster, more easily verified designs.
An important contributing factor to an agent’s cost is its control architecture.
Due to the emergence of novel implementation technologies carrying the promise of
economical implementation, we consider the development of a technology-independent
specification for computational machinery. To that end, the use of cybernetics toolsets
(control and dynamical systems theory) is appropriate, enabling a principled
specification of robotic control architectures in mathematical terms that can be
mapped directly to diverse implementation substrates.
This dissertation, hence, addresses the problem of developing a technology-independent
specification for lightweight control architectures to enable robotic agents
to serve in a multi-agent scheme. We present the principled design of static and dynamical
regulators that elicit useful behaviors, and integrate these within an overall
architecture for both single and multi-agent control. Since the use of control theory
can be limited in unstructured environments, a major focus of the work is on the engineering of emergent behavior.
The proposed scheme is highly decentralized, requiring only local sensing and
no inter-agent communication. Beyond several simulation-based studies, we provide
experimental results for a two-agent system, based on a custom implementation employing
field-programmable gate arrays.
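A static regulator of the kind described above can be sketched in a few lines. This is a hypothetical illustration in the spirit of the abstract, not the dissertation's actual design: a Braitenberg-style cross-coupled proportional law (the function name, sensor model, and gain are all assumptions) that elicits a steering behavior from purely local sensing, with no inter-agent communication and no internal state.

```python
def static_regulator(left_sensor, right_sensor, gain=0.5):
    """Memoryless proportional law: steer toward the stronger stimulus.

    Each sensor drives the opposite wheel, so a stronger reading on one
    side speeds up the far wheel and turns the agent toward the source.
    """
    left_wheel = gain * right_sensor
    right_wheel = gain * left_sensor
    return left_wheel, right_wheel
```

Because the law is a pure function of instantaneous sensor values, it maps directly onto combinational logic, which is one reason such specifications transfer cleanly to substrates like FPGAs.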
Learning Dynamic Priority Scheduling Policies with Graph Attention Networks
The aim of this thesis is to develop novel graph attention network-based models that automatically learn scheduling policies for effectively solving resource optimization problems, covering both deterministic and stochastic environments. The policy learning methods utilize imitation learning when expert demonstrations are accessible at low cost, and reinforcement learning otherwise, when reward engineering is feasible. By parameterizing the learner with graph attention networks, the framework is computationally efficient and yields scalable resource optimization schedulers that adapt to various problem structures.

This thesis addresses the problem of multi-robot task allocation (MRTA) under temporospatial constraints. Initially, robots with deterministic and homogeneous task performance are considered, leading to the development of the RoboGNN scheduler. Then, I develop ScheduleNet, a novel heterogeneous graph attention network model, to efficiently reason about coordinating teams of heterogeneous robots. Next, I address problems in the more challenging stochastic setting in two parts. Part 1: scheduling with stochastic and dynamic task completion times. The MRTA problem is extended by introducing human coworkers with dynamic learning curves and stochastic task execution. HybridNet, a hybrid network structure, has been developed that combines a heterogeneous graph-based encoder with a recurrent schedule propagator to carry out fast schedule generation in multi-round settings. Part 2: scheduling with stochastic and dynamic task arrival and completion times. With an application in failure-predictive plane maintenance, I develop a heterogeneous graph-based policy optimization (HetGPO) approach to enable learning robust scheduling policies in highly stochastic environments.

Through extensive experiments, the proposed framework has been shown to outperform prior state-of-the-art algorithms in different applications.
My research contributes several key innovations to the design of graph-based learning algorithms for operations research.
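The building block shared by the schedulers above is graph attention. The following is a minimal sketch of one attention-based aggregation step in plain NumPy; the dimensions, the fully connected graph, and the single-head formulation are hypothetical simplifications, not the architecture of RoboGNN, ScheduleNet, or HetGPO.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, adj, W, a):
    """One graph-attention aggregation step (single head).

    H: (n, d) node features; adj: (n, n) adjacency with self-loops;
    W: (d, k) shared linear map; a: (2k,) attention vector.
    """
    Z = H @ W
    out = np.zeros_like(Z)
    for i in range(len(Z)):
        nbrs = np.flatnonzero(adj[i])
        # attention logits e_ij = LeakyReLU(a . [z_i || z_j])
        e = leaky_relu(np.array([np.concatenate([Z[i], Z[j]]) @ a
                                 for j in nbrs]))
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()              # softmax over the neighborhood
        out[i] = alpha @ Z[nbrs]          # attention-weighted aggregation
    return out

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 5))               # 4 nodes (tasks/robots), 5-dim features
adj = np.ones((4, 4))                     # fully connected, with self-loops
W = rng.normal(size=(5, 3))
a = rng.normal(size=(6,))
out = gat_layer(H, adj, W, a)             # (4, 3) updated node embeddings
```

Because the attention weights are computed per edge and normalized per neighborhood, the same learned parameters W and a apply to graphs of any size, which is what makes such schedulers scale across problem structures.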
Life Long Learning In Sparse Learning Environments
Lifelong learning is a machine learning technique that deals with learning sequential tasks over time. It seeks to transfer knowledge from previous learning tasks to new ones in order to increase generalization performance and learning speed. Real-time learning environments in which many agents participate may provide learning opportunities, but these are spread out in time and space, beyond the geographical scope of any single learning agent. This research seeks to provide an algorithm and framework for lifelong learning among a network of agents in a sparse real-time learning environment. The work utilizes the robust knowledge representation of neural networks and makes use of both functional and representational knowledge transfer to accomplish this task. A new generative lifelong learning algorithm, utilizing cascade correlation and reverberating pseudo-rehearsal and incorporating a method for merging divergent lifelong learning paths, will be implemented.
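The pseudo-rehearsal component can be sketched with a linear stand-in for the network; everything here (the data, the linear model, the least-squares refit) is a hypothetical simplification of the cascade-correlation and reverberating details. Random probe inputs are passed through the old model to generate pseudo-items, which are rehearsed alongside the new task so prior knowledge is approximately preserved without storing any past training data.

```python
import numpy as np

rng = np.random.default_rng(42)

W_old = rng.normal(size=(3, 2))           # "network" trained on earlier tasks

probes = rng.normal(size=(100, 3))        # random inputs, not stored data
pseudo_targets = probes @ W_old           # old model's responses = pseudo-items

X_new = rng.normal(size=(30, 3))          # new-task data (hypothetical)
W_task = rng.normal(size=(3, 2))          # ground-truth mapping of the new task
y_new = X_new @ W_task

# refit on the union: new-task examples plus rehearsed pseudo-items
X = np.vstack([probes, X_new])
y = np.vstack([pseudo_targets, y_new])
W_new, *_ = np.linalg.lstsq(X, y, rcond=None)

# rehearsal keeps the refit model closer to the old input-output mapping
# than fitting the new task alone would
drift_with = ((probes @ W_new - pseudo_targets) ** 2).sum()
drift_no = ((probes @ W_task - pseudo_targets) ** 2).sum()
```

Since the joint least-squares solution can do no worse on the combined objective than the new-task-only solution, drift_with never exceeds drift_no; this is the sense in which rehearsing pseudo-items mitigates catastrophic forgetting.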