
    Navigation Control of an Automated Guided Underwater Robot using Neural Network Technique

    In recent years, underwater robots have come to play an important role in a variety of underwater operations. Research in this area has grown because autonomous underwater robots are applied to tasks such as exploring the underwater environment and its resources and carrying out scientific and military missions. These applications demand good maneuvering capability and high precision in following a specified track. However, controlling these underwater robots is very difficult because of the highly non-linear and dynamic characteristics of the underwater world. A logical answer to this problem is the application of non-linear controllers. As neural networks (NNs) are characterized by flexibility and an aptitude for dealing with non-linear problems, they are expected to be beneficial when applied to underwater robots. In this research, an artificial-intelligence system based on a neural network model is developed for navigating an automated underwater robot in an unpredictable and imprecise environment. The back-propagation algorithm is used for the steering analysis of the underwater robot when it encounters an obstacle to the left, right, or front, or above. After training, the neural network was used in the controller of the underwater robot. Simulations of the underwater robot under various obstacle conditions are shown using MATLAB.
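    As an illustration of the idea described above (in Python rather than the paper's MATLAB, and with an invented architecture and training patterns, not the paper's own), a tiny feedforward network can be trained by plain backpropagation to map four obstacle distances (left, right, front, top) to a steering command:

```python
import numpy as np

# Hypothetical sketch: 4 obstacle distances in, 1 steering command out.
# Weights, layer sizes, and training patterns are illustrative assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (4, 6))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (6, 1))   # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy patterns: obstacle close on the left -> steer right (target 1.0),
# close on the right -> steer left (0.0), clear path -> go straight (0.5).
X = np.array([[0.1, 1.0, 1.0, 1.0],
              [1.0, 0.1, 1.0, 1.0],
              [1.0, 1.0, 1.0, 1.0]])
y = np.array([[1.0], [0.0], [0.5]])

for _ in range(10000):                  # plain gradient-descent backprop
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)  # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)   # backpropagated hidden error
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

preds = sigmoid(sigmoid(X @ W1) @ W2).ravel()
print(np.round(preds, 2))
```

    After training, the network's outputs approach the three target steering commands, and the learned weights can be frozen and queried inside the robot's controller loop.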

    Intelligent Control Strategies for an Autonomous Underwater Vehicle

    The dynamic characteristics of autonomous underwater vehicles (AUVs) present a control problem that classical methods often cannot accommodate easily. Fundamentally, AUV dynamics are highly non-linear, and the relative similarity between the linear and angular velocities about each degree of freedom means that control schemes employed in other flight vehicles are not always applicable. In such instances, intelligent control strategies offer a more sophisticated approach to the design of the control algorithm. Neurofuzzy control is one such technique, which fuses the beneficial properties of neural networks and fuzzy logic in a hybrid control architecture. Such an approach is highly suited to the development of an autopilot for an AUV. Specifically, the adaptive network-based fuzzy inference system (ANFIS) is discussed in Chapter 4 as an effective new approach to neurally tuning course-changing fuzzy autopilots. However, the limitation of this technique is that it cannot be used to develop multivariable fuzzy structures. Consequently, the co-active ANFIS (CANFIS) architecture is developed and employed as a novel multivariable AUV autopilot in Chapter 5, whereby simultaneous control of the AUV yaw and roll channels is achieved. Moreover, this structure is flexible in that it is extended in Chapter 6 to perform on-line control of the AUV, leading to a novel autopilot design that can accommodate changing vehicle payloads and environmental disturbances. Whilst the typical ANFIS and CANFIS structures prove effective for AUV control system design, the well-known properties of radial basis function networks (RBFNs) offer a more flexible controller architecture. Chapter 7 presents a new approach to fuzzy modelling and employs both ANFIS and CANFIS structures with non-linear consequent functions of composite Gaussian form.
    This merger of CANFIS and an RBFN lends itself naturally to tuning with an extended form of the hybrid learning rule, and provides a very effective approach to intelligent controller development.
    The Sea Systems and Platform Integration Sector, Defence Evaluation and Research Agency, Winfrith
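    A single forward pass of the first-order Sugeno inference that ANFIS tunes can be sketched as follows; the rule centres, widths, and consequent coefficients below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(heading_err, params):
    """First-order Sugeno pass: params is a list of (centre, sigma, a, b)
    per rule; output is the normalised-firing-strength-weighted sum of
    the linear consequents a*x + b (ANFIS layers 1-5)."""
    w = np.array([gauss(heading_err, c, s) for c, s, _, _ in params])
    w_norm = w / w.sum()                       # layer 3: normalisation
    f = np.array([a * heading_err + b for _, _, a, b in params])
    return float(w_norm @ f)                   # layer 5: weighted sum

# Three illustrative rules over the heading error (made-up numbers):
rules = [(-1.0, 0.5, 0.4, -0.3),   # "error negative" -> steer one way
         ( 0.0, 0.5, 0.1,  0.0),   # "error zero"     -> hold course
         ( 1.0, 0.5, 0.4,  0.3)]   # "error positive" -> steer other way

print(anfis_forward(0.0, rules))   # symmetric rules -> near-zero rudder
```

    In ANFIS proper, the membership parameters and consequent coefficients of exactly this structure are what the hybrid learning rule adjusts from data.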

    A survey on uninhabited underwater vehicles (UUV)

    ASME Early Career Technical Conference, ASME ECTC, October 2-3, 2009, Tuscaloosa, Alabama, USA. This work presents the initiation of our underwater robotics research, which will focus on underwater vehicle-manipulator systems. Our aim is to build an underwater vehicle with a robotic manipulator that is robust and can compensate for hydrodynamic effects. In this paper, an overview of existing underwater vehicle systems, thruster designs, their dynamic models, and control architectures is given. The purpose and results of existing methods in underwater robotics are investigated.

    Adaptive low-level control of autonomous underwater vehicles using deep reinforcement learning

    Low-level control of autonomous underwater vehicles (AUVs) has been extensively addressed by classical control techniques. However, the variable operating conditions and hostile environments faced by AUVs have driven researchers towards the formulation of adaptive control approaches. The reinforcement learning (RL) paradigm is a powerful framework that has been applied in different formulations of adaptive control strategies for AUVs. However, the limitations of RL approaches have led to the emergence of deep reinforcement learning, which has become an attractive and promising framework for developing real adaptive control strategies to solve complex control problems for autonomous systems. Most existing applications of deep RL use video images to train the decision-making agent, but obtaining camera images solely for AUV control could be costly in terms of energy consumption, and rewards are not easily obtained directly from video frames. In this work we develop a deep RL framework for adaptive control applications of AUVs based on an actor-critic, goal-oriented deep RL architecture, which takes the available raw sensory information as input and outputs continuous control actions, the low-level commands for the AUV's thrusters. Experiments on a real AUV demonstrate the applicability of the proposed deep RL approach to an autonomous robot control problem.
    Carlucho, Ignacio; de Paula, Mariano; Acosta, Gerardo Gabriel (Universidad Nacional del Centro de la Provincia de Buenos Aires / CONICET / CIC, Centro de Investigaciones en Física e Ingeniería del Centro de la Provincia de Buenos Aires, Argentina); Wang, Sen; Petillot, Yvan (Heriot-Watt University, United Kingdom)
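    At a shape level, the actor-critic interface described above (raw sensor vector in, continuous thruster commands out, with a critic scoring state-action pairs) can be sketched as follows; the layer sizes and random, untrained weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
OBS_DIM, ACT_DIM, HID = 12, 5, 32   # e.g. 12 raw sensor readings, 5 thrusters

# Untrained, randomly initialised two-layer networks (illustrative only).
actor = {"W1": rng.normal(0.0, 0.1, (OBS_DIM, HID)),
         "W2": rng.normal(0.0, 0.1, (HID, ACT_DIM))}
critic = {"W1": rng.normal(0.0, 0.1, (OBS_DIM + ACT_DIM, HID)),
          "W2": rng.normal(0.0, 0.1, HID)}

def act(obs):
    """Deterministic policy: tanh keeps thruster commands in [-1, 1]."""
    return np.tanh(np.tanh(obs @ actor["W1"]) @ actor["W2"])

def q_value(obs, action):
    """Critic: scalar Q estimate for a state-action pair."""
    x = np.concatenate([obs, action])
    return float(np.tanh(x @ critic["W1"]) @ critic["W2"])

obs = rng.normal(size=OBS_DIM)       # stand-in for raw sensor data
u = act(obs)
print(u.shape, q_value(obs, u))
```

    Training would then adjust the critic towards temporal-difference targets and the actor along the critic's gradient, as in standard actor-critic schemes.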

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired (i.e., flocking, foraging) techniques with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty.
    We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
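    One way to compose reward terms of the kind named above (flocking maintenance, a shared mutual reward, and a collision penalty) is sketched below; the weights, the safety radius, and the exact term shapes are our assumptions, not the thesis's definitions:

```python
import numpy as np

def flock_reward(positions, leader, safe_dist=1.0, w_flock=1.0, w_coll=10.0):
    """positions: (n, 2) follower positions; leader: (2,) leader position.
    Returns one scalar shared by all UAVs (the 'mutual reward' idea)."""
    # Flocking maintenance: penalise mean distance to the leader.
    flock = -w_flock * np.linalg.norm(positions - leader, axis=1).mean()
    # Collision penalty: every pair closer than safe_dist is punished.
    coll = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < safe_dist:
                coll -= w_coll * (safe_dist - d)
    return flock + coll

# A well-spread formation around a leader at (1, 1): no collision penalty.
tight = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
print(flock_reward(tight, leader=np.array([1.0, 1.0])))
```

    Because the same scalar is returned for every UAV, each agent's local policy update is pushed towards configurations that benefit the whole flock.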

    Smart element aware gate controller for intelligent wheeled robot navigation

    Directing a wheeled robot in an unknown, dynamic environment with physical barriers is a difficult proposition. In particular, finding an optimal or near-optimal path that avoids obstacles is a major challenge. In this paper, a modified neuro-controller mechanism is proposed for controlling the movement of an indoor mobile robot. The proposed mechanism is based on the design of a modified Elman neural network (MENN) with an effective element-aware gate (MEEG) as the neuro-controller. This controller is updated to overcome both rigid and dynamic barriers in the indoor area. The proposed controller is implemented on the Khepera IV mobile robot in a practical manner. The practical results demonstrate that the proposed mechanism is very efficient in providing the shortest distance to the goal at maximum velocity compared with the MENN. Specifically, the MEEG outperforms the MENN, reducing the error rate by 58.33%.
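    The general idea of an Elman recurrence with an element-wise gate on the context units can be sketched as below; note that the exact gate form of the MEEG is not given in this abstract, so the gating shown here (an input-driven sigmoid gate multiplying the context) is purely our assumption, as are all sizes and weights:

```python
import numpy as np

rng = np.random.default_rng(2)
IN, HID, OUT = 3, 8, 2               # e.g. 3 range sensors -> 2 wheel speeds

Wx = rng.normal(0.0, 0.3, (IN, HID))   # input -> hidden
Wc = rng.normal(0.0, 0.3, (HID, HID))  # context -> hidden (Elman recurrence)
Wg = rng.normal(0.0, 0.3, (IN, HID))   # input -> gate (assumed form)
Wo = rng.normal(0.0, 0.3, (HID, OUT))  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, context):
    """One recurrent step: the gate scales each context unit in (0, 1)
    before it re-enters the hidden layer, unlike a plain Elman copy."""
    gate = sigmoid(x @ Wg)
    h = np.tanh(x @ Wx + (gate * context) @ Wc)
    return np.tanh(h @ Wo), h          # (output, new context)

ctx = np.zeros(HID)
for x in np.ones((4, IN)) * 0.5:       # feed a short sensor sequence
    out, ctx = step(x, ctx)
print(out.shape)
```

    In a plain MENN the context is fed back unscaled; the gate lets the network suppress stale context elements when the sensor input changes, which is the intuition behind element-aware gating.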

    Biologically inspired learning system

    Learning systems used on robots require either a priori knowledge in the form of models, rules of thumb, or databases, or require the robot to physically execute multitudes of trial solutions. The first requirement limits the robot's ability to operate in unstructured, changing environments, and the second limits the robot's service life and resources. In this research, a generalized approach to learning was developed through a series of algorithms that can be used to construct behaviors able to cope with unstructured environments through adaptation of both internal parameters and system structure, driven by a goal-based supervisory mechanism. Four main learning algorithms have been developed, along with a goal-directed random exploration routine. These algorithms all use the concept of learning from a recent memory in order to save the robot/agent from having to exhaustively execute all trial solutions. The first algorithm is a reactive online learning algorithm that uses supervised learning to find the sensor/action combinations that promote realization of a preprogrammed goal. It produces a feedforward neural network controller that is used to control the robot. The second algorithm is similar to the first in that it uses a supervised learning strategy, but it produces a neural network that considers past values, thus providing a non-reactive solution. The third algorithm is a departure from the first two in that it uses an unsupervised learning technique to learn the best actions for each situation the robot encounters. The last algorithm builds a graph of the situations encountered by the agent/robot in order to learn to associate the best actions with sensor inputs. It uses an unsupervised learning approach based on shortest paths to a goal situation in the graph in order to generate a non-reactive feedforward neural network.
    Test results were good: the first and third algorithms were tested on a formation-maneuvering task both in simulation and onboard mobile robots, while the second and fourth were tested in simulation.
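    The shortest-path idea behind the fourth algorithm can be sketched as follows: situations form a graph whose edges are labelled by actions, and the preferred action in each situation is the first edge on a shortest path to the goal situation. The toy situation graph below is invented for illustration:

```python
from collections import deque

# Invented situation graph: situation -> {action: next_situation}.
graph = {
    "corridor":  {"forward": "junction"},
    "junction":  {"left": "dead_end", "right": "open_area"},
    "dead_end":  {"back": "junction"},
    "open_area": {"forward": "goal"},
    "goal":      {},
}

def best_action(start, goal="goal"):
    """Breadth-first search for the shortest action sequence from start
    to goal; return the first action on that sequence (None at the goal
    or if the goal is unreachable)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path[0] if path else None
        for action, nxt in graph[node].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

print(best_action("corridor"))   # first step towards the goal
print(best_action("junction"))
```

    The situation-to-action pairs recovered this way would then serve as training targets for the non-reactive feedforward network the algorithm generates.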

    Neural network modeling of the dynamics of autonomous underwater vehicles for Kalman filtering and improved localization

    Autonomous underwater vehicles (AUVs) and remotely operated vehicles are used for a variety of underwater operations and deep-sea explorations. One of the major challenges faced by these vehicles is localization, i.e., the ability of a vehicle to identify its location with respect to a reference point. Kinematic extended Kalman filters have been used for localization in a method known as dead reckoning. The accuracy of such localization systems can be improved if a dynamic model is used instead of the kinematic model. The previously derived dynamic model was implemented in real time in UUVSim, a simulation environment. The dynamic model was tested against the kinematic model on various test courses and was found to be more stable and accurate. One major drawback of the dynamic model is that it requires numerous coefficients, and the process of determining them is extensive, requiring significant experimentation time. This research explores the use of a neural network architecture to replace these dynamic equations. Initial experiments have shown promising results for the neural network, although modifications will be required before the controller can be made universally applicable.
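    The substitution being explored can be sketched as a Kalman prediction step that calls a learned dynamics function instead of a coefficient-based model. Everything below is illustrative: the "network" is a single untrained random-weight layer standing in for the learned surrogate, and the filter is a simplified one-dimensional example rather than the vehicle's full state estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(0.0, 0.3, (2, 8))   # (state, control) -> hidden
W2 = rng.normal(0.0, 0.3, 8)        # hidden -> next state

def nn_dynamics(x, u):
    """Learned surrogate for x_{k+1} = f(x_k, u_k) (untrained stand-in)."""
    return float(np.tanh(np.array([x, u]) @ W1) @ W2)

def kf_step(x, P, u, z, Q=0.01, R=0.1):
    """One predict/update cycle with the surrogate in the predict step."""
    x_pred = nn_dynamics(x, u)          # prediction via the learned model
    P_pred = P + Q                      # simplified covariance propagation
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)   # measurement update
    return x_new, (1.0 - K) * P_pred

x, P = 0.0, 1.0
for z in [0.2, 0.25, 0.3]:              # simulated position measurements
    x, P = kf_step(x, P, u=0.1, z=z)
print(round(P, 3))                      # covariance shrinks with updates
```

    In the full scheme, the surrogate would first be trained on logged vehicle trajectories, replacing the laborious experimental identification of the dynamic model's coefficients.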

    COLREG-Compliant Collision Avoidance for Unmanned Surface Vehicle using Deep Reinforcement Learning

    Path following and collision avoidance, be it for unmanned surface vessels or other autonomous vehicles, are two fundamental guidance problems in robotics. For many decades they have been subject to academic study, leading to a vast number of proposed approaches. However, they have mostly been treated as separate problems and have typically relied on non-linear first-principles models with parameters that can only be determined experimentally. The rise of Deep Reinforcement Learning (DRL) in recent years suggests an alternative approach: end-to-end learning of the optimal guidance policy from scratch by means of trial and error. In this article, we explore the potential of Proximal Policy Optimization (PPO), a DRL algorithm with demonstrated state-of-the-art performance on continuous control tasks, when applied to the dual-objective problem of controlling an underactuated autonomous surface vehicle in a COLREGs-compliant manner such that it follows an a priori known desired path while avoiding collisions with other vessels along the way. Based on high-fidelity elevation and AIS tracking data from the Trondheim Fjord, an inlet of the Norwegian Sea, we evaluate the trained agent's performance in challenging, dynamic real-world scenarios where the ultimate success of the agent rests upon its ability to navigate non-uniform marine terrain while handling challenging but realistic vessel encounters.
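    The clipped surrogate objective at the core of PPO is standard and can be written out directly; the probability ratios and advantage estimates below are made-up numbers for illustration:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: L = mean(min(r*A, clip(r, 1-eps, 1+eps)*A)),
    where r = pi_new(a|s) / pi_old(a|s) and A is the advantage estimate."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

ratios = np.array([0.9, 1.0, 1.5])   # pi_new / pi_old per sample (made up)
advs = np.array([1.0, -0.5, 2.0])    # advantage estimates (made up)

print(ppo_clip_objective(ratios, advs))
```

    The clip keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough for continuous-control problems like the vessel-guidance task studied here.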