
    Mobile Robotics, Moving Intelligence


    Mobile Robot Navigation in Static and Dynamic Environments using Various Soft Computing Techniques

    Applications of autonomous mobile robots in fields such as industry, space, defence, transportation, and other social sectors are growing day by day. Mobile robots perform many tasks such as rescue operations, patrolling, disaster relief, planetary exploration, and material handling. An intelligent mobile robot is therefore required that can travel autonomously in various static and dynamic environments. The present research focuses on the design and implementation of intelligent navigation algorithms capable of steering a mobile robot autonomously in static as well as dynamic environments. Navigation and obstacle avoidance are among the most important tasks for any mobile robot, and the primary objective of this work is to improve the navigation accuracy and efficiency of the mobile robot using various soft computing techniques. A Hybrid Fuzzy (H-Fuzzy) architecture, a Cascade Neuro-Fuzzy (CN-Fuzzy) architecture, a Fuzzy-Simulated Annealing (Fuzzy-SA) algorithm, the Wind Driven Optimization (WDO) algorithm, and a Fuzzy-Wind Driven Optimization (Fuzzy-WDO) algorithm have been designed and implemented to solve the navigation problem of a mobile robot in different static and dynamic environments. The performance of these techniques is demonstrated through computer simulations in MATLAB and verified in real time on experimental mobile robots. The WDO and Fuzzy-WDO algorithms are found to be the most efficient in terms of path length and navigation time compared with the remaining techniques, which verifies the effectiveness and efficiency of these newly developed methods for mobile robot navigation. The results obtained with the proposed techniques are also compared with other established approaches such as fuzzy logic, the Genetic Algorithm (GA), neural networks, and Particle Swarm Optimization (PSO) to confirm their validity.
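
    As a rough illustration of the Wind Driven Optimization (WDO) technique named above, the sketch below implements the standard WDO velocity and position update over a generic cost function in Python. The coefficient values, bounds, and the toy quadratic cost are illustrative assumptions and are not taken from the thesis.

    # Hedged sketch of a Wind Driven Optimization loop applied to a generic
    # cost function.  All constants are illustrative assumptions.
    import numpy as np

    def wdo_minimize(cost, dim, n_parcels=20, iters=100, alpha=0.4, g=0.2,
                     RT=3.0, c=0.4, v_max=0.3, bounds=(-1.0, 1.0)):
        lo, hi = bounds
        x = np.random.uniform(lo, hi, (n_parcels, dim))   # air-parcel positions (candidate solutions)
        v = np.zeros((n_parcels, dim))                    # air-parcel velocities
        best_x, best_f = None, np.inf

        for _ in range(iters):
            f = np.array([cost(p) for p in x])            # "pressure" (fitness) of each parcel
            order = np.argsort(f)                         # rank parcels by pressure
            if f[order[0]] < best_f:
                best_f, best_x = f[order[0]], x[order[0]].copy()

            for rank, i in enumerate(order, start=1):
                # Velocity update: friction, gravity, pressure-gradient pull toward the
                # best parcel, and a Coriolis-like contribution from another dimension.
                coriolis = c * np.roll(v[i], 1) / rank
                v[i] = ((1 - alpha) * v[i] - g * x[i]
                        + RT * abs(1.0 / rank - 1.0) * (best_x - x[i]) + coriolis)
                v[i] = np.clip(v[i], -v_max, v_max)
                x[i] = np.clip(x[i] + v[i], lo, hi)
        return best_x, best_f

    # Example: minimize a toy quadratic "path cost" in five dimensions.
    sol, val = wdo_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
    print(sol, val)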

    Control of Real Mobile Robot Using Artificial Intelligence Technique

    An eventual objective of mobile robotics research is to endow the robot with a high level of intelligence, so that navigation in an unfamiliar environment can be accomplished using on-line sensory information, essentially without human intervention. This research focuses on the mechanical design of a real mobile robot, the analysis of its kinematic and dynamic models, and the selection of AI techniques based on perception, cognition, sensor fusion, and path planning and analysis, to be implemented on the robot so that it integrates different preliminary robotic behaviours (e.g. obstacle avoidance, wall and edge following, escaping dead ends, and target seeking). Both the navigational path and the time taken during navigation can be expressed as an optimization problem and thus analysed and solved using AI techniques; the optimization of path and navigation time rests on the kinematic stability and the intelligence of the robot controller. A set of linguistic fuzzy rules is developed to encode expert knowledge for various situations, and both Mamdani and Takagi-Sugeno fuzzy models are employed in the control algorithm for experimental purposes. A neural network is also used to enhance and optimize the controller's output, e.g. by introducing a learning ability. The cohesive framework combining a fuzzy inference system and a neural network enables the mobile robot to generate reasonable trajectories towards the target. Validation through both simulation and experiments showed that the mobile robot is capable of avoiding stationary obstacles, escaping traps, and reaching the goal efficiently.
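
    As a rough illustration of the linguistic fuzzy rules mentioned above, the sketch below implements a zero-order Takagi-Sugeno steering rule base for obstacle avoidance in Python. The membership breakpoints, rule table, and crisp steering outputs are illustrative assumptions, not the rule base developed in this research.

    # Hedged sketch of a zero-order Takagi-Sugeno fuzzy steering controller.
    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Linguistic terms for obstacle distance (m) seen by the left/front/right sensors.
    DIST = {"near": (0.0, 0.0, 1.0), "far": (0.5, 2.0, 2.0)}

    # Rule table: (left, front, right) terms -> crisp steering command (rad, positive = left).
    RULES = [
        (("far",  "far",  "far"),   0.0),   # path clear: go straight
        (("far",  "near", "far"),   0.6),   # obstacle ahead only: default left turn
        (("near", "near", "far"),  -0.8),   # left and front blocked: turn hard right
        (("near", "far",  "far"),  -0.3),   # wall on the left: drift right
        (("far",  "near", "near"),  0.8),   # front and right blocked: turn hard left
        (("far",  "far",  "near"),  0.3),   # wall on the right: drift left
    ]

    def fuzzy_steering(left, front, right):
        """Weighted-average (Sugeno) defuzzification over the rule table."""
        num, den = 0.0, 0.0
        for (tl, tf, tr), out in RULES:
            w = min(tri(left, *DIST[tl]), tri(front, *DIST[tf]), tri(right, *DIST[tr]))
            num += w * out
            den += w
        return num / den if den > 1e-9 else 0.0

    # Example: obstacle directly ahead, both sides clear -> positive (left) steering.
    print(fuzzy_steering(left=1.8, front=0.4, right=1.9))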

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking and foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking control for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt the deep deterministic policy gradient (DDPG) algorithm with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to use swarms aesthetically. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
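
    As a rough illustration of the reward structure described above (flocking maintenance, mutual reward, and collision penalty), the Python sketch below computes a per-UAV reward. The weights, reference spacing, safety distance, and exact terms are illustrative assumptions, not the reward used in the thesis.

    # Hedged sketch of a per-UAV flocking reward with the three terms named above.
    import numpy as np

    def flocking_reward(pos_i, leader_pos, neighbour_pos,
                        d_ref=2.0, d_safe=0.5, w_leader=1.0, w_mutual=0.5, w_col=10.0):
        # Flocking-maintenance term: penalize distance to the leader.
        r_leader = -w_leader * np.linalg.norm(pos_i - leader_pos)

        # Mutual term: keep each neighbour near a reference spacing d_ref.
        spacing_err = [abs(np.linalg.norm(pos_i - p) - d_ref) for p in neighbour_pos]
        r_mutual = -w_mutual * (np.mean(spacing_err) if spacing_err else 0.0)

        # Collision penalty: large negative reward if any neighbour is inside d_safe.
        collided = any(np.linalg.norm(pos_i - p) < d_safe for p in neighbour_pos)
        r_collision = -w_col if collided else 0.0

        return r_leader + r_mutual + r_collision

    # Example: a follower 1.5 m from the leader, with neighbours at 2.0 m and 0.4 m
    # (the second one violates the safety distance and triggers the penalty).
    r = flocking_reward(np.array([0.0, 0.0, 1.0]),
                        leader_pos=np.array([1.5, 0.0, 1.0]),
                        neighbour_pos=[np.array([2.0, 0.0, 1.0]), np.array([0.4, 0.0, 1.0])])
    print(r)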

    Analysis and Control of Mobile Robots in Various Environmental Conditions

    The world sees new inventions each day, made to make human life easier and more comfortable. In this global scenario, robots have proved to be an invention of great importance: they are used in almost every field of human activity, and continuous studies are being carried out to make them simpler to work with and able to operate in the human world without human interference. This work focuses on the navigation of such mobile robots. The aim of this thesis is to find the controller that produces an optimal path for the robot to reach its destination without colliding with or damaging itself or the environment. Techniques such as fuzzy logic, type-2 fuzzy logic, neural networks, and the artificial bee colony algorithm have been discussed and tested to identify the controller that yields the best path to the goal position. Both simulations and experiments have been carried out to determine the optimal path for the robot.
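
    As a rough illustration of the criterion such controllers are compared on (a short, collision-free path), the Python sketch below scores a candidate path by its length plus a penalty for passing too close to obstacles. The obstacle layout, clearance radius, and penalty weight are illustrative assumptions, not values from the thesis.

    # Hedged sketch of a path-cost function: length plus an obstacle-clearance penalty.
    import numpy as np

    def path_cost(waypoints, obstacles, clearance=0.3, w_penalty=100.0):
        """waypoints: (N, 2) points from start to goal; obstacles: list of (x, y, radius)."""
        pts = np.asarray(waypoints, dtype=float)

        # Path-length term: sum of segment lengths between consecutive waypoints.
        length = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

        # Collision term: penalize any waypoint inside an obstacle's inflated radius.
        penalty = 0.0
        for ox, oy, r in obstacles:
            d = np.linalg.norm(pts - np.array([ox, oy]), axis=1)
            penalty += float(np.sum(np.maximum(0.0, (r + clearance) - d)))

        return length + w_penalty * penalty

    # Example: a straight path through one obstacle is penalized, a detour is not.
    obs = [(1.0, 0.0, 0.4)]
    print(path_cost([(0, 0), (1, 0), (2, 0)], obs))   # heavily penalized
    print(path_cost([(0, 0), (1, 1), (2, 0)], obs))   # about 2.83, no penalty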

    Optimized state feedback regulation of 3DOF helicopter system via extremum seeking

    In this paper, an optimized state feedback regulator for a 3-degree-of-freedom (3DOF) helicopter is designed via the extremum seeking (ES) technique. Multi-parameter ES is applied to optimize the tracking performance by tuning a State Vector Feedback with Integration of the Control Error (SVFBICE) controller. A discrete multivariable version of ES is developed to minimize a cost function that measures the performance of the controller, defined in terms of the error between the actual and desired axis positions. The controller parameters are updated online as the optimization takes place, which significantly decreases the time needed to obtain optimal controller parameters. Simulations were conducted for the online optimization under both fixed and varying operating conditions. The results demonstrate the usefulness of ES for preserving the maximum attainable performance.
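
    As a rough illustration of the discrete multi-parameter extremum seeking loop described above, the Python sketch below dithers each controller gain at its own frequency, demodulates the measured cost, and steps the gains against the estimated gradient. The dither amplitudes, frequencies, step size, and the toy quadratic cost are illustrative assumptions, not the paper's tuning of the SVFBICE controller.

    # Hedged sketch of discrete multivariable perturbation-based extremum seeking.
    import numpy as np

    def extremum_seek(cost, theta0, iters=5000, gamma=0.1,
                      amps=(0.2, 0.2), freqs=(0.9, 1.3)):
        theta = np.asarray(theta0, dtype=float)
        amps, freqs = np.asarray(amps), np.asarray(freqs)
        for k in range(iters):
            dither = amps * np.sin(freqs * k)   # one sinusoidal dither per parameter
            J = cost(theta + dither)            # evaluate the closed-loop cost once
            # Demodulate and integrate: on average this is a gradient-descent step.
            # (A high-pass filter on J is often added; it is omitted here for brevity.)
            theta -= gamma * J * dither
        return theta

    # Example: a toy quadratic cost whose minimum is at gains (2.0, -1.0);
    # the estimate should settle near that point.
    print(extremum_seek(lambda th: (th[0] - 2.0) ** 2 + (th[1] + 1.0) ** 2,
                        theta0=[0.0, 0.0]))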