
    Sliding Mode Control for Trajectory Tracking of a Non-holonomic Mobile Robot using Adaptive Neural Networks

    In this work, a sliding mode control method for a non-holonomic mobile robot using an adaptive neural network is proposed. Because of the non-holonomic property and restricted mobility, trajectory tracking of this system has been an active research topic for the last ten years. The proposed control structure combines a feedback linearization model, based on a nominal kinematic model, with a practical design that couples an indirect neural adaptation technique with sliding mode control to compensate for the dynamics of the robot. A neural sliding mode controller approximates the equivalent control in the neighbourhood of the sliding manifold, using an online adaptation scheme. A sliding control term is appended to ensure that the neural sliding mode control achieves a stable closed-loop system for the trajectory-tracking control of a mobile robot with unknown non-linear dynamics. The proposed technique also reduces the steady-state error through the online adaptive neural network with sliding mode control; the design is based on Lyapunov theory. Experimental results show that the proposed method is effective in controlling mobile robots with large dynamic uncertainties.
    Fil: Rossomando, Francisco Guido; Soria, Carlos Miguel; Carelli Albarracin, Ricardo Oscar. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - San Juan. Instituto de Automática. Universidad Nacional de San Juan. Facultad de Ingeniería; Argentina
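The combination described above — a learned estimate of the equivalent control plus a robust switching term — can be sketched in a toy one-dimensional loop. Everything here (the constant disturbance `d`, the gradient-style adaptation law standing in for the neural network, the gain values) is an illustrative assumption, not the paper's actual design:

```python
import numpy as np

def simulate(steps=500, dt=0.01, lam=2.0, k=1.5, phi=0.05, gamma=0.5):
    """Toy tracking-error loop e' = d - u, where the bounded constant d
    stands in for unknown robot dynamics.  u_hat plays the role of the
    neural network's online estimate of the equivalent control; the tanh
    term is a smoothed switching control (boundary layer phi) that
    absorbs the residual approximation error."""
    e, u_hat, d = 1.0, 0.0, 0.4
    for _ in range(steps):
        s = lam * e                       # first-order sliding variable
        u = u_hat + k * np.tanh(s / phi)  # equivalent + robust terms
        u_hat += gamma * s * dt           # gradient-style adaptation law
        e += (d - u) * dt                 # error dynamics under control
    return abs(e)
```

With the adaptive term switched on, the steady-state error shrinks well below what the switching term's boundary layer alone would leave — the qualitative effect the abstract claims.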

    Unsupervised Neural Network for the Control of a Mobile Robot

    This article introduces an unsupervised neural architecture for the control of a mobile robot. The system allows incremental learning of the plant during robot operation, with robust performance despite unexpected changes of robot parameters such as wheel radius and inter-wheel distance. The model combines Vector Associative Map (VAM) learning and associative learning, enabling the robot to reach targets at arbitrary distances without knowledge of the robot kinematics and without trajectory recording, but by relating wheel velocities to robot movements.
    Sloan Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499)

    A Real-Time Unsupervised Neural Network for the Low-Level Control of a Mobile Robot in a Nonstationary Environment

    This article introduces a real-time, unsupervised neural network that learns to control a two-degree-of-freedom mobile robot in a nonstationary environment. The neural controller, termed neural NETwork MObile Robot Controller (NETMORC), combines associative learning and Vector Associative Map (VAM) learning to generate transformations between spatial and velocity coordinates. As a result, the controller learns the wheel velocities required to reach a target at an arbitrary distance and angle. The transformations are learned during an unsupervised training phase, during which the robot moves as a result of randomly selected wheel velocities; the robot learns the relationship between these velocities and the resulting incremental movements. Aside from being able to reach stationary or moving targets, the NETMORC structure also enables the robot to perform successfully in spite of disturbances in the environment, such as wheel slippage, or changes in the robot's plant, including changes in wheel radius, changes in inter-wheel distance, or changes in the internal time step of the system. Finally, the controller is extended to include a module that learns an internal odometric transformation, allowing the robot to reach targets when visual input is sporadic or unreliable.
    Sloan Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499)
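The motor-babbling idea in the abstract — random wheel velocities paired with the resulting movements — can be illustrated with a drastically simplified stand-in for VAM learning: an LMS rule that learns the linear map from wheel speeds to body velocities of a differential-drive plant. The wheel radius `r`, wheelbase `L`, and learning rate are assumed values for the demo, not NETMORC's:

```python
import numpy as np

def babble_and_learn(n=2000, r=0.05, L=0.30, lr=0.01, seed=0):
    """Unsupervised training phase: random wheel velocities drive the
    true differential-drive kinematics, and an LMS rule learns the 2x2
    map W from wheel speeds (w_left, w_right) to body (v, omega)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((2, 2))                       # learned forward model
    for _ in range(n):
        w = rng.uniform(-1, 1, 2)              # random wheel velocities
        v_true = np.array([r * (w[0] + w[1]) / 2,   # linear velocity
                           r * (w[1] - w[0]) / L])  # angular velocity
        err = v_true - W @ w                   # prediction error
        W += lr * np.outer(err, w)             # LMS update
    return W
```

Once `W` is learned, commanding a desired `(v, omega)` is just `np.linalg.solve(W, desired)` — the inverse relationship the abstract says the robot acquires without knowing its kinematics.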

    Controlling a mobile robot with a biological brain

    The intelligent controlling mechanism of a typical mobile robot is usually a computer system. Some recent research is ongoing in which biological neurons are cultured and trained to act as the brain of an interactive real-world robot, thereby either completely replacing, or operating in a cooperative fashion with, a computer system. Studying such hybrid systems can provide distinct insights into the operation of biological neural structures; such research therefore has immediate medical implications as well as enormous potential in robotics. The main aim of the research is to assess the computational and learning capacity of dissociated cultured neuronal networks. A hybrid system incorporating closed-loop control of a mobile robot by a dissociated culture of neurons has been created. The system is flexible and allows for closed-loop operation with either a hardware robot or its software simulation. The paper provides an overview of the problem area, gives an idea of the breadth of present ongoing research, establishes a new system architecture and, as an example, reports the results of experiments conducted with real-life robots.

    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near range path navigation of a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; the robot can therefore navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
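The comparison-to-command step can be sketched as a minimal differential steering rule. The sign convention (stronger left-side looming speeds up the left wheel, turning the robot right, away from the hazard) and the `base`/`gain` values are assumptions for illustration, not the paper's actual mapping:

```python
def lgmd_steering(left_exc, right_exc, base=0.4, gain=0.6):
    """Map the two LGMD excitations (assumed normalized to 0..1) to
    wheel speeds.  The wheel on the side of the stronger looming signal
    speeds up, so the robot turns away from that side."""
    vl = base + gain * left_exc    # left wheel speed
    vr = base + gain * right_exc   # right wheel speed
    return vl, vr
```

With equal excitations the robot drives straight; any left/right imbalance produces a proportional turn away from the stimulated side, which is the step-by-step avoidance behaviour the abstract describes.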

    Neural Network Controller Design for a Mobile Robot Navigation; a Case Study

    Mobile robots are widely applied in various aspects of human life. The main issue for this type of robot is how to navigate safely to reach a goal or finish an assigned task when operating autonomously in a dynamic and uncertain environment. The application of artificial intelligence, namely neural networks, can provide a ”brain” for the robot to navigate safely while completing the assigned task. By applying a neural network, the complexity of mobile robot control can be reduced by choosing the right model of the system, obtained either from mathematical modelling or directly from sensory data. In this study, we compare methods from previous research that apply neural networks to mobile robot navigation. The comparison starts by considering the right mathematical model for the robot, obtaining the Jacobian matrix for online training, and feeding the resulting input model to the designed neural network layers in order to estimate the position of the robot. From this literature study, it is concluded that modelling both the kinematics and the dynamics of the robot yields better performance, since the exact parameters of the system are then known.
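The Jacobian mentioned above can be made concrete for one common choice of kinematic model. Assuming a unicycle model x' = v·cos θ, y' = v·sin θ, θ' = ω (an illustrative choice; the surveyed papers may use other models), the Jacobian of the pose rates with respect to the commands (v, ω) is:

```python
import numpy as np

def unicycle_jacobian(theta):
    """Jacobian d(x', y', theta') / d(v, omega) for the unicycle model
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    return np.array([[np.cos(theta), 0.0],
                     [np.sin(theta), 0.0],
                     [0.0,           1.0]])
```

During online training this matrix is what lets a pose error be back-propagated into a command correction, e.g. a gradient-style step `delta_u = lr * J.T @ pose_error` (a hypothetical update rule, shown only to indicate where the Jacobian enters).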

    Use of human gestures for controlling a mobile robot via adaptive CMAC network and fuzzy logic controller

    Mobile robots with manipulators have been more and more commonly applied in extreme and hostile environments to assist or even replace human operators in complex tasks. In addition to autonomous abilities, mobile robots need to support a human–robot interaction control mode that enables human users to easily control or collaborate with them. This paper proposes a system which uses human gestures to control an autonomous mobile robot integrating a manipulator and a video surveillance platform. A human user can control the mobile robot just as one drives an actual vehicle from its driving cab. The proposed system obtains the human's skeleton-joint information using a motion sensing input device; this is then recognized and interpreted into a set of control commands. The implementation, chosen on the basis of the available training data and the requirement of real-time performance, combines an adaptive cerebellar model articulation controller (CMAC) neural network, a finite state machine, a fuzzy controller, and purpose-designed gesture recognition and control command generation modules. These algorithms work together to implement the steering and velocity control of the mobile robot in real time. The experimental results demonstrate that the proposed approach can conveniently control a mobile robot using the virtual driving method, with smooth manoeuvring trajectories at various speeds.
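The finite-state-machine part of the pipeline — mapping recognized gestures to commands that are only valid in certain states — can be sketched as a transition table. The gesture names and states here are hypothetical; the paper's actual gesture vocabulary and state set are not given in the abstract:

```python
# Hypothetical gesture-to-command FSM; states and gestures are assumed.
TRANSITIONS = {
    ("idle",    "raise_both_hands"): ("driving", "START"),
    ("driving", "raise_both_hands"): ("idle",    "STOP"),
    ("driving", "lean_left"):        ("driving", "STEER_LEFT"),
    ("driving", "lean_right"):       ("driving", "STEER_RIGHT"),
}

def step(state, gesture):
    """Advance the FSM: a recognized gesture yields a control command
    only in states where it is valid; otherwise the state is unchanged
    and no command is emitted."""
    return TRANSITIONS.get((state, gesture), (state, None))
```

This separation lets the gesture recognizer fire freely while the FSM guarantees that, say, a steering gesture is ignored unless the robot is already in the driving state.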