Neural Networks in Mobile Robot Motion
This paper deals with path planning and intelligent control of an
autonomous robot that should move safely in a partially structured environment.
This environment may contain any number of obstacles of arbitrary shape and
size; some of them are allowed to move. We describe our approach to solving the
motion-planning problem in mobile robot control using a neural-network-based
technique. Our method for constructing a collision-free path for a robot moving
among obstacles is based on two neural networks. The first neural network
determines the "free" space using ultrasound range-finder data. The
second neural network "finds" a safe direction for the next section of the
robot's path in the workspace while avoiding the nearest obstacles. Simulation
examples of paths generated with the proposed technique are presented.
Comment: 9 pages
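The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, weights, sensor count, and the names `plan_step`, `free_space_net`, and `heading_net` are all assumptions made for the example.

```python
# Sketch of a two-network planning step: network 1 labels workspace
# sectors as free/occupied from ultrasound ranges; network 2 maps the
# free-space map plus the goal bearing to a safe heading for the next
# path section. Weights here are random placeholders for illustration.
import math, random

random.seed(0)

def mlp(weights, x):
    """Single hidden-layer perceptron with tanh units."""
    w1, w2 = weights
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return [math.tanh(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w2]

def make_weights(n_in, n_hidden, n_out):
    rnd = lambda n: [random.uniform(-0.5, 0.5) for _ in range(n)]
    return ([rnd(n_in) for _ in range(n_hidden)],
            [rnd(n_hidden) for _ in range(n_out)])

N_SENSORS = 8                                          # ultrasound readings
free_space_net = make_weights(N_SENSORS, 6, N_SENSORS)  # ranges -> free map
heading_net = make_weights(N_SENSORS + 1, 6, 1)         # map + goal -> steer

def plan_step(ranges, goal_bearing):
    # Stage 1: per-sector "free" score in (-1, 1); > 0 means traversable.
    free_map = mlp(free_space_net, ranges)
    # Stage 2: steering command in (-1, 1) for the next path section.
    steer = mlp(heading_net, free_map + [goal_bearing])[0]
    return free_map, steer

free_map, steer = plan_step([2.0, 1.5, 0.3, 0.2, 1.8, 2.5, 2.2, 1.9], 0.4)
print(len(free_map), -1.0 < steer < 1.0)
```

In a real system both networks would of course be trained (e.g. on labelled range scans and demonstrated safe headings) rather than randomly initialised; the sketch only shows how the two stages chain together.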
Near range path navigation using LGMD visual neural networks
In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically
inspired visual neural networks, the lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
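The comparison of the two LGMD outputs and their translation into wheel commands can be sketched with a simple differential rule. The function name, the gain, and the numeric values are illustrative assumptions, not the authors' tuned parameters.

```python
# Sketch of the binocular LGMD steering rule: the difference between
# the two LGMD excitations (0..1) steers the robot away from the side
# with the stronger looming signal.
def wheel_commands(lgmd_left, lgmd_right, base_speed=0.3, gain=0.5):
    """Map a pair of LGMD excitations to (left, right) wheel speeds."""
    turn = gain * (lgmd_right - lgmd_left)   # stronger right -> turn left
    return base_speed - turn, base_speed + turn

# Obstacle looming on the right: left wheel slows, right wheel speeds
# up, so the robot veers left, away from the threat.
l, r = wheel_commands(lgmd_left=0.1, lgmd_right=0.8)
print(l < r)
```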
Design of an adaptive neural predictive nonlinear controller for nonholonomic mobile robot system based on posture identifier in the presence of disturbance
This paper proposes an adaptive neural predictive nonlinear controller to guide a nonholonomic wheeled mobile robot while tracking trajectories with continuous and discontinuous gradients. The controller consists of two models that describe the kinematics and dynamics of the mobile robot system, together with a feedforward neural controller. The models are a modified Elman neural network and a feedforward multi-layer perceptron, respectively. The modified Elman neural network model is trained in off-line and on-line stages to guarantee that the outputs of the model accurately represent the actual outputs of the mobile robot system. The trained neural model acts as the position and orientation identifier. The feedforward neural controller is trained off-line, and its weights are adapted on-line to find the reference torques that control the steady-state outputs of the mobile robot system. The feedback neural controller is based on the posture neural identifier and a quadratic-performance-index optimization algorithm that finds the optimal torque action in the transient state for N-step-ahead prediction. A general back-propagation algorithm is used to train the feedforward neural controller and the posture neural identifier. Simulation results show the effectiveness of the proposed adaptive neural predictive control algorithm, demonstrated by the minimised tracking error and the smoothness of the torque control signal obtained under bounded external disturbances.
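The N-step-ahead predictive step with a quadratic performance index can be sketched as below. The paper's predictor is a trained modified-Elman posture identifier and its actions are torques; in this sketch a simple unicycle kinematic model stands in for the identifier, the action is a (velocity, turn-rate) pair, and the optimiser is a coarse grid search. All numeric values are assumptions for illustration.

```python
# Sketch: pick the action that minimises a quadratic tracking cost over
# N predicted steps. A unicycle model stands in for the trained
# posture identifier described in the abstract.
import math

def predict(pose, v, w, dt=0.1):
    """One-step prediction of (x, y, heading) under action (v, w)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt, y + v * math.sin(th) * dt, th + w * dt)

def n_step_cost(pose, v, w, reference, N=5):
    """Quadratic performance index over an N-step prediction horizon."""
    cost = 0.0
    for k in range(N):
        pose = predict(pose, v, w)
        rx, ry = reference[k]
        cost += (pose[0] - rx) ** 2 + (pose[1] - ry) ** 2
    return cost

def best_action(pose, reference):
    """Coarse grid search in place of a gradient-based optimiser."""
    candidates = [(v, w) for v in (0.0, 0.5, 1.0) for w in (-0.5, 0.0, 0.5)]
    return min(candidates, key=lambda a: n_step_cost(pose, *a, reference))

# Track a straight line along +x from the origin: the best action is
# full speed ahead with no turning.
ref = [(0.1 * (k + 1), 0.0) for k in range(5)]
print(best_action((0.0, 0.0, 0.0), ref))
```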
Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated
Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to
the image of an approaching object. These neurons are called the lobula giant movement
detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the
development of an LGMD model for use as an artificial collision detector in robotic applications.
To date, robots have been equipped with only a single, central artificial LGMD sensor, and this
triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly,
for a robot to behave autonomously, it must react differently to stimuli approaching from
different directions. In this study, we implement a bilateral pair of LGMD models in Khepera
robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD
models using methodologies inspired by research on escape direction control in cockroaches.
Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration,
the Khepera robots could escape an approaching threat in real time and with a similar
distribution of escape directions as real locusts. We also found that by optimising these
algorithms, we could use them to integrate the left and right DCMD responses of real jumping
locusts offline and reproduce the actual escape directions that the locusts took in a particular
trial. Our results significantly advance the development of an artificial collision detection and
evasion system based on the locust LGMD by giving it reactive control over robot behaviour.
The success of this approach may also indicate some important areas to pursue in future
biological research.
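The two integration schemes named in the abstract above can be sketched for a bilateral pair of LGMD excitations. The probability mapping, the gain, and the function names are illustrative assumptions rather than the authors' optimised parameters.

```python
# Sketch of two ways to integrate a bilateral pair of LGMD responses
# into an escape direction, per the schemes named in the abstract.
import random

def randomised_winner_take_all(left, right, rng=random.random):
    """Stochastic choice: escape away from each side with probability
    proportional to that side's LGMD excitation."""
    p_escape_right = left / (left + right)   # strong left -> flee right
    return "right" if rng() < p_escape_right else "left"

def steering_wheel(left, right, gain=1.0):
    """Continuous rule: the signed difference of the two responses sets
    the turn, steering away from the stronger side (positive = right)."""
    return gain * (left - right)

random.seed(1)
print(randomised_winner_take_all(0.9, 0.1))   # usually flees right
print(steering_wheel(0.9, 0.1) > 0)           # threat on left -> turn right
```

The stochastic rule reproduces a distribution of escape directions, as in the cockroach-inspired data, while the steering-wheel rule yields a single deterministic heading.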
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding the dynamics of symbol systems is crucially important both for
understanding human social interactions and for developing robots that can
communicate smoothly with human users over the long term. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics