Near range path navigation using LGMD visual neural networks
In this paper, we propose a method for near-range path navigation of a mobile robot using a pair of biologically
inspired visual neural networks, the lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands that control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios
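The "stronger side pushes the robot away" rule lends itself to a very small differential-drive sketch. This is only an illustration of the comparison step, not the paper's implementation; the nominal speed, turn step and LGMD output scale are assumed placeholders.

```python
# Minimal sketch of binocular LGMD steering: compare the two LGMD
# outputs and slow the wheel opposite the stronger (threatened) side,
# so the robot turns away from it.  All constants are illustrative.

def steer(left_lgmd: float, right_lgmd: float, turn_step: float = 0.1):
    """Return (left_wheel, right_wheel) speed commands."""
    base = 1.0                       # nominal forward speed (arbitrary units)
    if left_lgmd > right_lgmd:       # threat stronger on the left -> turn right
        return base, base - turn_step
    if right_lgmd > left_lgmd:       # threat stronger on the right -> turn left
        return base - turn_step, base
    return base, base                # balanced excitation: keep going straight
```

Applied once per frame, repeated small corrections produce the step-by-step avoidance behaviour the abstract describes.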
Performance optimisation of mobile robots in dynamic environments
This paper presents a robotic simulation system that combines the task allocation and motion planning of multiple mobile robots for performance optimisation in dynamic environments. While task allocation assigns jobs to robots, motion planning generates routes for the robots to execute the assigned jobs. Together, task allocation and motion planning play a pivotal role in optimising robot team performance. These two issues become more challenging in the presence of the operational uncertainties common in dynamic environments. We address them by proposing an auction-based closed-loop module for task allocation and a bio-inspired intelligent module for motion planning. The task allocation module is characterised by a closed-loop bid adjustment mechanism that improves bid accuracy even in light of stochastic disturbances. The motion planning module is bio-inspired in that it features detection of imminent neighbours and virtual-force navigation responsive to dynamic traffic conditions. Simulations show that the proposed system is a practical tool for optimising the operations of a team of robots in dynamic environments. © 2012 IEEE. The IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems (VECIMS 2012), Tianjin, China, 2-4 July 2012. In Proceedings of IEEE VECIMS, 2012, p. 54-5
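The closed-loop bid adjustment idea can be sketched in a few lines. The single-item lowest-bid auction and the multiplicative correction factor below are our assumptions for illustration, not the paper's exact mechanism.

```python
# Sketch of auction-based task allocation with closed-loop bid
# adjustment: each robot bids its estimated cost scaled by a learned
# correction factor, and the factor is nudged after each job by the
# observed bid error.  Gain and update rule are assumed.

class Robot:
    def __init__(self, name: str):
        self.name = name
        self.correction = 1.0        # learned from past over/under-bids

    def bid(self, estimated_cost: float) -> float:
        return estimated_cost * self.correction

    def feedback(self, bid: float, actual_cost: float, gain: float = 0.5):
        # Closed loop: inflate future bids after under-bidding, and
        # deflate them after over-bidding.
        self.correction *= 1.0 + gain * (actual_cost - bid) / bid

def auction(task_costs: dict) -> str:
    """task_costs maps Robot -> estimated cost; lowest adjusted bid wins."""
    return min(task_costs, key=lambda r: r.bid(task_costs[r])).name
```

After a few feedback rounds, a robot that consistently underestimates its costs stops winning auctions it cannot serve cheaply, which is the bid-accuracy improvement the abstract claims.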
Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated
Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to
the image of an approaching object. These neurons are called the lobula giant movement
detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the
development of an LGMD model for use as an artificial collision detector in robotic applications.
To date, robots have been equipped with only a single, central artificial LGMD sensor, and this
triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly,
for a robot to behave autonomously, it must react differently to stimuli approaching from
different directions. In this study, we implement a bilateral pair of LGMD models in Khepera
robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD
models using methodologies inspired by research on escape direction control in cockroaches.
Using "randomised winner-take-all" or "steering wheel" algorithms for LGMD model integration,
the Khepera robots could escape an approaching threat in real time and with a similar
distribution of escape directions as real locusts. We also found that by optimising these
algorithms, we could use them to integrate the left and right DCMD responses of real jumping
locusts offline and reproduce the actual escape directions that the locusts took in a particular
trial. Our results significantly advance the development of an artificial collision detection and
evasion system based on the locust LGMD by giving it reactive control over robot behaviour.
The success of this approach may also indicate some important areas to be pursued in future
biological research
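The two integration algorithms named above can be sketched as follows. The probability mapping, the maximum escape angle and the sign convention are illustrative assumptions; the paper's optimised parameters are not reproduced here.

```python
import random

# Sketches of the two LGMD/DCMD integration schemes: a randomised
# winner-take-all that picks a discrete escape side, and a "steering
# wheel" that outputs a graded escape angle.

def winner_take_all(left: float, right: float, rng=random) -> str:
    """Escape away from the stronger side, with a probability that
    grows with the left/right imbalance (assumed linear mapping)."""
    p_escape_right = left / (left + right)   # stronger left -> go right
    return "right" if rng.random() < p_escape_right else "left"

def steering_wheel(left: float, right: float, max_turn: float = 90.0) -> float:
    """Graded escape angle in degrees (positive = rightward),
    proportional to the normalised left-right response difference."""
    return max_turn * (left - right) / (left + right)
```

The randomised scheme reproduces a *distribution* of escape directions, as in the locust data, while the steering-wheel scheme gives a deterministic, graded turn.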
A short curriculum of the robotics and technology of computer lab
Our research lab is directed by Prof. Anton Civit. It is an interdisciplinary group of 23
researchers who carry out their teaching and research work at the Escuela
Politécnica Superior (Higher Polytechnic School) and the Escuela de Ingeniería
Informática (Computer Engineering School). The main research fields are: a)
industrial and mobile robotics, b) neuro-inspired processing using electronic spikes,
c) embedded and real-time systems, d) parallel and massive-processing computer
architectures, e) information technologies for rehabilitation, disabled and elderly
people, and f) web accessibility and usability.
In this paper, the lab's history is presented and its main publications and research
projects over the last few years are summarized.
Towards Odor-Sensitive Mobile Robots
J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244--263, 2018, doi:10.4018/978-1-5225-3862-2.ch012
Preprint version, with the publisher's permission.
Out of all the components of a mobile robot, its sensorial system is undoubtedly among the most critical
ones when operating in real environments. Until now, these sensorial systems have mostly relied on range
sensors (laser scanners, sonar, active triangulation) and cameras. While electronic noses have barely
been employed, they can provide complementary sensory information that is vital for some applications, as
it is for humans. This chapter analyzes the motivation for providing a robot with gas-sensing capabilities
and also reviews some of the hurdles that are preventing smell from achieving the importance of other
sensing modalities in robotics. The achievements made so far are reviewed to illustrate the current status
of the three main fields within robotic olfaction: the classification of volatile substances, the spatial
estimation of gas dispersion from sparse measurements, and the localization of the gas source within
a known environment.
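Of the three fields listed, the spatial estimation of gas dispersion from sparse measurements is easy to sketch. The Gaussian-kernel smoothing below is in the spirit of kernel-based gas-distribution mapping; the kernel width and the sample format are assumptions, not this chapter's specific method.

```python
import math

# Sketch of gas-distribution estimation from sparse e-nose readings:
# the concentration at a query point is a distance-weighted average of
# the measured samples, with Gaussian weights of assumed width sigma.

def estimate_concentration(query, samples, sigma: float = 0.5) -> float:
    """samples: list of ((x, y), concentration) pairs."""
    weights, weighted = 0.0, 0.0
    for (x, y), c in samples:
        d2 = (query[0] - x) ** 2 + (query[1] - y) ** 2
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        weights += w
        weighted += w * c
    return weighted / weights if weights > 0 else 0.0
```

Evaluating this on a grid gives a smooth concentration map from a handful of point measurements, which downstream source-localization methods can then exploit.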
Development of a bio-inspired vision system for mobile micro-robots
In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from the vision of locusts, which detect fast-approaching objects. Research suggests that locusts use a wide-field visual neuron called the lobula giant movement detector to respond to imminent collisions. We employed the locusts' vision mechanism for the motion control of a mobile robot. The selected image-processing method is implemented on a purpose-built extension module using a low-cost, fast ARM processor. The vision module is placed on top of a micro-robot to control its trajectory and to avoid obstacles. The results of several experiments demonstrate that the developed extension module and the inspired vision system are feasible as a vision module for obstacle avoidance and motion control
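LGMD models of this kind are commonly built from a frame-differencing excitation layer followed by lateral inhibition, whose summed output spikes for looming (expanding) stimuli. The sketch below illustrates that generic pipeline; the inhibition weight, threshold and grey-level frame format are assumptions, not the module implemented on the paper's ARM board.

```python
# Generic LGMD-style collision detector sketch: luminance change
# (excitation) minus a fraction of neighbouring excitation (lateral
# inhibition), summed and thresholded into a collision "spike".

def lgmd_response(prev_frame, curr_frame,
                  inhibition: float = 0.25, threshold: float = 4.0) -> bool:
    """Frames are 2-D lists of grey levels; True means imminent collision."""
    h, w = len(curr_frame), len(curr_frame[0])
    # Excitation layer: absolute luminance change per photoreceptor.
    excite = [[abs(curr_frame[y][x] - prev_frame[y][x]) for x in range(w)]
              for y in range(h)]
    total = 0.0
    for y in range(h):
        for x in range(w):
            # Inhibition: subtract a fraction of the 8-neighbourhood.
            neigh = sum(excite[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if (i, j) != (x, y))
            total += max(0.0, excite[y][x] - inhibition * neigh)
    return total > threshold
```

Because a looming object drives many adjacent photoreceptors at once while inhibition only partially cancels it, the summed excitation grows rapidly just before contact, which is the cue the micro-robot uses to turn away.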
Towards an Autonomous Walking Robot for Planetary Surfaces
In this paper, recent progress in the development of
the DLR Crawler - a six-legged, actively compliant walking
robot prototype - is presented. The robot implements
a walking layer with a simple tripod and a more complex
biologically inspired gait. Using a variety of proprioceptive
sensors, different reflexes for reactively crossing obstacles
within the walking height are realised. On top of
the walking layer, a navigation layer provides the ability
to autonomously navigate to a predefined goal point in
unknown rough terrain using a stereo camera. A model
of the environment is created, the terrain traversability is
estimated and an optimal path is planned. The difficulty
of the path can be influenced by behavioral parameters.
Motion commands are sent to the walking layer and the
gait pattern is switched according to the estimated terrain
difficulty. The interaction between the walking and navigation
layers was tested in different experimental setups
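The navigation-layer loop described above (traversability map, optimal path, gait switched by terrain difficulty) can be sketched with a standard least-cost search. The grid representation, Dijkstra search and gait threshold below are illustrative assumptions, not the DLR Crawler's planner.

```python
import heapq

# Sketch of the navigation layer: plan a least-cost path over a
# traversability grid (higher cell cost = harder terrain), then let the
# walking layer pick a gait from the mean difficulty along the path.

def plan_path(grid, start, goal):
    """Dijkstra over a 2-D cost grid; returns the list of (y, x) cells."""
    h, w = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue                      # stale queue entry
        y, x = cell
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + grid[ny][nx]     # pay the cost of the entered cell
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = cell
                    heapq.heappush(queue, (nd, (ny, nx)))
    path, cell = [goal], goal
    while cell != start:                  # walk the predecessor chain back
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

def select_gait(path, grid, threshold: float = 0.5) -> str:
    """Switch gait by the estimated difficulty along the planned path."""
    mean_cost = sum(grid[y][x] for y, x in path) / len(path)
    return "tripod" if mean_cost < threshold else "biologically inspired"
```

Raising the cell costs of rough terrain (the "behavioral parameters" of the abstract) makes the planner detour around it, and the gait rule slows the robot down when a detour is impossible.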
…