Redundant neural vision systems: competing for collision recognition roles
The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that either the LGMD or grouped DSNs could be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which of the two should play the collision recognition role, or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system combining the two. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotic and driving environments, the LGMD built up its ability for collision recognition quickly and robustly, thereby reducing the chance for the other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to realize in hardware for collision recognition.
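The switch-gene mechanism described in the abstract can be sketched minimally: an agent carries all three redundant subsystems, and a single gene selects which one actually performs collision recognition. The following toy sketch is an assumption for illustration only (the subsystem names come from the abstract, but the threshold detectors and their values are invented); the paper's subsystems are neural networks, not fixed thresholds.

```python
# Toy sketch of the paper's redundant-subsystem architecture.
# The three subsystem names are from the abstract; the simple
# looming-expansion thresholds below are illustrative assumptions.

SUBSYSTEMS = ("LGMD", "DSNs", "hybrid")

class Agent:
    def __init__(self, switch_gene):
        # The switch gene determines which redundant subsystem is active.
        assert switch_gene in SUBSYSTEMS
        self.switch_gene = switch_gene

    def detects_collision(self, expansion_rate):
        # Each subsystem fires when the looming stimulus exceeds its
        # (hypothetical) threshold; only the selected subsystem is consulted.
        thresholds = {"LGMD": 0.5, "DSNs": 0.7, "hybrid": 0.6}
        return expansion_rate > thresholds[self.switch_gene]

agent = Agent("LGMD")
print(agent.detects_collision(0.8))  # prints True
```

Under artificial evolution, agents whose switch gene selects the fastest-learning subsystem would survive collisions more often, which is how the paper's LGMD-dominance result could emerge.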
Unifying Foundation Models with Quadrotor Control for Visual Tracking Beyond Object Categories
Visual control enables quadrotors to adaptively navigate using real-time
sensory data, bridging perception with action. Yet, challenges persist,
including generalization across scenarios, maintaining reliability, and
ensuring real-time responsiveness. This paper introduces a perception framework
grounded in foundation models for universal object detection and tracking,
moving beyond specific training categories. Integral to our approach is a
multi-layered tracker integrated with the foundation detector, ensuring
continuous target visibility, even when faced with motion blur, abrupt light
shifts, and occlusions. Complementing this, we introduce a model-free
controller tailored for resilient quadrotor visual tracking. Our system
operates efficiently on limited hardware, relying solely on an onboard camera
and an inertial measurement unit. Through extensive validation in diverse
challenging indoor and outdoor environments, we demonstrate our system's
effectiveness and adaptability. In conclusion, our research represents a step
forward in quadrotor visual tracking, moving from task-specific methods to more
versatile and adaptable operations.
Grounding action in visuo-haptic space using experience networks
Traditional approaches to machine learning do not provide a method to learn multiple tasks in one shot on an embodied robot. It is proposed that grounding actions within the sensory space leads to the development of action-state relationships which can be reused despite a change in task. A novel approach, called an Experience Network, is developed and assessed on a real-world robot required to perform three separate tasks. After grounded representations were developed in the initial task, only minimal further learning was required to perform the second and third tasks.
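The reuse idea in the abstract can be illustrated with a minimal sketch: action-state pairings grounded in one task remain available when the task changes. This is an invented simplification (the class name follows the abstract, but the table-lookup structure is an assumption; the actual Experience Network is a learned network, not a dictionary).

```python
# Illustrative sketch, not the paper's implementation: an experience
# store maps grounded sensory states to actions, and pairings learned
# in one task are reused unchanged in later tasks.

class ExperienceNetwork:
    def __init__(self):
        self.experiences = {}  # sensory state -> grounded action

    def learn(self, state, action):
        self.experiences[state] = action

    def act(self, state):
        # Reuse an existing action-state pairing if one was grounded.
        return self.experiences.get(state)

net = ExperienceNetwork()
net.learn(("contact", "left"), "turn_right")  # grounded during task 1
print(net.act(("contact", "left")))           # reused in tasks 2 and 3
```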
"Going back to our roots": second generation biocomputing
Researchers in the field of biocomputing have, for many years, successfully
"harvested and exploited" the natural world for inspiration in developing
systems that are robust, adaptable and capable of generating novel and even
"creative" solutions to human-defined problems. However, in this position paper
we argue that the time has now come for a reassessment of how we exploit
biology to generate new computational systems. Previous solutions (the "first
generation" of biocomputing techniques), whilst reasonably effective, are crude
analogues of actual biological systems. We believe that a new, inherently
inter-disciplinary approach is needed for the development of the emerging
"second generation" of bio-inspired methods. This new modus operandi will
require much closer interaction between the engineering and life sciences
communities, as well as a bidirectional flow of concepts, applications and
expertise. We support our argument by examining, in this new light, three
existing areas of biocomputing (genetic programming, artificial immune systems
and evolvable hardware), as well as an emerging area (natural genetic
engineering) which may provide useful pointers as to the way forward.
Comment: Submitted to the International Journal of Unconventional Computing.