A vision system with collision possibility detection of an approaching object
Doctoral dissertation, Kyushu Institute of Technology. Degree number: 生工博甲第60号. Degree conferred: March 23, 2006. Chapter 1: Introduction | Chapter 2: A collision-avoidance algorithm, inspired by the insect visual system, that accounts for the collision risk of approaching objects | Chapter 3: A collision-avoidance algorithm for approaching objects for CCD cameras using image features | Chapter 4: A real-time vision system for mobile robots | Chapter 5: Conclusion. Kyushu Institute of Technology, 2006
Flying Animal Inspired Behavior-Based Gap-Aiming Autonomous Flight with a Small Unmanned Rotorcraft in a Restricted Maneuverability Environment
This dissertation research shows that a small unmanned rotorcraft system with onboard processing and a vision sensor can produce autonomous, collision-free flight in a restricted maneuverability environment with no a priori knowledge, using a gap-aiming behavior inspired by flying animals. Current approaches to autonomous flight with small unmanned aerial systems (SUAS) concentrate on detecting and explicitly avoiding obstacles. In contrast, biology indicates that birds, bats, and insects do the opposite: they react to open spaces, or gaps in the environment, with a gap-aiming behavior. Using flying animals as inspiration, a behavior-based robotics approach is taken to implement and test their observed gap-aiming behavior in three dimensions. Because biological studies were unclear whether the flying animals were reacting to the largest gap perceived, the closest gap perceived, or all of the gaps, three approaches for the perceptual schema were explored in simulation: detect_closest_gap, detect_largest_gap, and detect_all_gaps. The results of these simulations were used in a proof-of-concept implementation on a 3DRobotics Solo quadrotor platform in an environment designed to represent the navigational difficulties found inside a restricted maneuverability environment. The motor schema is implemented with an artificial potential field to produce the action of aiming to the center of the gap. Through two sets of field trials totaling fifteen flights conducted with a small unmanned quadrotor, the gap-aiming behavior observed in flying animals is shown to produce repeatable autonomous, collision-free flight in a restricted maneuverability environment.
Additionally, three measures show that the implementation performs as intended: the distance from the starting location to perceived gaps verifies the gap selection approach, the horizontal and vertical distance traveled demonstrates the three-dimensional movement produced by the motor schema, and the distance from the center of the gap during traversal shows the accuracy of the motor schema. This gap-aiming behavior provides the robotics community with the first known implementation of autonomous, collision-free flight on a small unmanned quadrotor without the explicit obstacle detection and avoidance seen in current implementations. Additionally, the testing environment, described by quantitative metrics, provides a benchmark for autonomous SUAS flight testing in confined environments. Finally, the success of the autonomous, collision-free flight implementation on a small unmanned rotorcraft, field-tested in a restricted maneuverability environment, could have important societal impact in both the public and private sectors.
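The motor schema described above can be pictured as a simple attractive potential field pulling the vehicle toward the gap center. The function below is a minimal sketch under assumed conventions (a shared metric frame, a linear gain, and a speed cap), not the dissertation's implementation.

```python
import math

def gap_aiming_velocity(position, gap_center, gain=1.0, max_speed=1.0):
    """Attractive potential field: command a velocity toward the gap center.

    position, gap_center: (x, y, z) tuples in a common frame (assumed).
    The commanded speed grows linearly with distance, capped at max_speed.
    """
    vec = [g - p for p, g in zip(position, gap_center)]
    dist = math.sqrt(sum(c * c for c in vec))
    if dist == 0.0:
        return (0.0, 0.0, 0.0)          # already at the gap center
    speed = min(gain * dist, max_speed)  # saturate far from the gap
    return tuple(speed * c / dist for c in vec)
```

A detect_closest_gap perceptual schema would simply pick the gap whose center minimizes the distance from the current position before handing it to this motor schema.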
Coping With Multiple Visual Motion Cues Under Extremely Constrained Computation Power of Micro Autonomous Robots
The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to its restricted computational resources and the limited functionality of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons, each with a unique role, for example extracting collision cues, darker-collision cues, and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker-collision cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system's functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, according to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications.
The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems
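The fixed-priority arbitration described above (translational cues first, then darker-collision cues, then approaching cues) can be sketched as a tiny lateral-inhibition stage. The cue names below are placeholders for illustration, not the paper's identifiers.

```python
def arbitrate(cues):
    """Pick one action from the fused motion cues.

    A firing higher-priority neuron laterally inhibits all lower ones, so
    only the highest-priority active cue drives the robot's response.
    cues: dict mapping cue name -> bool activation (hypothetical names).
    """
    priority = ["translating", "darker_collision", "approaching"]
    for name in priority:
        if cues.get(name):
            return name   # suppresses every cue below it
    return "none"         # no cue fired: keep the default behaviour
```

The loop order encodes the priority stated in the abstract; swapping the list reorders the inhibition without touching the perception side.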
Mobile robot navigation using a vision based approach
PhD Thesis. This study addresses the issue of vision based mobile robot navigation in a partially
cluttered indoor environment using a mapless navigation strategy. The work focuses on
two key problems, namely vision based obstacle avoidance and vision based reactive
navigation strategy.
The estimation of optical flow plays a key role in vision based obstacle avoidance
problems; however, the current view is that this technique is too sensitive to noise and
distortion under real conditions. Accordingly, practical applications in real time robotics
remain scarce. This dissertation presents a novel methodology for vision based obstacle
avoidance, using a hybrid architecture. This integrates an appearance-based obstacle
detection method into an optical flow architecture based upon a behavioural control
strategy that includes a new arbitration module. This enhances the overall performance
of conventional optical flow based navigation systems, enabling a robot to successfully
move around without experiencing collisions.
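One way to picture such a hybrid is sketched below: the appearance-based detector takes precedence in the arbitration (yielding control to another behaviour), and otherwise the robot turns away from the side with larger optical-flow magnitude. This is a generic flow-balance sketch under assumed sign conventions, not the thesis' arbitration module.

```python
def steer_command(left_flow, right_flow, appearance_blocked, k=1.0):
    """Hybrid obstacle-avoidance sketch.

    left_flow, right_flow: mean optical-flow magnitudes in each image half.
    appearance_blocked: True when the appearance-based detector fires.
    Returns a normalised turn rate (positive = turn left), or None to
    yield control to an escape behaviour when an obstacle fills the view.
    """
    if appearance_blocked:
        return None  # arbitration: appearance-based detection wins
    # Flow balance: larger flow on the right implies a nearer obstacle
    # there, so steer left (and vice versa).
    return k * (right_flow - left_flow) / max(left_flow + right_flow, 1e-6)
```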
Behaviour based approaches have become the dominant methodologies for designing
control strategies for robot navigation. Two different behaviour based navigation
architectures have been proposed for the second problem, using monocular vision as the
primary sensor and equipped with a 2-D range finder. Both utilize an accelerated
version of the Scale Invariant Feature Transform (SIFT) algorithm. The first
architecture employs a qualitative-based control algorithm to steer the robot towards a
goal whilst avoiding obstacles, whereas the second employs an intelligent control
framework. This allows the components of soft computing to be integrated into the
proposed SIFT-based navigation architecture, preserving the same set of behaviours
and system structure of the previously defined architecture. The intelligent framework
incorporates a novel distance estimation technique using the scale parameters obtained
from the SIFT algorithm. The technique employs scale parameters and a corresponding
zooming factor as inputs to train a neural network that determines the
physical distance. Furthermore, a fuzzy controller is designed and integrated into this
framework so as to estimate linear velocity, and a neural network based solution is
adopted to estimate the steering direction of the robot. As a result, this intelligent
approach allows the robot to successfully complete its task in a smooth and robust
manner without experiencing collisions.
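The scale-based distance cue rests on the pinhole relation that a feature's apparent SIFT scale is roughly inversely proportional to its distance. The thesis trains a neural network on scale parameters and a zooming factor; the closed form below is only a hedged illustration of the underlying cue, with a made-up reference measurement.

```python
def estimate_distance(scale_now, scale_ref, dist_ref):
    """Distance from SIFT scale change (pinhole approximation).

    If a feature had scale scale_ref at a known distance dist_ref and now
    appears at scale scale_now, then d ~ dist_ref * scale_ref / scale_now.
    A learned model (as in the thesis) would absorb lens and zoom effects
    that this closed form ignores.
    """
    return dist_ref * scale_ref / scale_now
```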
MS Robotics Studio software was used to simulate the systems, and a modified Pioneer
3-DX mobile robot was used for real-time implementation. Several realistic scenarios
were developed and comprehensive experiments conducted to evaluate the performance
of the proposed navigation systems.
KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile
robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant
Feature Transforms, Intelligent framework
Vision-Based navigation system for unmanned aerial vehicles
International Mention in the doctoral degree (Mención Internacional en el título de doctor). The main objective of this dissertation is to provide Unmanned Aerial Vehicles
(UAVs) with a robust navigation system, in order to allow the UAVs to perform
complex tasks autonomously and in real-time. The proposed algorithms deal with
solving the navigation problem for outdoor as well as indoor environments, mainly
based on visual information that is captured by monocular cameras. In addition,
this dissertation presents the advantages of using visual sensors as the main
source of data, or as a complement to other sensors, in order to improve the
accuracy and the robustness of sensing.
The dissertation mainly covers several research topics based on computer vision
techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of
the UAV. This algorithm is based on the combination of the SIFT detector and the FREAK
descriptor, which maintains the performance of the feature-point matching while decreasing
the computational time. Thereafter, the pose estimation problem is solved
based on the decomposition of the world-to-frame and frame-to-frame homographies.
(II) Obstacle Detection and Collision Avoidance, in which the UAV senses and
detects the frontal obstacles situated in its path. The detection
algorithm mimics human behavior for detecting approaching obstacles by
analyzing the size changes of the detected feature points, combined with the expansion
ratios of the convex hull constructed around those feature points
in consecutive frames. Then, by comparing the area ratio of the obstacle with the
position of the UAV, the method decides whether the detected obstacle may cause a collision.
Finally, the algorithm extracts the collision-free zones around the obstacle
and, combined with the tracked waypoints, the UAV performs the avoidance maneuver.
(III) Navigation Guidance, which generates the waypoints that determine
the flight path based on the environment and the detected obstacles, then provides
a strategy to follow the path segments efficiently and perform the
flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in
order to achieve flight stability, perform the correct maneuver,
avoid possible collisions, and track the waypoints.
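The convex-hull expansion cue in (II) can be sketched independently of any particular feature detector: build the hull around the matched feature points in consecutive frames and compare the enclosed areas. The monotone-chain hull and the interpretation of the ratio are illustrative choices, not the dissertation's exact procedure.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the convex hull around a 2-D point set."""
    h = convex_hull(points)
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % len(h)][1] -
                         h[(i+1) % len(h)][0]*h[i][1]
                         for i in range(len(h))))

def expansion_ratio(prev_pts, curr_pts):
    """Area ratio of hulls around matched features in consecutive frames;
    a ratio well above 1 signals an expanding, i.e. approaching, obstacle."""
    return hull_area(curr_pts) / hull_area(prev_pts)
```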
All the proposed algorithms have been verified with real flights in both indoor
and outdoor environments, taking into consideration visual conditions such as
illumination and texture. The obtained results have been validated against other
systems, such as the VICON motion capture system and DGPS in the case of the pose estimation
algorithm. In addition, the proposed algorithms have been compared with several
previous works in the state of the art, and the results prove the improvement in
the accuracy and the robustness of the proposed algorithms.
Finally, this dissertation concludes that visual sensors have the advantages
of light weight and low power consumption while providing reliable information, which
makes them a powerful tool in navigation systems for increasing the autonomy
of the UAVs for real-world applications.
Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática (Official Doctoral Programme in Electrical, Electronic and Automation Engineering). President: Carlo Regazzoni. Secretary: Fernando García Fernández. Member: Pascual Campoy Cerver
Advances in Robot Navigation
Robot navigation includes different interrelated activities such as perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot to select the next direction to go; mapping - the construction of a spatial representation using the sensory information perceived; localization - the strategy to estimate the robot's position within the spatial map; path planning - the strategy to find a path, optimal or not, towards a goal location; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the abovementioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation are inspired by nature, and diverse applications are described in the context of an important field of study: social robotics.
Learning body models: from humans to humanoids
Humans and animals excel in combining information from multiple sensory
modalities, controlling their complex bodies, adapting to growth, failures, or
using tools. These capabilities are also highly desirable in robots. They are
displayed by machines to some extent. Yet, the artificial creatures are lagging
behind. The key foundation is an internal representation of the body that the
agent - human, animal, or robot - has developed. The mechanisms of operation of
body models in the brain are largely unknown and even less is known about how
they are constructed from experience after birth. In collaboration with
developmental psychologists, we conducted targeted experiments to understand
how infants acquire first "sensorimotor body knowledge". These experiments
inform our work in which we construct embodied computational models on humanoid
robots that address the mechanisms behind learning, adaptation, and operation
of multimodal body representations. At the same time, we assess which of the
features of the "body in the brain" should be transferred to robots to give
rise to more adaptive and resilient, self-calibrating machines. We extend
traditional robot kinematic calibration focusing on self-contained approaches
where no external metrology is needed: self-contact and self-observation.
A problem formulation that allows several ways of closing the kinematic
chain to be combined simultaneously is presented, along with a calibration toolbox and
experimental validation on several robot platforms. Finally, next to models of
the body itself, we study peripersonal space - the space immediately
surrounding the body. Again, embodied computational models are developed and
subsequently, the possibility of turning these biologically inspired
representations into safe human-robot collaboration is studied.
Comment: 34 pages, 5 figures. Habilitation thesis, Faculty of Electrical
Engineering, Czech Technical University in Prague (2021).
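Closing the kinematic chain by self-contact can be illustrated with a toy example: each touch supplies joint angles plus a known contact point on the body, and calibration picks the link parameter that best explains all touches. This planar two-link grid search is only a didactic stand-in for the multi-chain, multi-parameter optimization in the thesis.

```python
import math

def fk(theta1, theta2, l1, l2):
    """Forward kinematics of a planar two-link arm (end-effector position)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def calibrate_l2(touches, l1, l2_candidates):
    """Self-contact calibration sketch: choose the second link length that
    minimises the summed squared gap between the predicted fingertip and
    the observed contact point, i.e. the residual of the closed chain.

    touches: list of (theta1, theta2, contact_x, contact_y) observations.
    """
    def cost(l2):
        return sum((fk(t1, t2, l1, l2)[0] - cx) ** 2 +
                   (fk(t1, t2, l1, l2)[1] - cy) ** 2
                   for t1, t2, cx, cy in touches)
    return min(l2_candidates, key=cost)
```

Self-observation would add a camera reprojection term to the same cost; the point of the formulation above is that both chain closures can share one residual.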
Visual Adaptations and Behavioural Strategies to Detect and Catch Small Targets
Predatory behaviours are ideal for studying the limits of performance and control within animals. Predation naturally creates a competition between the sensors and physiology of predator and prey. Aerial predation demonstrates the greatest feats of physical performance, demanding the highest speeds and accelerations whilst both predator and prey are free to pitch, yaw, and roll. These high speeds and degrees of rotational freedom make control a complex problem. However, from the perspective of the researcher attempting to decipher the control laws that underpin predator guidance, the question is made more soluble by the predator’s fixation on its target. The goal of the pursuer is clear, to contact the target, and thus their systems are focused on the optimization of that action. This is as opposed to more mundane activities, where conflicting interests compete for the attention and behavioural response of the animal. In order to study the necessary trade-offs that underpin aerial predation, this thesis will focus on the hunting behaviour of two fly species. The first is a robber fly, Holcocephala fusca, on which the majority of the first two chapters focus. Secondarily, work with the killer fly Coenosia attenuata will be included in the latter two chapters as a direct contrast to results from Holcocephala. Both are miniature dipteran predators, but not closely related. The structure of this thesis is broken into six chapters, summarised in the following list:
1. The compound eye of insects generally has much poorer resolution than that of camera-type eyes. Poor resolution is exacerbated in smaller insects that cannot commit the resources required for eyes with large lenses that facilitate high spatial resolution. Holcocephala has developed a small number of facets into a forward-facing acute zone where the spatial acuity is reduced to ~0.28°, rivalling the very best resolution of any compound eye. The only compound eyes with comparable spatial resolution belong to dragonflies, which are in excess of an order of magnitude larger than Holcocephala.
2. Numerous potential targets may be airborne within the visual range of a predator. Not all of these may be suitable. Chasing unsuitable targets may waste energy or result in direct harm should they turn out to be larger than the predator can overcome. It is thus a strong imperative for a predator to filter the targets it takes after. Targets silhouetted against the sky display a paucity of cues that a predator could use to determine their size. Holcocephala displays acute size selectivity towards smaller targets. This selectivity goes beyond heuristic rules and size/speed ratios. Instead, Holcocephala appears able to determine absolute size and distance of targets.
3. Both Holcocephala and Coenosia intercept targets, heading for where the target is going to be in the future rather than its current location. Both species plot trajectories in keeping with the guidance law of proportional navigation, an algorithm derived for modern guided missiles. There are key differences evident in the internal physiological constants applied to the control system between the species. These differences are likely linked to the specific environmental conditions and visual physiologies of the flies, especially the range at which targets are attacked.
4. Stemming from the use of the proportional navigational framework, this chapter dives into the intricacies of gain and the weighting of the navigational constant, and the geometric factors that underpin the control effort and eventual success of the control system.
5. “Falcon-diving” is observed in killer flies that drop from their enclosure ceiling and miss targets after diving towards them. Through proportional navigation, it can be demonstrated that the navigational system, combined with excessive speed, produces acceleration demands that the body cannot match.
6. Holcocephala is capable of evading static obstacles whilst intercepting targets. Application of proportional navigation and a secondary obstacle-evasive controller can demonstrate where the fly combines multiple inputs to guide its heading.
This work was funded by the United States Air Force Office of Scientific Research.
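Proportional navigation, the guidance law both flies appear to follow, commands lateral acceleration proportional to the line-of-sight rotation rate times the closing speed, a = N·λ̇·Vc. The 2-D sketch below is the textbook form with an assumed navigation constant, not the chapters' fitted models.

```python
def los_rate(p_pur, v_pur, p_tgt, v_tgt):
    """Line-of-sight angular rate (rad/s) in 2-D.

    lambda_dot = (r x v_rel) / |r|^2, with r the pursuer-to-target vector
    and v_rel the target velocity relative to the pursuer.
    """
    rx, ry = p_tgt[0] - p_pur[0], p_tgt[1] - p_pur[1]
    vx, vy = v_tgt[0] - v_pur[0], v_tgt[1] - v_pur[1]
    return (rx * vy - ry * vx) / (rx * rx + ry * ry)

def pro_nav_accel(lambda_dot, closing_speed, nav_constant=3.0):
    """Commanded lateral acceleration a = N * lambda_dot * Vc.

    nav_constant N is the 'gain' discussed in chapter 4; values of 3-5 are
    typical in the missile literature, and the thesis fits it per species.
    """
    return nav_constant * lambda_dot * closing_speed
```

When the commanded acceleration exceeds what the body can produce, as in the falcon-diving case of chapter 5, the pursuer undershoots the required turn and misses.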
- …