High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments
This thesis presents the development of an aerial robotic testbed based on the Robot Operating System (ROS). The purpose of this high-performance testbed is to develop a system capable of performing robust navigation tasks using vision tools such as a stereo camera. While computing the robot's odometry, the system also senses the environment using the same stereo camera; hence, all navigation tasks are performed with a stereo camera and an inertial measurement unit (IMU) as the main sensor suite. ROS is used as the software integration framework because it provides efficient communication and sensor interfaces. It also allows the use of C++, which is efficient in performance, especially on embedded platforms. Combining ROS and C++ provides the computational efficiency and tools needed to handle fast, real-time image processing and planning, which are vital parts of navigation and obstacle avoidance at this scale. The main application of this work is a real-time, efficient demonstration of vision-based navigation in UAVs. The proposed approach is developed for a quadrotor UAV capable of performing evasive maneuvers when obstacles are in its way while constantly moving toward a user-defined final destination. Stereo depth computation adds a third axis to the two-dimensional image coordinate frame; this can be referred to as the depth image space or depth image coordinate frame. Planning in this frame of reference is combined with certain precomputed action primitives, whose formulation leads to a hybrid control law for feasible trajectory generation. Further, a proof of stability of this system is also presented.
The proposed approach accounts for the fact that, when performing fast maneuvers and obstacle avoidance simultaneously, many standard optimization approaches may not run in real time on board due to time and resource limitations. This motivates the development of real-time techniques for vision-based autonomous navigation.
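The depth image coordinate frame described above follows from the standard stereo relation: a pixel plus its disparity back-projects to a metric 3D point. The sketch below illustrates this; the focal length, principal point, and baseline are illustrative placeholder values, not the testbed's actual calibration.

```python
# Sketch: back-projecting one stereo match into the "depth image" frame
# (pixel u, v plus metric depth z). Calibration values below are
# illustrative assumptions, not the thesis's actual camera parameters.

def disparity_to_depth_point(u, v, disparity, fx=450.0, fy=450.0,
                             cx=320.0, cy=240.0, baseline=0.12):
    """Return (x, y, z) in the camera frame for one stereo correspondence."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = fx * baseline / disparity          # depth from the stereo relation
    x = (u - cx) * z / fx                  # lateral offset in meters
    y = (v - cy) * z / fy                  # vertical offset in meters
    return (x, y, z)

point = disparity_to_depth_point(400, 260, 27.0)
```

Planning directly over such (u, v, z) triples avoids building a full world map, which is part of what makes the approach tractable on board.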
On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration
Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and Mars (NASA's Sojourner and the two MERs) has been conducted very successfully for many years, particularly with respect to long-duration operations. Despite this success, however, the explored surface area was very small: a total driving distance of about 8 km (Spirit) and 21 km (Opportunity) over 6 years of operation. Moreover, ESA will send its ExoMars rover to Mars in 2018, and NASA its MSL rover probably this year. All these rovers, however, lack sufficient on-board intelligence to cover longer distances, drive much faster, and decide autonomously on the best trajectory to follow. To increase the scientific output of a rover mission, it seems necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronics functionalities to develop an intelligent mobile wheeled rover with four or six wheels, with specific kinematics and locomotion suspension depending on the terrain in which the rover is to operate. DLR's Robotics and Mechatronics Center has a long tradition of developing advanced components in the fields of light-weight motion actuation, intelligent and soft manipulation, skilled hands and tools, and perception and cognition, and of increasing the autonomy of all kinds of mechatronic systems. The whole design is supported by and based upon detailed modeling, optimization, and simulation. We have developed efficient software tools to simulate rover driveability on various terrains, such as soft sandy and hard rocky ground as well as inclined planes, where wheel and grouser geometry plays a dominant role.
Moreover, rover optimization is performed to support engineering intuition: it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and makes use of realistic cost functions such as mass and consumed-energy minimization and static stability. For self-localization and safe navigation through unknown terrain we use fast 3D stereo algorithms that have been applied successfully, e.g., in unmanned aerial vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable to both lunar and Martian surface exploration. A first mobility concept for a lunar vehicle will be presented.
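The cost functions mentioned above (mass, consumed energy, static stability) are typically combined into a single scalar objective for design comparison. The sketch below shows one plausible weighted-sum form; the terms, weights, and numbers are illustrative assumptions, not DLR's actual formulation.

```python
# Sketch of a weighted-sum design cost of the kind rover optimization
# might use. All weights, thresholds, and candidate values below are
# hypothetical, chosen only to illustrate the trade-off.

def rover_design_cost(mass_kg, energy_wh_per_km, tipover_margin_deg,
                      w_mass=1.0, w_energy=0.5, w_stability=2.0):
    """Lower is better: penalize mass and energy use, reward static stability."""
    # Penalize any static tip-over margin below a (hypothetical) 30-degree target.
    stability_penalty = max(0.0, 30.0 - tipover_margin_deg)
    return (w_mass * mass_kg
            + w_energy * energy_wh_per_km
            + w_stability * stability_penalty)

# Compare two hypothetical suspension concepts.
four_wheel = rover_design_cost(85.0, 120.0, 28.0)
six_wheel = rover_design_cost(95.0, 110.0, 35.0)
```

An optimizer would sweep structural and geometric parameters for each kinematic concept and keep the configuration minimizing such a cost, exactly the kind of comparison the abstract describes.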
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
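Surveys in this area commonly evaluate learned predictors against a constant-velocity baseline, which extrapolates the last observed motion. A minimal sketch of such a baseline (purely illustrative; the survey itself covers far richer motion models):

```python
# Minimal constant-velocity baseline for trajectory prediction:
# extrapolate the last observed velocity over the prediction horizon.
# The sampling interval dt and the sample track are illustrative.

def predict_constant_velocity(track, horizon, dt=0.4):
    """track: list of (x, y) positions at fixed interval dt.
    Returns `horizon` future (x, y) points."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # last observed velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, horizon + 1)]

future = predict_constant_velocity([(0.0, 0.0), (0.4, 0.2)], horizon=3)
```

Despite its simplicity, this baseline is hard to beat over short horizons, which is why prediction benchmarks report gains relative to it.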
Perception of the urban environment and navigation using robotic vision: design and implementation applied to an autonomous vehicle
Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
Abstract: The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing ride comfort and cutting costs. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Camera sensors currently receive considerable attention because they are cheap, easy to employ and provide rich data. Inner-city environments represent an interesting but very challenging scenario in this context: the road layout may be complex, objects such as trees, bicycles and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusions. The perception process therefore needs, by nature, to be able to deal with uncertainty in the knowledge of the world around the car.
While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on the decision-making process of autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads without the need to adapt the infrastructure, without requiring prior knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel method based on machine learning is proposed to extract the semantic context from a pair of stereo images; this context is merged into an evidential occupancy grid that models the uncertainties of an unknown urban environment, applying Dempster-Shafer theory. For decision-making in path planning, the virtual tentacle approach is applied to generate possible paths starting from the ego-referenced car, and on this basis two new strategies are proposed: first, a strategy to select the correct path to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a closed-loop control based on visual odometry and the virtual tentacle, modeled for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real conditions using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors. Doctorate in Solid Mechanics and Mechanical Design (Doutor em Engenharia Mecânica).
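The evidential grid mentioned above fuses sensor evidence per cell using Dempster's rule of combination. A minimal sketch for one cell over the frame {free, occupied}, where mass assigned to the whole frame encodes ignorance; the mass values are illustrative, not taken from the thesis:

```python
# Sketch of Dempster's rule of combination for one occupancy-grid cell
# over the frame {free, occupied}. 'unknown' is the mass on the full
# frame (ignorance). Input masses below are illustrative examples.

def combine_dempster(m1, m2):
    """Each mass function: {'free': f, 'occ': o, 'unknown': u}, summing to 1."""
    # Conflict: one source supports 'free' while the other supports 'occ'.
    conflict = m1['free'] * m2['occ'] + m1['occ'] * m2['free']
    k = 1.0 - conflict                      # normalization constant
    free = (m1['free'] * m2['free'] + m1['free'] * m2['unknown']
            + m1['unknown'] * m2['free']) / k
    occ = (m1['occ'] * m2['occ'] + m1['occ'] * m2['unknown']
           + m1['unknown'] * m2['occ']) / k
    unknown = (m1['unknown'] * m2['unknown']) / k
    return {'free': free, 'occ': occ, 'unknown': unknown}

# Two independent observations of the same cell (e.g. two stereo frames).
cell = combine_dempster({'free': 0.6, 'occ': 0.1, 'unknown': 0.3},
                        {'free': 0.5, 'occ': 0.2, 'unknown': 0.3})
```

Unlike a Bayesian occupancy grid, the explicit 'unknown' mass lets the planner distinguish "never observed" from "observed as conflicting", which matters under the heavy occlusions the abstract describes.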
Toward Robots with Peripersonal Space Representation for Adaptive Behaviors
The abilities to adapt and act autonomously in an unstructured and human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability
is natural and feasible for humans, it is still very complex and challenging
for robots. Observations and findings from psychology and neuroscience in
respect to the development of the human sensorimotor system can inform
the development of novel approaches to adaptive robotics.
Among these is the formation of the representation of space closely surrounding
the body, the Peripersonal Space (PPS), from multisensory sources
like vision, hearing, touch and proprioception, which helps to facilitate human
activities within their surroundings.
Taking inspiration from the virtual safety margin formed by the PPS representation
in humans, this thesis first constructs an equivalent model of the
safety zone for each body part of the iCub humanoid robot. This PPS layer
serves as a distributed collision predictor, which translates visually detected
objects approaching a robot's body parts (e.g., arm, hand) into the probabilities
of a collision between those objects and body parts. This leads to
adaptive avoidance behaviors in the robot via an optimization-based reactive
controller. Notably, this visual reactive control pipeline can also seamlessly
incorporate tactile input to guarantee safety in both pre- and post-collision
phases in physical Human-Robot Interaction (pHRI). Concurrently, the controller
is also able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple Cartesian point planner. All components,
namely the PPS, the multi-target motion planner (for manipulation
reaching tasks), the reaching-with-avoidance controller and the human-centred visual perception, are combined harmoniously to form a hybrid control framework designed to provide safety for robots' interactions in a cluttered
environment shared with human partners.
Later, motivated by the development of manipulation skills in infants, in
which the multisensory integration is thought to play an important role, a
learning framework is proposed to allow a robot to learn the processes of
forming sensory representations, namely visuomotor and visuotactile, from
its own motor activities in the environment. Both multisensory integration
models are constructed with Deep Neural Networks (DNNs) in such a
way that their outputs are represented in motor space to facilitate the robot's subsequent actions.
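The PPS layer described above maps each visually detected object near a body part to a collision probability that the reactive controller can consume. The sketch below shows one plausible distance/time-to-contact activation of this kind; the functional form, margin, and constants are illustrative assumptions, not the thesis's learned representation.

```python
# Sketch of a PPS-style collision predictor for one body part and one
# detected object: activation grows as the object gets closer and its
# approach becomes more imminent. Shape and constants are hypothetical.
import math

def collision_probability(distance_m, approach_speed_mps, margin_m=0.45):
    """Return an activation in [0, 1]; 0 for receding objects."""
    if approach_speed_mps <= 0:
        return 0.0                          # receding objects pose no threat
    time_to_contact = distance_m / approach_speed_mps
    # Saturate once the object is inside the safety margin.
    proximity = math.exp(-max(0.0, distance_m - margin_m))
    urgency = math.exp(-time_to_contact)    # imminent contact -> near 1
    return min(1.0, proximity * urgency)

near = collision_probability(0.3, 0.5)      # inside the margin, approaching
far = collision_probability(1.5, 0.5)       # distant, same approach speed
```

Feeding such activations per body part into an optimization-based controller yields the distributed, whole-body avoidance behavior the abstract describes.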