
    A Survey on Human-aware Robot Navigation

    Intelligent systems are increasingly part of our everyday lives and have been integrated so seamlessly that it is difficult to imagine a world without them. Physical manifestations of those systems, on the other hand, in the form of embodied agents or robots, have so far been used only for specific applications and are often limited to functional roles (e.g. in industry, entertainment and the military). Given the current growth and innovation in the research communities concerned with robot navigation, human-robot interaction and human activity recognition, this seems likely to change soon. Robots are increasingly easy to obtain and use, and their general acceptance is growing. However, the design of a socially compliant robot that can function as a companion needs to take various areas of research into account. This paper is concerned with the navigation aspect of a socially compliant robot and provides a survey of existing solutions for the relevant areas of research, as well as an outlook on possible future directions. Comment: Robotics and Autonomous Systems, 202

    Framework of active robot learning

    A thesis submitted to the University of Bedfordshire, in fulfilment of the requirements for the degree of Master of Science by research. In recent years, cognitive robots have become an attractive research area of Artificial Intelligence (AI). High-order beliefs of cognitive robots concern the robots' reasoning about their users' intentions and preferences. Existing approaches to developing such beliefs through machine learning rely on particular social cues or specifically defined reward functions, so their applications can be limited. This study carried out primary research on active robot learning (ARL), which enables a robot to develop high-order beliefs by actively collecting and discovering the evidence it needs. The emphasis is on active learning rather than teaching; hence, social cues and reward functions are not necessary. In this study, the framework of ARL was developed. Fuzzy logic was employed in the framework for controlling the robot and for identifying high-order beliefs. A simulation environment was set up in which a human and a cognitive robot were modelled using MATLAB, and ARL was implemented through simulation. Simulations were also performed in which the human and the robot tried to jointly lift a stick and keep it level. The simulation results show that, under the framework, a robot is able to discover the evidence it needs to confirm its user's intention.
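    The framework's use of fuzzy logic for control can be illustrated with a minimal sketch. The linguistic terms, membership ranges and rule outputs below are hypothetical stand-ins, not the thesis' actual controller; the scenario loosely mirrors the stick-levelling task.

```python
# Minimal Sugeno-style fuzzy inference sketch (hypothetical rules, not the
# thesis' controller): infer a corrective force from the stick's tilt angle.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_correction(tilt_deg):
    # Fuzzify: degrees of membership in three linguistic terms.
    neg  = tri(tilt_deg, -30, -15,  0)   # "tilted toward robot"
    zero = tri(tilt_deg, -10,   0, 10)   # "level"
    pos  = tri(tilt_deg,   0,  15, 30)   # "tilted toward human"
    # Rule base: each term maps to a crisp output force (Sugeno singletons
    # keep the defuzzification to a simple weighted average).
    rules = [(neg, +1.0), (zero, 0.0), (pos, -1.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(fuzzy_correction(8.0))   # positive tilt -> negative corrective force
```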

    Perception of the urban environment and navigation using robot vision: design and implementation applied to an autonomous vehicle

    Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. Abstract: The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing comfort of life and providing cost savings. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Currently, camera sensors receive considerable attention because they are cheap, easy to employ and provide rich data.
    Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex, the presence of objects such as trees, bicycles and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusions. Thus, the perception process by nature needs to be able to deal with uncertainty in the knowledge of the world around the car. While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analysed for driving in inner-city environments, together with the capacity to perform a safe displacement based on the decision-making process in autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring previous knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel method based on machine learning is proposed to extract the semantic context using a pair of stereo images; this context is merged into an evidential grid that models the uncertainties of an unknown urban environment, applying Dempster-Shafer theory. For decision-making in path planning, the virtual tentacle approach is applied to generate possible paths starting from the ego-referenced car, and on this basis two new strategies are proposed: first, a strategy to select the correct path, to better avoid obstacles and to follow the local task in the context of hybrid navigation; and second, a closed-loop control based on visual odometry and the virtual tentacle, modelled for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real situations using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors. Doctorate in Solid Mechanics and Mechanical Design; Doctor of Mechanical Engineering.
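    The evidential-grid idea rests on Dempster's rule of combination. Below is a minimal sketch for a single grid cell over the frame {Free, Occupied}, with mass also assigned to the ignorance set; the mass values are illustrative, not taken from the thesis.

```python
# Dempster's rule of combination for one evidential grid cell.
# Frame of discernment: {F, O}; mass is also assigned to the ignorance
# set "FO" (= Theta), which is how an evidential grid encodes "unknown".

def combine(m1, m2):
    """Combine two mass functions over {'F', 'O', 'FO'} with Dempster's rule."""
    sets = {'F': {'F'}, 'O': {'O'}, 'FO': {'F', 'O'}}
    out = {'F': 0.0, 'O': 0.0, 'FO': 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = sets[a] & sets[b]
            if not inter:
                conflict += ma * mb          # contradictory evidence
            else:
                key = 'FO' if inter == {'F', 'O'} else inter.pop()
                out[key] += ma * mb
    k = 1.0 - conflict                        # normalisation constant
    return {s: v / k for s, v in out.items()}

prior  = {'F': 0.2, 'O': 0.3, 'FO': 0.5}     # cell state after earlier scans
sensor = {'F': 0.6, 'O': 0.1, 'FO': 0.3}     # new stereo/semantic evidence
print(combine(prior, sensor))                 # belief shifts toward Free
```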

    Sensor fusion of camera, GPS and IMU using fuzzy adaptive multiple motion models

    A tracking system to be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement relates to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy limitations of current low-cost tracking devices cause static errors during this motion estimation process. The second requirement relates to dynamic errors (the end-to-end system delay, which occurs because of the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve tracking accuracy in outdoor environments. The idea of using Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence of the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS-IMU fusion approach, owing to the additional estimates from a camera and the fuzzy motion models. The paper also presents an application in a cultural heritage context, running at modest frame rates due to the design of the fusion algorithm.
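    The multiple-motion-model idea can be sketched in one dimension: several motion models predict the next state, and fuzzy-style weights derived from each model's innovation decide how much each prediction contributes. Everything below is a crude illustrative stand-in, not the paper's filter, which fuses camera, GPS and IMU in full pose space.

```python
# 1-D sketch of fuzzy-weighted multiple motion models (illustrative only).

def blend(z, x_prev, v_prev, dt=0.1):
    # Two candidate motion models predict the next position.
    pred_static = x_prev                # model 1: user is stationary
    pred_cv     = x_prev + v_prev * dt  # model 2: constant velocity
    # Fuzzy-style weights: smaller innovation (residual) -> larger weight.
    # The crude membership 1/(1+|r|) stands in for a real fuzzy rule base.
    w1 = 1.0 / (1.0 + abs(z - pred_static))
    w2 = 1.0 / (1.0 + abs(z - pred_cv))
    # Blend both model predictions with the raw measurement.
    x = (w1 * pred_static + w2 * pred_cv + z) / (w1 + w2 + 1.0)
    v = (x - x_prev) / dt
    return x, v

x, v = 0.0, 1.0
for z in [0.11, 0.19, 0.32, 0.38]:      # noisy position measurements
    x, v = blend(z, x, v)
    print(round(x, 3), round(v, 3))
```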

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improving sensor performance and creating new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest growing-stem-volume estimation, road management, image denoising, and touchscreens.

    Methods for Wheel Slip and Sinkage Estimation in Mobile Robots

    Future outdoor mobile robots will have to explore larger and larger areas, performing difficult tasks while at the same time preserving their safety. This will primarily require advanced sensing and perception capabilities. Video sensors supply contact-free, precise measurements and are flexible devices that can be easily integrated with multi-sensor robotic platforms. Hence, they represent a potential answer to the need for new and improved perception capabilities for autonomous vehicles. One of the main applications of vision in mobile robotics is localization. For mobile robots operating on rough terrain, conventional dead reckoning techniques are not well suited, since wheel slippage, sinkage, and sensor drift may cause localization errors that accumulate without bound during the vehicle's travel. Conversely, video sensors are exteroceptive devices, that is, they acquire information from the robot's environment; therefore, vision-based motion estimates are independent of knowledge of terrain properties and wheel-terrain interaction. Like dead reckoning, vision can lead to accumulation of errors; however, it has been shown to allow more accurate results than dead reckoning, and it can be considered a promising solution to the problem of robust robot positioning in high-slip environments. As a consequence, several vision-based localization methods have been developed in recent years. Among them, visual odometry algorithms, based on the tracking of visual features over subsequent images, have proved particularly effective. Accurate and reliable methods to sense slippage and sinkage are also desirable, since these effects compromise the vehicle's traction performance and energy consumption and lead to gradual deviation of the robot from the intended path, possibly resulting in large drift and poor results from localization and control systems. For example, conventional dead reckoning is largely compromised, since it is based on the assumption that wheel revolutions can be translated into corresponding linear displacements. Thus, if a wheel slips, the associated encoder will register revolutions even though they do not correspond to a linear displacement of the wheel; conversely, if a wheel skids, fewer encoder pulses will be counted. Slippage and sinkage measurements are also valuable for terrain identification according to classical terramechanics theory. This chapter investigates vision-based onboard technology to improve the mobility of robots on natural terrain. A visual odometry algorithm and two methods for online measurement of vehicle slip angle and wheel sinkage, respectively, are discussed. Test results are presented showing the performance of the proposed approaches using an all-terrain rover moving across uneven terrain.
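    The chapter's point that encoder counts stop mapping to linear displacement under slip can be made concrete with a longitudinal slip ratio computed from the mismatch between wheel odometry and a vision-based displacement estimate. The function names and values below are illustrative, not the chapter's method.

```python
import math

# Longitudinal slip ratio from the mismatch between what the encoders
# claim and what an exteroceptive (vision-based) estimate reports.
# Values are illustrative; a real system uses calibrated wheel radii and
# time-aligned visual odometry poses.

def wheel_displacement(encoder_ticks, ticks_per_rev, wheel_radius_m):
    """Linear distance the encoders *claim* the wheel has rolled."""
    revolutions = encoder_ticks / ticks_per_rev
    return revolutions * 2.0 * math.pi * wheel_radius_m

def slip_ratio(d_wheel, d_visual):
    """s = (d_wheel - d_visual) / d_wheel; s > 0 means the wheel is slipping
    (spinning without advancing), s < 0 means it is skidding."""
    return (d_wheel - d_visual) / d_wheel if d_wheel > 0 else 0.0

d_w = wheel_displacement(encoder_ticks=4200, ticks_per_rev=2048,
                         wheel_radius_m=0.13)
d_v = 1.21                    # metres, from visual odometry over same window
print(round(slip_ratio(d_w, d_v), 3))   # ~0.28 -> substantial slip
```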

    Enhanced vision-based localization and control for navigation of non-holonomic omnidirectional mobile robots in GPS-denied environments

    New Zealand's economy relies to a great extent on primary production, where technological advances can have a significant impact on productivity. Robotics and automation can play a key role in increasing productivity in the primary sector, leading to a boost for the national economy. This thesis investigates novel methodologies for the design, control, and navigation of a mobile robotic platform aimed at field service applications, specifically in agricultural environments such as orchards, to automate agricultural tasks. The design process of this robotic platform, a non-holonomic omnidirectional mobile robot, includes an innovative integrated application of CAD, CAM, CAE, and RP for the development and manufacturing of the platform. The Robot Operating System (ROS) is employed for the design and development of the embedded software that enables control, sensing, and navigation of the platform. 3D modelling and simulation of the robotic system are performed by interfacing ROS with the Gazebo simulator, aiming at off-line programming, optimal control system design, and system performance analysis. Gazebo provides 3D simulation of the robotic system, sensors, and control interfaces; it also simulates the world environment, allowing the simulated robot to operate in a modelled environment. The model-based controller for kinematic control of the non-holonomic omnidirectional platform is tested and validated through experimental results obtained from the simulated and the physical robot. The challenges of the kinematic model-based controller, including the mathematical and kinematic singularities, are discussed, and a solution enabling an optimal kinematic model-based controller is presented. The kinematic singularity associated with non-holonomic omnidirectional robots is resolved using a novel fuzzy logic based approach, successfully validated through simulation and experimental results. A reliable localization system is developed to enable navigation of the platform in GPS-denied environments such as orchards. To this end, stereo visual odometry (SVO) is adopted as the core of the non-GPS localization system. The challenges of SVO are introduced, with its accumulative drift, in both rotational and translational form, identified as the main challenge to overcome. Sensor fusion of an IMU with SVO is employed to reduce the rotational drift. A novel machine learning approach is proposed to reduce the translational drift using a neuro-fuzzy system and an RBF neural network: the machine learning system is formulated as a drift estimator for each image frame, and a correction is applied at that frame to avoid accumulation of the drift over time. Experimental results and analyses validate the effectiveness of the methodology in improving SVO accuracy. An enhanced SVO is thus obtained by combining the sensor fusion and machine learning methods to reduce both rotational and translational drift. Furthermore, to achieve a robust non-GPS localization system for the platform, sensor fusion of the wheel odometry and the enhanced SVO is performed, increasing the accuracy and robustness of the overall system. Experimental results and analyses are presented to support the methodology.
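    The per-frame drift-correction idea can be sketched as follows: a small RBF model predicts the translational drift of each frame, and the prediction is removed before the frame-to-frame motion is accumulated, so the error does not build up along the trajectory. The features, centres, and weights below are hypothetical placeholders, not the trained network from the thesis.

```python
import math

# Per-frame drift correction in the spirit of the thesis: a hand-initialised
# RBF model estimates the translational drift of each stereo-visual-odometry
# frame, and the estimate is subtracted before integration.

CENTRES = [0.2, 0.5, 0.8]        # RBF centres over a normalised feature
WEIGHTS = [0.004, 0.011, 0.006]  # output weights (metres of drift)
SIGMA = 0.2

def rbf_drift(feature):
    """Predicted translational drift for one frame."""
    return sum(w * math.exp(-((feature - c) ** 2) / (2 * SIGMA ** 2))
               for c, w in zip(CENTRES, WEIGHTS))

def integrate(frames):
    """Accumulate per-frame forward motion, correcting each frame's drift
    before it is added, so the error cannot accumulate over time."""
    x = 0.0
    for dx, feature in frames:
        x += dx - rbf_drift(feature)   # correction applied at this frame
    return x

frames = [(0.52, 0.45), (0.49, 0.55), (0.51, 0.50)]  # (raw SVO dx, feature)
print(round(integrate(frames), 4))
```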

    Mobile Robots Navigation

    Mobile robot navigation comprises several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
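    To make the interplay between the six activities concrete, here is a schematic closed loop in one dimension; every stage is a trivial stand-in for an entire research area covered by the book's chapters, shown only to make the data flow between the activities explicit.

```python
# Skeleton of the navigation cycle formed by the six activities.

def sense(true_pos):                        # (i) perception
    return true_pos + 0.01                  # noisy range-like reading

def explore(reading, goal):                 # (ii) exploration
    return 1.0 if goal > reading else -1.0  # pick the next direction to go

def update_map(world, reading):             # (iii) mapping
    world.append(reading)                   # grow the spatial representation
    return world

def localize(world):                        # (iv) localization
    return world[-1]                        # trust the latest observation

def plan(pose, goal):                       # (v) path planning
    return min(0.5, abs(goal - pose))       # step length toward the goal

def execute(pos, direction, step):          # (vi) path execution
    return pos + direction * step           # motor action

world, pos, goal = [], 0.0, 3.0
while abs(goal - pos) > 0.05:
    reading = sense(pos)
    direction = explore(reading, goal)
    world = update_map(world, reading)
    pose = localize(world)
    pos = execute(pos, direction, plan(pose, goal))
print(round(pos, 2), "steps:", len(world))
```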