24 research outputs found

    Robot localization and path planning based on potential field for map building in static environments

    Get PDF
    For static environments in which the landmarks are also regarded as obstacles, this paper proposes a map-building algorithm that combines simultaneous localization and path planning based on the potential field. The robot derives its motion control law from potential field theory while performing simultaneous localization and mapping, and the subsequent prediction and state estimation are carried out using the predicted control law. With potential-field path planning, the minimum influence range of the repulsive potential around obstacles can be adjusted to suit environments in which the localized landmarks simultaneously act as obstacles. Experiments show that the proposed algorithm enables the robot to perform simultaneous localization and mapping while the localized landmarks also serve as obstacles, and analysis of the relevant performance indicators verifies that the algorithm yields consistent estimates.
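    As an illustration of the repulsive-field adjustment the abstract refers to, the following minimal Python sketch computes one gradient step on a combined attractive/repulsive potential in which each localized landmark is also treated as an obstacle; the gains k_att and k_rep, the influence radius rho_0, and the step size are illustrative assumptions rather than values from the paper.

        import numpy as np

        def potential_field_step(pose, goal, landmarks, k_att=1.0, k_rep=100.0, rho_0=2.0, step=0.05):
            """One gradient-descent step on U = U_att + U_rep, where every localized
            landmark doubles as an obstacle with adjustable influence radius rho_0."""
            force = -k_att * (pose - goal)            # attractive pull toward the goal
            for lm in landmarks:
                diff = pose - lm
                rho = np.linalg.norm(diff)
                if 0.0 < rho < rho_0:                 # repulsion only acts inside the influence range
                    force += k_rep * (1.0 / rho - 1.0 / rho_0) * diff / rho ** 3
            return pose + step * force

        # Hypothetical usage: landmarks estimated by SLAM double as obstacles.
        pose, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
        landmarks = [np.array([2.0, 2.1]), np.array([3.5, 4.0])]
        for _ in range(200):
            pose = potential_field_step(pose, goal, landmarks)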

    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Full text link
    Dynamic Camera Clusters (DCCs) are multi-camera systems in which one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to that of a standard static multi-camera configuration. Comment: ICRA 201
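    A sketch of the joint estimation problem described above, assuming a standard nonlinear least-squares formulation: the stacked parameter vector holds the static kinematic parameters of the transformation chain plus one unknown 2-DOF joint-angle pair per calibration frame, and the residuals are the per-frame reprojection errors. The toy reprojection model, the parameter sizes, and the synthetic data are assumptions for illustration only.

        import numpy as np
        from scipy.optimize import least_squares

        # Toy stand-in for the reprojection error of frame i; the real problem would
        # project calibration-target points through the gimbal's kinematic chain.
        def reproj(kin, joint, i, meas):
            pred = np.concatenate([kin[:2] * np.cos(joint), kin[2:] * np.sin(joint)])
            return pred - meas[i]

        def residuals(params, n_frames, meas):
            kin = params[:4]                              # static kinematic parameters (assumed size)
            joints = params[4:].reshape(n_frames, 2)      # unknown 2-DOF joint angles, one pair per frame
            return np.concatenate([reproj(kin, joints[i], i, meas) for i in range(n_frames)])

        n_frames = 10
        meas = np.random.randn(n_frames, 4)               # synthetic per-frame "observations"
        x0 = np.full(4 + 2 * n_frames, 0.1)               # kinematic params and joint angles, estimated jointly
        sol = least_squares(residuals, x0, args=(n_frames, meas))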

    Two-Stage Focused Inference for Resource-Constrained Collision-Free Navigation

    Get PDF
    Long-term operation of resource-constrained robots typically requires that hard decisions be made about which data to process and/or retain. The question then arises of which data are most useful to keep to achieve the task at hand. As the spatial scale grows, the size of the map grows without bound, and as the temporal scale grows, the number of measurements grows without bound. In this work, we present the first known approach to tackle both of these issues. The approach has two stages. First, a subset of the variables (focused variables) is selected that is most useful for a particular task. Second, a task-agnostic and principled method (focused inference) is proposed to select a subset of the measurements that maximizes the information over the focused variables. The approach is then applied to the specific task of robot navigation in an obstacle-laden environment. A landmark selection method is proposed to minimize the probability of collision, and the set of measurements that best localizes those landmarks is then selected. It is shown that the two-stage approach outperforms both selecting only measurements and selecting only landmarks in terms of minimizing the probability of collision. The performance improvement is validated through detailed simulations and real experiments on a Pioneer robot. Supported by the United States Army Research Office Multidisciplinary University Research Initiative (Grant W911NF-11-1-0391), the United States Office of Naval Research (Grant N00014-11-1-0688), and the National Science Foundation (Award IIS-1318392).
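    The second stage lends itself to a greedy sketch: assuming linear-Gaussian measurement models (H, R) and a log-determinant (entropy) objective over the focused variables, repeatedly fuse the candidate measurement that most reduces the focused marginal covariance. The objective, the measurement models, and the function below are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def greedy_focused_selection(Sigma, H_list, R_list, focused_idx, budget):
            """Greedily pick measurements that most shrink the log-det covariance
            of the focused variables (indices in focused_idx)."""
            def fuse(Sigma, H, R):                         # standard Kalman covariance update
                S = H @ Sigma @ H.T + R
                return Sigma - Sigma @ H.T @ np.linalg.solve(S, H @ Sigma)

            def focused_logdet(Sigma):
                return np.linalg.slogdet(Sigma[np.ix_(focused_idx, focused_idx)])[1]

            selected = []
            for _ in range(budget):
                gains = {j: focused_logdet(Sigma) - focused_logdet(fuse(Sigma, H, R))
                         for j, (H, R) in enumerate(zip(H_list, R_list)) if j not in selected}
                if not gains:
                    break
                best = max(gains, key=gains.get)
                if gains[best] <= 0.0:
                    break
                selected.append(best)
                Sigma = fuse(Sigma, H_list[best], R_list[best])
            return selected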

    Aprendizado e controle de robôs móveis autônomos utilizando atenção visual (Learning and Control of Autonomous Mobile Robots Using Visual Attention)

    Get PDF
    This paper describes a reinforcement learning model that is able to learn complex control tasks using continuous states and actions. The model, which is based on the continuous actor-critic architecture, uses normalized radial basis function networks to learn the value of states and actions, and is able to configure the structure of these networks automatically during learning. In addition, a selective visual attention mechanism is used to perceive the environment and the states. To validate the proposed model, a task that is relatively complex for reinforcement learning algorithms was used: guiding a ball to the goal in a simulated robot soccer environment. The experiments show that the proposed model accomplishes the task very successfully using visual information only.
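    A compact sketch of the normalized-RBF actor-critic update described in the abstract, assuming a TD(0) critic and Gaussian exploration around the actor's mean action; the learning rates, feature widths, and exploration noise are illustrative, and the paper's automatic network-structure configuration and visual-attention front end are not reproduced here.

        import numpy as np

        def normalized_rbf(state, centers, width):
            """Normalized radial basis features: activations sum to one across the network."""
            act = np.exp(-np.sum((centers - state) ** 2, axis=1) / (2.0 * width ** 2))
            return act / (np.sum(act) + 1e-12)

        def select_action(state, w_actor, centers, width, sigma=0.2):
            """Gaussian exploration around the actor's mean continuous action."""
            phi = normalized_rbf(state, centers, width)
            return w_actor @ phi + sigma * np.random.randn(w_actor.shape[0])

        def actor_critic_update(state, action, reward, next_state, w_critic, w_actor,
                                centers, width, gamma=0.95, alpha=0.1, beta=0.05):
            """TD(0) update: the critic learns V(s); the actor moves its mean action
            toward explored actions that yielded a positive TD error."""
            phi = normalized_rbf(state, centers, width)
            phi_next = normalized_rbf(next_state, centers, width)
            td_error = reward + gamma * (w_critic @ phi_next) - (w_critic @ phi)
            w_critic += alpha * td_error * phi
            w_actor += beta * td_error * np.outer(action - w_actor @ phi, phi)
            return td_error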

    Robots Looking for Interesting Things: Extremum Seeking Control on Saliency Maps

    Get PDF
    This paper presents a novel approach to increasing the amount of visual stimuli in sensor measurements using saliency maps. A saliency map is a combination of normalized feature maps in different channels (e.g. color, intensity) that represents the relative strength of visual stimuli in an image. The total saliency is higher when the camera is looking at a scene with more interesting things in the field of view, and vice versa. We employ extremum seeking control to find a camera position that corresponds to a local maximum of the saliency value. We combine the global properties of simplex optimization methods with the local search properties and dynamic response of extremum seeking control to create a novel algorithm that is more likely to find a global maximum than conventional extremum seeking control. Simulations and experiments are presented to show the strength of this approach.
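    The conventional extremum-seeking component can be sketched as classic sinusoidal-perturbation ESC with the measured total saliency as the objective (the simplex hybridization that gives the paper's algorithm its global properties is not reproduced here); the dither amplitude, filter bandwidth, gains, and the one-dimensional saliency profile are illustrative assumptions.

        import numpy as np

        def extremum_seeking(saliency, theta0, a=0.05, omega=2.0, k=0.5, w_h=5.0, dt=0.01, steps=5000):
            """Perturbation-based ESC: dither the camera angle, demodulate the measured
            saliency to estimate its local gradient, and climb toward a saliency peak."""
            theta_hat, lp = theta0, 0.0                       # parameter estimate and low-pass filter state
            for i in range(steps):
                t = i * dt
                theta = theta_hat + a * np.sin(omega * t)     # dithered camera angle
                y = saliency(theta)                           # total saliency at this angle
                lp += dt * w_h * (y - lp)                     # low-pass state; (y - lp) is the high-passed signal
                theta_hat += dt * k * (y - lp) * np.sin(omega * t)  # demodulate and ascend the gradient
            return theta_hat

        # Hypothetical one-dimensional saliency profile with a single peak at 0.8 rad.
        peak_angle = extremum_seeking(lambda th: np.exp(-8.0 * (th - 0.8) ** 2), theta0=0.5)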