71 research outputs found

    Humanoid odometric localization integrating kinematic, inertial and visual information

    We present a method for odometric localization of humanoid robots using standard sensing equipment: a monocular camera, an inertial measurement unit (IMU), joint encoders, and foot pressure sensors. Data from all these sources are integrated using the prediction-correction paradigm of the Extended Kalman Filter. The position and orientation of the torso, defined as the representative body of the robot, are predicted through kinematic computations based on joint encoder readings; an asynchronous mechanism triggered by the pressure sensors is used to update the placement of the support foot. The correction step of the filter uses as measurements the torso orientation, provided by the IMU, and the head pose, reconstructed by a VSLAM algorithm. The proposed method is validated on the humanoid NAO through two sets of experiments: open-loop motions aimed at assessing the localization accuracy with respect to a ground truth, and closed-loop motions where the humanoid pose estimates are used in real time as feedback signals for trajectory control.
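The prediction-correction loop described above can be sketched as a generic EKF in a few lines. This is a minimal illustration, not the paper's implementation: the state, models, and Jacobians (`f`, `F`, `h`, `H`) are placeholders for the kinematic prediction and the IMU/VSLAM corrections.

```python
import numpy as np

def ekf_predict(x, P, u, Q, f, F):
    """Propagate the state through a motion model f with Jacobian F."""
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_correct(x, P, z, R, h, H):
    """Correct with a measurement z (e.g. IMU orientation or VSLAM head pose)."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In the paper's scheme, the prediction step would run at the encoder rate while corrections arrive asynchronously from the IMU and the VSLAM module.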

    Direct Visual SLAM Fusing Proprioception for a Humanoid Robot


    Artificial Vision in the Nao Humanoid Robot

    UPC Master's final project carried out in collaboration with Universitat Rovira i Virgili, Departament d'Enginyeria Informàtica i Matemàtiques. RoboCup is an international robotic soccer competition held yearly to promote innovative research and applications in robotic intelligence. The Nao humanoid robot, designed and manufactured by the French company Aldebaran Robotics, is the new RoboCup Standard Platform robot and an advanced platform for developing new computer vision and robotics methods. This Master's thesis studies some fundamental issues of artificial vision on the Nao humanoid robot: color representation models, real-time segmentation techniques, object detection, and visual sonar approaches are the computer vision techniques applied to the Nao. The Nao's camera model, robot kinematics, and stereo-vision techniques are also studied and developed, together with the integration of the kinematic model and the robot perception model to play RoboCup soccer games and RoboCup technical challenges. The work is focused on the RoboCup environment, but all the computer vision and robotics algorithms can easily be extended to other robotics fields.

    Outlier-Robust State Estimation for Humanoid Robots*

    Contemporary humanoids are equipped with visual and LiDAR sensors that are effectively utilized for Visual Odometry (VO) and LiDAR Odometry (LO). Unfortunately, such measurements commonly suffer from outliers in a dynamic environment, since it is frequently assumed that only the robot is in motion and the world is static. To this end, robust state estimation schemes are mandatory in order for humanoids to symbiotically co-exist with humans in their daily dynamic environments. In this article, a robust Gaussian Error-State Kalman Filter for humanoid robot locomotion is presented. The introduced method automatically detects and rejects outliers without relying on any prior knowledge of measurement distributions or finely tuned thresholds. Subsequently, the proposed method is quantitatively and qualitatively assessed in realistic conditions with the full-size humanoid robot WALK-MAN v2.0 and the mini-size humanoid robot NAO to demonstrate its accuracy and robustness when outlier VO/LO measurements are present. Finally, in order to reinforce further research endeavours, our implementation is released as an open-source ROS/C++ package.
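For context, a conventional way to reject outlier VO/LO measurements is a hand-tuned Mahalanobis chi-square gate on the innovation; the robust Gaussian ESKF described above is designed precisely to avoid such fixed thresholds. The sketch below shows only that baseline mechanism, with illustrative names:

```python
import numpy as np

def gated_update(x, P, z, R, H, gate=5.991):  # 5.991 = chi-square 95%, 2 dof
    """Kalman update that skips measurements failing a Mahalanobis gate."""
    y = z - H @ x
    S = H @ P @ H.T + R
    d2 = float(y.T @ np.linalg.inv(S) @ y)    # squared Mahalanobis distance
    if d2 > gate:                             # measurement rejected as outlier
        return x, P, False
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P, True
```

The weakness of this baseline is exactly what the paper addresses: the gate value must be tuned per sensor and per environment.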

    Real-time Coordinate Estimation for Self-Localization of the Humanoid Robot Soccer BarelangFC

    A humanoid robot soccer team consists of more than three robots playing on the field. All the robots must play soccer as humans do: seeking, chasing, dribbling, and kicking the ball. To carry out these behaviors, a real-time localization system is required so that each robot knows not only its own position but also the positions of the other robots and of the objects in the field environment. In a real-time implementation, and given the robots' limited computational resources, the method must be fast and memory-efficient. This paper therefore presents a real-time localization method combining odometry with Monte Carlo Localization (MCL). To verify the performance of this method, experiments were carried out in real time. The experimental results show that the proposed method is able to estimate the X and Y coordinates of each robot's position on the field.
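A minimal Monte Carlo Localization step, as described above, can be sketched as follows. The field model, landmark, and noise values here are hypothetical, not the BarelangFC implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, odom, landmark, z_range, motion_std=0.05, meas_std=0.2):
    """One MCL iteration: motion update, range-measurement weighting, resample.

    particles: (n, 3) array of (x, y, theta) hypotheses.
    """
    dx, dy, dth = odom
    n = len(particles)
    # Motion update: apply odometry plus noise to every particle.
    particles = particles + np.array([dx, dy, dth]) + rng.normal(0, motion_std, (n, 3))
    # Measurement update: Gaussian likelihood of the observed range to a known landmark.
    pred = np.hypot(particles[:, 0] - landmark[0], particles[:, 1] - landmark[1])
    w = np.exp(-0.5 * ((pred - z_range) / meas_std) ** 2) + 1e-12
    w /= w.sum()
    # Resample in proportion to the weights.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx]

def estimate(particles):
    return particles.mean(axis=0)   # pose estimate = particle mean
```

A fast, fixed-size particle set like this is one way to meet the memory and computation limits the abstract mentions.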

    The NAO Backpack: An Open-hardware Add-on for Fast Software Development with the NAO Robot

    We present an open-source accessory for the NAO robot that makes it possible to test computationally demanding algorithms on an external platform while preserving the robot's autonomy and mobility. The platform has the form of a backpack, which can be 3D printed and replicated, and holds an ODROID XU4 board to process algorithms externally with ROS compatibility. We also provide a software bridge between the B-Human framework and ROS to access the robot's sensors in close to real time. We tested the platform in several robotics applications such as data logging, visual SLAM, and robot vision with deep learning techniques. The CAD model, hardware specifications, and software are available online for the benefit of the community: https://github.com/uchile-robotics/nao-backpack. Comment: Accepted at the RoboCup Symposium 2017. The final version will be published by Springer.

    An Adaptive Augmented Vision-based Ellipsoidal SLAM for Indoor Environments

    In this paper, the problem of Simultaneous Localization And Mapping (SLAM) is addressed via a novel augmented landmark vision-based ellipsoidal SLAM. The algorithm is implemented on a NAO humanoid robot and is tested in an indoor environment. The main feature of the system is the implementation of SLAM with a monocular vision system. Distinguished landmarks referred to as NAOmarks are employed to localize the robot via its monocular vision system. We introduce the notion of robotic augmented reality (RAR) and present a monocular Extended Kalman Filter (EKF)/ellipsoidal SLAM in order to improve performance, alleviate the computational effort, provide landmark identification, and simplify the data association problem. The proposed SLAM algorithm is implemented in real time to further calibrate the ellipsoidal SLAM parameters and noise bounds, and to improve its overall accuracy. The augmented EKF/ellipsoidal SLAM algorithms are compared with the regular EKF/ellipsoidal SLAM methods, and the merits of each algorithm are also discussed in the paper. The real-time experimental and simulation studies suggest that the adaptive augmented ellipsoidal SLAM is more accurate than the conventional EKF/ellipsoidal SLAM.
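Uniquely identifiable markers such as NAOmarks simplify data association because the marker ID indexes the landmark directly, with no nearest-neighbour matching. The sketch below illustrates that idea with a per-landmark Kalman update in world coordinates; it is a simplified stand-in, not the paper's EKF/ellipsoidal formulation:

```python
import numpy as np

class MarkerSLAM:
    """Toy landmark map keyed by marker ID (association is trivial)."""

    def __init__(self):
        self.landmarks = {}                       # marker id -> (mean, covariance)

    def observe(self, marker_id, z_xy, R):
        """z_xy: landmark position measured in the world frame (2-vector)."""
        z = np.asarray(z_xy, float)
        if marker_id not in self.landmarks:       # first sighting: initialize
            self.landmarks[marker_id] = (z, np.asarray(R, float))
            return self.landmarks[marker_id]
        m, P = self.landmarks[marker_id]          # Kalman update with H = I
        S = P + R
        K = P @ np.linalg.inv(S)
        m = m + K @ (z - m)
        P = (np.eye(2) - K) @ P
        self.landmarks[marker_id] = (m, P)
        return self.landmarks[marker_id]
```

In the full problem the robot pose is uncertain too and must be estimated jointly, which is what the EKF/ellipsoidal machinery in the paper handles.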

    Controlo visual de uma cabeça humanóide usando alvos fixos

    Master's in Industrial Automation Engineering. Assuming the existence of fixed characteristics in the scene, this work addresses the thesis that vision can play a major role in humanoid balance and navigation, as it does in humans. The Humanoid Project of the University of Aveiro (PHUA) is used as a framework to develop this thesis, and all the mechanical components of the PHUA's neck were rebuilt to guarantee a reliable infrastructure. Image processing and tracking algorithms were developed to find and follow a fixed target in the image, providing the visual feedback for the neck's tracking control algorithm, which was implemented to keep the head on the target. The neck position information can later be fused with the humanoid's other sensor data to improve the robot's balance. Using the angles of the pan-and-tilt servomotors and the known distance from the camera to the target at each instant, equations were derived and tested that translate the pose of the pan-and-tilt unit in the world, and therefore estimate the robot's motion. Software development is based on the Robot Operating System (ROS), following a modular design with independent operating modes. An industrial anthropomorphic robot (a Fanuc manipulator) was used to reproduce the humanoid's movements in order to test the whole tracking and ego-motion system. The results show that the computer vision algorithms perform well in the context of the application, and that velocity-based tracking control is the best choice for a simple and reliable visual tracking system for humanoid robots.
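The velocity-based tracking idea can be illustrated with a simple proportional law that drives the pan and tilt velocities from the target's pixel offset to the image centre. The gain and image size below are illustrative, not the PHUA values:

```python
def pantilt_velocity(target_px, image_size=(640, 480), kp=0.002):
    """Proportional visual servo: pixel error -> (pan, tilt) velocity in rad/s."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = target_px[0] - cx, target_px[1] - cy   # pixel error from centre
    return (-kp * ex, -kp * ey)                     # move so the error shrinks
```

Commanding velocities rather than absolute angles is what makes the tracking smooth: the head keeps moving while the target drifts, instead of jumping between discrete poses.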

    On particle filter localization and mapping for Nao robot

    The performance of autonomous mobile robots in an indoor environment relies on an effective detection and localization system. Self-localization in an indoor environment has been studied and tested experimentally on the humanoid robot Nao. The solution uses a pre-existing map with known and unknown features. The aim of this thesis is to use a map of visual features and the Monte Carlo scheme (particle filters) for localization and navigation. The Nao robot's cameras are used to detect Naomarks; detecting these features provides an estimate of their distances relative to the current robot position. These measurements are applied to a visual localization algorithm that uses a pair of known features to localize the robot, and are further fused into a particle filter algorithm that estimates the pose of the robot within the map. The particle filter implementation is based on the C++ programming language. A simple path planning scheme was implemented for continuous localization while navigating paths with obstacles. The algorithms have been tested against measurements provided by an external sensor. The results indicate that the robot can effectively navigate from a start position to a predefined location while avoiding obstacles on its path.
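Localizing from a pair of known features reduces to intersecting two range circles, which yields up to two candidate positions; a second cue (e.g. heading) is needed to pick between them. A sketch under these assumptions:

```python
import math

def localize_two_landmarks(l1, l2, r1, r2):
    """Return the (up to two) positions at range r1 from l1 and r2 from l2."""
    (x1, y1), (x2, y2) = l1, l2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                  # circles do not intersect
    a = (r1**2 - r2**2 + d**2) / (2 * d)           # distance along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))          # offset from the baseline
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]
```

In a pipeline like the one described above, either candidate can then seed or weight the particle filter, which resolves the remaining ambiguity over time.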