    Implementation of NAO robot maze navigation based on computer vision and collaborative learning.

    Maze navigation using one or more robots has become a recurring challenge in the scientific literature and in real-life practice, with robot fleets having to find faster and better ways to navigate environments such as travel hubs and airports, or to evacuate disaster zones. Many methodologies have been explored to solve this issue, including the implementation of a variety of sensors and other signal-receiving systems. Most interestingly, camera-based techniques have become increasingly popular in this kind of scenario, given their robustness and scalability. In this paper, we implement an end-to-end strategy to address this scenario, allowing a robot to solve a maze autonomously by using computer vision and path planning. In addition, this robot shares the generated knowledge with a second robot by means of communication protocols; the second robot must adapt the solution to its own mechanical characteristics to be capable of solving the same challenge. The paper presents experimental validation of the four components of this solution, namely camera calibration, maze mapping, path planning and robot communication. Finally, we showcase some initial experimentation on a pair of robots with different mechanical characteristics. Further applications of this work include using the robot-to-robot communication for other tasks, such as teaching assistance, remote classes, and other innovations in higher education.
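
    As a rough illustration of the path-planning component described above, the following is a minimal sketch (not taken from the paper) of breadth-first search over a binary occupancy grid; the grid encoding, function name, and example maze are illustrative assumptions.

        from collections import deque

        def shortest_path(grid, start, goal):
            """Breadth-first search over a binary occupancy grid.

            grid[r][c] == 0 marks a free cell, 1 a wall; start and goal are (row, col).
            Returns the list of cells from start to goal, or None if unreachable.
            """
            rows, cols = len(grid), len(grid[0])
            came_from = {start: None}
            queue = deque([start])
            while queue:
                cell = queue.popleft()
                if cell == goal:
                    # Walk the parent links back to the start to recover the path.
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = came_from[cell]
                    return path[::-1]
                r, c = cell
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                        came_from[(nr, nc)] = cell
                        queue.append((nr, nc))
            return None

        # Hypothetical 3x4 maze: 0 = free cell, 1 = wall.
        maze = [[0, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 0]]
        print(shortest_path(maze, (0, 0), (0, 3)))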

    BCI-Based Navigation in Virtual and Real Environments

    A Brain-Computer Interface (BCI) is a system that enables people to control an external device with their brain activity, without the need for any muscular activity. Researchers in the BCI field aim to develop applications to improve the quality of life of severely disabled patients, for whom a BCI can be a useful channel for interaction with their environment. Some of these systems are intended to control a mobile device (e.g. a wheelchair). Virtual Reality is a powerful tool that can provide subjects with an opportunity to train and to test different applications in a safe environment. This technical review focuses on systems aimed at navigation, both in virtual and real environments. This work was partially supported by the Innovation, Science and Enterprise Council of the Junta de Andalucía (Spain), project P07-TIC-03310, the Spanish Ministry of Science and Innovation, project TEC 2011-26395, and by the European Regional Development Fund (ERDF).

    Spoken Language and Vision for Adaptive Human-Robot Cooperation

    Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation

    This paper addresses the challenge of humanoid robot teleoperation in a natural indoor environment via a Brain-Computer Interface (BCI). We leverage deep Convolutional Neural Network (CNN) based image and signal understanding to facilitate both real-time object detection and dry-electroencephalography (EEG) based decoding of human cortical brain bio-signals. We employ recent advances in dry-EEG technology to stream and collect the cortical waveforms from subjects while they fixate on variable Steady State Visual Evoked Potential (SSVEP) stimuli generated directly from the environment the robot is navigating. To this end, we propose the use of novel variable BCI stimuli by utilising the real-time video streamed via the on-board robot camera as visual input for SSVEP, where the CNN-detected natural scene objects are altered and flickered at differing frequencies (10 Hz, 12 Hz and 15 Hz). These stimuli are not akin to traditional stimuli, as both the dimensions of the flicker regions and their on-screen positions change depending on the scene objects detected. On-screen object selection via such a dry-EEG enabled SSVEP methodology facilitates the on-line decoding of human cortical brain signals, via a specialised secondary CNN, directly into teleoperation robot commands (approach object, or move in a specific direction: right, left or back). This SSVEP decoding model is trained on a priori offline experimental data in which very similar visual input is present for all subjects. The resulting classification demonstrates high performance, with a mean accuracy of 85% for the real-time robot navigation experiment across multiple test subjects. Comment: Accepted as a full paper at the 2019 International Conference on Robotics and Automation (ICRA).
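
    As a simplified illustration of turning decoded SSVEP activity into teleoperation commands, the sketch below substitutes a plain FFT power comparison over the three flicker frequencies named in the abstract for the paper's CNN decoder; the command labels, single-channel input, and sampling rate are illustrative assumptions.

        import numpy as np

        # Flicker frequencies from the abstract, mapped to assumed teleoperation commands.
        STIMULUS_HZ = {10.0: "approach object", 12.0: "move left", 15.0: "move right"}

        def decode_ssvep(eeg_window, fs):
            """Pick the stimulus whose frequency carries the most spectral power.

            eeg_window: 1-D array of samples from one occipital EEG channel.
            fs: sampling rate in Hz. Returns (frequency, command).
            """
            spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
            freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
            powers = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in STIMULUS_HZ}
            best = max(powers, key=powers.get)
            return best, STIMULUS_HZ[best]

        # Synthetic check: a 12 Hz response buried in noise should map to "move left".
        fs = 250
        t = np.arange(0, 2.0, 1.0 / fs)
        window = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(t.size)
        print(decode_ssvep(window, fs))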

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. When gathering information from the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows considering either methods that exploit the totality of the data (dense approaches) or a reduced set obtained from feature-extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. On the other hand, sparse visual data are extracted in the form of geometric primitives in order to implement a visual servoing control scheme that satisfies proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to re-arrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are also relevant in other contexts. In the field of surgical robotics, having reliable estimates of quantities that cannot be measured directly is both important and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, with the purpose of easing interfacing and use in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary; this leaves no possibility of actively adapting the input trajectories to optimize specific requirements on the estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimation of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
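
    As a generic illustration of the Kalman-based observers mentioned in the abstract, the following is a minimal sketch of one predict/update step of a linear Kalman filter; the matrices and the toy static-point example are placeholders, not the manuscript's endoscope projection model.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter.

            x, P : prior state estimate and covariance
            z    : current measurement
            F, H : state-transition and measurement matrices
            Q, R : process and measurement noise covariances
            """
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update with the measurement residual (innovation)
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy usage: track a static 2-D point from noisy direct measurements.
        x, P = np.zeros(2), np.eye(2)
        F, H = np.eye(2), np.eye(2)
        Q, R = 1e-4 * np.eye(2), 0.1 * np.eye(2)
        for z in ([1.02, 2.01], [0.97, 1.98], [1.01, 2.03]):
            x, P = kalman_step(x, P, np.array(z), F, H, Q, R)
        print(x)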

    A pipeline framework for robot maze navigation using computer vision, path planning and communication protocols.

    Maze navigation is a recurring challenge in robotics competitions, where the aim is to design a strategy for one or several entities to traverse the optimal path in a fast and efficient way. Numerous alternatives exist to do so, relying on different sensing systems. Recently, camera-based approaches have become increasingly popular for this scenario, due to their reliability and the possibility of migrating the resulting technologies to other application areas, mostly related to human-robot interaction. The aim of this paper is to present a pipeline methodology that enables a robot to solve a maze autonomously by means of computer vision and path planning. Afterwards, the robot communicates the learned experience to a second robot, which then solves the same challenge taking into account its own mechanical characteristics, which may differ from those of the first robot. The pipeline is divided into four steps: (1) camera calibration, (2) maze mapping, (3) path planning, and (4) communication. Experimental validation shows the efficiency of each step towards building this pipeline.
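
    As an illustrative sketch of step (1), camera calibration, the snippet below uses OpenCV's standard chessboard routines; the image folder, board size, and variable names are assumptions rather than details taken from the paper.

        import glob
        import cv2
        import numpy as np

        # Inner-corner layout of the printed chessboard target (illustrative value).
        PATTERN = (9, 6)

        # 3-D corner coordinates on the board plane (z = 0, in units of one square).
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

        obj_points, img_points, size = [], [], None
        for path in glob.glob("calibration_images/*.png"):  # hypothetical folder
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                size = gray.shape[::-1]

        # Intrinsic matrix K and distortion coefficients for the robot camera.
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
        print("reprojection RMS:", rms)
        print(K)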