151 research outputs found

    Robot Soccer Strategy Based on Hierarchical Finite State Machine to Centralized Architectures

    Full text link
    © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    [EN] Coordination among robots allows a robot soccer team to perform better through coordinated behaviors. This requires that the team strategy be designed in line with the conditions of the game. This paper presents an architecture for robot soccer team coordination involving the dynamic assignment of roles among the players. The strategy is divided into tactics, which are selected by a Hierarchical State Machine. Once a tactic has been selected, roles are assigned to players depending on the game conditions. Each role performs defined behaviors, also selected by the Hierarchical State Machine. To carry out the behaviors, the robots are controlled by the lowest level of the Hierarchical State Machine. The proposed architecture is designed for robot soccer teams with a central decision-making body and global perception. 200 games were played against a team with constant roles; the proposed team won 92.5% of the games, scored more goals on average than the opponent, and showed a higher percentage of ball possession. A Student's t-test supports these differences within measurement uncertainty. This architecture allowed an intuitive design of the robot soccer strategy, facilitating the design of the rules for role selection and of the behaviors performed by the players depending on the game conditions.
    Collaborative behaviors, and uniformity in the players' behaviors during tactic and behavior transitions, were observed.

    Jose Guillermo Guarnizo was funded by a grant from the Departamento Administrativo de Ciencia, Tecnología e Innovación COLCIENCIAS, Colombia.

    Guarnizo, JG.; Mellado Arteche, M. (2016). Robot Soccer Strategy Based on Hierarchical Finite State Machine to Centralized Architectures. IEEE Latin America Transactions. 14(8):3586-3596. doi:10.1109/TLA.2016.7786338
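The layered decision flow this abstract describes (a tactic selected at the top level, roles assigned within the tactic, and behaviors resolved per role) might be sketched as follows. All names, thresholds, and the nearest-player assignment rule here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a three-layer hierarchical state machine for
# robot soccer coordination. Tactics, roles, behaviors, and the
# conditions that switch them are invented for illustration.

def select_tactic(score_diff, ball_x):
    """Top layer: choose a tactic from coarse game conditions."""
    if score_diff < 0:
        return "attack"
    if ball_x < 0:          # ball in own half
        return "defend"
    return "balanced"

TACTIC_ROLES = {
    "attack":   ["striker", "striker", "midfielder"],
    "defend":   ["defender", "defender", "midfielder"],
    "balanced": ["striker", "midfielder", "defender"],
}

def assign_roles(tactic, dists_to_ball):
    """Middle layer: hand out the tactic's roles, nearest player first."""
    order = sorted(range(len(dists_to_ball)), key=lambda i: dists_to_ball[i])
    roles = [None] * len(order)
    for role, player in zip(TACTIC_ROLES[tactic], order):
        roles[player] = role
    return roles

def select_behavior(role, has_ball):
    """Bottom layer: each role resolves to a concrete behavior."""
    if has_ball:
        return "dribble_to_goal" if role == "striker" else "pass_forward"
    return {"striker": "position_attack",
            "midfielder": "support",
            "defender": "cover_goal"}[role]
```

Keeping each layer a pure function of the game state makes the rules for role selection easy to inspect, which matches the "intuitive design" the abstract claims for the architecture.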

    Advances in Robotics, Automation and Control

    Get PDF
    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems developed by man.

    Sensory integration model inspired by the superior colliculus for multimodal stimuli localization

    Get PDF
    Sensory information processing is an important feature of robotic agents that must interact with humans or the environment. For example, numerous attempts have been made to develop robots that have the capability of performing interactive communication. In most cases, individual sensory information is processed and, based on this, an output action is performed. In many robotic applications, visual and audio sensors are used to emulate human-like communication. The Superior Colliculus, located in the mid-brain region of the nervous system, carries out similar functionality of audio and visual stimuli integration in both humans and animals. In recent years, numerous researchers have attempted integration of sensory information using biological inspiration. A common focus lies in generating a single output state (i.e. a multimodal output) that can localize the source of the audio and visual stimuli. This research addresses the problem and attempts to find an effective solution by investigating various computational and biological mechanisms involved in the generation of multimodal output. A primary goal is to develop a biologically inspired computational architecture using artificial neural networks. The advantage of this approach is that it mimics the behaviour of the Superior Colliculus, which has the potential of enabling more effective human-like communication with robotic agents. The thesis describes the design and development of the architecture, which is constructed from artificial neural networks using radial basis functions. The primary inspiration for the architecture came from emulating the function of the top and deep layers of the Superior Colliculus, due to their visual and audio stimuli localization mechanisms, respectively.
    The integration experimental results have successfully demonstrated the key issues, including low-level multimodal stimuli localization, dimensionality reduction of the audio and visual input space without affecting stimuli strength, and stimuli localization with enhancement and depression phenomena. Comparisons have been made between computational and neural-network-based methods, and between unimodal and multimodal integrated outputs, in order to determine the effectiveness of the approach.
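The enhancement effect the abstract mentions (a stimulus present in both modalities producing a stronger, equally well-localized response) can be reproduced with a toy radial-basis-function population. This is an illustrative sketch, not the thesis architecture; the azimuth range, gains, and Gaussian width are assumptions:

```python
import math

# Toy RBF population: each unit has a preferred azimuth; its response to
# a stimulus falls off as a Gaussian of the angular distance. Summing the
# visual and auditory maps before readout models multimodal integration.

def rbf_map(stimulus_azimuth, gain, centers, sigma=10.0):
    """Population response of the RBF units to a single stimulus."""
    return [gain * math.exp(-((stimulus_azimuth - c) ** 2) / (2 * sigma ** 2))
            for c in centers]

def localize(activity, centers):
    """Read out the azimuth of the most active unit."""
    peak = max(range(len(activity)), key=lambda i: activity[i])
    return centers[peak]

centers = list(range(-90, 91, 10))               # preferred azimuths, degrees
visual = rbf_map(20.0, gain=0.4, centers=centers)  # weak visual stimulus
audio = rbf_map(20.0, gain=0.4, centers=centers)   # weak auditory stimulus
multimodal = [v + a for v, a in zip(visual, audio)]

# The combined map peaks at the same azimuth but with twice the strength:
# a simple analogue of multimodal enhancement.
```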

    Arquitectura Basada en Roles Aplicada en Equipos de Fútbol de Robots con Control Centralizado

    Full text link
    [EN] Robot soccer offers an adequate domain in which to design and validate architectures for robot coordination. One classification refers to centralized architectures, which correspond to robot soccer environments with global perception and centralized control of the robots, using a single decision-making system. This paper presents a centralized robot soccer architecture based on roles, where one role is assigned to each player in order to select a specific behaviour depending on game conditions. Roles are assigned using an assignment function, which is activated when the ball changes quadrant in the playing field. This strategy has been compared in simulated games against a team with constant roles, and against a team with a hierarchical strategy that assigns roles based on a previously selected tactic. The results showed not only better performance for the team with the role-based strategy, but also uniformity in the behaviours performed by the players during role and behaviour transitions.

    Jose Guillermo Guarnizo was funded by a grant from the Departamento Administrativo de Ciencia, Tecnología e Innovación COLCIENCIAS, Colombia.

    Guarnizo Marín, JG.; Mellado Arteche, M. (2016). Arquitectura Basada en Roles Aplicada en Equipos de Fútbol de Robots con Control Centralizado. Revista Iberoamericana de Automática e Informática Industrial RIAI. 13(3):370-380. doi:10.1016/j.riai.2016.05.005
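The quadrant-triggered assignment function this abstract describes could look roughly like the sketch below. The quadrant numbering, role names, and nearest-player ordering are assumptions for illustration, not the paper's actual function:

```python
# Hypothetical sketch: roles are recomputed only when the ball crosses
# into a new quadrant of the field, which keeps assignments stable
# between trigger events.

def quadrant(ball_x, ball_y):
    """Quadrant index of the field, with the origin at the center."""
    return (0 if ball_x >= 0 else 1) + (0 if ball_y >= 0 else 2)

class RoleAssigner:
    ROLES = ["striker", "midfielder", "defender"]  # illustrative roles

    def __init__(self):
        self.last_quadrant = None
        self.roles = None

    def update(self, ball_x, ball_y, dists_to_ball):
        """Reassign roles only when the ball changes quadrant."""
        q = quadrant(ball_x, ball_y)
        if q != self.last_quadrant:
            self.last_quadrant = q
            order = sorted(range(len(dists_to_ball)),
                           key=lambda i: dists_to_ball[i])
            self.roles = [None] * len(order)
            for role, player in zip(self.ROLES, order):
                self.roles[player] = role
        return self.roles
```

Gating reassignment on quadrant changes, rather than on every frame, is one plausible reading of why the abstract reports uniform behaviors during role transitions: roles cannot oscillate while the ball stays in one quadrant.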


    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Get PDF
    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence, by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Get PDF
    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with equivalent or better than human-level performance. Furthermore, detections must run in real-time to allow vehicles to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera and one for the multi-beam lidar) and fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly is able, for an agricultural use case, to detect humans better and at longer ranges (45-90 m), using a smaller memory footprint and 7.3-times faster processing.
    The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, and as GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents many scientific contributions to the state of the art within perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality.
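The fusion step the abstract describes, combining detections from several sensors into an occupancy grid via inverse sensor models, is commonly implemented with additive log-odds updates. The sketch below is a generic illustration of that standard technique, not the thesis code; the grid size and detection probabilities are invented:

```python
import math

# Generic log-odds occupancy grid. Each detector contributes a
# probability that a cell is occupied; storing log-odds means evidence
# from independent sensors simply adds, which is what makes this a
# convenient common format for multi-sensor fusion.

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, width, height):
        # 0.0 log-odds == probability 0.5 == unknown
        self.cells = [[0.0] * width for _ in range(height)]

    def update(self, x, y, p_occupied):
        """Fold one detection (from any sensor) into cell (x, y)."""
        self.cells[y][x] += log_odds(p_occupied)

    def probability(self, x, y):
        """Recover occupancy probability from accumulated log-odds."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.cells[y][x]))

grid = OccupancyGrid(10, 10)
# A camera and a lidar detector both report an obstacle in one cell;
# the fused probability exceeds either individual report.
grid.update(3, 4, 0.8)   # camera detection (illustrative probability)
grid.update(3, 4, 0.7)   # lidar detection (illustrative probability)
```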

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Get PDF
    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
    The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
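The 2D range images mentioned in this abstract are typically obtained by spherically projecting the 3D point cloud onto a grid indexed by azimuth and elevation. The following is a generic sketch of that projection, not the thesis implementation; the resolution and vertical field of view are illustrative values:

```python
import math

# Project (x, y, z) lidar points onto a 2D range image: columns index
# azimuth, rows index elevation, and each cell stores the measured range.
# This turns an unordered point cloud into an image that 2D deep
# learning models can consume.

def to_range_image(points, h_bins=360, v_bins=16, v_fov=(-15.0, 15.0)):
    """Map a list of (x, y, z) points to a v_bins x h_bins grid of ranges."""
    image = [[0.0] * h_bins for _ in range(v_bins)]
    v_lo, v_hi = v_fov
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        azimuth = math.degrees(math.atan2(y, x))       # -180..180 degrees
        elevation = math.degrees(math.asin(z / r))     # vertical angle
        col = min(h_bins - 1, int((azimuth + 180.0) / 360.0 * h_bins))
        row = min(v_bins - 1,
                  max(0, int((elevation - v_lo) / (v_hi - v_lo) * v_bins)))
        image[row][col] = r
    return image
```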