
    Visual task identification and characterisation using polynomial models

    Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that, to date, it is not possible to predict robot behaviour based on theoretical models. Instead, current methods of developing robot control code still require a substantial trial-and-error component in the software design process. This paper proposes a method of dealing with these issues by a) establishing task-achieving sensor-motor couplings through robot training, and b) representing these couplings through transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a mobile robot to track a moving football and subsequently modelling this task using the NARMAX system identification technique.
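
    The NARMAX approach mentioned above represents a learned sensor-motor coupling as a transparent polynomial in lagged inputs and outputs, whose coefficients can then be inspected and analysed. The sketch below, using illustrative lags, degree, and toy data rather than the paper's actual model structure, shows how such a polynomial can be identified with ordinary least squares.

```python
# Minimal sketch of fitting a polynomial NARMAX-style model by least squares.
# The lags, polynomial degree, and toy signals are illustrative assumptions,
# not the model reported in the paper.
import numpy as np

def build_regressors(u, y, lag=2):
    """Build polynomial regressors (linear + pairwise products) from lagged u and y."""
    rows, targets = [], []
    for t in range(lag, len(y)):
        base = np.concatenate([y[t - lag:t], u[t - lag:t]])           # lagged terms
        quad = [base[i] * base[j]                                     # second-order terms
                for i in range(len(base)) for j in range(i, len(base))]
        rows.append(np.concatenate([[1.0], base, quad]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Toy data: sensor reading u (e.g. ball bearing) and motor command y (e.g. turn rate)
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1] - 0.1 * u[t - 2] * y[t - 1]

X, target = build_regressors(u, y)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)   # transparent polynomial coefficients
print(theta[:6])
```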

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem, where the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static obstacles (walls, stopped robots) and dynamic obstacles (moving robots). Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including cooperative reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution of cooperating robot teams.

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements through cooperative control, learning, and adaptation. While both of the component areas, i.e., Robotics and WSN, are very well known and well explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields that are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
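
    The robotic-router example above can be made concrete with a very small placement calculation: given a sender, a receiver, and an assumed communication range, relay robots are spaced evenly along the connecting segment so that every hop stays within range. The range value and the straight-line placement are simplifying assumptions for illustration, not a method taken from the survey.

```python
# Hedged sketch of placing relay robots between a sender and a receiver so that
# consecutive hops stay within an assumed communication range.
import math

def relay_waypoints(sender, receiver, comm_range):
    """Return relay positions that keep every hop within comm_range."""
    dx, dy = receiver[0] - sender[0], receiver[1] - sender[1]
    dist = math.hypot(dx, dy)
    hops = max(1, math.ceil(dist / comm_range))     # hops needed end to end
    return [(sender[0] + dx * k / hops, sender[1] + dy * k / hops)
            for k in range(1, hops)]                # one robot per intermediate point

print(relay_waypoints((0.0, 0.0), (95.0, 0.0), comm_range=30.0))
# -> three relay positions roughly 23.75 m apart
```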

    A practical multirobot localization system

    We present a fast and precise vision-based software system intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which makes it possible to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon, so that it can be used as an enabling technology for various mobile robotics problems.
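
    The kind of model the abstract refers to can be illustrated with a back-of-the-envelope pinhole-camera estimate: localization precision scales with the metric size of a pixel at the working distance, reduced by a sub-pixel detection factor, and coverage follows from the image dimensions. The formulas and numbers below are assumptions based on the standard pinhole model, not the model published with WhyCon.

```python
# Rough pinhole-camera estimate of expected localization precision and coverage.
# focal_px, image size, distance, and the sub-pixel factor are illustrative values.

def pinhole_estimates(focal_px, image_width_px, image_height_px,
                      distance_m, subpixel_frac=0.1):
    metres_per_pixel = distance_m / focal_px                  # pixel footprint at distance
    precision_m = subpixel_frac * metres_per_pixel            # sub-pixel detection shrinks error
    coverage_m2 = (image_width_px * metres_per_pixel) * \
                  (image_height_px * metres_per_pixel)        # observed area at that distance
    return precision_m, coverage_m2

precision, coverage = pinhole_estimates(focal_px=800, image_width_px=640,
                                        image_height_px=480, distance_m=2.0)
print(f"~{precision * 1000:.2f} mm precision over ~{coverage:.1f} m^2")
```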

    Towards Declarative Safety Rules for Perception Specification Architectures

    Agriculture has a high number of fatalities compared to other blue-collar fields; additionally, the decreasing population in rural areas is resulting in a reduced workforce. These issues have led to an increased focus on improving the efficiency of agriculture and introducing autonomy. Field robots are an increasingly promising branch of robotics targeted at full automation in agriculture. The safety aspect, however, is rarely addressed in connection with safety standards, which limits real-world applicability. In this paper we present an analysis of a vision pipeline in connection with functional-safety standards, in order to propose solutions for ascertaining that the system operates as required. Based on the analysis, we demonstrate a simple mechanism for verifying that a vision pipeline is functioning correctly, thus improving the safety of the overall system. Comment: Presented at DSLRob 2015 (arXiv:1601.00877).
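
    One plausible shape for such a verification mechanism is a runtime monitor that checks simple invariants on the pipeline's output, for example frame freshness and a plausible number of detections, and commands a safe stop when a check fails. The checks and thresholds below are assumptions for illustration; they are not the rules proposed in the paper.

```python
# Illustrative runtime monitor for a vision pipeline: flags stale frames or
# implausible detection counts so the robot can degrade to a safe state.
import time

class VisionWatchdog:
    def __init__(self, max_frame_age_s=0.2, max_detections=50):
        self.max_frame_age_s = max_frame_age_s
        self.max_detections = max_detections
        self.last_frame_time = time.monotonic()

    def check(self, detections):
        """Return True if the latest pipeline output looks healthy."""
        now = time.monotonic()
        fresh = (now - self.last_frame_time) <= self.max_frame_age_s
        plausible = len(detections) <= self.max_detections
        self.last_frame_time = now
        return fresh and plausible

watchdog = VisionWatchdog()
if not watchdog.check(detections=[]):
    print("vision fault: command safe stop")   # hand over to a safe fallback behaviour
```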

    A robot swarm assisting a human fire-fighter

    Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this paper we discuss the technology developed for a swarm of robots assisting fire-fighters. We explain the swarming algorithms that provide the functionality by which the robots react to and follow humans while requiring no communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides the means to locate the robots and humans, so the robot swarm is able to provide guidance information to the humans. Together with the fire-fighters, we explored how the robot swarm should feed information back to the human fire-fighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.
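
    A communication-free following behaviour of the kind described above can be sketched as a simple attraction/repulsion rule: each robot steers toward the sensed human and away from nearby robots using only local sensing. The gains and distances below are illustrative assumptions, not the GUARDIANS parameters.

```python
# Minimal attraction/repulsion sketch of a robot following a human while
# keeping clear of its neighbours, using only locally sensed positions.
import numpy as np

def swarm_step(robot_pos, human_pos, neighbours, k_att=0.5, k_rep=1.0, rep_radius=1.5):
    """Compute one velocity command for a single robot."""
    velocity = k_att * (human_pos - robot_pos)            # attraction towards the human
    for n in neighbours:                                  # repulsion from close neighbours
        offset = robot_pos - n
        dist = np.linalg.norm(offset)
        if 0.0 < dist < rep_radius:
            velocity += k_rep * (rep_radius - dist) * offset / dist
    return velocity

robot = np.array([0.0, 0.0])
human = np.array([5.0, 2.0])
neighbours = [np.array([0.5, 0.0]), np.array([-0.3, 0.4])]
print(swarm_step(robot, human, neighbours))
```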