62 research outputs found

    The memory-based paradigm for vision-based robot localization

    For autonomous mobile robots, a solid world model is an important prerequisite for decision making. Current state estimation techniques are based on Hidden Markov Models and Bayesian filtering. These methods estimate the state of the world (the belief) iteratively: data obtained from perceptions and actions is accumulated in the belief, which can be represented parametrically (as in Kalman filters) or non-parametrically (as in particle filters). When the sensors' information gain is low, as in the case of bearing-only measurements, representing the belief becomes challenging. For instance, a Kalman filter's Gaussian models might not be sufficient, or a particle filter might need an unreasonable number of particles. In this thesis, I introduce a new state estimation method which does not accumulate information in a belief. Instead, perceptions and actions are stored in a memory, and the state is calculated from this memory when needed. The approach is particularly advantageous when processing sparse information. This thesis shows how the memory-based technique can be applied to examples from RoboCup (autonomous robots playing soccer). In experiments, it is shown how four-legged and humanoid robots can localize themselves very precisely on a soccer field. The localization is based on bearings to objects obtained from digital images; for the robot's orientation, field lines are an important source of information. This thesis also presents a new technique for recognizing field lines which needs no pre-run calibration and works even when the field lines are partly concealed or affected by shadows.
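    The core idea of the memory-based paradigm can be made concrete with a small sketch: observations are stored rather than folded into a belief, and a state estimate is computed only on request. The following Python snippet is illustrative only, assuming bearings to landmarks at known field positions; the landmark handling, memory capacity, and the two-ray triangulation step are assumptions of this sketch, not the thesis's actual implementation.

```python
import math

class BearingMemory:
    """Sketch of the memory-based idea: bearings to known landmarks are
    stored and only combined into a position estimate when requested."""

    def __init__(self, capacity=50):
        self.capacity = capacity
        self.entries = []  # list of ((lx, ly), global_bearing_radians)

    def add(self, landmark_xy, bearing):
        self.entries.append((landmark_xy, bearing))
        if len(self.entries) > self.capacity:
            self.entries.pop(0)  # forget the oldest observation

    def estimate_position(self):
        # The robot lies on the ray traced backwards from each landmark
        # along the observed bearing; intersect the two most recent rays.
        if len(self.entries) < 2:
            return None
        (l1, a1), (l2, a2) = self.entries[-2], self.entries[-1]
        d1 = (math.cos(a1), math.sin(a1))
        d2 = (math.cos(a2), math.sin(a2))
        det = d2[0] * d1[1] - d1[0] * d2[1]
        if abs(det) < 1e-9:
            return None  # parallel bearings, no unique intersection
        rx, ry = l1[0] - l2[0], l1[1] - l2[1]
        t1 = (d2[0] * ry - d2[1] * rx) / det
        return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])

memory = BearingMemory()
memory.add((3.0, 1.0), 0.0)          # landmark seen due east
memory.add((1.0, 4.0), math.pi / 2)  # landmark seen due north
print(memory.estimate_position())    # robot is at (1.0, 1.0)
```

    A real filter-free system would also have to compensate each stored bearing with the odometry accumulated since it was taken; that step is omitted here for brevity.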


    Abstract. This diploma thesis presents a vision system for robotic soccer which was developed and implemented by the author and tested on Sony's four-legged robot Aibo. The inputs for the vision system are the images of the camera and the sensor readings of the robot's head joints; the output is a set of percepts, where each percept describes the position of a recognized object in relation to the robot. There are two main features of the vision system: it is fast and it needs no manual color calibration. The high processing speed is reached by an attention-based distribution of scan lines over the image, which reduces the number of image pixels that have to be processed. During the normal operation of the robot, the colors are calibrated by the vision system itself using knowledge about the environment of the robot. The adaptation of colors is based on statistics which are computed when recognizing objects. Three different levels of color representation are used to refine the color calibration while the robot explores more and more parts o
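    The attention-based scan-line idea mentioned above can be sketched in a few lines: instead of touching every pixel, the image is sampled along scan lines whose spacing grows with distance from a region of interest. This is an illustrative sketch only; the focus heuristic, spacing values, and the linear growth rule are assumptions, not the thesis's implementation.

```python
def scan_line_columns(width, focus_x, min_step=2, max_step=16):
    """Return x-coordinates of vertical scan lines, densest near focus_x."""
    columns, x = [], 0
    while x < width:
        columns.append(x)
        # spacing grows linearly with normalized distance from the focus
        distance = abs(x - focus_x) / width
        x += min_step + int(distance * (max_step - min_step))
    return columns

cols = scan_line_columns(width=208, focus_x=104)  # Aibo images are 208 px wide
print(len(cols), "scan lines instead of 208 full columns")
```

    Only the pixels on these lines are classified, which is what keeps the per-frame cost low enough for real-time operation on the robot.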

    Using Layered Color Precision for a Self-Calibrating Vision System

    This paper presents a vision system for robotic soccer which was tested on Sony's four-legged robot Aibo. The inputs for the vision system are the images of the camera and the sensor readings of the robot's head joints; the outputs are the positions of all recognized objects in relation to the robot. The object recognition is based on the colors of the objects and uses a color look-up table. The vision system creates the color look-up table on its own during a soccer game. Thus no pre-run calibration is needed and the robot can cope with inhomogeneous or changing light on the soccer field. It is shown how different layers of color representation can be used to refine the results of color classification.
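    A color look-up table of the kind described above maps quantized pixel values to symbolic color classes. The following minimal sketch shows the mechanism; the class names, the YUV channels, and the 16-level quantization are assumptions of this sketch, not values from the paper.

```python
GRID = 16  # quantize each channel to 16 levels -> 16^3 possible cells

class ColorTable:
    def __init__(self):
        self.table = {}  # (y, u, v) cell -> color class

    def _cell(self, y, u, v):
        step = 256 // GRID
        return (y // step, u // step, v // step)

    def learn(self, yuv, color_class):
        # In the self-calibrating setting, statistics from recognized
        # objects assign cells to classes; one sample suffices here.
        self.table[self._cell(*yuv)] = color_class

    def classify(self, yuv):
        return self.table.get(self._cell(*yuv), "unknown")

table = ColorTable()
table.learn((60, 80, 200), "orange")  # e.g. pixels from a recognized ball
print(table.classify((62, 83, 197)))  # -> "orange" (falls in the same cell)
```

    Because the table is filled at run time from recognized objects, it can drift along with the lighting instead of being fixed by a pre-run calibration session.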

    Vision-Based Fast and Reactive Monte-Carlo Localization

    This paper presents a fast approach for vision-based self-localization in RoboCup. The vision system extracts the features required for localization without processing the whole image and is a first step towards independence from lighting conditions. In the field of self-localization, some new ideas are added to the well-known Monte-Carlo localization approach that increase both stability and reactivity, while keeping the processing time low.
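    The Monte-Carlo localization loop referred to above follows the standard three-step particle-filter cycle: motion update, sensor weighting, resampling. The sketch below compresses this to a 1-D corridor with a single known landmark; the field model, noise parameters, and sensor model are invented for illustration and are not the paper's.

```python
import math, random

random.seed(0)
LANDMARK = 5.0  # known landmark position in the 1-D world
particles = [random.uniform(0, 10) for _ in range(200)]

def mcl_step(particles, moved, measured_dist):
    # 1. motion update: shift every particle by the odometry, plus noise
    particles = [p + moved + random.gauss(0, 0.1) for p in particles]
    # 2. sensor update: weight by agreement with the measured distance
    weights = [math.exp(-((LANDMARK - p) - measured_dist) ** 2 / 0.5)
               for p in particles]
    # 3. resampling: draw particles proportionally to their weights
    return random.choices(particles, weights=weights, k=len(particles))

# robot walked 1.0 forward and now sees the landmark 2.0 ahead -> near 3.0
particles = mcl_step(particles, moved=1.0, measured_dist=2.0)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))
```

    The paper's contributions concern how the proposal and sensor models are tuned for stability and reactivity; the cycle itself is the textbook one shown here.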

    XABSL -- A Pragmatic Approach to Behavior Engineering

    This paper introduces the Extensible Agent Behavior Specification Language (XABSL) as a pragmatic tool for engineering the behavior of autonomous agents in complex and dynamic environments. It is based on hierarchies of finite state machines (FSMs) for action selection and supports the design of long-term and deliberative decision processes as well as of short-term and reactive behaviors. A platform-independent execution engine makes the language applicable on any robotic platform, and together with a variety of visualization, editing and debugging tools, XABSL is a convenient and powerful system for the development of complex behaviors. The complete source code can be freely downloaded from the XABSL website.
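    The hierarchical-FSM idea behind XABSL can be illustrated with a toy rendition written directly in Python rather than in the XABSL language itself. In the sketch, each option is a state machine whose states either emit a basic action or delegate to a sub-option; the option and state names here are invented, not taken from the paper.

```python
class Option:
    """A state machine: each state maps to (decision, body), where the
    decision picks the successor state from the world state and the body
    is either a basic action name or a sub-option to delegate to."""

    def __init__(self, name, initial, states):
        self.name, self.state, self.states = name, initial, states

    def execute(self, world):
        decision, _ = self.states[self.state]
        self.state = decision(world)          # state transition
        _, body = self.states[self.state]
        return body.execute(world) if isinstance(body, Option) else body

go_to_ball = Option("go_to_ball", "walk", {
    "walk": (lambda w: "kick" if w["ball_dist"] < 0.2 else "walk",
             "walk_forward"),
    "kick": (lambda w: "kick", "kick_ball"),
})

play_soccer = Option("play_soccer", "search", {
    "search":   (lambda w: "approach" if w["ball_seen"] else "search",
                 "turn_head"),
    "approach": (lambda w: "approach", go_to_ball),
})

print(play_soccer.execute({"ball_seen": True, "ball_dist": 0.1}))
```

    Executing the root option walks down the hierarchy until a basic action is reached, so deliberative decisions live near the root while reactive ones live in the leaves.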

    A Real-Time Auto-Adjusting Vision System for Robotic Soccer

    This paper presents a real-time approach for object recognition in robotic soccer. The vision system does not need any calibration and adapts to changing lighting conditions during run time. The adaptation is based on statistics which are computed when recognizing objects and leads to a segmentation of the color space into different color classes.