325 research outputs found

    Visual Localisation of Mobile Devices in an Indoor Environment under Network Delay Conditions

    Current progress in home automation and service robotics has highlighted the need for interoperability mechanisms that allow standard communication between the two systems. During the development of the DHCompliant protocol, the problem of locating mobile devices in an indoor environment was investigated. The communication between the device and the location service was studied to measure the time delay that web services introduce compared with sockets. Because real-time location systems must deliver data promptly, a basic interoperability tool such as web services can be ineffective in this scenario due to the delays added by service invocation. This paper focuses on introducing a web service that resolves a coordinates request without any significant delay compared with sockets.
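    A minimal sketch of the kind of latency comparison described above, assuming a hypothetical location server reachable both over a raw TCP socket and over an HTTP web service; the host, port, URL and request format are placeholders, not the DHCompliant endpoints.

```python
# Illustrative sketch only: compare the round-trip latency of a raw TCP socket
# request against an HTTP web-service call for the same coordinates query.
# Host, port, and URL below are hypothetical placeholders.
import socket
import time
import urllib.request

HOST, PORT = "192.168.1.10", 5000                      # hypothetical location server
WS_URL = "http://192.168.1.10:8080/locate?device=42"   # hypothetical web service

def time_socket_request() -> float:
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
        sock.sendall(b"GET_COORDS 42\n")    # ad-hoc request format (assumption)
        sock.recv(1024)                     # e.g. b"x=3.2;y=7.9"
    return time.perf_counter() - start

def time_webservice_request() -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(WS_URL, timeout=2.0) as resp:
        resp.read()                         # e.g. a JSON body with the coordinates
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"socket:      {time_socket_request() * 1000:.1f} ms")
    print(f"web service: {time_webservice_request() * 1000:.1f} ms")
```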

    Optimal Assignment of Customer-Desired Items to the Fetching Robots in Superstores

    This paper discusses a task assignment problem. The scenario under consideration is a superstore with a team of fetching robots. There is a set of customers, each requiring a unique set of items. The goal is to assign the task of fetching the items to the available robots in such a way that the time and effort required for fetching the items is minimized. For this purpose, a Markov Decision Process based model is proposed. The proposed model can be solved with stochastic dynamic programming algorithms such as value iteration to compute the optimal task assignment policy. The characteristics of the resulting optimal policy are analysed with the help of a numerical case study.
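    The value-iteration step mentioned above can be illustrated with a toy sketch; the states, actions and costs below are invented for illustration and are not the paper's actual assignment model.

```python
# Minimal value-iteration sketch for a small MDP in the spirit of the task
# assignment setting (states, actions, and costs are illustrative assumptions).
from typing import Dict, List, Tuple

# P[s][a] = list of (probability, next_state, cost) triples
P: Dict[int, Dict[int, List[Tuple[float, int, float]]]] = {
    0: {0: [(1.0, 1, 2.0)], 1: [(1.0, 2, 5.0)]},   # e.g. assign item to robot 0 or 1
    1: {0: [(1.0, 2, 3.0)]},
    2: {},                                          # terminal: all items fetched
}

def value_iteration(gamma: float = 0.95, eps: float = 1e-6):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s, actions in P.items():
            if not actions:                         # terminal state
                continue
            best = min(
                sum(p * (c + gamma * V[s2]) for p, s2, c in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # greedy policy: pick the action minimising the expected cost-to-go
    policy = {
        s: min(actions, key=lambda a: sum(p * (c + gamma * V[s2])
                                          for p, s2, c in actions[a]))
        for s, actions in P.items() if actions
    }
    return V, policy

if __name__ == "__main__":
    values, policy = value_iteration()
    print(values, policy)
```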

    Interactive multiple object learning with scanty human supervision

    We present a fast and online human-robot interaction approach that progressively learns multiple object classifiers using scanty human supervision. Given an input video stream recorded during the human-robot interaction, the user only needs to annotate a small fraction of frames to compute object-specific classifiers based on random ferns which share the same features. The resulting methodology is fast (complex object appearances can be learned in a few seconds), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in indoor and outdoor scenarios containing a multitude of different objects. We show that with little human assistance, we are able to build object classifiers that are robust to viewpoint changes, partial occlusions, varying lighting and cluttered backgrounds.
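    A rough sketch of the shared random-fern idea, assuming a simple per-class histogram that is updated online from the few annotated frames; the configuration and patch encoding are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): every object class reuses the
# same binary feature tests, and each class keeps its own per-fern histograms.
import numpy as np

rng = np.random.default_rng(0)
N_FERNS, FERN_SIZE, PATCH = 10, 8, 32          # assumed configuration
# shared features: pairs of pixel locations compared inside a patch
tests = rng.integers(0, PATCH, size=(N_FERNS, FERN_SIZE, 2, 2))

def fern_codes(patch: np.ndarray) -> np.ndarray:
    """Encode a grayscale PATCH x PATCH patch into one integer code per fern."""
    codes = np.zeros(N_FERNS, dtype=np.int64)
    for f in range(N_FERNS):
        for t in range(FERN_SIZE):
            (r1, c1), (r2, c2) = tests[f, t]
            bit = patch[r1, c1] > patch[r2, c2]
            codes[f] = (codes[f] << 1) | int(bit)
    return codes

class FernClassifier:
    """Per-class histograms over the shared fern codes, updated online."""
    def __init__(self):
        self.hist = np.ones((N_FERNS, 2 ** FERN_SIZE))   # Laplace prior

    def update(self, patch: np.ndarray) -> None:
        self.hist[np.arange(N_FERNS), fern_codes(patch)] += 1

    def score(self, patch: np.ndarray) -> float:
        probs = self.hist / self.hist.sum(axis=1, keepdims=True)
        return float(np.log(probs[np.arange(N_FERNS), fern_codes(patch)]).sum())

# usage: one FernClassifier per annotated object class; the class with the
# highest score wins, and low-margin (uncertain) patches are queried back to
# the user for annotation.
```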

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM) that enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients or visitors inside the hospital arena. One of the common problems encountered in large hospital buildings today is that people unfamiliar with the building cannot find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit into the modular concept of the IWARD project. The PGM features a robust and foolproof non-hierarchical sensor-fusion approach combining active RFID, stereo vision and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient’s ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, with this system the speed of the robot can be adjusted automatically according to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in any unstructured environment solely from the robot’s on-board perceptual resources, in order to limit hardware installation costs and the need for indoor infrastructure support. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix’s XScale processor. To support standardized communication between different software components, the Internet Communications Engine (Ice) has been used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application as well as the necessary user acceptance.
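    As a hedged illustration of the speed-adaptation idea, a simple proportional rule that keeps a target gap to the guided person could look as follows; the gains, limits and distance source are assumptions, not the IWARD implementation.

```python
# Sketch of "adjust speed to the follower's pace": a proportional rule that
# keeps a target gap to the guided person. The fused follower distance is
# assumed to come from elsewhere (RFID / stereo vision / Cricket motes).
def guidance_speed(follower_distance_m: float,
                   target_gap_m: float = 1.2,
                   v_nominal: float = 0.6,
                   k_p: float = 0.8,
                   v_max: float = 0.9) -> float:
    """Forward speed (m/s): slow down when the follower drops behind the target gap."""
    if follower_distance_m > 3.0:          # follower lost or too far behind: stop and wait
        return 0.0
    speed = v_nominal - k_p * (follower_distance_m - target_gap_m)
    return max(0.0, min(v_max, speed))

# usage: called each control cycle with the latest fused distance estimate,
# e.g. guidance_speed(1.8) -> a reduced speed while the person catches up.
```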

    Behavior-based Control for Service Robots inspired by Human Motion Patterns: A Robotic Shopping Assistant

    Using human-like motion patterns and a behavior-based approach, a control system for mobile service robots was developed that covers task planning, global and local navigation in dynamic environments, as well as joint task execution with a user. The behavior network consists of modules with mutually independent tasks. The complex overall behavior of the system emerges from the combination of the individual behaviors ('emergence').
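    A minimal sketch of such a behavior network, with two illustrative behaviors blended by their activation levels; the behavior names and the blending rule are assumptions, not the developed controller.

```python
# Illustrative behavior-based control sketch: independent behavior modules each
# propose a motion command with an activation level, and the overall
# ("emergent") command is their activation-weighted blend.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Command:
    v: float        # forward velocity (m/s)
    omega: float    # angular velocity (rad/s)

# a behavior maps sensor data to (activation in [0, 1], proposed command)
Behavior = Callable[[dict], Tuple[float, Command]]

def goto_goal(sensors: dict) -> Tuple[float, Command]:
    return 0.6, Command(v=0.5, omega=sensors["heading_error"] * 0.8)

def avoid_obstacle(sensors: dict) -> Tuple[float, Command]:
    near = sensors["obstacle_distance"] < 0.8
    return (1.0 if near else 0.0), Command(v=0.1, omega=0.9)

def fuse(behaviors: List[Behavior], sensors: dict) -> Command:
    proposals = [b(sensors) for b in behaviors]
    total = sum(a for a, _ in proposals) or 1.0
    v = sum(a * c.v for a, c in proposals) / total
    omega = sum(a * c.omega for a, c in proposals) / total
    return Command(v, omega)

if __name__ == "__main__":
    cmd = fuse([goto_goal, avoid_obstacle],
               {"heading_error": 0.3, "obstacle_distance": 0.5})
    print(cmd)
```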

    Probabilistische Methoden für die Roboter-Navigation am Beispiel eines autonomen Shopping-Assistenten

    Autonomous navigation, in addition to interaction, is a basic ability for the operation of a mobile service robot. Important subskills are self-localization, path planning, and motion control with collision avoidance. A further precondition for many navigation tasks is the generation of an environment model from sensor observations, often in combination with autonomous exploration. In this thesis, these challenges are considered in the context of the development of an interactive mobile shopping guide, which is able to provide information about the shop's products to customers of a home improvement store and guide them to the respective location. The focus of this work lies on the initial environment mapping. A method for Simultaneous Localization and Mapping (SLAM) has been developed which, in contrast to other comparable approaches, does not assume the use of high-precision laser range scanners. Instead, mainly sonar range sensors are used, which feature an inferior spatial resolution and increased measurement noise. The resulting Map-Match-SLAM algorithm is based on the well-known Rao-Blackwellized Particle Filter (RBPF), in combination with local maps for the representation of the most recent observations and a map matching function for the comparison of local and global maps. By adding a memory-efficient global map representation and dynamic adaptation of the number of particles, online mapping is possible even under the high state uncertainty resulting from the sensor characteristics. The use of local maps for the representation of the observations and the sensor-independent weighting function make Map-Match-SLAM applicable to a wide range of different sensors. This has been demonstrated by mapping with a stereo camera and with a single camera, in combination with a depth-from-motion algorithm for pre-processing. Furthermore, a SLAM assistant has been developed, which generates direction hints for the human operator during the mapping phase in order to ensure a route that enables optimal operation of the SLAM algorithm. The assistant represents an intermediate step between purely manual mapping and completely autonomous exploration. A second main part of the work presented here are methods for the autonomous operation of the robot. For self-localization, a map matching approach with local maps is used, similar to the proposed SLAM algorithm. Improvements in robustness and precision are achieved in combination with an existing visual localization approach which uses omnidirectional camera images. Path planning is done with standard graph search algorithms; to that purpose, the grid cells of the global map are regarded as graph nodes. A comparative analysis of search algorithms with and without heuristics (A*/Dijkstra algorithm) is presented for the specifics of typical operation areas. Two different algorithms have been developed for motion control and collision avoidance: a reactive method, which is an enhancement of the existing Vector Field Histogram (VFH) approach, is experimentally compared with a new anticipative method based on sampling and stochastic search in the trajectory space. All the developed methods are employed on a team of shopping robots, which at the time of writing have been in permanent public test operation in a home improvement store for about six months. The description of the navigation methods is complemented by an overview of further software components of the robots, e.g. for human-robot interaction, and a detailed description of the control architecture for coordination of the subsystems. The analysis of the long-term test operation shows that all the applied methods are suitable for real-world applications and that the robot is accepted and regarded as a valuable service by the customers.
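    As a rough illustration of the map-matching weighting at the core of Map-Match-SLAM, the sketch below scores one particle by correlating a local occupancy grid, placed at the particle's pose, with that particle's global map; the resolution, grid layout and score are assumptions, not the thesis implementation.

```python
# Illustrative Map-Match weighting step: each RBPF particle is scored by how
# well the local occupancy grid built from recent sonar readings matches the
# particle's global map at the particle's pose (assumed layout and scoring).
import numpy as np

RES = 0.1  # metres per cell (assumption)

def match_score(local_map: np.ndarray, global_map: np.ndarray,
                pose: tuple) -> float:
    """Correlate the local map, placed at pose = (x, y, theta), with the global map."""
    x, y, theta = pose
    h, w = local_map.shape
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    score, count = 0.0, 0
    for r in range(h):
        for c in range(w):
            p = local_map[r, c]
            if p == 0.5:                       # unknown cell: no evidence
                continue
            # local cell centre -> world coordinates -> global grid index
            lx, ly = (c - w / 2) * RES, (r - h / 2) * RES
            gx = int((x + cos_t * lx - sin_t * ly) / RES)
            gy = int((y + sin_t * lx + cos_t * ly) / RES)
            if 0 <= gy < global_map.shape[0] and 0 <= gx < global_map.shape[1]:
                # reward agreement between local and global occupancy beliefs
                score += 1.0 - abs(p - global_map[gy, gx])
                count += 1
    return score / count if count else 0.0

# In the RBPF loop, each particle's weight would be set proportional to
# match_score(local_map, particle.map, particle.pose), then normalised
# across particles before resampling.
```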

    Towards topological mapping with vision-based simultaneous localization and map building

    Although the theory of Simultaneous Localization and Map Building (SLAM) is well developed, there are many challenges to overcome when incorporating vision sensors into SLAM systems. Visual sensors have different properties when compared to range-finding sensors and therefore require different considerations. Existing vision-based SLAM algorithms extract point landmarks, which are required for SLAM algorithms such as the Kalman filter. Under this restriction, the types of image features that can be used are limited and the full advantages of vision are not realized. This thesis examines the theoretical formulation of the SLAM problem and the characteristics of visual information in the SLAM domain. It also examines different representations of uncertainty, features and environments. It identifies the necessity to develop a suitable framework for vision-based SLAM systems and proposes a framework called VisionSLAM, which utilizes an appearance-based landmark representation and a topological map structure to model metric relations between landmarks. A set of Haar feature filters is used to extract image structure statistics, which are robust against illumination changes, have good uniqueness properties and can be computed in real time. The algorithm is able to resolve and correct false data associations and is robust against random correlation resulting from perceptual aliasing. The algorithm has been tested extensively in a natural outdoor environment.
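    As a hedged illustration of Haar-like structure statistics computed from an integral image, the sketch below evaluates a simple two-rectangle filter; the filter layout is an assumption, not the thesis' filter bank.

```python
# Illustrative Haar-like feature responses via an integral image: cheap,
# illumination-robust structure statistics of the kind used for
# appearance-based landmarks (specific filter layout is assumed).
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
    """Sum of pixels in the (inclusive) box [r0:r1, c0:c1] from the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return float(total)

def haar_horizontal(ii: np.ndarray, r: int, c: int, h: int, w: int) -> float:
    """Left-minus-right two-rectangle Haar response at (r, c) with size h x w."""
    half = w // 2
    left = box_sum(ii, r, c, r + h - 1, c + half - 1)
    right = box_sum(ii, r, c + half, r + h - 1, c + w - 1)
    return left - right

if __name__ == "__main__":
    img = np.random.default_rng(1).random((64, 64))
    ii = integral_image(img)
    print(haar_horizontal(ii, 10, 10, 16, 16))
```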

    Visually-guided walking reference modification for humanoid robots

    Humanoid robots are expected to assist humans in the future. As for any robot with mobile characteristics, autonomy is an invaluable feature for a humanoid interacting with its environment. Autonomy, along with components from artificial intelligence, requires information from sensors. Vision sensors are widely accepted as the source of the richest information about the surroundings of a robot. Visual information can be exploited in tasks ranging from object recognition, localization and manipulation to scene interpretation, gesture identification and self-localization. Any autonomous action of a humanoid trying to accomplish a high-level goal requires the robot to move between arbitrary waypoints and inevitably relies on its self-localization abilities. Due to the disturbances accumulating over the path, this can only be achieved by gathering feedback information from the environment. This thesis proposes a path planning and correction method for bipedal walkers based on visual odometry. A stereo camera pair is used to find distinguishable 3D scene points and track them over time, in order to estimate the six degrees-of-freedom position and orientation of the robot. The algorithm is developed and assessed on a benchmarking stereo video sequence taken from a wheeled robot, and then tested via experiments with the humanoid robot SURALP (Sabanci University Robotic ReseArch Laboratory Platform).
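    A minimal sketch of the visual-odometry core, assuming matched 3D points from two stereo frames and the SVD-based (Kabsch) rigid-motion estimate; outlier rejection, triangulation and feature tracking are omitted, and this is not the SURALP pipeline itself.

```python
# Illustrative 6-DoF motion estimate from matched 3D points in two stereo
# frames using the SVD-based (Kabsch) rigid alignment (sketch, not the thesis code).
import numpy as np

def rigid_motion(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """prev_pts, curr_pts: (N, 3) matched 3D points; returns R (3x3), t (3,)."""
    mu_p, mu_c = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - mu_p).T @ (curr_pts - mu_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_c - R @ mu_p
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.random((50, 3))
    yaw = 0.05                             # synthetic motion: small yaw + translation
    R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                       [np.sin(yaw),  np.cos(yaw), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([0.1, 0.0, 0.02])
    R_est, t_est = rigid_motion(P, Q)
    print(np.round(R_est, 3), np.round(t_est, 3))
```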