20 research outputs found

    On Managing Knowledge for MAPE-K Loops in Self-Adaptive Robotics Using a Graph-Based Runtime Model

    Service robotics involves the design of robots that work in dynamic and very open environments, usually shared with people. In this scenario, it is very difficult for decision-making processes to be completely closed at design time, and it is necessary to define a certain variability that is closed at runtime. MAPE-K (Monitor–Analyze–Plan–Execute over a shared Knowledge) loops are a very popular scheme for addressing this real-time self-adaptation. As their name indicates, they include monitoring, analysis, planning, and execution modules, which interact through a knowledge model. Since the problems to be solved by the robot can be very complex, several MAPE loops may need to coexist simultaneously in the software architecture deployed on the robot. The loops then need to be coordinated, for which they can use the knowledge model, a representation that includes information about the environment and the robot, but also about the actions being executed. This paper describes the use of a graph-based representation, the Deep State Representation (DSR), as the knowledge component of the MAPE-K scheme applied in robotics. The DSR manages perceptions and actions, and allows for inter- and intra-coordination of MAPE-K loops. The graph is updated at runtime, representing symbolic and geometric information. The scheme has been successfully applied in a retail intralogistics scenario, where a pallet truck robot has to manage roll containers to satisfy requests from human pickers working in the warehouse. Partial funding for open access charge: Universidad de Málaga. This work has been partially developed within SA3IR (an experiment funded by the EU H2020 ESMERA project under Grant Agreement 780265), the project RTI2018-099522-B-C4X, funded by the Gobierno de España and FEDER funds, and the B1-2021_26 project, funded by the University of Málaga.
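
    The paper itself is prose-only; purely as an illustration of the idea of several MAPE-K loops coordinating through a shared graph-based knowledge model, the Python sketch below uses a toy thread-safe graph. All class, node and attribute names are invented for this sketch and do not correspond to the actual DSR API.

```python
# Illustrative sketch only: two MAPE-K loops coordinating through a shared,
# thread-safe graph. Class, node and attribute names are invented here and
# do not correspond to the actual DSR API described in the paper.
import threading


class KnowledgeGraph:
    """Toy stand-in for a graph-based runtime knowledge model."""

    def __init__(self):
        self._lock = threading.Lock()
        self._nodes = {}               # node name -> attribute dict
        self._edges = set()            # (src, dst, label) triples

    def update_node(self, name, **attrs):
        with self._lock:
            self._nodes.setdefault(name, {}).update(attrs)

    def add_edge(self, src, dst, label):
        with self._lock:
            self._edges.add((src, dst, label))

    def read_node(self, name):
        with self._lock:
            return dict(self._nodes.get(name, {}))


class TaskLoop:
    """Turns a picker request into a goal node that other loops can consume."""

    def __init__(self, graph):
        self.graph = graph

    def step(self):
        request = self.graph.read_node("pending_request")       # Monitor
        if request:                                              # Analyze + Plan
            self.graph.update_node("current_goal", pose=request["pose"])
            self.graph.add_edge("current_goal", "pending_request", "serves")


class NavigationLoop:
    """Moves the robot toward the current goal published in the graph."""

    def __init__(self, graph):
        self.graph = graph

    def step(self):
        robot = self.graph.read_node("robot")                    # Monitor
        goal = self.graph.read_node("current_goal").get("pose")  # Monitor
        if goal is None:                                         # Analyze
            return
        dx, dy = goal[0] - robot["pose"][0], goal[1] - robot["pose"][1]  # Plan
        # Execute: record the ongoing action in the graph so other loops see it.
        self.graph.add_edge("robot", "current_goal", "moving_to")
        self.graph.update_node("robot", status="navigating",
                               distance_to_goal=(dx * dx + dy * dy) ** 0.5)


if __name__ == "__main__":
    g = KnowledgeGraph()
    g.update_node("robot", pose=(0.0, 0.0))
    g.update_node("pending_request", pose=(4.5, 2.0))
    task, nav = TaskLoop(g), NavigationLoop(g)
    for _ in range(2):      # the two loops interact only via the shared graph
        task.step()
        nav.step()
    print(g.read_node("robot"))
```

    In this toy setup the two loops never call each other directly; they coordinate solely by reading and writing nodes and edges of the shared graph, which mirrors the coordination role the abstract assigns to the DSR.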

    Robot-assisted gait self-training: assessing the level achieved

    This paper presents the technological status of robot-assisted gait self-training under real clinical conditions. Successful rehabilitation after hip endoprosthesis surgery includes self-training of the exercises taught by physiotherapists, and immediate feedback to the patient about deviations from the expected physiological gait pattern during training is important. Hence, the Socially Assistive Robot (SAR) developed for this type of training employs task-specific, user-centered navigation and autonomous, real-time gait feature classification techniques to enrich the self-training through companionship and timely corrective feedback. The system was evaluated in user tests in a hospital with respect to technical benchmarking, the therapists' and patients' views on training motivation, and initial findings on medical efficacy as a prerequisite for economic viability. The following research questions were primarily considered: Does the level of technology achieved enable autonomous use in everyday clinical practice? Has the gait pattern of patients who used additional robot-assisted gait self-training for several days changed or improved compared to patients without this training? How does the use of a SAR-based self-training robot affect the motivation of the patients?

    The multi-modal interface of Robot-Era multi-robot services tailored for the elderly

    Socially assistive robotic platforms are now a realistic option for the long-term care of ageing populations. Elderly users may benefit from many services provided by robots operating in different environments, such as assistance inside apartments, service in shared facilities of buildings, or guidance outdoors. In this paper, we present the experience gained within the EU FP7 ROBOT-ERA project towards the objective of implementing an easy-to-use and acceptable service robotic system for the elderly. In particular, we detail the user-centred design and the experimental evaluation in realistic environments of a web-based multi-modal user interface tailored to elderly users of near-future multi-robot services. Experimental results demonstrate positive usability ratings and willingness to use by elderly users, especially those less experienced with technological devices, who could benefit more from the adoption of robotic services. Further analyses showed how multi-modal interaction supports more flexible and natural elderly–robot interaction, makes the benefits clear to the users and, therefore, increases the system's acceptability. Finally, we provide insights and lessons learned from the extensive experimentation, which, to the best of our knowledge, is one of the largest evaluations of a multi-robot, multi-service system so far.

    Research and Evaluation of Features for the Detection of Fallen People in Home Environments

    This thesis deals with the research and evaluation of features for detecting fallen people in a home environment with a mobile robot. About one third of people aged over 65 fall at least once a year, and almost half of them cannot get up again after the fall by their own means. As lying on the floor for a long time can cause serious health risks, a reliable method to detect fallen people and call for help is needed. Commercially available products provide only a limited solution, and robotic assistance for the elderly promises great potential in this field. In particular, the depth data of a Kinect, which is standard equipment on today's assistance robots, offers new possibilities for a reliable system that detects fallen people. The thesis gives an overview of known approaches to fall detection and to feature-based detection of people and objects in 3D data, and assesses their suitability for the application examined here. Based on this assessment, a new approach for evaluating suitable features and detecting fallen people with a mobile robot is implemented, and its individual components are presented in detail. Data recorded with the Kinect of a mobile robot is used to evaluate the developed approach; the evaluation compares the detection rates of different machine learning techniques and features, and the results are presented and discussed. It shows that a Histogram of Local Surface Normals in combination with a Support Vector Machine is well suited to detecting fallen people in 3D data generated by a mobile robot. The thesis closes with a summary of the results and suggestions for further research. (Ilmenau, Techn. Univ., Masterarbeit, 201)
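
    The thesis code is not reproduced above; as a rough illustration of the feature/classifier combination it found most suitable, a histogram of local surface normals fed to a support vector machine, the following Python sketch shows the general idea. The neighborhood size, bin count and use of scikit-learn are assumptions made for this example, not details taken from the thesis.

```python
# Rough illustration only (not the thesis code): classify a segmented 3D point
# cluster as "fallen person" vs. "other" using a histogram of local surface
# normals and an SVM. Neighborhood size, bin count and the use of scikit-learn
# are assumptions of this sketch.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC


def surface_normals(points, k=10):
    """Estimate one unit normal per point via PCA over its k nearest neighbors."""
    points = np.asarray(points, dtype=float)
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points)
    for i, neighborhood in enumerate(points[idx]):
        centered = neighborhood - neighborhood.mean(axis=0)
        # The right-singular vector of the smallest singular value is the normal.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals


def normal_histogram(points, bins=8):
    """Histogram of the angle between each local normal and the vertical axis."""
    normals = surface_normals(points)
    angles = np.arccos(np.clip(np.abs(normals[:, 2]), 0.0, 1.0))   # z axis = up
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi / 2))
    return hist / max(hist.sum(), 1)    # normalize so cluster size cancels out


def train(clusters, labels):
    """clusters: point clouds segmented from Kinect depth data; labels: 1 = fallen."""
    features = np.array([normal_histogram(c) for c in clusters])
    return SVC(kernel="linear").fit(features, labels)
```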

    The Penetration of Internet of Things in Robotics: Towards a Web of Robotic Things

    As the Internet of Things (IoT) penetrates different domains and application areas, it has recently also entered the world of robotics. Robotics constitutes a modern and fast-evolving technology, increasingly used in industrial, commercial and domestic settings. IoT, together with the Web of Things (WoT), could provide many benefits to robotic systems. Some of the benefits of IoT in robotics have been discussed in related work. This paper moves one step further and studies the actual current use of IoT in robotics through various real-world examples found in a bibliographic search. The paper also examines the potential of WoT together with robotic systems, investigating which IoT concepts, characteristics, architectures, hardware, software and communication methods are used in existing robotic systems, which sensors and actions are incorporated in IoT-based robots, and in which application areas they appear. Finally, the current application of WoT in robotics is examined and discussed.
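
    The survey contains no code; the following minimal Python sketch only illustrates the "Web of Robotic Things" idea it examines, i.e. exposing a robot property and an action over plain HTTP. The resource layout and names are invented for this example and do not follow any specific WoT profile or system from the paper.

```python
# Illustration only: a minimal "Web of Robotic Things"-style HTTP interface
# exposing one robot property and one action. The resource layout and names
# are invented for this sketch and follow no specific WoT profile.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ROBOT_STATE = {"battery": 0.87, "last_goal": None}


class RobotThing(BaseHTTPRequestHandler):
    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        # Read-only property, e.g. GET /properties/battery
        if self.path == "/properties/battery":
            self._reply(200, {"battery": ROBOT_STATE["battery"]})
        else:
            self._reply(404, {"error": "unknown resource"})

    def do_POST(self):
        # Action invocation, e.g. POST /actions/goto with {"x": 1.0, "y": 2.0}
        if self.path == "/actions/goto":
            length = int(self.headers.get("Content-Length", 0))
            goal = json.loads(self.rfile.read(length) or b"{}")
            ROBOT_STATE["last_goal"] = goal   # a real robot would plan and move here
            self._reply(202, {"status": "accepted", "goal": goal})
        else:
            self._reply(404, {"error": "unknown action"})


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RobotThing).serve_forever()
```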

    Classification of Touch Patterns for a Textile Haptic Sensor

    This thesis deals with the combination of textile haptic sensors and the classification of touch gestures. Two sensor designs are presented. The first consists of electroconductive yarn knitted into a single flat sensing patch. The second uses a layered design of silver-coated fabrics with a piezoresistive layer in between; it has a matrix layout in which the individual sensing patches are well insulated from one another. Two approaches to feature extraction were applied: for the knitted sensor, a sliding-window approach (implemented with a buffer) was used to obtain deviations from the mean and zero crossings as features, while the layered sensor used a sample-and-hold strategy. A support vector machine was chosen as the classifier, used either as several one-vs-all SVMs or as a single one-vs-one SVM. The yarn-based sensor could not fulfil the task, whereas the sensor made of silver-coated fabrics delivered good classification results. Due to the limited testing, this work should be regarded as a feasibility study. (Ilmenau, Techn. Univ., Bachelor-Arbeit, 201)
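
    As a rough illustration of the sliding-window features described above (deviation from the window mean and zero crossings) combined with a support vector machine, the following Python sketch shows one possible realisation. Window length, step size and the use of scikit-learn are assumptions of this sketch, not details from the thesis.

```python
# Illustration only (not the thesis code): sliding-window features of the kind
# described above -- deviation from the window mean and zero-crossing count --
# fed to a support vector machine. Window length, step size and the use of
# scikit-learn are assumptions of this sketch.
import numpy as np
from sklearn.svm import LinearSVC


def window_features(signal, window=64, step=32):
    """Compute (mean absolute deviation, zero crossings) per sliding window."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = np.asarray(signal[start:start + window], dtype=float)
        centered = w - w.mean()
        mean_abs_dev = np.mean(np.abs(centered))                 # deviation from mean
        zero_crossings = np.count_nonzero(np.diff(np.sign(centered)))
        feats.append((mean_abs_dev, zero_crossings))
    return np.array(feats)


def train(recordings, labels):
    """recordings: raw 1-D pressure signals, one gesture each (>= 1 window long)."""
    X = np.vstack([window_features(r).mean(axis=0) for r in recordings])
    return LinearSVC().fit(X, labels)   # e.g. labels: "tap", "stroke", "hold"
```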

    Implementation and Evaluation of Different Bayes Filters for People Tracking

    In this bachelor thesis, the Kalman filter library "Bayes++" is connected to the MIRA software framework. Several classes are implemented to link the interfaces of the library with those of the MIRA system. Through this library, different types of Kalman filters and system models can then be used for people tracking. For demonstration purposes, two system models are implemented and tested. It is also shown that the library delivers the same performance as the existing Kalman filter implementation in the MIRA system, and the two system models are compared.
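
    Neither Bayes++ nor MIRA code is reproduced here; as a generic illustration of the kind of Kalman filter typically used for people tracking, the Python sketch below implements a constant-velocity filter. The state layout and noise values are arbitrary choices made for this example.

```python
# Generic illustration of a Kalman filter as used for people tracking; this is
# neither Bayes++ nor MIRA code. State = (x, y, vx, vy) with a constant-velocity
# system model; the noise values are arbitrary example choices.
import numpy as np


class ConstantVelocityKF:
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                       # state: x, y, vx, vy
        self.P = np.eye(4)                         # state covariance
        self.F = np.array([[1, 0, dt, 0],          # constant-velocity transition
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],           # only the position is measured
                           [0, 1, 0, 0]], dtype=float)
        self.Q = 0.01 * np.eye(4)                  # process noise
        self.R = 0.05 * np.eye(2)                  # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


if __name__ == "__main__":
    kf = ConstantVelocityKF()
    for z in [(0.0, 0.0), (0.1, 0.05), (0.21, 0.11)]:   # noisy person detections
        kf.predict()
        kf.update(z)
    print(kf.x)   # estimated position and velocity
```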

    Autonomous Situation Recognition in a Clinical Environment

    The goal of the research project ROREAS (Robotic Rehabilitation Assistant for Stroke Patients) is the development of a robotic rehabilitation assistant for the self-training of stroke patients. The self-training aims at improving the walking and orientation skills of the patients and consists mainly of goal-oriented movement through a rehabilitation center, mostly along the hallways connecting the patients' rooms. Due to the structure of the building or objects placed in the hallways, the lateral space is limited, forming narrow passages. Moving in such confined spaces can lead to deadlocks when the robot and a person pass through a narrow passage at the same time. Since polite and attentive navigation is an important requirement for an assistive robot, these deadlock situations must be recognized in advance so that a proactive reaction can be triggered. This master thesis presents an approach for anticipating deadlock situations caused by narrow passages. In a nutshell, situations involving narrow passages are captured by real-valued feature vectors. The features describe the structure of the environment along the robot's path in terms of possible narrow passages and the possible space conflicts caused by a person. As part of the features, the movement of persons is predicted, allowing the robot to forecast space conflicts that may result in the problematic situations considered. By grouping the feature vectors into classes that represent the appropriate treatment of the corresponding situation, the recognition task becomes a classification problem; a classifier that solves it maps each feature vector, and thus each situation, directly to the appropriate treatment. A linear support vector machine and a handcrafted decision tree are used as classifiers. The experimental evaluation shows that the chosen features are well suited for recognizing deadlock situations, and that the decision tree performed better on the dataset of this thesis than the linear support vector machine. (Ilmenau, Techn. Univ., Masterarbeit, 201)
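
    The actual feature set of the thesis is not reproduced above; the Python sketch below only illustrates the general mapping from a real-valued situation descriptor to a treatment class with a linear support vector machine. The three features, the treatment labels and the training values are invented for this example.

```python
# Illustration only: mapping a real-valued situation descriptor to a treatment
# class with a linear SVM. The three features, the class labels and the training
# values are invented for this sketch; they are not the features of the thesis.
import numpy as np
from sklearn.svm import LinearSVC

# Each row: (passage width in m, predicted time until the person reaches the
# passage in s, time the robot needs to reach the passage in s).
X_train = np.array([
    [0.90, 2.0, 2.2],    # narrow, both arrive together -> wait in front of it
    [0.80, 6.0, 2.0],    # narrow, robot clearly first  -> pass through
    [2.50, 1.0, 1.2],    # wide passage                 -> no conflict, continue
    [0.90, 2.5, 2.4],
    [0.85, 7.0, 2.1],
    [2.20, 1.5, 1.0],
])
y_train = ["wait", "pass", "continue", "wait", "pass", "continue"]

clf = LinearSVC().fit(X_train, y_train)

# At runtime the robot would build this vector from its map, its laser scan and
# the predicted person trajectory, and trigger the predicted treatment early.
situation = np.array([[0.95, 2.1, 2.3]])
print(clf.predict(situation))        # e.g. ['wait']
```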

    The iCub software architecture: evolution and lessons learned

    The complexity of humanoid robots is increasing with the availability of new sensors, embedded CPUs and actuators. This wealth of technologies allows researchers to investigate new problems such as whole-body force control, multi-modal human-robot interaction and sensory fusion. Under the hood of these robots, the software architecture plays an important role: it gives researchers access to the robot's functionalities so that they can focus primarily on their research problems, and it supports code reuse to minimize development and debugging effort, especially when new hardware becomes available. More importantly, it raises the complexity of the experiments that can be implemented before system integration becomes unmanageable and debugging draws more resources than the research itself. In this paper we illustrate the software architecture of the iCub humanoid robot and the software engineering best practices that have emerged, driven by the needs of our research community. We describe the latest developments at the level of the middleware, supporting interface definition and automatic code generation, logging, ROS compatibility and channel prioritization. We show the robot abstraction layer and how it has been modified to better address the requirements of the users and to support new hardware as it became available. We also describe the testing framework we have recently adopted for developing code using a test-driven methodology. We conclude the paper by discussing the lessons we have learned during the past eleven years of software development on the iCub humanoid robot.
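
    The paper describes the architecture in prose; as a loose illustration of what a robot abstraction layer buys, the Python sketch below shows controller code written against a small device interface so that simulated and real back ends can be swapped. These interfaces are invented for this sketch and are not the iCub or YARP API.

```python
# Illustration only: the general idea of a robot/device abstraction layer that
# lets the same controller code run against different back ends. The interfaces
# are invented for this sketch and are not the iCub or YARP API.
from abc import ABC, abstractmethod


class MotorInterface(ABC):
    """Minimal device interface; implementations hide the actual hardware."""

    @abstractmethod
    def set_position(self, joint: int, angle_deg: float) -> None: ...

    @abstractmethod
    def get_position(self, joint: int) -> float: ...


class SimulatedArm(MotorInterface):
    """Pure-software back end; a real driver would talk to motor boards instead."""

    def __init__(self, joints=7):
        self._angles = [0.0] * joints

    def set_position(self, joint, angle_deg):
        self._angles[joint] = angle_deg

    def get_position(self, joint):
        return self._angles[joint]


def wave(arm: MotorInterface, joint: int = 3):
    """Controller code written against the interface, not a specific robot."""
    for angle in (20.0, -20.0, 0.0):
        arm.set_position(joint, angle)
    return arm.get_position(joint)


if __name__ == "__main__":
    print(wave(SimulatedArm()))   # swapping in a real driver needs no changes here
```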