4 research outputs found

    Assistive Planning in Complex, Dynamic Environments: A Probabilistic Approach

    We explore the probabilistic foundations of shared control in complex, dynamic environments. To do so, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, the autonomy, and the crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot that we call "agreeability"; in combination with the methods developed in [trautman-ijrr-2015], we extend a cooperative collision-avoidance autonomy to shared control. We thereby quantify the notion of simultaneously optimizing agreeability (between the operator and the autonomy), safety, and efficiency in crowded environments. We show that, for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Moreover, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy; these restrictions raise questions about the flexibility and applicability of the linear blending framework. We also present an extension of linear blending called "operator-biased linear trajectory blending" (which formalizes some recent approaches to linear blending, such as [dragan-ijrr-2013]) and show that it is not only another restrictive special case of our probabilistic approach but, more importantly, statistically unsound and thus mathematically unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and we show how this alternative converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
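    The contrast the abstract draws between linear blending and the full probabilistic formulation can be made concrete in a few lines. The sketch below is illustrative only and not the paper's implementation; the function names, the finite candidate-command set, and the product form of the score (operator likelihood times autonomy likelihood times agreeability) are assumptions made here for exposition.

```python
import numpy as np

def linear_blend(u_operator, u_autonomy, alpha):
    """Linear blending: a fixed convex combination of the two commands.
    alpha = 0 yields pure operator control, alpha = 1 pure autonomy."""
    return (1.0 - alpha) * np.asarray(u_operator) + alpha * np.asarray(u_autonomy)

def map_blend(candidates, p_operator, p_autonomy, agreeability):
    """Probabilistic alternative (illustrative): score every candidate command
    under the operator model, the autonomy model, and the agreeability
    interaction, then return the maximum a posteriori command. Unlike a fixed
    linear mix, multimodal operator/autonomy models are handled naturally."""
    scores = [p_operator(u) * p_autonomy(u) * agreeability(u) for u in candidates]
    return candidates[int(np.argmax(scores))]
```

    Recovering linear blending from a scheme like `map_blend` requires unimodal choices for the operator and autonomy models together with a particular interaction function, which is exactly the restriction the abstract highlights.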

    Feature determination from powered wheelchair user joystick input characteristics for adapting driving assistance

    Background: Many powered wheelchair users find that their medical condition, and hence their ability to drive the wheelchair, changes over time. To maintain their independent mobility, the powered chair must be adjusted over time to suit the user's needs, so regular input from healthcare professionals is required. Because these resources are limited, users can wait weeks for appointments, losing independent mobility in the meantime, which affects their quality of life and that of their family and carers. To provide an adaptive assistive driving system, a range of features needs to be identified that is suitable for initial system setup and can automatically provide data for re-calibration over the long term. Methods: A questionnaire was designed to collect information from powered wheelchair users about their symptoms and how they changed over time. A separate group of volunteer participants was asked to drive a test platform and complete, as quickly as possible, a course that represented manoeuvring in a very confined space. Two of those participants were also monitored over a longer period in their normal daily home environment. Candidate features were examined using pattern recognition classifiers to determine their suitability for identifying the changing user input over time. Results: The results are not designed to provide absolute insight into individual user behaviour, as no ground truth of the users' ability has been determined; they do nevertheless demonstrate the utility of the measured features to provide evidence of the users' changing ability over time whilst driving a powered wheelchair. Conclusions: Determining the driving features and adjustable elements provides the initial step towards developing an adaptable assistive technology for the user once the ground truths of the individual and their machine have been learned by a smart pattern recognition system.
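    As a rough illustration of the kind of joystick-derived features such a system might compute, the sketch below extracts a small feature vector from one driving session. The specific features chosen here (mean deflection, deflection variability, input abruptness, directional corrections) are plausible examples for exposition, not the exact feature set identified in the study.

```python
import numpy as np

def joystick_features(x, y, dt):
    """Summary features from one session of joystick samples.
    x, y: arrays of joystick axis deflections; dt: sampling interval in seconds."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mag = np.hypot(x, y)                      # deflection magnitude per sample
    rate = np.abs(np.diff(mag)) / dt          # how abruptly deflection changes
    heading = np.unwrap(np.arctan2(y, x))     # joystick direction over time
    corrections = np.abs(np.diff(heading))    # directional corrections per step
    return np.array([
        mag.mean(),          # average deflection (overall drive effort)
        mag.std(),           # variability of deflection (steadiness)
        rate.mean(),         # mean abruptness of input (tremor / jerkiness)
        corrections.mean(),  # mean directional correction (steering precision)
    ])
```

    Feature vectors from labelled sessions could then be fed to any standard pattern recognition classifier, for example k-nearest neighbours or a support vector machine, to track changes in a user's input characteristics over time.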

    Novel Methods for Human-Robot Shared Control in Collaborative Robotics

    Blended shared control is a method for continuously combining control inputs from traditional automatic control systems and human operators to control machines. An automatic control system generates control input based on feedback of measured signals, whereas a human operator generates control input based on experience, task knowledge, and awareness and sensing of the environment in which the machine is operating. Actively blending inputs from the automatic control agent and the human agent to jointly control machines is expected to exploit the unique strengths of both: the superior task-execution performance of automatic control based on sensed signals, and the situation awareness of a human in the loop who can handle safety concerns and environmental uncertainties. Shared control in this sense provides an alternative to full autonomy. Existing and future applications include automobiles, underwater vehicles, ships, airplanes, construction machines, space manipulators, surgical robots, and powered wheelchairs, where machines are still mostly operated by humans for safety reasons. Developing machines for full autonomy requires not only advances in the machines themselves but also the ability to sense the environment by placing sensors in it; the latter can be very difficult in many such applications due to uncertainties and changing conditions. Blended shared control, as a more practical alternative to full autonomy, keeps the human operator in the loop to initiate machine actions while real-time intelligent assistance is provided by automatic control. The focus of this work is the problem of how to blend the two inputs, and the development of the associated scientific tools to formalize and achieve blended shared control. Specifically, the following essential aspects are investigated:
    - Task learning: modeling a human-operated robotic task from demonstration as a set of subgoals, so that execution patterns are captured in a simple manner and provide a reference for human intent prediction and automatic control generation.
    - Intent prediction: predicting the human operator's intent within the framework of the subgoal models, encoding the probability that the operator is seeking a particular subgoal.
    - Input blending: generating the automatic control input and dynamically combining it with the human operator's input based on the prediction probability, while accounting for situations where the operator takes unexpected actions to avoid danger by yielding full control authority to the operator.
    - Subgoal adjustment: dynamically adjusting the learned, nominal task model to adapt to task changes, such as a change of target object, which would otherwise cause the nominal model learned from demonstration to lose its effectiveness.
    This dissertation formalizes these notions and develops novel tools and algorithms for enabling blended shared control. To evaluate the developed tools and algorithms, a scaled hydraulic excavator performing a typical trenching and truck-loading task is employed as a specific example, and experimental results are provided to corroborate the tools and methods. To extend the developed methods and further explore shared control in different applications, the dissertation also studies the collaborative operation of robot manipulators. Specifically, various operational interfaces are systematically designed; a hybrid force-motion controller is integrated with shared control in a mixed world-robot frame to facilitate human-robot collaboration; and a method is proposed that uses vision-based feedback to predict the human operator's intent and provide shared-control assistance. These methods allow human operators to remotely control robotic manipulators effectively while receiving assistance from intelligent shared control in different applications. Several robotic manipulation experiments were conducted to corroborate the extended shared control methods using different industrial robots.
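    A minimal sketch of the intent prediction and input blending steps described above, assuming the subgoal likelihood models and per-subgoal controllers have already been learned from demonstration. The class name, the recursive Bayes update, and the confidence-proportional blend are illustrative choices, not the dissertation's exact algorithm.

```python
import numpy as np

class SubgoalBlender:
    """Illustrative blended shared control over a set of learned subgoals."""

    def __init__(self, likelihoods, controllers):
        self.likelihoods = likelihoods  # callables: observation -> p(obs | subgoal)
        self.controllers = controllers  # callables: state -> autonomy command toward subgoal
        n = len(likelihoods)
        self.belief = np.full(n, 1.0 / n)  # uniform prior over subgoals

    def update_belief(self, obs):
        """Recursive Bayesian update of the subgoal belief from a new observation."""
        self.belief *= np.array([lik(obs) for lik in self.likelihoods])
        self.belief /= self.belief.sum()

    def blend(self, state, u_human, override_threshold=0.3):
        """Blend autonomy and operator commands in proportion to intent confidence.
        If no subgoal is sufficiently likely (e.g., the operator is taking an
        unexpected evasive action), yield full authority to the operator."""
        g = int(np.argmax(self.belief))
        confidence = float(self.belief[g])
        if confidence < override_threshold:
            return np.asarray(u_human)
        u_auto = np.asarray(self.controllers[g](state))
        return confidence * u_auto + (1.0 - confidence) * np.asarray(u_human)
```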

    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Get PDF
    Teleoperation of mobile robots is widely used when it is impractical or infeasible for a human to be present but human decision-making is still required. On the one hand, controlling the robot without assistance is stressful and error-prone for the human because of time delays and a lack of situation awareness; on the other hand, despite recent achievements, a fully autonomous robot still cannot execute tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop to contribute intelligence to task execution simultaneously; that is, the human should share autonomy with the robot during operation. The challenge, however, is to coordinate these two sources of intelligence, human and robot, so as to ensure safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy: it models user intent as a contextual task that completes an action primitive and, upon recognizing the task, provides the operator with appropriate motion assistance. In this way, the robot intelligently copes with ongoing tasks on the basis of contextual information, reduces the operator's workload, and improves task performance. To implement this strategy and to account for the uncertainties in sensing and processing environmental information and user input (i.e., the contextual information), a probabilistic shared-autonomy framework is introduced that recognizes, with uncertainty measures, the contextual task the operator is performing with the robot and offers the operator appropriate task-execution assistance according to these measures. Since the way an operator performs a task is implicit, manually modeling the motion patterns of task execution is non-trivial; a set of data-driven approaches is therefore used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated through extensive experiments both in simulation and on a real robot. With the proposed approaches, the operator can be actively and appropriately supported through the increased cognitive capability and autonomy flexibility of the robot.
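    One plausible way to turn the uncertainty measures described above into a level of motion assistance is to map the entropy of the posterior over contextual tasks to an arbitration gain. The sketch below is an illustration under that assumption, not the thesis's implementation; the function names and the entropy-based gain are choices made here.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def assistance_gain(task_posterior):
    """Map recognition uncertainty to an assistance gain in [0, 1]:
    confident recognition (low entropy) -> strong assistance,
    ambiguous recognition (high entropy) -> operator keeps authority."""
    p = np.asarray(task_posterior, dtype=float)
    h_max = np.log(len(p))
    if h_max == 0.0:          # single-task edge case
        return 1.0
    return 1.0 - entropy(p) / h_max

def assisted_command(u_user, u_primitive, task_posterior):
    """Blend the operator's command with the recognized action primitive's
    command in proportion to recognition confidence."""
    g = assistance_gain(task_posterior)
    return g * np.asarray(u_primitive) + (1.0 - g) * np.asarray(u_user)
```

    With a uniform posterior the operator keeps full authority (gain 0); as recognition becomes confident, the command of the recognized action primitive dominates (gain approaching 1).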