82 research outputs found

    The advancement of an obstacle avoidance Bayesian neural network for an intelligent wheelchair

    In this paper, an advanced obstacle avoidance system is developed for an intelligent wheelchair designed to support people with mobility impairments who also have visual, upper-limb, or cognitive impairments. To avoid obstacles, information about the immediate environment is continuously updated with range data sampled by an on-board URG-04LX laser range finder. The data are then transformed to extract the information relevant to navigation before being presented to a trained obstacle avoidance neural network, whose structure and weight values are optimized under a Bayesian framework. Experimental results show that this method allows the wheelchair to avoid collisions while navigating through an unknown environment in real time. More importantly, the new approach significantly improves the system's ability to pass narrow openings such as doorways. © 2013 IEEE
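
    The pipeline described above (laser scan, feature transform, trained network, steering command) can be pictured with a short sketch. This is not the authors' implementation: the sector-based features, the network size, and the placeholder weights are assumptions made only to show the data flow.

        # Minimal sketch: laser scan -> feature transform -> feed-forward network -> steering command.
        # Feature choice, network size and weights are illustrative assumptions, not the paper's model.
        import numpy as np

        N_BEAMS = 682      # the URG-04LX returns roughly 682 range samples per scan
        N_SECTORS = 9      # assumed: scan compressed into coarse angular sectors
        HIDDEN = 12        # assumed hidden-layer size (the paper selects structure via Bayesian evidence)

        def scan_to_features(ranges, max_range=4.0):
            """Compress the raw scan into per-sector minimum clearances, normalised to [0, 1]."""
            ranges = np.clip(np.asarray(ranges, dtype=float), 0.0, max_range)
            sectors = np.array_split(ranges, N_SECTORS)
            return np.array([s.min() for s in sectors]) / max_range

        class ObstacleAvoidanceNet:
            """Feed-forward network; in the paper its weights come from Bayesian-regularised training."""
            def __init__(self, rng=np.random.default_rng(0)):
                # Placeholder weights: in practice these would be loaded after training.
                self.w1 = rng.normal(scale=0.5, size=(HIDDEN, N_SECTORS))
                self.b1 = np.zeros(HIDDEN)
                self.w2 = rng.normal(scale=0.5, size=(1, HIDDEN))
                self.b2 = np.zeros(1)

            def steering(self, features):
                h = np.tanh(self.w1 @ features + self.b1)
                out = np.tanh(self.w2 @ h + self.b2)
                return float(out[0])   # steering command in [-1, 1]

        if __name__ == "__main__":
            fake_scan = np.full(N_BEAMS, 3.0)
            fake_scan[300:360] = 0.6                      # obstacle ahead
            net = ObstacleAvoidanceNet()
            print("steering:", net.steering(scan_to_features(fake_scan)))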

    Optimal path-following control of a smart powered wheelchair.

    This paper proposes an optimal path-following control approach for a smart powered wheelchair. Lyapunov's second method is employed to derive a stable position-tracking control rule. To guarantee robust performance of the wheelchair system even under model uncertainties, an advanced robust tracking scheme is utilised, based on the combination of a systematic decoupling technique and a neural network design. A calibration procedure is adopted for the wheelchair system and significantly improves positioning accuracy. Two real-time experiments, a square-tracking task and a door-passing task, confirm the performance of the proposed approach.
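
    As an illustration of the Lyapunov-based tracking idea, the sketch below uses the classic kinematic posture-tracking law for a differential-drive platform. It is not the paper's controller, which additionally combines a decoupling technique with a neural-network robust term; the gains shown are assumed.

        # Classic Lyapunov-based posture-tracking law for a differential-drive wheelchair.
        # Gains are assumed; the paper's robust/neural additions are omitted.
        import numpy as np

        K_X, K_Y, K_TH = 1.0, 4.0, 2.5   # assumed positive gains

        def tracking_control(pose, ref_pose, v_ref, w_ref):
            """Return (v, w) driving the wheelchair pose toward the reference pose."""
            x, y, th = pose
            xr, yr, thr = ref_pose
            # Express the pose error in the robot frame.
            ex =  np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
            ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
            eth = np.arctan2(np.sin(thr - th), np.cos(thr - th))
            # For non-negative reference speed, the Lyapunov candidate
            # V = (ex^2 + ey^2)/2 + (1 - cos eth)/K_Y is non-increasing under this law.
            v = v_ref * np.cos(eth) + K_X * ex
            w = w_ref + v_ref * (K_Y * ey + K_TH * np.sin(eth))
            return v, w

        # Example: one control step while following a straight reference path.
        v, w = tracking_control(pose=(0.0, 0.1, 0.05), ref_pose=(0.2, 0.0, 0.0), v_ref=0.4, w_ref=0.0)
        print(f"v = {v:.3f} m/s, w = {w:.3f} rad/s")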

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
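
    The intention-inference component can be pictured as clustering in a learnt latent space. The sketch below is only schematic: the thesis uses a deep generative model as the encoder, whereas here a fixed random projection stands in for it, and the synthetic joystick trajectories and Gaussian-mixture clustering are assumptions.

        # Schematic sketch: embed command trajectories, then cluster the embeddings as "intents".
        # The random-projection encoder and synthetic data are stand-ins, not the thesis's model.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        T = 20                                  # assumed fixed trajectory length (time steps)
        PROJ = rng.normal(size=(2, 2 * T))      # stand-in for a learnt encoder's weights

        def embed(trajectory):
            """Project a (T, 2) sequence of (forward, turn) commands to a 2-D latent point."""
            return PROJ @ np.asarray(trajectory).reshape(-1)

        # Synthetic dataset: some users drive straight, others keep turning.
        straight = [np.column_stack([np.ones(T), rng.normal(0, 0.05, T)]) for _ in range(30)]
        turning  = [np.column_stack([0.5 * np.ones(T), 0.8 + rng.normal(0, 0.05, T)]) for _ in range(30)]
        X = np.array([embed(t) for t in straight + turning])

        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        print("cluster of a straight run:", gmm.predict(embed(straight[0])[None]))
        print("cluster of a turning run :", gmm.predict(embed(turning[0])[None]))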

    When and How to Help: An Iterative Probabilistic Model for Learning Assistance by Demonstration

    Crafting a proper assistance policy is a difficult endeavour but essential for the development of robotic assistants. Indeed, assistance is a complex issue that depends not only on the task at hand, but also on the state of the user, the environment, and competing objectives. As a way forward, this paper proposes learning the task of assistance through observation; an approach we term Learning Assistance by Demonstration (LAD). Our methodology is a subclass of Learning-by-Demonstration (LbD), yet directly addresses difficult issues associated with proper assistance, such as when and how to appropriately assist. To learn assistive policies, we develop a probabilistic model that explicitly captures these elements, and we provide efficient, online training methods. Experimental results on smart mobility assistance, using both simulation and a real-world smart wheelchair platform, demonstrate the effectiveness of our approach; the LAD model quickly learns when to assist (achieving an AUC score of 0.95 after only one demonstration) and improves with additional examples. Results show that this translates into better task performance; our LAD-enabled smart wheelchair improved participant driving performance (measured in lap seconds) by 20.6 s (a speedup of 137%) after a single teacher demonstration.
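
    The "when to assist" part of the problem can be framed, for evaluation purposes, as binary classification scored with AUC, which is how the reported result is expressed. The sketch below uses a logistic-regression stand-in on synthetic state features; the paper's own iterative probabilistic model is not reproduced here, and the feature and label definitions are assumptions.

        # Sketch of scoring a "when to assist" predictor with AUC. Model and data are stand-ins.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 500
        # Assumed state features per time step: [obstacle clearance, heading error, user input magnitude]
        X = rng.uniform(size=(n, 3)) * np.array([3.0, 1.0, 1.0])
        # Assumed teacher labels: assist when clearance is small and heading error is large.
        y = ((X[:, 0] < 0.8) & (X[:, 1] > 0.4)).astype(int)

        model = LogisticRegression().fit(X[:300], y[:300])     # roughly "one demonstration" of data
        scores = model.predict_proba(X[300:])[:, 1]
        print("AUC on held-out steps:", round(roc_auc_score(y[300:], scores), 3))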

    Rehabilitation Technologies: Biomechatronics Point of View


    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. The research cases are documented in 32 chapters organized into 7 categories, described next.
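
    A schematic way to see these activities is as one repeated sense-plan-act cycle. The skeleton below is purely illustrative; none of the class or method names are taken from the book, and each collaborator (sensors, mapper, localiser, planner, controller) stands for whatever concrete technique a chapter may use.

        # Illustrative skeleton of the navigation cycle enumerated above. All names are placeholders.
        class NavigationLoop:
            def __init__(self, sensors, mapper, localiser, planner, controller):
                self.sensors, self.mapper = sensors, mapper
                self.localiser, self.planner, self.controller = localiser, planner, controller

            def step(self, goal=None):
                scan = self.sensors.read()                              # (i) perception
                if goal is None:
                    goal = self.planner.next_frontier(self.mapper.map)  # (ii) exploration
                self.mapper.update(scan)                                # (iii) mapping
                pose = self.localiser.estimate(scan, self.mapper.map)   # (iv) localization
                path = self.planner.plan(pose, goal, self.mapper.map)   # (v) path planning
                return self.controller.follow(path, pose)               # (vi) path execution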

    A multi-hierarchical symbolic model of the environment for improving mobile robot operation

    The work developed in this thesis focuses on the study and application of multi-hierarchical structures that represent the environment of a mobile robot, with the aim of improving its ability to perform complex tasks in human scenarios. A mobile robot must possess a symbolic representation of its environment in order to carry out deliberative operations, for example task planning. However, when symbolically representing real environments, given their complexity, it is essential to have mechanisms capable of organising, and facilitating access to, the enormous amount of information they produce. Apart from the difficulty of handling large amounts of information, there are other underlying problems in the symbolic representation of real environments that have not yet been completely solved in the scientific literature. One of them is keeping the symbolic representation optimised with respect to the tasks the robot must perform and coherent with the environment in which it operates. Another problem, related to the previous one, is the creation/modification of symbolic information from purely sensory information (this problem is known as symbol-grounding). This thesis studies these problems and contributes solutions based on multi-hierarchical structures. These symbolic structures, based on the concept of abstraction, imitate the way humans organise spatial information and allow a mobile robot to improve its abilities in complex environments. The main contributions of this work are the following. A symbolic model based on multiple abstractions (multi-hierarchies) has been mathematically formalised using Category Theory. An efficient task planner has been developed that exploits the hierarchical organisation of the symbolic model of the environment; the method has been validated mathematically, and two variants of it (HPWA-1 and HPWA-2) have been implemented and compared. A particular instance of the multi-hierarchical model has been studied and implemented to organise symbolic information with the aim of simultaneously improving different tasks performed by a mobile robot. A procedure has been developed that (1) builds a hierarchical model of a robot's environment, (2) keeps it coherent and up to date, and (3) optimises it in order to improve the tasks performed by the robot. Finally, a robotic architecture has been implemented that encompasses all of the above. Real-world experiments with a robotised wheelchair demonstrate the usefulness of multi-hierarchical structures in mobile robotics.
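
    The benefit of planning over an abstracted environment model can be illustrated with a toy two-level search: plan first in a coarse room graph, then restrict the fine waypoint search to the rooms selected. The sketch below only conveys this general idea; it is not the thesis's HPWA-1/HPWA-2 algorithms, and all graphs and names are invented.

        # Toy hierarchy-guided search: a coarse plan prunes the fine-level search space.
        from collections import deque

        def bfs(graph, start, goal, allowed=None):
            """Shortest path by BFS, optionally restricted to an allowed node set."""
            queue, seen = deque([[start]]), {start}
            while queue:
                path = queue.popleft()
                if path[-1] == goal:
                    return path
                for nxt in graph.get(path[-1], []):
                    if nxt not in seen and (allowed is None or nxt in allowed):
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        # Coarse level: rooms; fine level: waypoints tagged with their room.
        rooms = {"kitchen": ["hall"], "hall": ["kitchen", "office"], "office": ["hall"]}
        waypoints = {"k1": ["k2"], "k2": ["k1", "h1"], "h1": ["k2", "h2"], "h2": ["h1", "o1"], "o1": ["h2"]}
        room_of = {"k1": "kitchen", "k2": "kitchen", "h1": "hall", "h2": "hall", "o1": "office"}

        room_plan = bfs(rooms, "kitchen", "office")                    # abstract (room-level) plan
        allowed = {w for w, r in room_of.items() if r in room_plan}    # prune the fine search
        print("rooms:", room_plan, "| waypoints:", bfs(waypoints, "k1", "o1", allowed=allowed))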