319 research outputs found

    Development of a Bayesian recursive algorithm to find free-spaces for an intelligent wheelchair

    This paper introduces a new shared control strategy for an intelligent wheelchair based on a Bayesian recursive algorithm. Using local environment information gathered by a laser range finder and commands acquired through a user interface, the algorithm finds the most appropriate free-space, namely the one with the highest posterior probability. An autonomous navigation algorithm then assists in manoeuvring the wheelchair into the chosen free-space. Experimental results demonstrate that the new method provides excellent performance with great flexibility and fast response. © 2011 IEEE
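
    The abstract does not spell out the update itself, but a minimal sketch of the idea, a recursive Bayesian update over candidate free-spaces extracted from a laser scan, might look as follows. The gap-detection heuristic, the von Mises-style likelihood on the user's commanded heading, and all thresholds are illustrative assumptions, not values from the paper.

        import numpy as np

        def detect_free_spaces(scan, angles, min_range=2.0):
            """Very rough gap detector: returns bearings (rad) of openings in a laser scan.
            Beams seeing beyond min_range are treated as 'open' (assumed heuristic)."""
            gaps = []
            far = scan > min_range
            start = None
            for i, open_ in enumerate(far):
                if open_ and start is None:
                    start = i
                elif not open_ and start is not None:
                    gaps.append(0.5 * (angles[start] + angles[i - 1]))
                    start = None
            if start is not None:
                gaps.append(0.5 * (angles[start] + angles[-1]))
            return np.array(gaps)

        def recursive_update(prior, gap_bearings, user_bearing, kappa=2.0):
            """One Bayesian recursion: posterior ∝ likelihood(user command | gap) * prior.
            A von-Mises-like likelihood centred on the commanded direction is assumed."""
            likelihood = np.exp(kappa * np.cos(gap_bearings - user_bearing))
            posterior = likelihood * prior
            return posterior / posterior.sum()

        # toy example: a 181-beam scan with two openings, user steering slightly left
        angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
        scan = np.full(181, 1.0)
        scan[40:60] = 4.0      # opening on one side
        scan[120:150] = 4.0    # wider opening on the other side
        gaps = detect_free_spaces(scan, angles)
        belief = np.full(len(gaps), 1.0 / len(gaps))      # uniform prior over free-spaces
        for user_cmd in [0.3, 0.35, 0.4]:                 # successive user headings (rad)
            belief = recursive_update(belief, gaps, user_cmd)
        chosen = gaps[np.argmax(belief)]                  # free-space with highest posterior
        print(f"chosen free-space bearing: {chosen:.2f} rad, belief: {belief}")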

    The advancement of an obstacle avoidance Bayesian neural network for an intelligent wheelchair

    In this paper, an advanced obstacle avoidance system is developed for an intelligent wheelchair designed to support people with mobility impairments who also have visual, upper limb, or cognitive impairments. To avoid obstacles, the immediate environment is continuously updated with range data sampled by an on-board URG-04LX laser range finder. The data are then transformed to extract the information relevant to navigation before being presented to a trained obstacle avoidance neural network, whose structure and weight values are optimized under a Bayesian framework. Experimental results showed that this method allows the wheelchair to avoid collisions while navigating through an unknown environment in real time. More importantly, the new approach significantly improves the system's ability to pass through narrow openings such as doorways. © 2013 IEEE
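
    The exact network structure and Bayesian optimization procedure are not given in the abstract; the sketch below only illustrates the general pipeline it describes: compress a laser scan into features, fit a small neural network whose weight-decay term plays the role of the Gaussian weight prior in a Bayesian-regularised fit, and output a steering command. The sector features, synthetic training targets, and network size are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def sector_features(scan, n_sectors=8):
            """Compress a raw laser scan into per-sector minimum ranges
            (an assumed pre-processing step; the paper's exact transform is not given)."""
            return np.array([seg.min() for seg in np.array_split(scan, n_sectors)])

        # synthetic training data: steer toward the half of the scan with more clearance
        X = rng.uniform(0.2, 4.0, size=(500, 8))
        y = np.tanh(X[:, 4:].min(axis=1) - X[:, :4].min(axis=1))

        # one-hidden-layer network; alpha is the weight-decay term that stands in for
        # the Gaussian weight prior of a Bayesian-regularised fit
        W1 = rng.normal(0, 0.1, (8, 16)); b1 = np.zeros(16)
        W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
        alpha, lr = 1e-3, 0.05
        for _ in range(2000):
            H = np.tanh(X @ W1 + b1)
            pred = (H @ W2 + b2).ravel()
            err = pred - y
            gW2 = H.T @ err[:, None] / len(X) + alpha * W2
            gb2 = np.array([err.mean()])
            dH = (err[:, None] @ W2.T) * (1 - H ** 2)
            gW1 = X.T @ dH / len(X) + alpha * W1
            gb1 = dH.mean(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        scan = rng.uniform(0.2, 4.0, 682)        # the URG-04LX returns roughly 682 range readings
        f = sector_features(scan)
        steer = (np.tanh(f @ W1 + b1) @ W2 + b2).item()
        print(f"steering command in [-1, 1]: {steer:.3f}")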

    Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes

    Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
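
    As a rough illustration of the echo-state idea behind the OESGP, the sketch below drives a fixed random reservoir with a time series and updates a readout online. A recursive least-squares readout is used here as a simple stand-in for the paper's Bayesian online Gaussian process, and the reservoir parameters are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)

        class EchoStateReadout:
            """Sketch of the echo-state idea: a fixed random reservoir provides temporal
            features and the readout is updated online. Recursive least squares stands in
            for the Bayesian online GP readout used in the paper."""
            def __init__(self, n_in, n_res=100, spectral_radius=0.9, leak=0.3):
                W = rng.normal(size=(n_res, n_res))
                self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
                self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
                self.leak = leak
                self.x = np.zeros(n_res)
                self.w_out = np.zeros(n_res)
                self.P = np.eye(n_res) * 1e3            # readout covariance (RLS state)

            def step(self, u):
                pre = self.W @ self.x + self.W_in @ u
                self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
                return self.x

            def update(self, u, y):
                x = self.step(u)
                err = y - self.w_out @ x
                k = self.P @ x / (1.0 + x @ self.P @ x)
                self.w_out += k * err
                self.P -= np.outer(k, x @ self.P)
                return err

        # one-step-ahead prediction of a noisy sine wave, learned online
        model = EchoStateReadout(n_in=1)
        t = np.arange(2000)
        series = np.sin(0.05 * t) + 0.05 * rng.normal(size=len(t))
        errs = [abs(model.update(series[i:i + 1], series[i + 1])) for i in range(len(t) - 1)]
        print(f"mean abs error, first 100 steps: {np.mean(errs[:100]):.3f}, "
              f"last 100 steps: {np.mean(errs[-100:]):.3f}")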

    Shared control strategies for human-machine interface in an intelligent wheelchair

    In this paper, we introduce a shared control mechanism for an intelligent wheelchair designed to support people with mobility impairments who also have visual, upper limb, or cognitive impairments. The method is designed to keep users involved in the movement as much as possible while still providing the assistance needed to reach the goal safely. Data collected through the URG-04LX laser range finder and the user interface are analyzed to determine whether the desired action is safe to perform. The system then decides either to provide assistance or to let the user input control the wheelchair. Experimental results indicate that the method performs effectively, with high user satisfaction. © 2013 IEEE
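
    A minimal sketch of the arbitration logic described above might look like the following: check whether the user's commanded heading is free of nearby obstacles in the laser scan, and either pass the command through or fall back to an assistance behaviour. The clearance threshold, cone width, and the stop-as-assistance fallback are illustrative assumptions, not the paper's actual method.

        import numpy as np

        def is_safe(scan, angles, heading, clearance=0.6, cone=np.radians(20)):
            """Check whether any obstacle lies within `clearance` metres inside a cone
            around the user's commanded heading (thresholds assumed for illustration)."""
            in_cone = np.abs(angles - heading) < cone
            return scan[in_cone].min() > clearance

        def shared_control(scan, angles, user_cmd):
            """Pass the user command through when safe; otherwise fall back to an
            assistance behaviour (here, simply stop; the paper's assistance is richer)."""
            heading, speed = user_cmd
            if is_safe(scan, angles, heading):
                return user_cmd                      # user retains control
            return (heading, 0.0)                    # assistance overrides unsafe motion

        angles = np.linspace(-2.09, 2.09, 682)       # URG-04LX: roughly 240-degree field of view
        scan = np.full(682, 3.0)
        scan[330:352] = 0.4                          # obstacle straight ahead, 0.4 m away
        print(shared_control(scan, angles, user_cmd=(0.0, 0.5)))   # blocked -> (0.0, 0.0)
        print(shared_control(scan, angles, user_cmd=(1.0, 0.5)))   # clear   -> (1.0, 0.5)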

    Development of an assistive patient mobile system for hospital environments

    This paper presents an assistive patient mobile system for hospital environments, which focuses on transferring patients without nursing help. The system combines an advanced hospital bed with an autonomous navigating robot: the intelligent bed tracks the robot, which routinely navigates and communicates with the bed. The work centres on building the structure, the hardware design, and robot detection and tracking algorithms using a laser range finder. The assistive patient mobile system has been tested, and real-world experiments demonstrate high reliability and practicality. The accuracy of the proposed method is 91% for the targeted test object, with a classification error rate of 6%. Additionally, a comparison between our method and a related approach, including their results, is also presented. © 2013 IEEE
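
    The abstract does not detail the detection algorithm; the sketch below illustrates one simple way to detect a target of known width in a laser scan, by segmenting the scan into clusters and matching cluster extents. The break distance and expected target width are placeholder values, not parameters from the paper.

        import numpy as np

        def scan_to_points(scan, angles):
            """Convert polar laser readings to Cartesian points."""
            return np.stack([scan * np.cos(angles), scan * np.sin(angles)], axis=1)

        def segment(points, break_dist=0.15):
            """Split consecutive scan points into clusters wherever the gap between
            neighbours exceeds `break_dist` (an assumed threshold)."""
            gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
            idx = np.where(gaps > break_dist)[0] + 1
            return np.split(points, idx)

        def find_target(clusters, expected_width=0.5, tol=0.15):
            """Pick the cluster whose extent best matches the expected target width.
            The 0.5 m width is a placeholder, not a value from the paper."""
            best, best_err = None, tol
            for c in clusters:
                if len(c) < 3:
                    continue
                width = np.linalg.norm(c[-1] - c[0])
                err = abs(width - expected_width)
                if err < best_err:
                    best, best_err = c.mean(axis=0), err
            return best   # centroid of the detected target, or None

        angles = np.linspace(-2.09, 2.09, 682)
        scan = np.full(682, 4.0)
        scan[300:355] = 1.5                      # an object roughly 0.5 m wide at 1.5 m
        points = scan_to_points(scan, angles)
        print(find_target(segment(points)))      # centroid of the detected cluster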

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", so a clustering framework composed of a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
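
    The thesis builds its intent-inference clustering on a deep generative model; as a much simpler stand-in, the sketch below clusters flattened joystick-style trajectories with a Gaussian mixture model just to illustrate unsupervised intent clustering. The trajectory generator and feature layout are invented for the example and are not from the thesis.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)

        def make_trajectory(kind):
            """Toy joystick trajectories: 20 timesteps of (forward, turn) commands,
            flattened into a 40-dimensional feature vector (an assumed representation)."""
            t = np.linspace(0, 1, 20)
            if kind == "forward":
                traj = np.stack([0.8 + 0.05 * rng.normal(size=20),
                                 0.05 * rng.normal(size=20)], 1)
            else:  # "turn"
                traj = np.stack([0.3 + 0.05 * rng.normal(size=20),
                                 0.7 * t + 0.05 * rng.normal(size=20)], 1)
            return traj.ravel()

        X = np.array([make_trajectory("forward") for _ in range(50)] +
                     [make_trajectory("turn") for _ in range(50)])
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        labels = gmm.predict(X)
        print("cluster sizes:", np.bincount(labels))     # expect two clusters of about 50 each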

    Collective cluster-based map merging in multi-robot SLAM

    New challenges arise with multi-robotics, and information integration is among the most important problems that need to be solved in this field. For mobile robots, information integration usually refers to map merging: the process of combining partial maps constructed by individual robots in order to build a global map of the environment. Different approaches have been taken toward solving the map merging problem. Our method is based on a transformational approach, in which the idea is to find regions of overlap between local maps and fuse them together using a set of transformations and similarity heuristic algorithms. The contribution of this work is an improvement in the search space of candidate transformations, achieved by enforcing a pair-wise partial localization technique over the local maps prior to any attempt to transform them. The experimental results show a noticeable improvement (15-20%) in the overall mapping time using our technique.
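
    A toy version of the transformational idea, searching candidate transformations of one occupancy grid against another and fusing the best match, might look as follows. The 90-degree rotations, small shift range, and agreement-based similarity score are simplifications for illustration; the paper's partial localization step would correspond to shrinking the transformation search space beforehand.

        import numpy as np

        def transform_map(grid, dx, dy, rot90):
            """Apply a candidate transformation (90-degree rotations plus integer shifts)
            to an occupancy grid. Real map merging uses finer-grained transforms."""
            g = np.rot90(grid, rot90)
            return np.roll(np.roll(g, dx, axis=0), dy, axis=1)

        def similarity(a, b):
            """Heuristic similarity: agreement minus disagreement over occupied cells."""
            agree = np.sum((a == 1) & (b == 1))
            disagree = np.sum((a == 1) ^ (b == 1))
            return agree - disagree

        def merge(map_a, map_b, shift_range=range(-5, 6)):
            """Search candidate transformations of map_b against map_a and fuse the best.
            Partial localization, as in the paper, would shrink `shift_range` a priori."""
            candidates = ((dx, dy, r) for dx in shift_range for dy in shift_range for r in range(4))
            best = max(candidates, key=lambda t: similarity(map_a, transform_map(map_b, *t)))
            fused = np.maximum(map_a, transform_map(map_b, *best))
            return best, fused

        # toy maps: the same L-shaped wall seen by two robots with an offset
        map_a = np.zeros((20, 20), dtype=int); map_a[5, 5:15] = 1; map_a[5:15, 5] = 1
        map_b = np.roll(np.roll(map_a, 3, axis=0), -2, axis=1)
        best, fused = merge(map_a, map_b)
        print("best transform (dx, dy, rot90):", best)      # expect (-3, 2, 0)
        print("occupied cells in fused map:", fused.sum())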

    Adaptive Real-Time Image Processing for Human-Computer Interaction


    Nonstrict hierarchical reinforcement learning for interactive systems and robots

    Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that estimate the true value function of a policy approximately, or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation, and that allows flexible transitions between dialogue subtasks to give human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.
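
    A compact sketch of the two ingredients named above, subtask policies with linear function approximation and a subtask transition function that lets the user switch subdialogues mid-task, might look like the following toy example. The two chain-world subtasks, the random user switch requests, and the one-hot features are assumptions made only to keep the example self-contained.

        import numpy as np

        rng = np.random.default_rng(2)

        class LinearQAgent:
            """Q-learning with linear function approximation: Q(s, a) = w_a . phi(s)."""
            def __init__(self, n_features, n_actions, lr=0.1, gamma=0.95, eps=0.2):
                self.w = np.zeros((n_actions, n_features))
                self.lr, self.gamma, self.eps = lr, gamma, eps

            def act(self, phi):
                if rng.random() < self.eps:
                    return int(rng.integers(len(self.w)))
                return int(np.argmax(self.w @ phi))

            def update(self, phi, a, r, phi_next, done):
                target = r if done else r + self.gamma * np.max(self.w @ phi_next)
                self.w[a] += self.lr * (target - self.w[a] @ phi) * phi

        def phi(state, n=5):
            f = np.zeros(n); f[state] = 1.0   # one-hot features; richer features would
            return f                          # exploit the linear readout further

        def subtask_transition(subtask, user_request):
            """Non-strict hierarchy: the user may pull the dialogue into another subtask
            at any time, instead of waiting for the current one to terminate."""
            return user_request if user_request is not None else subtask

        # two toy subtasks, each a 5-state chain with the goal at state 4
        agents = {"quiz": LinearQAgent(5, 2), "chitchat": LinearQAgent(5, 2)}
        for episode in range(300):
            subtask, state = "quiz", 0
            for _ in range(30):
                user_request = "chitchat" if rng.random() < 0.05 else None
                new_subtask = subtask_transition(subtask, user_request)
                if new_subtask != subtask:
                    subtask, state = new_subtask, 0          # switch subdialogue, reset its state
                a = agents[subtask].act(phi(state))
                nxt = min(4, state + 1) if a == 1 else max(0, state - 1)
                r, done = (1.0, True) if nxt == 4 else (-0.01, False)
                agents[subtask].update(phi(state), a, r, phi(nxt), done)
                state = nxt
                if done:
                    break
        greedy = [int(np.argmax(agents["quiz"].w @ phi(s))) for s in range(4)]
        print("greedy actions in 'quiz' subtask:", greedy)   # 1 = move toward the goal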