
    Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes

    Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and to standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
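    To make the idea of a recursive kernel with automatic relevance determination concrete, the sketch below implements one plausible recursion: an ARD-weighted squared distance handles spatial feature weighting at each time step, while a recurrence term carries temporal context. The exact recursion and the `rho` recurrence weight are assumptions for illustration, not the published OIESGP kernel.

```python
import numpy as np

def recursive_ard_kernel(x_seq, y_seq, lengthscales, rho=0.5):
    """Recursive ARD kernel over two input sequences (T x D arrays).

    Each step combines an ARD-weighted squared distance (spatial
    relevance) with the previous kernel value (temporal recursion).
    The published OIESGP recursion may differ in its exact form.
    """
    k = 1.0  # base case for the recursion
    for x_t, y_t in zip(x_seq, y_seq):
        sq_dist = np.sum(((x_t - y_t) / lengthscales) ** 2)
        k = np.exp(-0.5 * sq_dist + rho * (k - 1.0))
    return k

# Toy usage: dimension 3 gets a huge lengthscale, i.e. is deemed irrelevant.
T, D = 10, 3
rng = np.random.default_rng(0)
x, y = rng.normal(size=(T, D)), rng.normal(size=(T, D))
print(recursive_ard_kernel(x, y, lengthscales=np.array([1.0, 1.0, 100.0])))
```

    With large lengthscales, irrelevant dimensions contribute almost nothing to the distance, which is how hyperparameter inspection can reveal which sensory channels matter.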

    A short curriculum of the robotics and technology of computer lab

    Our research Lab is directed by Prof. Anton Civit. It is an interdisciplinary group of 23 researchers who carry out their teaching and research work at the Escuela Politécnica Superior (Higher Polytechnic School) and the Escuela de Ingeniería Informática (Computer Engineering School). The main research fields are: a) industrial and mobile robotics, b) neuro-inspired processing using electronic spikes, c) embedded and real-time systems, d) parallel and massive-processing computer architectures, e) information technologies for rehabilitation, disabled and elderly people, and f) Web accessibility and usability. In this paper, the Lab history is presented and its main publications and research projects over the last few years are summarized.

    Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks

    Social intelligence is an important requirement for enabling robots to collaborate with people. In particular, human path prediction is an essential capability for robots in that it prevents potential collision with a human and allows the robot to safely make larger movements. In this paper, we present a method for predicting the trajectory of a human who follows a haptic robotic guide without using sight, which is valuable for assistive robots that aid the visually impaired. We apply a deep learning method based on recurrent neural networks using multimodal data: (1) human trajectory, (2) movement of the robotic guide, (3) haptic input data measured from the physical interaction between the human and the robot, and (4) human depth data. We collected actual human trajectory and multimodal response data through indoor experiments. Our model outperformed the baseline while using only the robot data with the observed human trajectory, and it shows even better results when using additional haptic and depth data.
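    The sketch below illustrates the general shape of such a multimodal recurrent predictor in PyTorch: the four input streams are concatenated per time step, fed through an LSTM, and the final hidden state is mapped to the next human position. All feature dimensions and the single-layer architecture are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultimodalTrajectoryPredictor(nn.Module):
    """Sketch of an RNN predictor over concatenated modalities.

    Feature sizes (trajectory 2, robot pose 3, haptic 6, depth
    embedding 16) are illustrative assumptions only.
    """
    def __init__(self, traj=2, robot=3, haptic=6, depth=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(traj + robot + haptic + depth, hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, traj)  # next (x, y) of the human

    def forward(self, traj, robot, haptic, depth):
        x = torch.cat([traj, robot, haptic, depth], dim=-1)  # (B, T, F)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])  # predict the step after the window

model = MultimodalTrajectoryPredictor()
B, T = 4, 20
pred = model(torch.randn(B, T, 2), torch.randn(B, T, 3),
             torch.randn(B, T, 6), torch.randn(B, T, 16))
print(pred.shape)  # torch.Size([4, 2])
```

    Dropping the haptic and depth arguments (zeroing those inputs) reproduces the robot-data-only ablation the abstract compares against.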

    From virtual demonstration to real-world manipulation using LSTM and MDN

    Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising approach that would allow a non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time-consuming, disruptive to the comfort of the user, and presents safety challenges. It would be desirable to perform the demonstrations in a virtual environment. In this paper we describe a solution to the challenging problem of behavior transfer from virtual demonstration to a physical robot. The virtual demonstrations are used to train a deep neural network based controller, which uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to calculate an error signal suitable for the multimodal nature of demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can be used to successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architectural choice outperforms other choices, such as feedforward networks and mean-squared-error based training signals, and (3) allowing imperfect demonstrations in the training set also allows the controller to learn how to correct its manipulation mistakes.
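    The key architectural point is the error signal: a mean-squared error averages over multiple valid demonstrated trajectories, whereas a Mixture Density Network fits a Gaussian mixture and trains on its negative log-likelihood, so multimodal demonstrations are preserved. A generic MDN loss sketch follows; the mixture size, output dimensionality, and diagonal covariance are assumptions, not the paper's exact parameterisation.

```python
import torch

def mdn_nll(pi_logits, mu, log_sigma, target):
    """Negative log-likelihood of a diagonal Gaussian mixture.

    pi_logits: (B, K), mu/log_sigma: (B, K, D), target: (B, D).
    A generic MDN loss sketch, not the paper's exact setup.
    """
    log_pi = torch.log_softmax(pi_logits, dim=-1)            # mixture weights
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(target.unsqueeze(1)).sum(-1)    # (B, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Toy usage with K=5 mixture components over 7-DoF joint targets.
B, K, D = 8, 5, 7
loss = mdn_nll(torch.randn(B, K), torch.randn(B, K, D),
               torch.randn(B, K, D), torch.randn(B, D))
print(loss.item())
```

    In a training loop, the LSTM's output at each step would be split into the `pi_logits`, `mu`, and `log_sigma` heads before computing this loss against the demonstrated trajectory point.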

    Neural network decoupling technique and its application to a powered wheelchair system

    This paper proposes a neural network decoupling technique for an uncertain multivariable system. Based on a linear diagonalization technique, a reference model is designed using nominal parameters to provide training signals for a neural network decoupler. A neural network model is designed to learn the dynamics of the uncertain multivariable system in order to avoid the required calculations of the plant Jacobian. To avoid overfitting, both neural networks are trained by the Levenberg-Marquardt algorithm with Bayesian regularization, which uses a real-time recurrent learning algorithm to obtain gradient information. Three experimental results in the powered wheelchair control application confirm that the proposed technique effectively minimises the coupling effects caused by input-output interactions, even under system uncertainties.
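    The sketch below makes the linear-diagonalization idea concrete on a static 2x2 example: a diagonal reference model built from nominal parameters defines the desired decoupled behaviour, and the mapping that reproduces it becomes the training target. In the paper this mapping is learnt by a neural network so that plant-Jacobian calculations are avoided; the closed form and the gain values here are illustrative assumptions.

```python
import numpy as np

# Nominal 2x2 plant gain (wheelchair linear/angular velocity channels;
# values are illustrative assumptions, not identified parameters).
G_nominal = np.array([[1.0, 0.3],
                      [0.2, 1.0]])

def reference_model(r):
    """Diagonal reference: each output should track its own command only."""
    return np.diag(np.diag(G_nominal)) @ r

def static_decoupler(r):
    """Ideal static decoupler from diagonalization: solve G0 u = M r.

    Shown in closed form only to make the training target concrete;
    the paper replaces this with a learnt neural network decoupler.
    """
    return np.linalg.solve(G_nominal, reference_model(r))

r = np.array([0.5, -0.2])                 # operator commands
u = static_decoupler(r)                   # decoupled control inputs
print(G_nominal @ u, reference_model(r))  # plant output matches reference
```

    A learnt decoupler keeps tracking this reference even when the true plant drifts away from `G_nominal`, which is where the Jacobian-free neural network approach pays off.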

    Review of real brain-controlled wheelchairs

    This paper presents a review of the state of the art regarding wheelchairs driven by a brain-computer interface (BCI). Using a brain-controlled wheelchair (BCW), disabled users can drive a wheelchair through their brain activity, granting them the autonomy to move through an experimental environment. A classification is established based on the characteristics of the BCW, such as the type of electroencephalographic (EEG) signal used, the navigation system employed by the wheelchair, the task set for the participants, or the metrics used to evaluate performance. Furthermore, these factors are compared according to the type of signal used, in order to clarify the differences among them. Finally, the trend of current research in this field is discussed, as well as the challenges that should be solved in the future.

    Bio-signal based control in assistive robots: a survey

    Recently, bio-signal based control has been gradually deployed in biomedical devices and assistive robots to improve the quality of life of disabled and elderly people; among such signals, electromyography (EMG) and electroencephalography (EEG) are the most widely used. This paper reviews the deployment of these bio-signals in state-of-the-art control systems. The main aim of this paper is to describe the techniques used for (i) collecting EMG and EEG signals and dividing them into segments (data acquisition and data segmentation stage), (ii) extracting the important data and removing redundant data from the EMG and EEG segments (feature extraction stage), and (iii) identifying categories from the relevant data obtained in the previous stage (classification stage). Furthermore, this paper presents a summary of applications controlled through these two bio-signals and some research challenges in the creation of these control systems. Finally, a brief conclusion is drawn.
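    The pipeline the survey describes can be summarised in a few lines: segment the raw stream into sliding windows, compute features per window, and hand the feature matrix to a classifier. The sketch below shows the first two stages for a single EMG channel; the window sizes and the classic MAV/RMS features are illustrative choices, not prescriptions from the survey.

```python
import numpy as np

def segment(signal, win=200, step=100):
    """Sliding-window segmentation of a 1-D EMG stream.

    Sample counts are illustrative; typical EMG windows
    span roughly 150-300 ms of the recording.
    """
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def features(window):
    """Two classic time-domain EMG features:
    mean absolute value (MAV) and root mean square (RMS)."""
    return np.array([np.mean(np.abs(window)),
                     np.sqrt(np.mean(window ** 2))])

# Toy stream -> segments -> feature matrix ready for any classifier.
rng = np.random.default_rng(1)
emg = rng.normal(size=2000)
X = np.array([features(w) for w in segment(emg)])
print(X.shape)  # (n_windows, 2)
```

    The classification stage then maps each row of `X` to a command category (e.g. a wheelchair direction), using whichever classifier the application calls for.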

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", so a clustering framework composed of a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
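    As a rough illustration of the intention-inference component, the sketch below embeds short windows of user control signals and clusters the latent codes as candidate intents. The thesis employs a deep generative model; a plain (untrained) autoencoder-style encoder and k-means stand in here, and every size and signal choice is an assumption.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class Encoder(nn.Module):
    """Toy encoder for short control-signal windows.

    Stands in for the thesis's deep generative model; in practice it
    would be trained (e.g. with a reconstruction or ELBO objective)
    before its latent codes are clustered.
    """
    def __init__(self, window=20, channels=2, latent=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(window * channels, 32),
            nn.ReLU(), nn.Linear(32, latent))

    def forward(self, x):
        return self.net(x)

enc = Encoder()
windows = torch.randn(100, 20, 2)   # e.g. joystick (x, y) over time
with torch.no_grad():
    z = enc(windows).numpy()
labels = KMeans(n_clusters=3, n_init=10).fit_predict(z)
print(labels[:10])                  # candidate "intent" assignments
```

    The appeal of clustering in a learnt latent space is that the resulting groups can be inspected and named, which is what makes the inferred intents interpretable to both the user and the designer.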