
    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty in decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation with novel objects in densely cluttered environments.
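The abstract describes arbitration between human input and autonomous control with adjustable assistance levels but does not specify the arbitration law. A common scheme in the shared-control literature is linear blending; the sketch below assumes that form, and all names (`arbitrate`, `assistance_level`) are illustrative, not the paper's actual API.

```python
import numpy as np

def arbitrate(user_cmd, auto_cmd, assistance_level):
    """Linearly blend a noisy user command with an autonomous policy's
    command. assistance_level in [0, 1]: 0 = full user control,
    1 = full autonomy; values outside that range are clipped."""
    user_cmd = np.asarray(user_cmd, dtype=float)
    auto_cmd = np.asarray(auto_cmd, dtype=float)
    alpha = float(np.clip(assistance_level, 0.0, 1.0))
    return (1.0 - alpha) * user_cmd + alpha * auto_cmd

# A noisy BCI velocity command blended with the autonomous
# goal-directed command at 50% assistance.
user = [0.9, -0.4, 0.1]
auto = [0.5, 0.0, 0.0]
blended = arbitrate(user, auto, 0.5)
```

Raising `assistance_level` for harder tasks is one way to realize the paper's idea of compensating for task difficulty while preserving the user's sense of control.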

    Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network

    It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low dimensional latent state space representing probabilistic structures extracted from well habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories.
    Comment: 30 pages, 19 figures
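The planning-as-inference idea above can be sketched in miniature: search the latent space for a code whose decoded outcome matches the goal, while a prior term (standing in for the KL part of the lower bound, with the regularization coefficient `w`) keeps the plan near habituated trajectories. This is a toy sketch, not the paper's variational RNN; the decoder, loss weights, and numerical gradients are all assumptions.

```python
import numpy as np

def plan_by_latent_inference(decode, goal, z_dim=2, w=0.1,
                             steps=200, lr=0.1, seed=0):
    """Goal-directed planning as inference: gradient descent on the
    latent code z to minimize goal error plus an L2 prior penalty
    (a simple proxy for the negative lower bound). Gradients are
    numerical, so `decode` can be any latent-to-outcome mapping."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=z_dim)

    def loss(z):
        err = decode(z) - goal
        return float(err @ err + w * (z @ z))  # fit term + prior term

    eps = 1e-5
    for _ in range(steps):
        grad = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                         for e in np.eye(z_dim)])
        z -= lr * grad
    return z

# Toy generative model: a linear decoder from latent code to
# end-effector position; planning recovers a code near the goal.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
z_star = plan_by_latent_inference(lambda z: A @ z, goal=np.array([1.0, 1.0]))
```

The prior weight `w` plays the role of the paper's regularization coefficient: larger values confine plans more tightly to the habituated region at the cost of goal accuracy.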

    Arc Welding Automation


    Safe feeding strategies for a physically assistive robot

    With aging societies and the increasing number of people with disabilities, the demand for robots that can assist in nursing humans on-site is growing. Concretely, according to the World Health Organization (WHO), by 2030 more than 2 billion people will need one or more assistive products. With this perspective it becomes vital to develop assistive technology products, as they maintain or improve disabled people’s functioning and independence. One of the most important activities that a person needs to be able to perform in order to feel independent is self-feeding. The main objective of this thesis is to develop software that controls a robot in order to feed a disabled person autonomously. Special attention has been given to the safety and naturalness of the task performance. The resulting system has been tested on the Barrett WAM® robot. In order to fulfill this goal an RGB-D camera has been used to detect the head orientation and the state of the mouth. The first detection has been realized with the OpenFace library, whereas the second one has been realized with the OpenPose library. Finally, the depth obtained by the camera has been used to identify and cope with wrong detections. Safety is an essential part of this thesis, as there is direct contact between the user and the robot. Therefore, the feeding task must be completely safe for the user. In order to achieve this, two different types of safety have been considered: passive safety and active safety. Passive safety is achieved through the compliance of the robot, whereas active safety is achieved by limiting the maximum force, which is measured with a force sensor. Some experiments have been carried out to determine the best setup for the robot to ensure safe task performance. The designed system is capable of automatically detecting the head orientation and mouth state and deciding which action to take at any moment given this information.
    It is also capable of stopping the robot movement when certain forces are reached, returning to the previous position, and waiting there until it is safe to perform that action again. A set of experiments with healthy users has been carried out to validate the proposed system, and the results are presented here.
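The stop-retreat-wait behavior described for the active safety layer can be sketched as a small state machine. This is an illustrative sketch only; the state names, the release threshold, and the function signature are assumptions, not the thesis's actual implementation.

```python
def active_safety_step(force_reading, force_limit, state):
    """One tick of a force-based active safety state machine:
    'moving'     -> 'retreating' when the measured force exceeds the limit;
    'retreating' -> 'waiting' after returning to the previous safe position;
    'waiting'    -> 'moving' once contact is released (force well below limit)."""
    if state == "moving":
        if abs(force_reading) > force_limit:
            return "retreating"  # stop and return to the previous position
        return "moving"
    if state == "retreating":
        return "waiting"         # hold the safe position
    if state == "waiting":
        if abs(force_reading) < 0.1 * force_limit:
            return "moving"      # contact released: safe to retry the action
        return "waiting"
    raise ValueError(f"unknown state: {state}")
```

Keeping the retry condition much stricter than the trip condition (here, 10% of the limit) is a simple way to avoid oscillating between stopping and resuming while the user is still in contact with the spoon.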

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
    Comment: 20 pages, 7 figures, 2 tables
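The greedy-with-guarantees ingredient above is the classic pattern for maximizing a monotone submodular set function, where greedy selection achieves at least a (1 - 1/e) fraction of the optimal value. The sketch below shows that pattern on a toy coverage utility standing in for the paper's task-driven VIN performance metric; the function names and the utility itself are assumptions.

```python
def greedy_select(candidates, utility, k):
    """Greedily pick up to k features maximizing a set-utility function.
    For monotone submodular utilities, the result is within (1 - 1/e)
    of the best size-k subset. `utility` maps a set of features to a
    score; here it stands in for the task-driven VIN metric."""
    selected = set()
    for _ in range(min(k, len(candidates))):
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: utility(selected | {c}))
        if utility(selected | {best}) <= utility(selected):
            break  # no remaining feature improves the score
        selected.add(best)
    return selected

# Toy utility: each visual feature observes a set of landmarks, and the
# score counts distinct landmarks covered (monotone and submodular).
obs = {"f1": {1, 2}, "f2": {2, 3}, "f3": {3}, "f4": {4}}

def cover(S):
    """Number of distinct landmarks observed by the chosen features."""
    return len(set().union(*(obs[f] for f in S)))

picked = greedy_select(list(obs), cover, 2)
```

The anticipation ingredient would enter through `utility` itself: scoring each candidate set by forward-simulating the robot's dynamics over a horizon rather than by instantaneous coverage.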

    Bovine and human becomings in histories of dairy technologies: robotic milking systems and remaking animal and human subjectivity

    This paper positions the recent emergence of robotic or automatic milking systems (AMS) in relation to discourses surrounding the longer history of milking technologies in the UK and elsewhere. The mechanisation of milking has been associated with sets of hopes and anxieties which permeated the transition from hand to increasingly automated forms of milking. This transition has affected the relationships between humans and cows on dairy farms, producing different modes of cow and human agency and subjectivity. In this paper, drawing on empirical evidence from a research project exploring AMS use in contemporary farms, we examine how ongoing debates about the benefits (or otherwise) of AMS relate to longer-term discursive currents surrounding the historical emergence of milking technologies and their implications for efficient farming and the human and bovine experience of milk production. We illustrate how technological change is in part based on understandings of people and cows, at the same time as bovine and human agency and subjectivity are entrained and reconfigured in relation to emerging milking technologies, so that what it is to be a cow or human becomes different as technologies change. We illustrate how this results from – and in – competing ways of understanding cows: as active agents, as contributing to technological design, as ‘free’, as ‘responsible’ and/or as requiring surveillance and discipline, and as efficient co-producers, with milking technologies, of milk.