
    HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    HERMIES-III is an autonomous robot comprising a seven-degree-of-freedom (DOF) manipulator designed for human-scale tasks, a laser range finder, a sonar array, an omnidirectional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves the performance of human-scale tasks (e.g., valve manipulation, use of tools), the integration of a dexterous manipulator with platform motion in geometrically complex environments, and the effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and HERMIES-III to perform complex navigation and manipulation under time constraints while dealing with imprecise sensory information.

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    NASA space station automation: AI-based technology review. Executive summary

    Research and development projects in automation technology for the Space Station are described. Artificial Intelligence (AI) based technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics.

    Computational neurorehabilitation: modeling plasticity and learning to predict recovery

    Despite progress in using computational approaches to inform medicine and neuroscience over the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including the plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery in individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make the development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling – regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

    Making intelligent systems team players: Overview for designers

    This report is a guide and companion to NASA Technical Memorandum 104738, 'Making Intelligent Systems Team Players,' Volumes 1 and 2. The first two volumes of the Technical Memorandum provide comprehensive guidance to designers of intelligent systems for real-time fault management of space systems, with the objective of achieving more effective human interaction. This report provides an analysis of the material discussed in the Technical Memorandum. It clarifies what it means for an intelligent system to be a team player, and how such systems are designed. It identifies significant intelligent-system design problems and their impacts on reliability and usability. Where common design practice is not effective in solving these problems, we make recommendations. In this report, we summarize the main points of the Technical Memorandum and indicate where to look for further information.

    On Specifying for Trustworthiness

    As autonomous systems (AS) increasingly become part of our daily lives, ensuring their trustworthiness is crucial. In order to demonstrate the trustworthiness of an AS, we first need to specify what is required for an AS to be considered trustworthy. This roadmap paper identifies key challenges for specifying for trustworthiness in AS, as identified during the "Specifying for Trustworthiness" workshop held as part of the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS domains with consideration of the resilience, trust, functionality, verifiability, security, and governance and regulation of AS and identify some of the key specification challenges in these domains. We then highlight the intellectual challenges that are involved with specifying for trustworthiness in AS that cut across domains and are exacerbated by the inherent uncertainty involved with the environments in which AS need to operate. Comment: Accepted version of paper; 13 pages, 1 table, 1 figure.

    A metric for characterizing the arm nonuse workspace in poststroke individuals using a robot arm

    An over-reliance on the less-affected limb for functional tasks, at the expense of the paretic limb and in spite of recovered capacity, is an often-observed phenomenon in survivors of hemispheric stroke. The difference between capacity for use and actual spontaneous use is referred to as arm nonuse. Obtaining an ecologically valid evaluation of arm nonuse is challenging because it requires the observation of spontaneous arm choice for different tasks, which can easily be influenced by instructions, presumed expectations, and awareness that one is being tested. To better quantify arm nonuse, we developed the Bimanual Arm Reaching Test with a Robot (BARTR) for quantitatively assessing arm nonuse in chronic stroke survivors. The BARTR is an instrument that utilizes a robot arm as a means of remote and unbiased collection of nuanced spatial data for clinical evaluations of arm nonuse. This approach shows promise for determining the efficacy of interventions designed to reduce paretic arm nonuse and enhance functional recovery after stroke. We show that the BARTR satisfies the criteria of an appropriate metric for neurorehabilitative contexts: it is valid, reliable, and simple to use. Comment: Accepted to Science Robotics at https://www.science.org/doi/10.1126/scirobotics.adf7723 on November 15th, 2023.

    Programming Robots by Demonstration using Augmented Reality

    The world is living through the fourth industrial revolution, Industry 4.0, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some tasks are too complex or too expensive to be fully automated; it would be more efficient if the machine could work with the human, not only sharing the same workspace but also acting as a useful collaborator. A possible solution to this problem lies in human-robot interaction systems: understanding the applications where they are worth implementing and the challenges they face. In this context, better interaction between machines and operators can bring multiple benefits, such as less, better, and easier training, a safer environment for the operator, and the capacity to solve problems more quickly. The topic of this dissertation is relevant because it is necessary to learn and apply the technologies that contribute most to simpler and more efficient work in industry. This dissertation proposes the development of an industrial prototype of a human-machine interaction system based on Extended Reality (XR), whose objective is to enable an industrial operator with no programming experience to program a collaborative robot using the Microsoft HoloLens 2. The system is divided into two parts: the tracking system, which records the operator's hand movements, and the programming-by-demonstration translator, which builds the program sent to the robot to execute the task. The monitoring and supervision system runs on the Microsoft HoloLens 2, programmed using the Unity platform and Visual Studio. The core of the programming-by-demonstration system was developed in Robot Operating System (ROS). The robots included in this interface are the Universal Robots UR5 (collaborative robot) and the ABB IRB 2600 (industrial robot). Moreover, the interface was built to easily incorporate other robots.
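The pipeline this abstract describes, a tracking system feeding a programming-by-demonstration translator, can be sketched in miniature. The sketch below is purely illustrative: the function names, pose representation, and distance threshold are assumptions for the example, not identifiers from the dissertation or from ROS.

```python
# Hypothetical sketch of programming by demonstration: recorded hand
# poses from a tracking device are downsampled into a compact list of
# waypoints that a robot program could replay.  All names and thresholds
# here are illustrative assumptions.
from dataclasses import dataclass
from math import dist

@dataclass
class Pose:
    x: float  # metres
    y: float
    z: float

def poses_to_waypoints(poses, min_step=0.05):
    """Keep a recorded pose only if the hand moved at least `min_step`
    metres since the last kept waypoint, shrinking the raw tracking
    stream into a short sequence of robot waypoints."""
    if not poses:
        return []
    waypoints = [poses[0]]
    for p in poses[1:]:
        last = waypoints[-1]
        if dist((p.x, p.y, p.z), (last.x, last.y, last.z)) >= min_step:
            waypoints.append(p)
    return waypoints

# A straight 10 cm hand motion sampled every 1 cm compresses to 3 waypoints.
recorded = [Pose(0.01 * i, 0.0, 0.0) for i in range(11)]
program = poses_to_waypoints(recorded, min_step=0.045)
print(len(program))  # 3
```

In a real system of the kind described, the waypoint list would then be translated into motion commands for the target robot (e.g., the UR5 or IRB 2600 mentioned above), which is the role the dissertation assigns to its ROS-based translator.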