
    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Robots are increasingly being investigated for future applications in healthcare. In nursing assistance, for instance, seamless Human-Robot Interaction (HRI) is essential for sharing workspaces and workloads among medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported in experiments with human subjects. ARNA has been designed to include interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGBD camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. Initial results indicate that a traded-control level of autonomy is ideal for teleoperation of ARNA by a patient. The proposed method of using the HMI devices makes robot performance similar for both skilled and unskilled operators. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system non-linearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis; it utilizes a force-torque sensor at the base of the ARNA manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user physically collaborate during tasks such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies; its implementation on the ARNA platform can be accomplished easily because the controller is model-free and learns robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, enabling the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is “observed” by the robot, then guided and/or advised during the interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing the goals. The proposed framework incorporates interface devices as well as adaptive control systems to facilitate higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. The user-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response toward the use of such an assistive robot in the healthcare environment.
The analysis of these gathered data is included in this document. To summarize, this research study makes the following contributions: conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; building simulation models for packaged tactile sensors and validating them against experimental data; and describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.
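The BAPI idea above, detecting whole-body collisions from a force-torque sensor at the manipulator base, can be pictured as a residual-threshold monitor. The Python sketch below is a minimal, hypothetical illustration: the class name, threshold, filter constant, and the assumption that an expected base wrench is available from a dynamic model are illustrative choices, not the ARNA implementation.

```python
# Minimal sketch of base-wrench collision monitoring in the spirit of the BAPI
# idea described above: compare the wrench measured by a base-mounted
# force-torque sensor against the wrench expected from the arm's own motion,
# and flag a collision when the residual grows too large. All thresholds and
# the low-pass constant are illustrative assumptions.
import numpy as np

class BaseWrenchCollisionMonitor:
    def __init__(self, threshold=15.0, alpha=0.2):
        self.threshold = threshold  # residual norm that triggers a collision flag (assumed units N / Nm)
        self.alpha = alpha          # exponential smoothing constant for the residual
        self.filtered_residual = np.zeros(6)

    def update(self, measured_wrench, expected_wrench):
        """measured_wrench, expected_wrench: 6-vectors [Fx, Fy, Fz, Tx, Ty, Tz]."""
        residual = np.asarray(measured_wrench) - np.asarray(expected_wrench)
        # Smooth the residual to reject sensor noise before thresholding.
        self.filtered_residual = (1 - self.alpha) * self.filtered_residual + self.alpha * residual
        return np.linalg.norm(self.filtered_residual) > self.threshold

# Usage: the expected wrench would normally come from the manipulator's dynamic model.
monitor = BaseWrenchCollisionMonitor()
collision = monitor.update(measured_wrench=[2.0, -1.0, 40.0, 0.1, 0.0, 0.2],
                           expected_wrench=[0.0, 0.0, 38.5, 0.0, 0.0, 0.1])
print("collision detected:", collision)
```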

    Collaborative human-machine interfaces for mobile manipulators.

    The use of mobile manipulators in service industries, both as agents in physical Human-Robot Interaction (pHRI) and for social interactions, has been increasing in recent times due to necessities such as compensating for workforce shortages and enabling safer and more efficient operations, among other reasons. Collaborative robots, or co-bots, are robots developed to interact with humans through direct contact or in close proximity within a shared space. The work presented in this dissertation focuses on the design, implementation, and analysis of components for the next-generation collaborative human-machine interfaces (CHMIs) needed for mobile manipulator co-bots that can be used in various service industries. The particular components of these CHMIs considered in this dissertation include: Robot control: a Neuroadaptive Controller (NAC)-based admittance control strategy for pHRI applications with a co-bot. Robot state estimation: a novel methodology and placement strategy for using arrays of IMUs that can be embedded in robot skin for pose estimation in complex robot mechanisms. User perception of co-bot CHMIs: evaluation of human perceptions of the usefulness and ease of use of a mobile manipulator co-bot in a nursing assistant application scenario. To facilitate advanced control for the Adaptive Robotic Nursing Assistant (ARNA) mobile manipulator co-bot that was designed and developed in our lab, we describe and evaluate an admittance control strategy that features a Neuroadaptive Controller (NAC). The NAC has been specifically formulated for pHRI applications such as patient walking. The controller continuously tunes the weights of a neural network to cancel robot non-linearities, including drivetrain backlash, kinematic or dynamic coupling, variable patient pushing effort, and sloped surfaces with unknown inclines. The advantages of our control strategy are Lyapunov stability guarantees during interaction, less need for parameter tuning, and better performance across a variety of users and operating conditions. We conduct simulations and experiments with 10 users to confirm that the NAC outperforms a classic Proportional-Derivative (PD) joint controller in terms of resulting interaction jerk, user effort, and trajectory tracking error during patient walking. To tackle the complex mechanisms of these next-generation robots, in which the use of encoders or other classic pose-measuring devices is not feasible, we present a study of the effects of design parameters on methods that use data from Inertial Measurement Units (IMUs) in robot skins to provide robot state estimates. These parameters include the number of sensors, their placement on the robot, and their noise properties, and we study their effect on the quality of robot pose estimation and its signal-to-noise ratio (SNR). The results from that study facilitate the creation of robot skins, and to enable their use in complex robots we propose a novel pose estimation method, the Generalized Common Mode Rejection (GCMR) algorithm, for estimating joint angles in robot chains containing composite joints. The placement study and GCMR are demonstrated using both Gazebo simulation and experiments with a 3-DoF robotic arm containing two non-zero link lengths, one revolute joint, and a 2-DoF composite joint. In addition to yielding insights into the predicted usage of co-bots, the design of the control and sensing mechanisms in their CHMIs benefits from evaluating the perceptions of the eventual users of these robots.
As co-bots are increasingly developed and used, there is a need for studies of these user perceptions using existing models that have been used to predict usage of comparable technology. To this end, we use the Technology Acceptance Model (TAM) to evaluate the CHMI of the ARNA robot in a nursing assistant scenario via analysis of quantitative and questionnaire data collected during experiments with eventual users. The results of the work conducted in this dissertation constitute insightful contributions to the realization of the control and sensing systems that are part of CHMIs for next-generation co-bots.
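As a rough illustration of the admittance-plus-NAC structure described above, the following 1-DoF Python sketch maps a human push force to a reference velocity through a desired admittance and adapts a small RBF network online to cancel unmodeled effects. The gains, basis functions, and adaptation law are assumptions for illustration only, not the controller formulated in the dissertation.

```python
# Minimal 1-DoF sketch of an admittance loop with a neuro-adaptive term: a
# desired admittance maps the human force to a reference velocity, and a small
# RBF network adapts online to compensate unmodeled dynamics (friction, slope,
# backlash). All gains and the adaptation law are illustrative assumptions.
import numpy as np

class NeuroAdaptiveAdmittance1D:
    def __init__(self, m_d=10.0, d_d=25.0, kv=40.0, gamma=5.0, dt=0.001):
        self.m_d, self.d_d, self.kv, self.gamma, self.dt = m_d, d_d, kv, gamma, dt
        self.centers = np.linspace(-1.0, 1.0, 9)   # RBF centers over the expected velocity range
        self.W = np.zeros(9)                       # adaptive output weights
        self.v_ref = 0.0

    def _phi(self, v):
        return np.exp(-((v - self.centers) ** 2) / 0.1)

    def step(self, f_human, v_meas):
        # Admittance: m_d * dv_ref/dt + d_d * v_ref = f_human
        dv_ref = (f_human - self.d_d * self.v_ref) / self.m_d
        self.v_ref += dv_ref * self.dt
        e = self.v_ref - v_meas                    # velocity tracking error
        phi = self._phi(v_meas)
        self.W += self.gamma * phi * e * self.dt   # simple gradient adaptation (illustrative)
        u = self.kv * e + self.W @ phi             # feedback + learned feedforward
        return u, self.v_ref

ctrl = NeuroAdaptiveAdmittance1D()
u, v_ref = ctrl.step(f_human=30.0, v_meas=0.0)
print(f"command={u:.2f}, v_ref={v_ref:.4f}")
```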

    Automatic testing of organic strain gauge tactile sensors.

    Human-Robot Interaction is a developing field of science that is poised to augment everything we do in life. Skin sensors that can detect touch, temperature, distance, and other physical interaction parameters at the human-robot interface are very important for enhancing collaboration between humans and machines. As such, these sensors must be efficiently tested and characterized to give accurate feedback from the sensor to the robot. The objective of this work is to create a diversified software testing suite that removes as much human intervention as possible. The tests and methodology discussed here provide multiple realistic scenarios that the sensors undergo during repeated experiments. This capability allows for easily repeatable tests without interference from the test engineer, increasing productivity and efficiency. The foundation of this work has two main pieces: force feedback control to drive the test actuator, and computer vision functionality to guide alignment of the test actuator and sensors arranged in a 2D array. The software running the automated tests was also made compatible with the testbench hardware via LabVIEW programs. The program uses set coordinates to complete a raster scan of the SkinCell that locates individual sensors. Tests are then applied at each sensor using a force controller. The force feedback control system uses a Proportional-Integral-Derivative (PID) controller that reads force measurements from a load cell to correct itself or follow a desired trajectory. The motion of the force actuator was compared to the projected trajectory to test for accuracy and time delay. The proposed motor control allows a dynamic force to stimulate the sensors, giving a more realistic test than a steady force. A top-facing camera was introduced to capture the starting position of a SkinCell before testing. Computer vision algorithms were then proposed to extract the location of the cell and of the individual sensors before generating a coordinate plane. This allows the engineer to skip manual alignment of the sensors, saving time and providing more accurate target positions. Finally, the testbench was applied to numerous sensors developed by the research team at the Louisville Automation and Robotics Research Institute (LARRI) for testing and data analysis. Force loads are applied to the individual sensors while their responses are recorded. Afterwards, post-processing of the data was conducted to compare responses within the SkinCell as well as against other sensors manufactured using different methods.
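A minimal Python sketch of the force-feedback loop described above follows: a PID controller tracks a dynamic force setpoint from load-cell readings. The gains, the sinusoidal reference, and the crude actuator stand-in are illustrative assumptions; the actual testbench runs in LabVIEW against real hardware.

```python
# Minimal sketch of a PID force loop: the load-cell reading is compared with a
# desired force trajectory and the error drives the test actuator command.
import math

class ForcePID:
    def __init__(self, kp=0.8, ki=2.0, kd=0.01, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, force_setpoint, force_measured):
        error = force_setpoint - force_measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = ForcePID()
measured = 0.0
for k in range(5):
    t = k * pid.dt
    setpoint = 2.0 + 0.5 * math.sin(2 * math.pi * 1.0 * t)   # dynamic force profile (N)
    command = pid.update(setpoint, measured)
    measured += 0.05 * command                               # crude stand-in for actuator + load cell
    print(f"t={t:.3f}s  setpoint={setpoint:.3f}N  measured={measured:.3f}N")
```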

    Intent Classification during Human-Robot Contact

    Robots are used in many areas of industry and automation. Currently, human safety is ensured through physical separation and safeguards. However, there is increasing interest in allowing robots and humans to work in close proximity or on collaborative tasks. In these cases, there is a need for the robot itself to recognize whether a collision has occurred and respond in a way that prevents further damage or harm. At the same time, there is a need for robots to respond appropriately to intentional contact during interactive and collaborative tasks. This thesis proposes a classification-based approach for differentiating between several intentional contact types, accidental contact, and no-contact situations. A dataset is developed using the Franka Emika Panda robot arm. Several machine learning algorithms, including Support Vector Machines, Convolutional Neural Networks, and Long Short-Term Memory networks, are applied to perform classification on this dataset. First, Support Vector Machines were used to perform feature identification. Comparisons were made between classification on raw sensor data and on data calculated from a robot dynamic model, as well as between linear and nonlinear features. The results show that very few features are needed to achieve the best results, and accuracy is highest when combining raw sensor data with model-based data. Accuracies of up to 87% were achieved. Methods of performing classification on the basis of each individual joint, rather than the whole arm, were tested and shown not to provide additional benefits. Second, Convolutional Neural Networks and Long Short-Term Memory networks were evaluated for the classification task. A simulated dataset was generated and augmented with noise for training the classifiers. Experiments show that additional simulated and augmented data can improve accuracy in some cases, as well as lower the amount of real-world data required to train the networks. Accuracies of up to 93% and 84% were achieved by the CNN and LSTM networks, respectively. The CNN achieved an accuracy of 87% using all real data, and up to 93% using only 50% of the real data with simulated data added to the training set, as well as with augmented data. The LSTM achieved an accuracy of 75% using all real data, and nearly 80% accuracy using 75% of the real data with augmented simulation data.
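To make the classification setup concrete, here is a small Python sketch of an SVM contact classifier trained on windowed torque and model-residual features. The synthetic data, feature dimensions, and class labels are assumptions for illustration; they do not reproduce the thesis dataset or its reported accuracies.

```python
# Minimal sketch of contact-type classification with an SVM: feature windows
# (e.g., measured joint torques plus model residuals) labeled as no-contact,
# accidental, or one of two intentional contact types. The data here is fake.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
classes = ["no_contact", "accidental", "intentional_push", "intentional_pull"]

# Fake dataset: 400 windows x 14 features (e.g., 7 measured torques + 7 model residuals).
X = rng.normal(size=(400, 14))
y = rng.integers(0, len(classes), size=400)
X += np.eye(len(classes))[y] @ rng.normal(size=(len(classes), 14))  # class-dependent shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```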

    Human–Robot Role Arbitration via Differential Game Theory

    Industry needs controllers that allow smooth and natural physical Human-Robot Interaction (pHRI) to make production scenarios more flexible and user-friendly. Within this context, Role Arbitration, the mechanism that assigns the role of leader to either the human or the robot, is particularly interesting. This paper investigates Game Theory (GT) to model pHRI; specifically, Cooperative Game Theory (CGT) and Non-Cooperative Game Theory (NCGT) are considered. This work proposes a solution to the Role Arbitration problem and defines a Role Arbitration framework based on differential game theory for pHRI. The proposed method allows trajectory deformation according to human will while avoiding dangerous situations such as collisions with environmental features, robot joint and workspace limits, and possibly safety constraints. Three sets of experiments are proposed to evaluate different situations, compared with two other standard methods for pHRI: Impedance Control (IMP) and Manual Guidance (MG). Experiments show that with our Role Arbitration method, different situations can be handled safely and smoothly with low human effort. In particular, the performance of IMP and MG varies according to the task: in some cases MG performs well and IMP does not, and in others IMP performs excellently and MG does not. The proposed Role Arbitration controller performs well in all cases, showing its superiority and generality. The proposed method generally requires less force and ensures better accuracy in performing all tasks than the standard controllers. Note to Practitioners: This work presents a method that allows role arbitration for physical Human-Robot Interaction, motivated by the need to adjust the leader/follower role in a shared task according to the specific phase of the task or the knowledge of one of the two agents. The method suits applications such as object co-transportation, which requires precise final positioning but allows some trajectory deformation on the fly. It can also handle situations where the carried object occludes the human's sight, and the robot helps the human avoid possible environmental obstacles and position the object precisely at the target pose. Currently, this method does not consider external contact, which is likely to arise in many situations. Future studies will investigate the modeling and detection of external contacts to include them in the interaction models this work addresses.
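As a much-simplified illustration of role arbitration (not the paper's differential-game formulation), the Python sketch below shifts leadership toward the robot as the shared motion approaches a constraint and back toward the human in free space. The weighting rule and distance thresholds are illustrative assumptions.

```python
# Minimal sketch of a leadership-weight arbitration rule: blend the human-guided
# and robot-planned Cartesian velocities with a weight that rises toward 1 (robot
# leads) near workspace limits or obstacles. All constants are illustrative.
import numpy as np

def arbitration_weight(dist_to_constraint, d_safe=0.10, d_free=0.40):
    """Return the robot leadership weight in [0, 1] from distance to the nearest constraint (m)."""
    return float(np.clip((d_free - dist_to_constraint) / (d_free - d_safe), 0.0, 1.0))

def blended_velocity(v_human, v_robot, dist_to_constraint):
    """Blend the human-guided and robot-planned Cartesian velocities."""
    alpha = arbitration_weight(dist_to_constraint)
    return alpha * np.asarray(v_robot) + (1.0 - alpha) * np.asarray(v_human), alpha

v_cmd, alpha = blended_velocity(v_human=[0.10, 0.02, 0.0],
                                v_robot=[0.05, -0.03, 0.0],
                                dist_to_constraint=0.15)
print(f"robot leads with alpha={alpha:.2f}, command={v_cmd}")
```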

    Proceedings of the 3rd International Mobile Brain/Body Imaging Conference : Berlin, July 12th to July 14th 2018

    The 3rd International Mobile Brain/Body Imaging (MoBI) conference in Berlin in 2018 brought together researchers from various disciplines interested in understanding the human brain in its natural environment and during active behavior. MoBI is a new imaging modality that employs mobile brain imaging methods such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS), synchronized to motion capture and other data streams, to investigate brain activity while participants actively move in and interact with their environment. Mobile Brain/Body Imaging allows investigation of the brain dynamics accompanying more natural cognitive and affective processes, as it lets humans interact with the environment without restrictions on physical movement. By overcoming the movement restrictions of established imaging modalities such as functional magnetic resonance imaging (fMRI), MoBI can provide new insights into human brain function in mobile participants. This imaging approach will lead to new insights into the brain functions underlying active behavior and the impact of behavior on brain dynamics, and vice versa; it can also be used for the development of more robust human-machine interfaces as well as for state assessment in mobile humans. DFG, GR2627/10-1, 3rd International MoBI Conference 2018.

    Contact force and torque estimation for collaborative manipulators based on an adaptive Kalman filter with variable time period.

    Contact force and torque sensing approaches enable manipulators to cooperate with humans and to respond appropriately to unexpected collisions. In this thesis, various moving averages are investigated, and Weighted Moving Averages and the Hull Moving Average are employed to generate a mode-switching moving average to support force sensing. The proposed moving averages with a variable time period were used to reduce the effects of measured motor-current noise and thus provide improved confidence in joint output torque estimation. The time period of the filter adapts continuously to achieve an optimal trade-off between response time and estimation precision in real time. An adaptive Kalman filter that combines the proposed moving averages with the conventional Kalman filter is proposed. Calibration routines for the adaptive Kalman filter account for the measured motor-current noise and for errors in the speed data from the individual joints. The combination of the proposed adaptive Kalman filter with variable time period and its calibration method facilitates force and torque estimation without direct measurement via force/torque sensors. Contact force/torque sensing and response time assessments of the proposed approach are performed on both a single Universal Robots UR5 manipulator and a collaborative dual-arm UR5 arrangement, with differing unexpected end-effector loads. The combined force and torque sensing method reduces estimation errors and response time in comparison with the pioneering method (by 55.2% and 20.8%, respectively), and the advantage of the proposed approach grows further as the payload rises. The proposed method can potentially be applied to any robotic manipulator as long as the motor information (current, joint position, and joint velocity) is available. Consequently, the cost of implementation will be significantly lower than for methods that require load cells.
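For readers unfamiliar with the moving averages involved, the following Python sketch shows the standard Weighted Moving Average and Hull Moving Average applied to a noisy motor-current trace. The window lengths are arbitrary, and the thesis's mode-switching and variable-time-period logic are not reproduced here.

```python
# Minimal sketch of the weighted (WMA) and Hull (HMA) moving averages as they
# might be applied to a noisy motor-current signal before torque estimation.
import numpy as np

def wma(x, n):
    """Weighted moving average of the last n samples (linearly increasing weights)."""
    w = np.arange(1, n + 1, dtype=float)
    return np.convolve(x, w[::-1] / w.sum(), mode="valid")

def hma(x, n):
    """Hull moving average: WMA of length sqrt(n) applied to 2*WMA(n/2) - WMA(n)."""
    half, root = max(n // 2, 1), max(int(round(np.sqrt(n))), 1)
    wma_half = wma(x, half)
    wma_full = wma(x, n)
    detrended = 2 * wma_half[-len(wma_full):] - wma_full
    return wma(detrended, root)

rng = np.random.default_rng(1)
current = 1.5 + 0.05 * rng.standard_normal(200)   # noisy motor-current samples (A)
print("last smoothed value:", hma(current, 16)[-1])
```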

    Variable autonomy assignment algorithms for human-robot interactions.

    As robotic agents become increasingly present in human environments, task completion rates during human-robot interaction have grown into an increasingly important topic of research. Safe collaborative robots executing tasks under human supervision often augment their perception and planning capabilities through traded or shared control schemes. However, such systems are often prescribed only at the most abstract level, with the meticulous details of implementation left to the designer's prerogative. Without a rigorous structure for implementing controls, the work of design is frequently left to ad hoc mechanisms with only bespoke guarantees of systematic efficacy, if any such proof is forthcoming at all. Herein, I present two quantitatively defined models for implementing sliding-scale variable autonomy, in which levels of autonomy are determined by the relative efficacy of autonomous subroutines. I experimentally test the resulting Variable Autonomy Planning (VAP) algorithm against a traditional traded control scheme in a pick-and-place task, and apply the Variable Autonomy Tasking algorithm to the implementation of a robot performing a complex sanitation task in real-world environs. Results show that prioritizing autonomy levels with higher success rates, as encoded into VAP, allows users to effectively and intuitively select optimal autonomy levels for efficient task completion. Further, the Pareto-optimal design structure of the VAP+ algorithm allows significant performance improvements to be made through intervention planning based on systematic input that determines failure probabilities through sensorized measurements. This thesis describes the design, analysis, and implementation of these two algorithms, with a particular focus on the VAP+ algorithm. The core conceit is that they are methods for rigorously defining locally optimal plans for traded control shared between a human and one or more autonomous processes. VAP+ is derived from an earlier algorithmic model, the VAP algorithm, developed to address the issue of rigorous, repeatable assignment of autonomy levels based on system data, which provides guarantees on the basis of failure-rate sorting of paired autonomous and manual subtask-achievement systems. Using only probability ranking to define levels of autonomy, the VAP algorithm is able to sort modules into optimizable ordered sets, but it is limited to solving only sequential task assignments. By constructing a joint cost metric for the entire plan, and by implementing a back-to-front calculation scheme for this metric, the VAP+ algorithm can generate optimal planning solutions that minimize the expected cost, as amortized over time, funds, accuracy, or any combination of such metrics. The algorithm is additionally very efficient and able to perform on-line assessments of environmental changes to the conditional probabilities associated with plan choices, should a suitable model for determining these probabilities be present. This system, as a paired set of two algorithms and a design augmentation, forms the VAP+ algorithm in full.
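The back-to-front expected-cost idea can be sketched as a simple dynamic program over a sequential plan, as below. The two-level autonomy choice, the cost model, and the fixed recovery cost on failure are illustrative assumptions, not the dissertation's VAP+ formulation.

```python
# Minimal sketch of a back-to-front expected-cost computation over a sequential
# plan: each subtask can run at one of several autonomy levels, each with an
# execution cost and a success probability; failures incur a fixed recovery
# cost before the plan continues. All numbers are illustrative.
def plan_autonomy_levels(subtasks, recovery_cost=5.0):
    """subtasks: list of dicts mapping level name -> (cost, success_probability)."""
    value_to_go = 0.0
    chosen = []
    for levels in reversed(subtasks):            # back-to-front over the plan
        best_level, best_value = None, float("inf")
        for name, (cost, p_success) in levels.items():
            expected = cost + (1.0 - p_success) * recovery_cost + value_to_go
            if expected < best_value:
                best_level, best_value = name, expected
        chosen.append(best_level)
        value_to_go = best_value
    return list(reversed(chosen)), value_to_go

plan = [
    {"autonomous": (1.0, 0.95), "teleoperated": (3.0, 0.99)},   # grasp
    {"autonomous": (2.0, 0.70), "teleoperated": (4.0, 0.98)},   # place in cluttered bin
]
levels, expected_cost = plan_autonomy_levels(plan)
print("chosen levels:", levels, "expected plan cost:", round(expected_cost, 2))
```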

    From animals to machines: technically better humans in the future imaginaries of technological convergence

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Sociais, Departamento de Sociologia, 2020. The subject of this research is to discuss the social imaginaries of science and technology that emerge from the field of neuroengineering, in its relation to the Technological Convergence of four disciplines: Nanotechnology, Biotechnology, Information technologies, and Cognitive technologies (the neurosciences) (CT-NBIC). These areas are developed and articulated through discourses that emphasize the enhancement of the physical and cognitive capacities of human beings, with the aim of building a better society through scientific and technological progress, within the limits of research and development (R&D) agendas. Objectives: The objectives in this scenario are to discuss the ethical, economic, political, and social implications of this model of sociotechnical system, with reference both to the technological applications and to their consequences for the formation of social imaginaries, as well as to the kinds of relations that are established and how they are created within this context. Conclusion: We conclude by seeking to reflect critically on the proposals for technologically mediated human enhancement that emerge as part of the NBIC Technological Convergence agenda. These proposals, however, go far beyond a research agenda: there is a whole frame of philosophical and political references that defends the enhancement of the species, currents that ally themselves with transhumanist and posthumanist movements, in positions that are at once ethical, political, and economic. From our analysis, we understand that science, technology, and politics are articulated, in co-production, with respect to the futures that are expected or desired. Even so, we believe there is a space for possible dialogue, from which we seek to open proposals for public debate on questions of science and technology related to the enhancement of the human species. Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

    Motor learning induced neuroplasticity in minimally invasive surgery

    Technical skills in surgery have become more complex and challenging to acquire since the introduction of technological aids, particularly in the arena of Minimally Invasive Surgery (MIS). Additional challenges posed by reforms to surgical careers and increased public scrutiny have propelled the identification of methods to assess and acquire MIS technical skills. Although validated objective assessments have been developed to assess the motor skills requisite for MIS, they offer poor insight into how expertise develops. Motor skill learning is an internal process, only indirectly observable, that leads to relatively permanent changes in the central nervous system. Advances in functional neuroimaging permit direct interrogation of the evolving patterns of brain function associated with motor learning, owing to the property of neuroplasticity, and have been used on surgeons to identify the neural correlates of technical skill acquisition and the impact of new technology. However, significant gaps exist in understanding the neuroplasticity underlying the learning of complex bimanual MIS skills. In this thesis, the available evidence on applying functional neuroimaging to the assessment and enhancement of operative performance in surgery is synthesized. The purpose of this thesis was to evaluate the frontal lobe neuroplasticity associated with learning a complex bimanual MIS skill, using functional near-infrared spectroscopy, an indirect neuroimaging technique. Laparoscopic suturing and knot-tying, a technically challenging bimanual skill, is selected to demonstrate learning-related reorganisation of cortical behaviour within the frontal lobe, evidenced by shifts in activation from the prefrontal cortex (PFC), which subserves attention, to primary and secondary motor centres (premotor cortex, supplementary motor area and primary motor cortex), in which motor sequences are encoded and executed. In the cross-sectional study, participants of varying expertise demonstrate frontal lobe neuroplasticity commensurate with motor learning. The longitudinal study tracks the evolution of cortical behaviour in novices in response to eight hours of distributed training over a fortnight. Despite the novices achieving expert-like performance and stabilisation on the technical task, this study demonstrates that they displayed persistent PFC activity, establishing that, for complex bimanual tasks, improvements in technical performance are not accompanied by a reduced reliance on attention to support performance. Finally, a least-squares support vector machine is used to classify expertise based on frontal lobe functional connectivity. The findings of this thesis demonstrate the value of interrogating cortical behaviour for assessing MIS skills development and credentialing.
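As an illustration of the least-squares SVM classification step mentioned above, the Python sketch below fits an LS-SVM, in its simple ridge-regression-on-labels form with an RBF kernel, to made-up connectivity-style features labeled expert versus novice. The features, labels, and hyperparameters are assumptions for illustration, not the thesis data or results.

```python
# Minimal sketch of a least-squares SVM classifier on fake "connectivity"
# feature vectors (e.g., pairwise channel correlations) labeled expert (+1)
# vs novice (-1). Uses the kernel ridge-regression-on-labels form of LS-SVM.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.3, 0.1, (20, 10)), rng.normal(0.6, 0.1, (20, 10))])
y = np.concatenate([np.ones(20), -np.ones(20)])
b, alpha = lssvm_fit(X, y)
print("training accuracy:", (lssvm_predict(X, alpha, b, X) == y).mean())
```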