
    Collaborative human-machine interfaces for mobile manipulators.

    The use of mobile manipulators in service industries, both as agents in physical Human-Robot Interaction (pHRI) and for social interactions, has increased in recent times, driven by needs such as compensating for workforce shortages and enabling safer and more efficient operations. Collaborative robots, or co-bots, are robots developed for use in direct contact or close proximity with human users in a shared space. The work presented in this dissertation focuses on the design, implementation, and analysis of components for the next-generation collaborative human-machine interfaces (CHMI) needed for mobile manipulator co-bots in various service industries. The particular CHMI components considered in this dissertation are: Robot control: a Neuroadaptive Controller (NAC)-based admittance control strategy for pHRI applications with a co-bot. Robot state estimation: a novel methodology and placement strategy for using arrays of IMUs, embeddable in robot skin, for pose estimation in complex robot mechanisms. User perception of co-bot CHMIs: evaluation of human perceptions of the usefulness and ease of use of a mobile manipulator co-bot in a nursing assistant application scenario. To facilitate advanced control for the Adaptive Robotic Nursing Assistant (ARNA) mobile manipulator co-bot that was designed and developed in our lab, we describe and evaluate an admittance control strategy that features a Neuroadaptive Controller (NAC). The NAC has been specifically formulated for pHRI applications such as patient walking. The controller continuously tunes the weights of a neural network to cancel robot non-linearities, including drive-train backlash, kinematic or dynamic coupling, variable patient pushing effort, and sloped surfaces with unknown inclines. 
The advantages of our control strategy are Lyapunov stability guarantees during interaction, less need for parameter tuning, and better performance across a variety of users and operating conditions. We conduct simulations and experiments with 10 users to confirm that the NAC outperforms a classic Proportional-Derivative (PD) joint controller in terms of resulting interaction jerk, user effort, and trajectory tracking error during patient walking. To tackle complex mechanisms of these next-generation robots, in which the use of encoders or other classic pose-measuring devices is not feasible, we present a study of the effects of design parameters on methods that use data from Inertial Measurement Units (IMUs) in robot skins to provide robot state estimates. These parameters include the number of sensors, their placement on the robot, and their noise properties, and we examine their impact on the quality of robot pose estimation and its signal-to-noise ratio (SNR). The results of that study facilitate the creation of robot skin, and to enable its use in complex robots, we propose a novel pose estimation method, the Generalized Common Mode Rejection (GCMR) algorithm, for estimating joint angles in robot chains containing composite joints. The placement study and GCMR are demonstrated using both Gazebo simulation and experiments with a 3-DoF robotic arm containing two non-zero link lengths, one revolute joint, and a 2-DoF composite joint. In addition to yielding insights on the predicted usage of co-bots, the design of control and sensing mechanisms in their CHMI benefits from evaluating the perceptions of the eventual users of these robots. With co-bots increasingly being developed and used, there is a need to study these user perceptions using existing models that have been used to predict usage of comparable technology. 
To this end, we use the Technology Acceptance Model (TAM) to evaluate the CHMI of the ARNA robot in a scenario via analysis of quantitative and questionnaire data collected during experiments with eventual users. The results of the work conducted in this dissertation demonstrate insightful contributions to the realization of control and sensing systems that are part of CHMIs for next-generation co-bots.
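The weight-tuning idea behind the NAC can be illustrated with a deliberately minimal sketch: a single adaptive weight (standing in for a full neural network) learns online to cancel an unknown disturbance while a velocity reference is tracked. The plant model, gains, and single bias feature are illustrative assumptions, not the dissertation's actual formulation.

```python
def simulate(adapt, steps=4000, dt=0.005):
    """Track a constant velocity reference under an unknown constant
    disturbance d. The 'neural' term here is one bias feature whose
    weight W is tuned online -- the simplest instance of NAC-style
    weight adaptation (illustrative only)."""
    m, d = 8.0, -5.0          # plant mass and unmeasured disturbance
    kp, gamma = 20.0, 15.0    # feedback gain and adaptation rate
    v, W, v_ref = 0.0, 0.0, 0.3
    for _ in range(steps):
        e = v_ref - v                 # velocity tracking error
        u = kp * e + W                # feedback + learned compensation
        v += (u + d) / m * dt         # integrate plant dynamics
        if adapt:
            W += gamma * e * dt       # NAC-style weight update
    return abs(v_ref - v)             # final tracking error
```

With adaptation the steady-state error is driven toward zero, while the fixed-gain controller leaves a residual offset of roughly |d|/kp, which mirrors the abstract's claim that online weight tuning reduces the need for per-user gain retuning.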

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Recently, more and more robots are being investigated for future applications in healthcare. For instance, in nursing assistance, seamless Human-Robot Interaction (HRI) is very important for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA has been designed to include interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGBD camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. The initial study indicates that a traded-control level of autonomy is ideal for tele-operation of ARNA by a patient. The proposed method of using the HMI devices makes the performance of the robot similar for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system non-linearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis, which utilizes a force-torque sensor at the base of the ARNA robot manipulator to detect full-body collisions and make interaction safer. 
Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user are physically collaborating during certain tasks, such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies. Its implementation on the ARNA robot platform can be easily accomplished, as the controller is model-free and can learn robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, enabling the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is being “observed” by the robot, then guided and/or advised during interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead users toward completing goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. The user-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of various components of the proposed framework. The results indicate an overall positive response toward the use of such an assistive robot in the healthcare environment. The analysis of these gathered data is included in this document. To summarize, this research study makes the following contributions: conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; building simulation models for packaged tactile sensors and validating the models with experimental data; and describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.

    Automatic testing of organic strain gauge tactile sensors.

    Human-Robot Interaction is a developing field of science that is poised to augment much of what we do in life. Skin sensors that can detect touch, temperature, distance, and other physical interaction parameters at the human-robot interface are very important to enhancing collaboration between humans and machines. As such, these sensors must be efficiently tested and characterized to give accurate feedback from the sensor to the robot. The objective of this work is to create a diversified software testing suite that removes as much human intervention as possible. The tests and methodology discussed here provide multiple realistic scenarios that the sensors undergo during repeated experiments. This capability allows for easily repeatable tests without interference from the test engineer, increasing productivity and efficiency. The foundation of this work has two main pieces: force-feedback control to drive the test actuator, and computer vision functionality to guide alignment of the test actuator and sensors arranged in a 2D array. The software running automated tests was also made compatible with the testbench hardware via LabVIEW programs. The program uses set coordinates to complete a raster scan of the SkinCell that locates individual sensors. Tests are then applied at each sensor using a force controller. The force-feedback control system uses a Proportional-Integral-Derivative (PID) controller that reads force measurements from a load cell to correct itself or follow a desired trajectory. The motion of the force actuator was compared to the projected trajectory to test for accuracy and time delay. The proposed motor control allows a dynamic force to stimulate the sensors, giving a more realistic test than a static force. A top-facing camera was introduced to capture the starting position of a SkinCell before testing. 
Then, computer vision algorithms were proposed to extract the location of the cell and of individual sensors before generating a coordinate plane. This allows the engineer to skip manual alignment of the sensors, saving time and providing more accurate destinations. Finally, the testbench was applied to numerous sensors developed by the research team at the Louisville Automation and Robotics Research Institute (LARRI) for testing and data analysis. Force loads were applied to the individual sensors while recording their response. Afterwards, post-processing of the data was conducted to compare responses within the SkinCell as well as to other sensors manufactured using different methods.
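The force-feedback loop described above can be sketched as a discrete PID controller driving a simple first-order actuator model toward a force setpoint. The gains, actuator lag, and plant model below are illustrative assumptions, not the testbench's actual LabVIEW implementation or parameters.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def run(setpoint=2.0, dt=0.001, tau=0.05, steps=3000):
    """Drive a hypothetical first-order actuator (time constant tau)
    to a force setpoint using simulated load-cell feedback."""
    pid, force = PID(kp=2.0, ki=20.0, kd=0.0, dt=dt), 0.0
    for _ in range(steps):
        command = pid.update(setpoint - force)   # load-cell error feedback
        force += (command - force) / tau * dt    # actuator lag dynamics
    return force
```

The integral term removes the steady-state offset that a purely proportional loop would leave against the actuator lag, which is why a PI or PID structure suits constant-force dwell tests; a time-varying `setpoint` would give the dynamic-force trajectories mentioned above.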

    Soft pneumatic devices for blood circulation improvement

    The research activity I am presenting in this thesis lies within the framework of a cooperation between the University of Cagliari (the Applied Mechanics and Robotics lab, headed by professor Andrea Manuello Bertetto, and the research group of physicians led by professor Alberto Concu at the Laboratory of Sports Physiology, Department of Medical Sciences) and the Polytechnic of Turin (professor Carlo Ferraresi and his team at the Group of Automation and Robotics, Department of Mechanical and Aerospace Engineering). This research was also funded by the Italian Ministry of Research (MIUR – PRIN 2009). My activity has been mainly carried out at the Department of Mechanics, Robotics lab, under the supervision of prof. Manuello; I have also spent one year at the Control Lab of the School of Electrical Engineering at Aalto University (Helsinki, Finland). The tests on the patients were conducted at the Laboratory of Sports Physiology, Cagliari. I will be describing the design, development, and testing of some soft pneumatic flexible devices meant to apply an intermittent massage and restore blood circulation in the lower limbs in order to improve cardiac output and wellness in general. The choice of the actuators, as well as the pneumatic circuits, air distribution system, and PLC control patterns, will be outlined. The devices have been field-tested as soon as each prototype was ready, so as to tune their features step by step. I also give a characterization of a commercial thin force sensor after briefly reviewing some other types of thin pressure transducers. It has been used to gauge the contact pressure between the actuator and the subject’s skin in order to correlate the level of discomfort to the supply pressure, and to feed this value back to regulate the supply air flow. 
In order for the massage to remain effective without causing pain, distress, or any cutoff of the blood flow, control objectives have been set, consisting of the regulation of the contact force so that it reaches a constant set point smoothly and holds that value until unloading occurs. The targets of such mechatronic devices range from paraplegic patients lacking muscle tone because of spinal cord damage, to elite endurance athletes needing a circulation booster when resting from practice after serious injuries leading to bed rest. Encouraging results have been attained for both categories, based on the monitored hemodynamic variables.

    Intent Classification during Human-Robot Contact

    Robots are used in many areas of industry and automation. Currently, human safety is ensured through physical separation and safeguards. However, there is increasing interest in allowing robots and humans to work in close proximity or on collaborative tasks. In these cases, there is a need for the robot itself to recognize if a collision has occurred and respond in a way which prevents further damage or harm. At the same time, there is a need for robots to respond appropriately to intentional contact during interactive and collaborative tasks. This thesis proposes a classification-based approach for differentiating between several intentional contact types, accidental contact, and no-contact situations. A dataset is developed using the Franka Emika Panda robot arm. Several machine learning algorithms, including Support Vector Machines, Convolutional Neural Networks, and Long Short-Term Memory Networks, are applied and used to perform classification on this dataset. First, Support Vector Machines were used to perform feature identification. Comparisons were made between classification on raw sensor data and on data calculated from a robot dynamic model, as well as between linear and nonlinear features. The results show that very few features can be used to achieve the best results, and accuracy is highest when combining raw data from sensors with model-based data. Accuracies of up to 87% were achieved. Methods of performing classification on the basis of each individual joint, compared to the whole arm, are tested and shown not to provide additional benefits. Second, Convolutional Neural Networks and Long Short-Term Memory Networks were evaluated for the classification task. A simulated dataset was generated and augmented with noise for training the classifiers. Experiments show that additional simulated and augmented data can improve accuracy in some cases, as well as lower the amount of real-world data required to train the networks. 
Accuracies of up to 93% and 84% were achieved by the CNN and LSTM networks, respectively. The CNN achieved an accuracy of 87% using all real data, and up to 93% using only 50% of the real data with simulated data added to the training set, as well as with augmented data. The LSTM achieved an accuracy of 75% using all real data, and nearly 80% accuracy using 75% of the real data with augmented simulation data.
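To make the contact-classification setup concrete, here is a toy sketch of the idea: synthetic single-channel "torque" windows for no-contact, accidental-collision, and intentional-push cases are reduced to two hand-picked features and separated with a nearest-centroid rule, a far simpler stand-in for the SVM, CNN, and LSTM classifiers used in the thesis. The signal shapes, features, and thresholds are all invented for illustration.

```python
import random

random.seed(0)

def window(kind, n=100):
    """Synthetic stand-in for a joint-torque sensor window."""
    w = [random.gauss(0.0, 0.05) for _ in range(n)]   # sensor noise floor
    if kind == "collision":                # short, sharp spike
        for i in range(45, 55):
            w[i] += 4.0
    elif kind == "intentional":            # sustained moderate push
        for i in range(20, 90):
            w[i] += 1.0
    return w

def features(w):
    """Two hand-picked features: peak magnitude and contact duration."""
    peak = max(abs(x) for x in w)
    frac = sum(abs(x) > 0.5 for x in w) / len(w)
    return (peak, frac)

def centroid(vectors):
    return tuple(sum(c) / len(vectors) for c in zip(*vectors))

LABELS = ["none", "collision", "intentional"]
CENTROIDS = {k: centroid([features(window(k)) for _ in range(20)]) for k in LABELS}

def classify(w):
    """Nearest-centroid rule in the 2-D feature space."""
    f = features(w)
    return min(LABELS, key=lambda k: sum((a - b) ** 2
                                         for a, b in zip(f, CENTROIDS[k])))
```

Even this crude feature pair (peak amplitude vs. contact duration) separates the three synthetic classes cleanly, which echoes the thesis finding that very few well-chosen features can carry most of the classification accuracy.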

    Affective Brain-Computer Interfaces


    Proceedings of the 3rd International Mobile Brain/Body Imaging Conference : Berlin, July 12th to July 14th 2018

    The 3rd International Mobile Brain/Body Imaging (MoBI) conference in Berlin 2018 brought together researchers from various disciplines interested in understanding the human brain in its natural environment and during active behavior. MoBI is a new imaging modality, employing mobile brain imaging methods like the electroencephalogram (EEG) or near-infrared spectroscopy (NIRS), synchronized to motion capture and other data streams, to investigate brain activity while participants actively move in and interact with their environment. Mobile Brain/Body Imaging allows the investigation of brain dynamics accompanying more natural cognitive and affective processes, as it allows the human to interact with the environment without restriction regarding physical movement. By overcoming the movement restrictions of established imaging modalities like functional magnetic resonance imaging (fMRI), MoBI can provide new insights into human brain function in mobile participants. This imaging approach will lead to new insights into the brain functions underlying active behavior and the impact of behavior on brain dynamics and vice versa, and it can be used for the development of more robust human-machine interfaces as well as state assessment in mobile humans. DFG, GR2627/10-1, 3rd International MoBI Conference 201

    Automatic Posture Correction Utilizing Electrical Muscle Stimulation

    Habitually poor posture can lead to repetitive strain injuries that lower an individual's quality of life and productivity. Slouching over computer screens and smartphones, asymmetric weight distribution due to uneven leg loading, and improper loading posture are some of the common examples that lead to postural problems and health ramifications. To help cultivate good postural habits, researchers have proposed slouching, balance, and improper-loading posture detection systems that alert users through traditional visual, auditory, or vibro-tactile feedback when posture requires attention. However, such notifications are disruptive and can be easily ignored. We address these issues with a new physiological feedback system that uses sensors to detect these poor postures and electrical muscle stimulation to automatically correct them. We compare our automatic approach against alternative feedback systems and across different unique contexts. We find that our approach outperformed traditional feedback systems by being faster and more accurate while delivering an equally comfortable user experience.

    Accessible Integration of Physiological Adaptation in Human-Robot Interaction

    Technological advancements in creating and commercializing novel unobtrusive wearable physiological sensors have generated new opportunities to develop adaptive human-robot interaction (HRI). Detecting complex human states such as engagement and stress when interacting with social agents could bring numerous advantages to creating meaningful interactive experiences. Bodily signals have classically been used for post-interaction analysis in HRI. Despite this, real-time measurements of autonomic responses have been used in other research domains to develop physiologically adaptive systems with great success, increasing user experience and task performance while reducing cognitive workload. This thesis presents the HRI Physio Lib, a conceptual framework and open-source software library to facilitate the development of physiologically adaptive HRI scenarios. Both the framework and the architecture of the library are described in depth, along with descriptions of additional software tools that were developed to make the inclusion of physiological signals easier for robotics frameworks. The framework is structured around four main components for designing physiologically adaptive experimental scenarios: signal acquisition; processing and analysis; social robot and communication; and scenario and adaptation. Open-source software tools have been developed to assist in the creation of each described component. To showcase our framework and test the software library, we developed, as a proof of concept, a simple scenario revolving around a physiologically aware exercise coach that modulates the speed and intensity of the activity to promote effective cardiorespiratory exercise. We employed the socially assistive QT robot for our exercise scenario, as it provides a comprehensive ROS interface, making prototyping of behavioral responses fast and simple. Our exercise routine was designed following guidelines by the American College of Sports Medicine. 
We describe our physiologically adaptive algorithm and propose an alternative second one with stochastic elements. Finally, a discussion of other HRI domains where the addition of a physiologically adaptive mechanism could result in novel advances in interaction quality is provided as future extensions of this work. From the literature, we identified improving engagement, providing deeper social connections, healthcare scenarios, and applications for self-driving vehicles as promising avenues for future research where a physiologically adaptive social robot could improve user experience.
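As a flavor of what such an adaptation rule can look like, here is a minimal hypothetical sketch of a heart-rate-zone controller for an exercise-coach scenario: it nudges the exercise speed up or down to keep the measured heart rate inside a target cardio zone. The function name, zone bounds, step size, and speed limits are assumptions for illustration; this is not the HRI Physio Lib API or the ACSM-derived routine from the thesis.

```python
def adapt_intensity(speed, hr, zone=(110, 140), step=0.1, bounds=(0.5, 3.0)):
    """Hypothetical adaptation rule: adjust exercise speed so the
    measured heart rate (hr, in bpm) stays inside the target zone.
    Speed is clamped to safe bounds (all values illustrative)."""
    lo, hi = zone
    if hr < lo:
        speed += step      # under-exerting: speed the activity up
    elif hr > hi:
        speed -= step      # over-exerting: slow the activity down
    return max(bounds[0], min(bounds[1], speed))
```

In a real deployment this rule would run inside the scenario-and-adaptation component, consuming the streamed heart-rate signal from the acquisition component and sending the updated speed to the robot's behavior interface.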