
    Nonlinear Control Synthesis for Facilitation of Human-Robot Interaction

    Human-robot interaction is an area of increasing importance in robotics research. Nonlinear control design techniques allow researchers to guarantee stability, performance, and safety, especially in cases involving physical human-robot interaction (PHRI). In this dissertation, we propose two different nonlinear controllers and detail the design of an assistive robotic system to facilitate human-robot interaction.

    In Chapter 2, to facilitate physical human-robot interaction, we consider the problem of making safe, compliant contact between a human and an assistive robot. Users with disabilities need to use their assistive robots for physical interaction during activities such as hair grooming, scratching, and face sponging. Specifically, we propose a hybrid force/velocity/attitude controller for our physical human-robot interaction system, based on measurements from a force/torque sensor mounted on the robot wrist. While automatically aligning the end-effector surface with the unknown environmental (human) surface, the controller applies a desired commanded force in the normal direction while following desired velocity commands in the tangential directions. A Lyapunov-based stability analysis proves both convergence and passivity of the interaction, ensuring both performance and safety. Simulation and experimental results on a redundant robot manipulator verify the performance and robustness of the proposed hybrid force/velocity/attitude controller in the presence of dynamic uncertainties, as well as the safety compliance of the human-robot interaction.

    Chapter 3 presents the design, analysis, and experimental implementation of an adaptive-control-enabled intelligent algorithm to facilitate 1-click grasping of novel objects by a robotic gripper, since one of the most common tasks for an assistive robot is pick-and-place/object retrieval. Everyday objects vary widely, and each requires a different optimal grasping force; this algorithm automates grasping-force adjustment. Object-geometry-free modeling, coupled with interaction force and slip velocity measurements, allows the design of an adaptive backstepping controller that is shown to be asymptotically stable via a Lyapunov-based analysis. Experiments with multiple objects using a prototype gripper with embedded sensing show that the proposed scheme effectively immobilizes novel objects within the gripper fingers. Furthermore, the adaptation allows close estimation of the minimum grasp force required for safe grasping, resulting in minimal deformation of the grasped object.

    In Chapter 4, we present the design and implementation of the motion controller and adaptive interface for the second generation of the UCF-MANUS intelligent assistive robotic manipulator system. Based on usability testing of the system, several features were implemented in the interface to reduce the complexity of the human-robot interaction while also compensating for deficits in different human factors, such as working memory, response inhibition, processing speed, depth perception, spatial ability, and contrast sensitivity. For the controller, we designed several new features to provide the user with a less complex and safer interaction with the robot, such as 'One-click mode', 'Move suggestion mode', and 'Gripper Control Assistant'. For the adaptive interface, we designed and implemented compensators such as 'Contrast Enhancement', 'Object Proximity Velocity Reduction', and 'Orientation Indicator'.
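
    To make the hybrid force/velocity decomposition concrete, the sketch below (Python) shows one common way such a controller can be structured: the measured contact force is used to estimate the unknown surface normal, force is regulated along that normal, and velocity commands are projected onto the tangent plane. The function names, gains, and the simple proportional law are illustrative assumptions, not the dissertation's actual control design.

    import numpy as np

    K_F = 0.002    # force-error gain (m/s per N); assumed value
    F_DES = 5.0    # desired normal contact force (N); assumed value

    def estimate_surface_normal(force_xyz):
        """Estimate the unknown surface normal from the measured reaction
        force direction (meaningful only once contact is established)."""
        magnitude = np.linalg.norm(force_xyz)
        if magnitude < 1e-6:
            return np.array([0.0, 0.0, 1.0])    # fallback direction before contact
        return force_xyz / magnitude            # reaction force points out of the surface

    def hybrid_command(force_xyz, v_tan_des):
        """Regulate contact force along the estimated normal while tracking a
        desired velocity in the tangent plane; returns a Cartesian velocity."""
        n = estimate_surface_normal(force_xyz)
        f_n = np.dot(force_xyz, n)              # measured normal force magnitude
        # too little force -> move against the reaction force, into the surface
        v_normal = -K_F * (F_DES - f_n) * n
        # strip any normal component so tangential motion does not fight the force loop
        v_tangent = v_tan_des - np.dot(v_tan_des, n) * n
        return v_normal + v_tangent

    # Example: 4 N measured along +z while wiping in +x at 5 cm/s
    v_cmd = hybrid_command(np.array([0.0, 0.0, 4.0]), np.array([0.05, 0.0, 0.0]))
    print(v_cmd)    # [0.05, 0., -0.002]: wipe preserved, gentle push to raise force

    In a real system this command would feed a Cartesian velocity controller, with end-effector attitude alignment handled separately; the dissertation's Lyapunov analysis concerns the stability and passivity of that full interaction loop.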

    Informing Assistive Robots with Models of Contact Forces from Able-Bodied Face Wiping and Shaving

    ©2012 IEEE. Presented at 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, September 9-13, 2012. DOI: 10.1109/ROMAN.2012.6343762.

    Hygiene and feeding are activities of daily living (ADLs) that often involve contact with a person's face. Robots can assist people with motor impairments to perform these tasks by holding a tool that makes contact with the care receiver's face. By sensing the forces the tool applies to the face, robots could potentially provide assistance that is more comfortable, safe, and effective. To inform the design of robotic controllers and assistive robots, we investigated the forces able-bodied people apply to themselves when wiping and shaving their faces. We present our methods for capturing and modeling these forces, results from a study with 9 participants, and recommendations for assistive robots. Our contributions include a trapezoidal force model that assumes participants have a target force they attempt to achieve for each stroke of the tool. We discuss the advantages of this 3-parameter model and show that it fits our data well relative to other candidate models. We also provide statistics of the models' rise rates, fall rates, and target forces for the 9 participants in our study. In addition, we illustrate how the target forces varied based on the task, participant, and location on the face.
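
    The trapezoidal force model lends itself to a compact implementation: each stroke of the tool ramps toward the target force at a rise rate, holds the target, then ramps back down at a fall rate. The Python sketch below is one illustrative reading of that 3-parameter model; the function name, the stroke-duration argument, and the example values are assumptions, not the paper's code.

    import numpy as np

    def trapezoidal_force(t, rise_rate, fall_rate, f_target, t_end):
        """Force at time t (s) for one stroke of duration t_end (s).

        rise_rate and fall_rate are in N/s; f_target is the stroke's
        target force in N (the three model parameters)."""
        t = np.asarray(t, dtype=float)
        rise = rise_rate * t                    # ramp up from zero contact force
        fall = fall_rate * (t_end - t)          # ramp down to zero at stroke end
        # the applied force is whichever ramp binds, capped at the target plateau
        return np.clip(np.minimum(rise, fall), 0.0, f_target)

    # Example stroke: 1 N/s rise, 2 N/s fall, 3 N target, 8 s duration
    ts = np.linspace(0.0, 8.0, 5)
    print(trapezoidal_force(ts, 1.0, 2.0, 3.0, 8.0))    # [0. 2. 3. 3. 0.]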

    Uncertainty and social considerations for mobile assistive robot navigation

    Interest in mobile robots has increased over the past years. The wide range of possible applications, from vacuum cleaners to assistant robots, makes such robots an interesting solution to many everyday problems. A key requirement for the mass deployment of such robots is to ensure they can safely navigate around our daily living environments. A robot colliding with or bumping into a person may be, in some contexts, unacceptable; for example, if a robot working around elderly people collides with one of them, it may cause serious injuries. This thesis explores four major components required for effective robot navigation: sensing the static environment, detection and tracking of moving people, obstacle and people avoidance with uncertainty measurement, and basic social navigation considerations.

    First, to guarantee adherence to basic safety constraints, the sensors and algorithms required to measure the complex structure of our daily living environments are explored. Not only do the static components of the environment have to be measured, but so do any people present. A people detection and tracking algorithm aimed at crowded environments is proposed, enhancing the robot's perception capabilities. Our daily living environments present many inherent sources of uncertainty for robots, one of which arises from the robot's inability to know people's intentions as they move. To address this, a motion model that assumes unknown long-term intentions is proposed and used in conjunction with a novel uncertainty-aware local planner to create feasible trajectories.

    In social situations, the presence of groups of people cannot be neglected when navigating. To avoid interrupting groups of people, the robot first needs to be able to detect them; a group detector relying on a set of gaze- and geometric-based features is proposed. Avoiding group disruption is finally incorporated into the navigation algorithm by taking into account the probability of disrupting a group's activities. The effectiveness of the four components is evaluated using real-world and simulated data, demonstrating the benefits for mobile robot navigation.
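
    A minimal sketch of the uncertainty-aware idea follows: a person's predicted position is propagated with uncertainty that grows over the planning horizon, and the planner rejects waypoints that fall inside the inflated clearance. The constant-velocity motion model, constants, and function names below are illustrative assumptions standing in for the thesis's actual model of unknown long-term intentions.

    import numpy as np

    SIGMA_0 = 0.1        # initial position standard deviation (m); assumed sensor noise
    GROWTH = 0.3         # uncertainty growth rate (m/s); assumed
    ROBOT_RADIUS = 0.4   # m; assumed
    PERSON_RADIUS = 0.3  # m; assumed

    def predict_person(pos, vel, t):
        """Constant-velocity prediction with time-growing position uncertainty."""
        mean = pos + vel * t
        sigma = SIGMA_0 + GROWTH * t
        return mean, sigma

    def is_waypoint_safe(waypoint, pos, vel, t, n_sigma=2.0):
        """Reject waypoints inside the person's n-sigma region at time t."""
        mean, sigma = predict_person(pos, vel, t)
        clearance = ROBOT_RADIUS + PERSON_RADIUS + n_sigma * sigma
        return np.linalg.norm(waypoint - mean) > clearance

    # Person at (2, 0) walking +x at 1 m/s; test a waypoint 3 s ahead
    safe = is_waypoint_safe(np.array([5.0, 0.5]), np.array([2.0, 0.0]),
                            np.array([1.0, 0.0]), t=3.0)
    print(safe)    # False: the waypoint sits inside the grown uncertainty region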

    2015, UMaine News Press Releases

    This is a catalog of press releases put out by the University of Maine Division of Marketing and Communications between January 2, 2015 and December 31, 2015.