7 research outputs found

    Visual Odometry and Control for an Omnidirectional Mobile Robot with a Downward-Facing Camera

    ©2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    Presented at the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 18-22 Oct. 2010, Taipei. DOI: 10.1109/IROS.2010.5649749
    An omnidirectional Mecanum base allows for more flexible mobile manipulation. However, slipping of the Mecanum wheels results in poor dead-reckoning estimates from wheel encoders, limiting the accuracy and overall utility of this type of base. We present a system with a downward-facing camera and light ring that provides robust visual odometry estimates. We mounted the system under the robot, which allows it to operate in conditions such as large crowds or low ambient lighting. We demonstrate that the visual odometry estimates are sufficient to generate closed-loop PID (Proportional Integral Derivative) and LQR (Linear Quadratic Regulator) controllers for motion control in three different scenarios: waypoint tracking, small disturbance rejection, and sideways motion. We report quantitative measurements that demonstrate superior control performance when using visual odometry compared to wheel encoders. Finally, we show that this system provides high-fidelity odometry estimates and is able to compensate for wheel slip on a four-wheeled omnidirectional mobile robot base.
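    The controllers are described only at a high level in the abstract. The following is a minimal sketch of a PID waypoint-tracking loop for a holonomic base driven by pose feedback such as visual odometry; the gains, the (x, y, heading) pose format, and the track_waypoint helper are illustrative assumptions, not the authors' implementation.

        import math

        class PID:
            """Textbook PID loop; the gains passed in are placeholders."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error):
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        def track_waypoint(pose, waypoint, pid_x, pid_y, pid_th):
            """Turn world-frame pose error into body-frame (vx, vy, wz) commands."""
            ex, ey = waypoint[0] - pose[0], waypoint[1] - pose[1]
            # Wrap the heading error to [-pi, pi].
            eth = math.atan2(math.sin(waypoint[2] - pose[2]),
                             math.cos(waypoint[2] - pose[2]))
            # Rotate the translational error into the body frame; a Mecanum
            # base can execute vx, vy, and wz simultaneously.
            c, s = math.cos(pose[2]), math.sin(pose[2])
            bx, by = c * ex + s * ey, -s * ex + c * ey
            return pid_x.step(bx), pid_y.step(by), pid_th.step(eth)

    An LQR controller would replace the three independent loops with a single gain matrix computed from a linear model of the base; the abstract reports both kinds of controller, evaluated with encoder odometry versus visual odometry.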

    Human-Robot Interaction Studies for Autonomous Mobile Manipulation for the Motor Impaired

    ©2009, Association for the Advancement of Artificial Intelligence (www.aaai.org). The original publication is available at http://www.aaai.org/Papers/Symposia/Spring/2009/SS-09-03/SS09-03-003.pdf
    Presented at Experimental Design for Real-World Systems, AAAI Spring Symposium, March 23-25, 2009, Stanford University, Stanford, California, USA.
    We are developing an autonomous mobile assistive robot named El-E to help individuals with severe motor impairments by performing object manipulation tasks such as fetching, transporting, placing, and delivering. El-E can autonomously approach a location specified by the user through an interface such as a standard laser pointer and pick up a nearby object. The initial target user population of the robot is individuals suffering from amyotrophic lateral sclerosis (ALS). ALS, also known as Lou Gehrig's disease, is a progressive neurodegenerative disease resulting in motor impairments throughout the entire body. Due to the severity and progressive nature of ALS, the results from developing robotic technologies to assist ALS patients could be applied to wider motor-impaired populations. Successful development and real-world application of assistive robot technology requires familiarity with the needs and everyday living conditions of these individuals. We also believe that the participation of prospective users throughout the design and development process is essential to improving the usability and accessibility of the robot for the target user population. To assess the needs of prospective users and to evaluate the technology being developed, we applied several human-studies methodologies, including interviews, photography, and controlled experiments. We present an overview of research from the Healthcare Robotics Lab related to patient needs assessment and human experiments, with emphasis on the methods of our human-centered approach.

    Hand It Over or Set It Down: A User Study of Object Delivery with an Assistive Mobile Manipulator

    ©2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    Presented at the 18th IEEE International Symposium on Robot and Human Interactive Communication, Sept. 27 - Oct. 2, 2009, Toyama.
    Delivering an object to a user would be a generally useful capability for service robots. In this paper, we examine this capability in the context of assistive object retrieval for motor-impaired users. We first describe a behavior-based system that enables our mobile robot EL-E to autonomously deliver an object to a motor-impaired user. We then present our evaluation of this system with 8 motor-impaired patients from the Emory ALS Center. As part of this study, we compared handing the object to the user (direct delivery) with placing the object on a nearby table (indirect delivery). We tested the robot delivering a cordless phone, a medicine bottle, and a TV remote, which were ranked as three of the top four most important objects for robotic delivery by ALS patients in a previous study. Overall, the robot successfully delivered these objects in 126 out of 144 trials (88%), with a success rate of 97% for indirect delivery and 78% for direct delivery. In an accompanying survey, participants reported high satisfaction with the robot, with 4 people preferring direct delivery and 4 preferring indirect delivery. Our results indicate that indirect delivery to a surface can be a robust and reliable delivery method with high user satisfaction, and that robust direct delivery will require methods that handle diverse postures and body types.

    Laser Pointers and a Touch Screen: Intuitive Interfaces for Autonomous Mobile Manipulation for the Motor Impaired

    ©ACM, 2008. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Assets '08: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, October 13-15, 2008. http://doi.acm.org/10.1145/1414471.1414512 (This is a digitized copy derived from an ACM-copyrighted work. ACM did not prepare this copy and does not guarantee that it is an accurate copy of the originally published work.)
    Presented at ASSETS '08, October 13-15, 2008, Halifax, Nova Scotia, Canada. DOI: 10.1145/1414471.1414512
    El-E ("Ellie") is a prototype assistive robot designed to help people with severe motor impairments manipulate everyday objects. When given a 3D location, El-E can autonomously approach the location and pick up a nearby object. Based on interviews of patients with amyotrophic lateral sclerosis (ALS), we have developed and tested three distinct interfaces that enable a user to provide a 3D location to El-E and thereby select an object to be manipulated: an ear-mounted laser pointer, a hand-held laser pointer, and a touch screen interface. In this paper, we present the results of a user study comparing these three interfaces across a total of 134 trials involving eight patients with varying levels of impairment recruited from the Emory ALS Clinic. During this study, participants used the three interfaces to select everyday objects to be approached, grasped, and lifted off the ground. The three interfaces enabled motor-impaired users to command the robot to pick up an object with a 94.8% overall success rate after less than 10 minutes of learning each interface. On average, users selected objects 69% more quickly with the laser pointer interfaces than with the touch screen interface. We also found substantial variation in user preference. With respect to the Revised ALS Functional Rating Scale (ALSFRS-R), users with greater upper-limb mobility tended to prefer the hand-held laser pointer, while those with less upper-limb mobility tended to prefer the ear-mounted laser pointer. Despite the extra efficiency of the laser pointer interfaces, three patients preferred the touch screen interface, which has unique potential for manipulating remote objects out of the user's line of sight. In summary, these results indicate that robots can enhance accessibility by supporting multiple interfaces. Furthermore, this work demonstrates that the communication of 3D locations during human-robot interaction can serve as a powerful abstraction barrier that supports distinct interfaces to assistive robots while using identical underlying robotic functionality.

    A foveated passive UHF RFID system for mobile manipulation

    ©2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    Presented at IROS 2008, IEEE/RSJ International Conference on Intelligent Robots and Systems, 22-26 Sept. 2008, Nice, France. DOI: 10.1109/IROS.2008.4651047
    We present a novel antenna and system architecture for mobile manipulation based on passive RFID technology operating in the 850 MHz - 950 MHz ultra-high-frequency (UHF) spectrum. This system exploits the electromagnetic properties of UHF radio signals to present a mobile robot with both wide-angle "peripheral vision", sensing multiple tagged objects in the area in front of the robot, and focused, high-acuity "central vision", sensing only tagged objects close to the end effector of the manipulator. These disparate tasks are performed using the same UHF RFID tag, coupled in two different electromagnetic modes. Wide-angle sensing is performed with an antenna designed for far-field electromagnetic wave propagation, while focused sensing is performed with a specially designed antenna mounted on the end effector that optimizes near-field magnetic coupling. We refer to this RFID system as "foveated", by analogy with the anatomy of the human eye. We report a series of experiments on an untethered autonomous mobile manipulator in a 2.5D environment that demonstrate the features of this architecture using two novel behaviors: one in which data from the far-field antenna is used to determine whether a specific tagged object is present in the robot's working area and to navigate to that object, and a second using data from the near-field antenna to grasp a specified object from a collection of visually identical objects. The same UHF RFID tag is used to facilitate both the navigation and grasping tasks.
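    As a rough illustration of the far-field behavior described above (deciding whether the target tag is present in the working area and steering toward it), the following sketch compares RSSI between two hypothetical far-field antennas; the TagRead record, the two-antenna arrangement, and the gain are assumptions for illustration, not the paper's architecture.

        from dataclasses import dataclass

        @dataclass
        class TagRead:
            tag_id: str
            antenna: str   # "left" or "right" far-field antenna (assumed layout)
            rssi: float    # received signal strength, dBm

        def seek_tag(reads, target_id, turn_gain=0.05):
            """One polling cycle: return (found, angular velocity command)."""
            left = [r.rssi for r in reads
                    if r.tag_id == target_id and r.antenna == "left"]
            right = [r.rssi for r in reads
                     if r.tag_id == target_id and r.antenna == "right"]
            if not left and not right:
                return False, 0.0   # target tag not detected in the working area
            # Turn toward the antenna reporting the stronger return; the
            # sign convention and the -90 dBm floor are arbitrary choices.
            l = max(left, default=-90.0)
            r = max(right, default=-90.0)
            return True, turn_gain * (r - l)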

    A Point-and-Click Interface for the Real World: Laser Designation of Objects for Mobile Manipulation

    ©ACM, 2008. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction. http://doi.acm.org/10.1145/1349822.1349854 (This is a digitized copy derived from an ACM-copyrighted work. ACM did not prepare this copy and does not guarantee that it is an accurate copy of the originally published work.)
    Presented at HRI '08, the 3rd ACM/IEEE Conference on Human-Robot Interaction, March 12-15, 2008, Amsterdam, The Netherlands. DOI: 10.1145/1349822.1349854
    We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it ("clicks it") with an unaltered, off-the-shelf green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by "clicking" on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.
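    The detection step reduces to finding a small, bright, green-dominant region in the filtered camera image. A minimal sketch under that reading follows; the thresholds, BGR channel order, and plain-NumPy approach are illustrative assumptions, and the actual system (an omnidirectional catadioptric camera with a narrow-band filter) is certainly more robust than this.

        import numpy as np

        def detect_laser_spot(frame_bgr, min_green=200, dominance=60):
            """Return (row, col) of the brightest green-dominant pixel, or None."""
            b = frame_bgr[:, :, 0].astype(np.int16)
            g = frame_bgr[:, :, 1].astype(np.int16)
            r = frame_bgr[:, :, 2].astype(np.int16)
            # Behind a narrow-band green filter, the laser spot saturates the
            # green channel while red and blue stay suppressed.
            mask = (g >= min_green) & (g - r >= dominance) & (g - b >= dominance)
            if not mask.any():
                return None
            scores = np.where(mask, g, -1)
            return np.unravel_index(np.argmax(scores), scores.shape)

    A pixel location found this way would then be handed to the stereo pan/tilt camera for the 3D position estimate described in the abstract.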

    EL-E: An Assistive Mobile Manipulator that Autonomously Fetches Objects from Flat Surfaces

    Presented at Robotic Helpers: User Interaction, Interfaces and Companions in Assistive and Therapy Robotics, a full-day workshop at the ACM/IEEE Human-Robot Interaction Conference (HRI08), 12 March 2008, Amsterdam, the Netherlands.
    Objects within human environments are usually found on flat surfaces that are orthogonal to gravity, such as floors, tables, and shelves. We first present a new assistive robot that is explicitly designed to take advantage of this common structure in order to retrieve unmodeled, everyday objects for people with motor impairments. This compact, statically stable mobile manipulator has a novel kinematic and sensory configuration that facilitates autonomy and human-robot interaction within indoor human environments. Second, we present a behavior system that enables this robot to fetch objects selected with a laser pointer from the floor and from tables. The robot can approach an object selected with the laser pointer interface, detect whether the object is on an elevated surface, raise or lower its arm and sensors to that surface, and visually and tactilely grasp the object. Once the object is acquired, the robot can place it on a laser-designated surface above the floor, follow the laser pointer on the floor, or deliver the object to a seated person selected with the laser pointer. Within this paper we present initial results for object acquisition and delivery to a seated, able-bodied individual. For this test, the robot succeeded in 6 out of 7 trials (86%).
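    The fetch-and-deliver capability described above reads naturally as a fixed sequence of sub-behaviors with failure handling. The schematic sketch below illustrates that structure; the behavior names and the callable-per-behavior interface are hypothetical, since the paper's behavior system is not specified at this level.

        FETCH_SEQUENCE = [
            "approach_laser_selection",   # drive to the laser-designated spot
            "detect_surface_height",      # floor vs. elevated surface
            "align_arm_with_surface",     # raise or lower arm and sensors
            "grasp_object",               # visual and tactile grasping
            "deliver_object",             # place, follow, or hand to a person
        ]

        def run_fetch(behaviors):
            """Execute each behavior in order; abort the sequence on failure."""
            for name in FETCH_SEQUENCE:
                if not behaviors[name]():
                    return False   # e.g., grasp failed: stop and report
            return True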