1,224 research outputs found

    Towards Early Mobility Independence: An Intelligent Paediatric Wheelchair with Case Studies

    Standard powered wheelchairs are still heavily dependent on the cognitive capabilities of users. Unfortunately, this excludes disabled users who lack the required problem-solving and spatial skills, particularly young children. For these children to be denied powered mobility is a critical setback; exploration is important for their cognitive, emotional and psychosocial development. In this paper, we present a safer paediatric wheelchair: the Assistive Robot Transport for Youngsters (ARTY). The fundamental goal of this research is to provide a key enabling technology to young children who would otherwise be unable to navigate independently in their environment. In addition to the technical details of our smart wheelchair, we present user trials with able-bodied individuals as well as one 5-year-old child with special needs. ARTY promises to provide young children with early access to the path towards mobility independence.

    Learning shared control by demonstration for personalized wheelchair assistance

    An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, matching the location and amount of offered assistance on different trajectories. We observed that the users' effort requirements were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in a similar range to human assistance.
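The continuous assistance-blending idea described above can be sketched with off-the-shelf GP regression. The feature set (previous and current user commands plus an obstacle distance), the synthetic demonstration data, and the blending rule below are illustrative assumptions, not the authors' implementation:

```python
# Sketch: learn a shared-control assistance level with GP regression.
# Assumed setup (not the paper's code): features are the user's previous
# and current joystick commands plus distance to the nearest obstacle;
# the target is the assistance level alpha in [0, 1] a human demonstrated.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# One synthetic "demonstration": near obstacles the human assistant takes
# over more (alpha -> 1); in open space the user drives (alpha -> 0).
n = 80
prev_cmd = rng.uniform(-1, 1, n)
curr_cmd = rng.uniform(-1, 1, n)
obstacle_dist = rng.uniform(0.1, 3.0, n)
alpha = np.clip(1.0 - obstacle_dist / 2.0, 0.0, 1.0) + rng.normal(0, 0.02, n)

X = np.column_stack([prev_cmd, curr_cmd, obstacle_dist])
y = np.clip(alpha, 0.0, 1.0)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(X, y)

def blended_command(user_cmd, robot_cmd, prev_cmd, obstacle_dist):
    """Mix user and robot commands with the regressed assistance level."""
    a = float(np.clip(gp.predict([[prev_cmd, user_cmd, obstacle_dist]])[0],
                      0.0, 1.0))
    return (1 - a) * user_cmd + a * robot_cmd, a

cmd, a_near = blended_command(0.8, 0.1, 0.7, obstacle_dist=0.2)  # near obstacle
_, a_far = blended_command(0.8, 0.1, 0.7, obstacle_dist=2.9)     # open space
```

With this toy demonstration, the regressor hands more authority to the robot when the obstacle distance shrinks, which is the qualitative behavior the paper's one-shot policy reproduces from a human assistant.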

    Towards an Architecture for Semiautonomous Robot Telecontrol Systems

    The design and development of a computational system to support robot-operator collaboration is a challenging task, not only because of the overall system complexity, but also because of the involvement of different technical and scientific disciplines, namely Software Engineering, Psychology and Artificial Intelligence, among others. In our opinion, the approach generally used to tackle this type of project is based on system architectures inherited from the development of autonomous robots and therefore fails to incorporate the role of the operator explicitly, i.e., these architectures lack a view that helps the operator to see him/herself as an integral part of the system. The goal of this paper is to provide a human-centered paradigm that makes it possible to create this kind of view of the system architecture. This architectural description includes the definition of the roles of the operator and of the robot's autonomous behaviour, identifies the shared knowledge, and helps the operator to see the robot as an intentional being like him/herself.

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people and of advances in technology that will make new uses possible, and it provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Haptic dancing: human performance at haptic decoding with a vocabulary

    The inspiration for this study is the observation that swing dancing involves coordination of actions between two humans that can be accomplished by pure haptic signaling. This study implements a leader-follower dance to be executed between a human and a PHANToM haptic device. The data demonstrate that the participants' understanding of the motion as a random sequence of known moves informs their following, making this vocabulary-based interaction fundamentally different from closed-loop pursuit tracking. The robot leader does not respond to the follower's movement other than to display error from a nominal path. This work is the first step in an investigation of successful haptic coordination between dancers, which will inform a subsequent design of a truly interactive robot leader.

    Co-design of forward-control and force-feedback methods for teleoperation of an unmanned aerial vehicle

    The core hypothesis of this ongoing research project is that co-designing haptic-feedback and forward-control methods for shared-control teleoperation will enable the operator to more readily understand the shared-control algorithm, better enabling him or her to work collaboratively with the shared-control technology. This paper presents a novel method that can be used to co-design forward control and force feedback in unmanned aerial vehicle (UAV) teleoperation. In our method, a potential field is developed to quickly calculate the UAV's risk of collision online. We also create a simple proxy for the operator's confidence, using the swiftness with which the operator sends commands to the UAV. We use these two factors to generate both a scale factor for a position-control scheme and the magnitude of the force feedback to the operator. Currently, this methodology is being implemented and refined in a 2D simulated environment. In the future, we will evaluate our methods with user-study experiments using a real UAV in a 3D environment.
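The two signals the abstract describes, a potential-field collision risk and a command-swiftness confidence proxy, can be sketched as follows. The specific repulsive-potential form, the exponential confidence mapping, and the force/gain formulas are assumptions for illustration, not the paper's equations:

```python
# Sketch of the co-design signals (assumed forms, not the authors' code):
# a repulsive potential field scores collision risk, and the rate at which
# the operator issues commands proxies his or her confidence.
import math

def collision_risk(uav_pos, obstacles, influence_radius=2.0):
    """Repulsive potential: grows like 1/d near obstacles, zero beyond radius."""
    risk = 0.0
    for ox, oy in obstacles:
        d = math.hypot(uav_pos[0] - ox, uav_pos[1] - oy)
        if d < influence_radius:
            risk += 0.5 * (1.0 / max(d, 1e-3) - 1.0 / influence_radius) ** 2
    return risk

def confidence_proxy(cmd_intervals, tau=0.5):
    """Shorter intervals between commands -> higher assumed confidence in [0, 1]."""
    mean_dt = sum(cmd_intervals) / len(cmd_intervals)
    return math.exp(-mean_dt / tau)

def feedback_and_gain(risk, confidence, k_force=1.0):
    """High risk and low confidence -> strong force feedback, damped commands."""
    force = k_force * risk * (1.0 - confidence)   # force-feedback magnitude
    gain = 1.0 / (1.0 + risk)                     # scale factor for position commands
    return force, gain

# Near an obstacle vs. in open space, with the same command timing.
f_near, g_near = feedback_and_gain(collision_risk((0.5, 0.0), [(1.0, 0.0)]),
                                   confidence_proxy([1.0, 1.2]))
f_far, g_far = feedback_and_gain(collision_risk((5.0, 5.0), [(1.0, 0.0)]),
                                 confidence_proxy([1.0, 1.2]))
```

In open space the risk is zero, so the operator feels no force and commands pass through at full scale; near the obstacle the force rises and the position-command gain shrinks, matching the paper's stated use of the two factors.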

    Nonlinear Modeling and Control of Driving Interfaces and Continuum Robots for System Performance Gains

    With the rise of (semi)autonomous vehicles and continuum robotics technology and applications, there has been increasing interest in controller and haptic interface design. The presence of nonlinearities in the vehicle dynamics is the main challenge in selecting control algorithms for real-time regulation and tracking of (semi)autonomous vehicles. Moreover, control of continuum structures with infinite dimensions proves difficult because of their complex dynamics and the soft, flexible nature of the manipulator body. Trajectory tracking and control of automotive and robotic systems require control algorithms that can effectively deal with the nonlinearities of the system without approximation, despite modeling uncertainties and input disturbances. Control strategies based on a linearized model are often inadequate for meeting precise performance requirements, so nonlinear techniques must be considered. Nonlinear control systems provide tools and methodologies for designing and realizing (semi)autonomous vehicles and continuum robots with extended specifications based on their operational mission profiles. This dissertation provides insight into various nonlinear controllers developed for (semi)autonomous vehicles and continuum robots as a guideline for future applications in the automotive and soft-robotics fields, along with a comprehensive assessment of the approaches and control strategies and a view of future areas of research.

    First, two vehicle haptic interfaces, a robotic grip and a joystick, each accompanied by nonlinear sliding mode control, were developed and studied on a steer-by-wire platform integrated with a virtual-reality driving environment. An operator-in-the-loop evaluation with 30 human test subjects investigated these haptic steering interfaces over a prescribed series of driving maneuvers through real-time data logging and post-test questionnaires; a conventional steering wheel with a robust sliding mode controller was used in all driving events for comparison. Test subjects operated the interfaces on a track comprising a double lane-change maneuver and a country-road driving event. Subjective and objective results demonstrate that the driver's experience can be enhanced by up to 75.3% with the robotic steering input, compared to the traditional steering wheel, during extreme maneuvers such as high-speed driving and sharp turns (e.g., hairpin turns).

    Second, a cellphone-inspired portable human-machine interface (HMI) that incorporates directional control of the vehicle as well as brake and throttle functionality into a single holistic device is presented. A nonlinear adaptive control technique and an optimal control approach based on driver intent were proposed to accompany the mechatronic system for combined longitudinal and lateral vehicle guidance. Ergonomically assisting disabled drivers by eliminating extensive arm and leg movements, the device was tested on a driving-simulator platform. Human test subjects evaluated the mechatronic system with various control configurations through obstacle-avoidance and city-road driving tests, with a conventional steering wheel and pedal set used for comparison. Subjective and objective results demonstrate that the mobile driving interface with the proposed control scheme can enhance the driver's performance by up to 55.8% compared to the traditional driving system during aggressive maneuvers. The system's superior performance during certain vehicle maneuvers, together with the approval it received from participants, demonstrates its potential as an alternative driving adaptation for disabled drivers.

    Third, a novel strategy is designed for trajectory control of a multi-section continuum robot in three-dimensional space to achieve accurate orientation, curvature, and section-length tracking. The formulation connects the continuum manipulator's dynamic behavior to a virtual discrete-jointed robot whose degrees of freedom are directly mapped to those of a continuum robot section under the constant-curvature hypothesis. Based on this connection, a computed-torque control architecture is developed for the virtual robot, for which inverse kinematics and dynamic equations are constructed and exploited, with appropriate transformations developed for implementation on the continuum robot. The control algorithm is validated in a realistic simulation and implemented on a six degree-of-freedom, two-section OctArm continuum manipulator. Both simulation and experimental results show that the proposed method can manage simultaneous extension/contraction, bending, and torsion actions on multi-section continuum robots with good tracking performance (e.g., steady-state arc-length and curvature tracking errors of 3.3 mm and 130 mm^-1, respectively).

    Last, semi-autonomous vehicles equipped with assistive control systems may exhibit degraded lateral behavior when aggressive driver steering commands compete with high levels of autonomy. This challenge can be mitigated with effective operator intent recognition, which can configure automated systems in context-specific situations where the driver intends to perform a steering maneuver. Here, an ensemble-learning-based driver intent recognition strategy was developed, and a nonlinear model predictive control algorithm was designed and implemented to generate haptic feedback for lateral vehicle guidance, assisting drivers in accomplishing their intended action. To validate the framework, operator-in-the-loop testing with 30 human subjects was conducted on a steer-by-wire platform with a virtual-reality driving environment; the roadway scenarios included lane changes, obstacle avoidance, intersection turns, and a highway exit. The automated system with learning-based driver intent recognition was compared to an automated system with a finite-state-machine-based intent estimator and to an automated system without any driver intent prediction across all driving events. Test results demonstrate that semi-autonomous vehicle performance can be enhanced by up to 74.1% with the learning-based intent predictor. The proposed holistic framework, integrating human intelligence, machine-learning algorithms, and vehicle control, can help resolve the driver-system conflict problem, leading to safer vehicle operations.
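As a flavor of the sliding mode control recurring in the dissertation abstract above, here is a minimal single-state tracking example. The first-order plant, the gain values, and the boundary-layer saturation are illustrative assumptions, not the dissertation's vehicle controller:

```python
# Minimal sliding-mode tracking sketch (illustrative only): a first-order
# plant x' = u + d with a bounded unknown disturbance d. The switching
# term rejects the disturbance without modeling it, which is the property
# that makes sliding mode attractive for nonlinear vehicle dynamics.
import math

def sat(s, phi=0.05):
    """Boundary-layer saturation, used instead of sign() to soften chattering."""
    return max(-1.0, min(1.0, s / phi))

dt, k = 0.001, 2.0                   # time step and switching gain (k > |d|)
x = 0.5                              # initial state, off the reference
for i in range(5000):
    t = i * dt
    x_d = math.sin(t)                # reference trajectory
    xd_dot = math.cos(t)
    d = 0.8 * math.sin(7 * t)        # unknown disturbance, |d| <= 0.8 < k
    s = x - x_d                      # sliding surface = tracking error
    u = xd_dot - k * sat(s)          # feedforward + switching term
    x += (u + d) * dt                # Euler integration of the plant

final_error = abs(x - math.sin(5000 * dt))
```

Because the gain k exceeds the disturbance bound, the error is driven into the boundary layer in finite time and stays there, so the final tracking error is on the order of phi regardless of the disturbance's shape.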