
    Haptic Control of Mobile Manipulators Interacting with the Environment

    Get PDF
    In modern society, the haptic control of robotic manipulators plays a central role in many industrial fields, because it augments human capabilities and prevents many hazards. Many studies focus on improving the operator experience, aiming to simplify the control interface and to increase the level of intuitiveness that the system can offer to a non-trained user. This work focuses on the control of mobile manipulator platforms, which are gaining popularity in industry because of their capability to combine manipulation of the environment with a potentially unlimited workspace. In particular, three aspects of the haptic shared control of mobile manipulators are studied. First, the manipulation of liquid containers is analyzed, and a new feed-forward filtering technique is proposed that guarantees slosh-free motion without any a priori knowledge of the imposed trajectory. Then, trajectory planning for a mobile base in an unstructured environment is considered: a new planner based on the properties of B-spline curves is studied and tested in both the haptic and the autonomous case. Finally, the control of a mobile manipulator by means of a single commercial haptic device is addressed, and a new mapping technique that provides an intuitive control interface for the human operator is presented. The effectiveness of the proposed methods is confirmed via several experimental tests.
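
    As a rough illustration of the kind of curve such a planner works with (this is not the thesis' algorithm), the sketch below evaluates a clamped B-spline path for a mobile base from a few hypothetical control points using SciPy; the control points, degree and sampling density are all assumed values.

        # Minimal sketch (not the thesis' planner): evaluating a 2D B-spline path
        # for a mobile base from a handful of control points, using SciPy.
        import numpy as np
        from scipy.interpolate import BSpline

        degree = 3
        # Hypothetical control points of the base path in the plane (x, y) [m].
        ctrl = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, 1.0], [4.0, 1.0]])
        n = len(ctrl)
        # Clamped uniform knot vector so the curve starts/ends at the first/last control point.
        knots = np.concatenate(([0.0] * degree,
                                np.linspace(0.0, 1.0, n - degree + 1),
                                [1.0] * degree))
        path = BSpline(knots, ctrl, degree)      # position along the path
        velocity = path.derivative(1)            # first derivative, useful as a heading reference

        u = np.linspace(0.0, 1.0, 200)
        samples = path(u)                        # 200 x 2 array of (x, y) samples
        headings = np.arctan2(velocity(u)[:, 1], velocity(u)[:, 0])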

    A Plug-In Feed-Forward Control for Sloshing Suppression in Robotic Teleoperation Tasks

    Get PDF
    In this paper, the problem of suppressing sloshing dynamics in liquid-handling robotic systems is addressed by designing a dynamic filter that, starting from the desired motion of the liquid container, computes the complete position/orientation trajectory for the robot end-effector. Specifically, the adopted design combines a filtering technique, which suppresses the frequency content of the reference motion that may cause liquid oscillations, with an active compensation of lateral accelerations obtained through a proper re-orientation of the container. In principle, the latter contribution requires knowledge of the acceleration of the reference trajectory, but thanks to a harmonic smoother that shapes the original motion, the acceleration value can be obtained at runtime. In this way, the proposed method can also be applied to reference motions that are not known in advance, e.g. commands directly provided by a human operator. This possibility has been demonstrated by means of a number of experimental tests in which the user teleoperates the robot carrying the liquid container by simply moving their hand in free space, with the hand's 3D position detected by a motion capture system.
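
    A minimal sketch of the general idea (not the paper's exact filter design): a critically damped second-order smoother makes the acceleration of the operator's command available at runtime, and the container is tilted so as to cancel the lateral acceleration felt by the liquid. The bandwidth WN, the sampling time DT and the function names are assumptions for illustration only.

        # Minimal sketch of the idea (not the paper's exact filter): smooth the
        # operator's position command online with a critically damped second-order
        # filter, so that the acceleration is available at runtime, then tilt the
        # container to align its normal with the apparent gravity vector.
        import numpy as np

        G = 9.81            # gravity [m/s^2]
        WN = 8.0            # smoother bandwidth [rad/s], assumed value
        DT = 0.001          # control period [s], assumed value

        x = v = 0.0         # smoother state: filtered position and velocity (one axis)

        def smooth_step(x_ref):
            """One step of a critically damped 2nd-order smoother; returns (pos, acc)."""
            global x, v
            a = WN * WN * (x_ref - x) - 2.0 * WN * v   # filtered acceleration
            v += a * DT
            x += v * DT
            return x, a

        def tilt_angle(a_lat):
            """Container tilt that cancels the lateral acceleration felt by the liquid."""
            return np.arctan2(a_lat, G)

        # Example: end-effector pitch command obtained from a raw operator command.
        x_cmd, a_cmd = smooth_step(0.3)
        pitch_cmd = tilt_angle(a_cmd)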

    Toward Future Automatic Warehouses: An Autonomous Depalletizing System Based on Mobile Manipulation and 3D Perception

    Get PDF
    This paper presents a mobile manipulation platform designed for autonomous depalletizing tasks. The proposed solution integrates machine vision, control and mechanical components to increase flexibility and ease of deployment in industrial environments such as warehouses. A collaborative robot mounted on a mobile base is proposed, equipped with a simple manipulation tool and a 3D in-hand vision system that detects parcel boxes on a pallet and pulls them one by one onto the mobile base for transportation. This setup avoids the cumbersome implementation of pick-and-place operations, since it does not require lifting the boxes. The 3D vision system provides an initial estimate of the pose of the boxes on the top layer of the pallet and accurately detects the separation between the boxes for manipulation. Force measurements provided by the robot, together with admittance control, are exploited to verify the correct execution of the manipulation task. The proposed system was implemented and tested in a simplified laboratory scenario, and the results of experimental trials are reported.
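
    The following is a minimal sketch of how an admittance law and a force check can be combined to verify a pulling action; it is not the paper's controller, and the virtual mass and damping, the force threshold and the function names are illustrative assumptions.

        # Minimal sketch (assumed values, not the paper's controller): a 1-D admittance
        # law along the pulling direction, plus a simple force check to verify that the
        # tool is actually engaging a box while pulling it onto the mobile base.
        M, D = 2.0, 40.0          # virtual mass [kg] and damping [Ns/m], assumed
        DT = 0.008                # control period [s], assumed
        F_ENGAGE = 5.0            # minimum pulling force expected when a box is hooked [N], assumed

        v = 0.0                   # admittance velocity state

        def admittance_step(f_meas, f_des):
            """Update the commanded velocity from measured vs. desired contact force (M*dv/dt + D*v = f_meas - f_des)."""
            global v
            a = (f_meas - f_des - D * v) / M
            v += a * DT
            return v

        def pull_verified(f_meas):
            """The pull is considered successful only while the contact force stays above F_ENGAGE."""
            return f_meas > F_ENGAGE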

    A hybrid teleoperation control scheme for a single-arm mobile manipulator with omnidirectional wheels

    No full text
    In this paper, a hybrid position-position and position-velocity teleoperation control scheme for a generic mobile manipulator is presented and discussed. The mobile manipulator is composed of a mobile platform and a 5-DOF arm, and the proposed control scheme allows the simultaneous control of both devices by means of a single haptic device characterized by an open kinematic chain and not specifically designed for mobile manipulator teleoperation (e.g. a Phantom Omni). The proposed teleoperation controller overcomes the mismatch between the control signals to be sent to the arm (position) and to the mobile platform (velocity) through a proper partition of the master device workspace. Tests have been performed both in simulation and on a real setup, composed of a 6-DOF Phantom Omni haptic device acting as master and a single-arm Kuka youBot omnidirectional manipulator acting as slave. Experimental results related to a pick-and-place task, performed on the real setup and involving the motion of both the arm and the platform, are reported and commented.
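
    A minimal sketch of the workspace-partition idea under assumed parameters (not the paper's exact mapping): master positions inside an inner sphere are mapped to arm positions, while displacements beyond the sphere boundary are mapped to base velocities. The radius R_INNER, the gains and the function name are illustrative.

        # Minimal sketch of the workspace partition: position-position mapping for the
        # arm inside an inner sphere, position-velocity mapping for the base outside it.
        import numpy as np

        R_INNER = 0.08     # radius of the position-control region [m], assumed
        K_POS = 3.0        # master-to-arm position scaling, assumed
        K_VEL = 2.0        # boundary-penetration to base-velocity gain [1/s], assumed

        def map_master(p_master):
            """p_master: 3D master position w.r.t. the workspace centre [m]."""
            r = np.linalg.norm(p_master)
            if r <= R_INNER:
                arm_pos_cmd = K_POS * p_master                    # position-position mapping
                base_vel_cmd = np.zeros(3)
            else:
                direction = p_master / r
                arm_pos_cmd = K_POS * R_INNER * direction         # arm command saturates at the boundary
                base_vel_cmd = K_VEL * (r - R_INNER) * direction  # position-velocity mapping
            return arm_pos_cmd, base_vel_cmd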

    Toward the Next Generation of Robotic Waiters

    No full text
    The gap between human waiters and state-of-the-art robot systems that try to serve something to drink is often embarrassing, with the former able to manipulate glasses and trays, or glasses on trays, with incredible dexterity, and the latter moving with incredible slowness. In this video, we want to show that robots can do better, by moving a bottle or a tankard full of beer simply placed on a flat steel plate connected to the flange of a robot manipulator. The robot tracks the trajectory defined by a human operator who moves their hand in 3D space, with a motion capture system acquiring the hand position in real time. A feed-forward controller, placed between the user and the robot and based on the combination of a smoother and a proper orientation compensation, counteracts the lateral accelerations and suppresses the sloshing of the liquid. Finally, a camera mounted on the robot arm provides visual feedback to the operator for monitoring purposes. The challenge for the operator was to make the robot drop the carried object: will the feed-forward control be robust enough to avoid this, even at high speed? Watch the video and find out.

    sEMG-Based Human-in-the-Loop Control of Elbow Assistive Robots for Physical Tasks and Muscle Strength Training

    No full text
    In this letter we present an sEMG-driven human-in-the-loop (HITL) control designed to allow an assistive robot to produce proper support forces both for muscular effort compensation, i.e. for assistance in physical tasks, and for muscular effort generation, i.e. for application in muscle strength training exercises related to the elbow joint. By employing our control strategy, based on a Double Threshold Strategy (DTS) with a standard PID regulator, we report that our approach can successfully achieve a target, quantifiable level of muscle activity assistance. To this end, an experimental concept validation was carried out involving four healthy subjects in physical and muscle strength training tasks; single-subject and global results show that the proposed sEMG-driven control strategy was able to limit the elbow muscular activity to an arbitrary level for effort compensation objectives, and to impose a lower bound on the sEMG signals for effort generation goals. In addition, a subjective qualitative evaluation of the robotic assistance was carried out by means of a questionnaire. The obtained results open future possibilities for a simplified use of sEMG measurements to obtain a target, quantitatively defined robot assistance for human joints and muscles.
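
    A minimal sketch of a double-threshold regulation of the sEMG envelope with a PID term; this only illustrates the mechanism described above, and the band limits, gains, control period and function names are assumptions, not the authors' implementation.

        # Minimal sketch: a PID acts on the sEMG envelope only when it leaves the
        # [LOW, HIGH] band, raising assistance if the muscle works too hard and
        # lowering it (loading the user) if the muscle works too little.
        LOW, HIGH = 0.15, 0.30      # normalised sEMG thresholds, assumed
        KP, KI, KD = 8.0, 2.0, 0.1  # PID gains, assumed
        DT = 0.01                   # control period [s], assumed

        integral, prev_err = 0.0, 0.0

        def assist_torque_step(emg_env):
            """emg_env: normalised sEMG envelope in [0, 1]; returns the assistive torque update."""
            global integral, prev_err
            if emg_env > HIGH:
                err = emg_env - HIGH        # too much effort -> increase assistance
            elif emg_env < LOW:
                err = emg_env - LOW         # too little effort -> reduce assistance
            else:
                err = 0.0                   # inside the band: leave the torque unchanged
            integral += err * DT
            deriv = (err - prev_err) / DT
            prev_err = err
            return KP * err + KI * integral + KD * deriv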

    REMODEL. WP3. User And System Interface. T3_3. Teaching By Demonstration Of Skills For New Assembly References And Tasks. Evaluation of physical human-robot interaction modalities. v0

    No full text
    The datasets contain the data related to an experiment carried out involving four subjects – named U1, U2, U3, U4 – in a series of physical and muscle strength training tasks, related to the publication: R. Meattini, D. Chiaravalli, G. Palli and C. Melchiorri, "sEMG-Based Human-in-the-Loop Control of Elbow Assistive Robots for Physical Tasks and Muscle Strength Training," in IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5795-5802, Oct. 2020, doi: 10.1109/LRA.2020.3010741.

    REMODEL. WP3. User And System Interface. T3_4. Teaching By Demonstration Of Skills For New Assembly References And Tasks. Augmented Kinesthetic Teaching. v0

    No full text
    The datasets contain the data related to an augmented kinesthetic teaching system based on surface electromyographic (sEMG) measurements from the operator's forearm. Specifically, sEMG signals are used for minimal-training, unsupervised estimation of the co-contraction level of the forearm muscles. In this way, also exploiting vibrotactile bio-feedback, we evaluate the ability of operators to stiffen their hand during kinesthetic teaching in order to modulate the estimated level of muscle co-contraction so as to (i) match target levels and (ii) command the opening/closing of a gripper, i.e. to exploit their sEMG signals for effective augmented robot kinesthetic teaching tasks. The data are related to the publication: R. Meattini, D. Chiaravalli, L. Biagiotti, G. Palli and C. Melchiorri, "Combining Unsupervised Muscle Co-Contraction Estimation With Bio-Feedback Allows Augmented Kinesthetic Teaching," in IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6180-6187, Oct. 2021, doi: 10.1109/LRA.2021.3092269.
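
    As an illustration only (the paper's unsupervised estimator is not reproduced here), the sketch below computes a common co-contraction index, the minimum of the normalised flexor and extensor envelopes, and maps it to a vibrotactile intensity and to simple open/close gripper thresholds; all thresholds and names are assumptions.

        # Minimal sketch of a co-contraction index and its use for bio-feedback and
        # gripper commands; thresholds and the proportional feedback law are assumed.
        def cocontraction(flexor_env, extensor_env):
            """Both inputs are sEMG envelopes normalised to [0, 1]."""
            return min(flexor_env, extensor_env)

        def feedback_and_gripper(cc, close_thr=0.6, open_thr=0.2):
            vibro_intensity = cc                 # proportional vibrotactile bio-feedback, assumed
            if cc > close_thr:
                gripper_cmd = "close"
            elif cc < open_thr:
                gripper_cmd = "open"
            else:
                gripper_cmd = "hold"
            return vibro_intensity, gripper_cmd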

    REMODEL. WP3. User And System Interface. T3_3. Teaching By Demonstration Of Skills For New Assembly References And Tasks. Human to robot hand motion mapping method. v0

    No full text
    The datasets contain the data related to a novel hybrid approach that combines joint and Cartesian mappings in a single solution. In particular, we exploit a priori, in-hand information about the areas of the workspace in which the thumb and the other fingertips can come into contact. This makes it possible to define, for each finger, a transition zone from joint to Cartesian mapping. As a consequence, both the hand shape during volar grasps and the correctness of the fingertip positions for precision grasps are preserved, despite the master-slave kinematic dissimilarities. The data are related to the publication: R. Meattini, D. Chiaravalli, G. Palli and C. Melchiorri, "Exploiting In-Hand Knowledge in Hybrid Joint-Cartesian Mapping for Anthropomorphic Robotic Hands," in IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5517-5524, July 2021, doi: 10.1109/LRA.2021.3078658.
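
    A minimal sketch of the blending idea under assumed distances (not the paper's exact method): each finger follows a joint-space mapping far from the thumb-opposition zone and a Cartesian fingertip mapping inside it, with a smooth transition in between. The radii and function names are illustrative.

        # Minimal sketch: blend joint-space and Cartesian fingertip mappings with a
        # weight that depends on the fingertip's distance from the contact zone.
        import numpy as np

        R_ZONE = 0.03      # radius of the contact (precision-grasp) zone [m], assumed
        R_BLEND = 0.06     # distance at which the mapping is purely joint-space [m], assumed

        def blend_weight(dist_to_zone):
            """0 -> pure Cartesian mapping, 1 -> pure joint mapping."""
            return np.clip((dist_to_zone - R_ZONE) / (R_BLEND - R_ZONE), 0.0, 1.0)

        def hybrid_finger_command(q_joint_map, q_cartesian_map, fingertip_dist):
            """Blend the two candidate joint configurations (NumPy arrays) for one finger."""
            w = blend_weight(fingertip_dist)
            return w * q_joint_map + (1.0 - w) * q_cartesian_map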

    REMODEL. WP6. Sensory Systems And Mechatronic Tools. T6_2. Evaluation of a deformable skin tactile sensor. v0

    No full text
    The dataset contains the data related to three different types of data acquisition, on which we trained and tested an artificial neural network (ANN). The training and testing of the ANN is carried out, for each combination of inflated-air and vertical-force levels, by means of a nested cross-validation (CV) composed of two nested loops. The first data acquisition consists of the output of the Inertial Measurement Unit (IMU) while the UR5 robotic manipulator presses on the sensor surface with a metal-stick end-effector at 42 different locations on a grid (namely, the 42-locations-session); the data acquired from the tactile sensor during this process are labeled based on the Cartesian position of the robot, therefore associating the signals with 42 different classes. The second data acquisition is related to the IMU data when the robot presses on the tactile sensor by means of a linear-like end-effector, applying the orientations 0°, 30°, 60°, 90°, 120° and 150° (namely, the 6-orientations-session); in this case, the signals are labeled according to 6 classes, corresponding to the six orientations of the linear region of contact points. Finally, the third data acquisition is built in the same way as the second, but considering the orientations 0°, 45°, 90° and 135° for the linear region of contact points (namely, the 4-orientations-session), so that the signals are labeled according to 4 classes. For each type of data acquisition, we repeated the experiment two times, and, for each repetition, we acquired the data for 3 levels of vertical force applied on the tactile sensor – 0.5 N, 1 N and 2 N (using the information from the force sensor at the base of the tactile sensor) – and 3 levels of inflating air – 5 ml, 7 ml and 10 ml (measured by using a syringe). In this way, we obtained a total of 54 datasets (27 datasets for each of the two repetitions). The data are related to the publication: Y. Iwamoto, R. Meattini, D. Chiaravalli, G. Palli, K. Shibuya and C. Melchiorri, "A Low Cost Tactile Sensor for Large Surfaces Based on Deformable Skin with Embedded IMU," 2020 IEEE Conference on Industrial Cyberphysical Systems (ICPS), Tampere, Finland, 2020, pp. 501-506, doi: 10.1109/ICPS48405.2020.9274737.
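
    A minimal sketch of such a nested cross-validation on one of the datasets, using scikit-learn with placeholder feature/label arrays; the network size grid, the fold counts and the feature dimensionality are assumptions, not the authors' pipeline.

        # Minimal sketch of nested CV: an inner grid search selects the ANN
        # hyper-parameters, an outer loop estimates the classification accuracy.
        import numpy as np
        from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # X: IMU feature vectors, y: one of the 42 location classes (placeholder data).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(420, 9))
        y = np.repeat(np.arange(42), 10)

        pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
        param_grid = {"mlpclassifier__hidden_layer_sizes": [(32,), (64,), (64, 32)]}

        inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
        outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        search = GridSearchCV(pipe, param_grid, cv=inner)   # inner loop: model selection
        scores = cross_val_score(search, X, y, cv=outer)    # outer loop: evaluation
        print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")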