61 research outputs found
A General Framework for Shared Control in Robot Teleoperation with Force and Visual Feedback
In the last decade, the topic of human-robot interaction has received increasing interest from research and industry, as robots must now interface with human users to accomplish complex tasks. In this scenario, robotics engineers are required to take the human component into account in robot design and control. This is especially true in telerobotics, where interaction with the user plays an important role in the stability of the controlled system. By means of a thorough analysis and practical experiments, this contribution aims at giving a concrete idea of the aspects that need to be considered in the design of a complete control framework for teleoperated systems that are able to seamlessly integrate with a human operator through shared control.
Haptic-Based Shared-Control Methods for a Dual-Arm System
We propose novel haptic guidance methods for a dual-arm telerobotic manipulation system, which are able to deal with several different constraints, such as collisions, joint limits, and singularities. We combine the haptic guidance with shared-control algorithms for autonomous orientation control and collision avoidance, meant to further simplify the execution of grasping tasks. The stability of the overall system in various control modalities is presented and analyzed via passivity arguments. In addition, a human subject study is carried out to assess the effectiveness and applicability of the proposed control approaches in both simulated and real scenarios. Results show that the proposed haptic-enabled shared-control methods significantly improve the performance of grasping tasks with respect to classic teleoperation with neither haptic guidance nor shared control.
The classification and new trends of shared control strategies in telerobotic systems: A survey
Shared control, which permits a human operator and an autonomous controller to share the control of a telerobotic system, can reduce the operator's workload and/or improve performance during the execution of tasks. Owing to the great benefit of combining human intelligence with the superior power/precision of robots, shared control architectures span a wide spectrum of telerobotic systems. Although various shared control strategies have been proposed, a systematic overview that teases out the relations among different strategies is still absent. This survey therefore aims to provide a big picture of existing shared control strategies. To achieve this, we propose a categorization method and classify the shared control strategies into three categories: Semi-Autonomous Control (SAC), State-Guidance Shared Control (SGSC), and State-Fusion Shared Control (SFSC), according to the different ways control is shared between human operators and autonomous controllers. Typical scenarios for each category are listed, and the advantages/disadvantages and open issues of each category are discussed. Then, based on this overview of the existing strategies, new trends in shared control strategies, including "autonomy from learning" and "autonomy-level adaptation," are summarized and discussed.
Virtual Reality-Based Interface for Advanced Assisted Mobile Robot Teleoperation
This work proposes a new interface for the teleoperation of mobile robots based on virtual reality that allows a natural and intuitive interaction and cooperation between the human and the robot, which is useful for many situations, such as inspection tasks, the mapping of complex environments, etc. Contrary to previous works, the proposed interface does not seek realism of the virtual environment but provides the minimum necessary elements that allow the user to carry out the teleoperation task in a more natural and intuitive way. The teleoperation is carried out in such a way that the human user and the mobile robot cooperate synergistically to properly accomplish the task: the user guides the robot through the environment in order to benefit from the intelligence and adaptability of the human, whereas the robot automatically avoids collisions with objects in the environment in order to benefit from its fast response. The latter is carried out using the well-known potential field-based navigation method. The efficacy of the proposed method is demonstrated through experimentation with the Turtlebot3 Burger mobile robot in both simulation and real-world scenarios. In addition, usability and presence questionnaires were conducted with users of different ages and backgrounds to demonstrate the benefits of the proposed approach. In particular, the results of these questionnaires show that the proposed virtual reality-based interface is intuitive, ergonomic and easy to use. This research was funded by the Spanish Government (Grant PID2020-117421RB-C21 funded by MCIN/AEI/10.13039/501100011033) and by the Generalitat Valenciana (Grant GV/2021/181). Solanes, JE.; Muñoz García, A.; Gracia Calandin, LI.; Tornero Montserrat, J. (2022). Virtual Reality-Based Interface for Advanced Assisted Mobile Robot Teleoperation. Applied Sciences. 12(12):1-22. https://doi.org/10.3390/app12126071
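The collision-avoidance behaviour described above relies on the classic potential-field navigation method: an attractive force pulls the robot toward the goal while repulsive forces push it away from nearby obstacles. As a minimal illustrative sketch (the function name, the gains k_att and k_rep, and the influence distance d0 are generic textbook choices, not taken from the paper), one planar update step could look like:

```python
import numpy as np

def potential_field_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """One step of classic potential-field navigation: an attractive
    force toward the goal plus repulsive forces from every obstacle
    closer than the influence distance d0."""
    robot = np.asarray(robot, dtype=float)
    force = k_att * (np.asarray(goal, dtype=float) - robot)  # attractive term
    for obs in obstacles:
        diff = robot - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:
            # Repulsive term grows sharply as the robot nears the obstacle.
            force += k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
    return force  # commanded velocity direction for this step
```

In a teleoperation loop, the user's guidance command would typically be blended with this force, so the human steers globally while the robot reacts locally to obstacles.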
Robot Assisted Object Manipulation for Minimally Invasive Surgery
Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems, with decision-making capabilities, are not yet available.
In 2017, a structure to classify the research efforts toward autonomy achievable with surgical robots was proposed by Yang et al. Six different levels were identified: no autonomy, robot assistance, task autonomy,
conditional autonomy, high autonomy, and full autonomy. All the commercially available platforms in robot-assisted
surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could potentially introduce multiple benefits, such as decreasing surgeons' workload and fatigue and pursuing a consistent
quality of procedures. Ultimately, allowing surgeons to interpret the ample
and intelligent information from the system will enhance the surgical outcome and
positively reflect on both patients and society. Three main aspects are required to
introduce automation into surgery: the surgical robot must move with high precision,
have motion planning capabilities and understand the surgical scene. Besides
these main factors, depending on the type of surgery, there could be other aspects
that might play a fundamental role, such as compliance, stiffness, etc. This
thesis addresses three technological challenges encountered when trying to achieve
the aforementioned goals, in the specific case of robot-object interaction. First,
how to overcome the inaccuracy of cable-driven systems when executing fine and
precise movements. Second, planning different tasks in dynamically changing environments.
Lastly, how the understanding of a surgical scene can be used to solve
more than one manipulation task.
To address the first challenge, a control scheme relying on accurate calibration is
implemented to execute the pick-up of a surgical needle. Regarding the planning of
surgical tasks, two approaches are explored: one is learning from demonstration to
pick and place a surgical object, and the second is using a gradient-based approach
to trigger a smoother object repositioning phase during intraoperative procedures.
Finally, to improve scene understanding, this thesis focuses on developing a simulation
environment where multiple tasks can be learned based on the surgical scene
and then transferred to the real robot. Experiments proved that automation of the pick-and-place task of different surgical objects is possible. The robot was successfully
able to autonomously pick up a suturing needle, position a surgical device for
intraoperative ultrasound scanning and manipulate soft tissue for intraoperative organ
retraction. Although automation of surgical subtasks has been demonstrated in
this work, several challenges remain open, such as the capability of the generated
algorithms to generalise over different environmental conditions and different patients.
Leveraging Haptic Feedback to Improve Data Quality and Quantity for Deep Imitation Learning Models
Learning from demonstration (LfD) is a proven technique to teach robots new
skills. Data quality and quantity play a critical role in the performance of
LfD-trained models. In this paper, we analyze the effect of enhancing an existing
teleoperation data collection system with real-time haptic feedback; we observe
improvements in the collected data throughput and its quality for model
training. Our experiment testbed was a mobile manipulator robot that opened
doors with latch handles. Evaluation of teleoperated data collection on eight
real-world conference room doors found that adding haptic feedback improved
the data throughput by 6%. We additionally used the collected data to train six
image-based deep imitation learning models, three with haptic feedback and
three without it. These models were used to implement autonomous door-opening
with the same type of robot used during data collection. Our results show that
a policy from a behavior cloning model trained with haptic data performed on
average 11% better than its counterpart with no haptic feedback data,
indicating that haptic feedback resulted in collection of a higher quality
dataset.
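The comparison described above (imitation models trained with and without haptic data) can be illustrated with a deliberately simplified sketch: a linear policy fit by gradient descent, where the haptic variant simply concatenates the force/torque wrench to the visual features at each timestep. All names, feature dimensions, and data here are hypothetical stand-ins; the paper itself trains image-based deep imitation learning models, not linear ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data, one row per timestep:
vis = rng.normal(size=(200, 16))      # visual feature embedding
haptic = rng.normal(size=(200, 6))    # force/torque wrench reading
actions = rng.normal(size=(200, 2))   # expert teleoperation commands

def train_bc(features, actions, lr=1e-2, epochs=500):
    """Behavior cloning as regression: fit a linear policy a = f @ W
    by gradient descent on the mean squared error against the
    expert actions."""
    W = np.zeros((features.shape[1], actions.shape[1]))
    for _ in range(epochs):
        pred = features @ W
        grad = features.T @ (pred - actions) / len(features)
        W -= lr * grad
    return W

# Vision-only policy vs. vision + haptic policy: the haptic variant
# just widens the input with the wrench channels.
W_vis = train_bc(vis, actions)
W_hap = train_bc(np.hstack([vis, haptic]), actions)
```

The point of the sketch is the data-pipeline difference, not the model class: whether haptic signals help shows up as lower imitation error for the policy trained on the concatenated features, mirroring the paper's model-level comparison.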
Bimanual robot control for surface treatment tasks
This work develops a method to perform surface treatment tasks using a bimanual robotic system, i.e. two robot arms cooperatively performing the task. In particular, one robot arm holds the workpiece while the other robot arm has the treatment tool attached to its end-effector. Moreover, the human user teleoperates all six coordinates of the former robot arm and two coordinates of the latter robot arm, i.e. the teleoperator can move the treatment tool on the plane given by the workpiece surface. Furthermore, a force sensor attached to the treatment tool is used to automatically attain the desired pressure between the tool and the workpiece and to automatically keep the tool orientation orthogonal to the workpiece surface. In addition, to assist the human user during the teleoperation, several constraints are defined for both robot arms in order to avoid exceeding the allowed workspace, e.g. to avoid collisions with other objects in the environment. The theory used in this work to develop the bimanual robot control relies on sliding mode control and task prioritisation. Finally, the feasibility and effectiveness of the method are shown through experimental results using two robot arms.
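The pressure regulation above is a natural fit for sliding mode control: the controller drives a sliding variable built from the force error to zero and holds it there despite model uncertainty. The following one-dimensional sketch is purely illustrative (the sliding variable, the gains lam and K, and the tanh smoothing are generic textbook choices, not the paper's actual multi-task design):

```python
import numpy as np

def smc_force_step(f_meas, f_prev, f_des, dt, lam=5.0, K=2.0):
    """One step of a first-order sliding mode force controller.
    Sliding variable: s = e + lam * de, where e is the error between
    the desired and measured tool-workpiece contact force."""
    e = f_des - f_meas
    de = -(f_meas - f_prev) / dt   # error rate (f_des assumed constant)
    s = e + lam * de
    # tanh replaces the discontinuous sign(s) to limit chattering.
    return K * np.tanh(s)          # normal-direction velocity command
```

In the actual bimanual scheme such a force objective would sit inside a task-prioritisation hierarchy, below hard workspace constraints and alongside the orientation task, with the human command filling the remaining degrees of freedom.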
Bimanual robot control for surface treatment tasks
This is an Author's Accepted Manuscript of an article published in the International Journal of Systems Science [copyright Taylor & Francis]. This work was supported by Generalitat Valenciana [grant numbers ACIF/2019/007 and GV/2021/181] and the Spanish Ministry of Science and Innovation [grant number PID2020-117421RB-C21]. García-Fernández, A.; Solanes, JE.; Gracia Calandin, LI.; Muñoz-Benavent, P.; Girbés-Juan, V.; Tornero, J. (2022). Bimanual robot control for surface treatment tasks. International Journal of Systems Science. 53(1):74-107. https://doi.org/10.1080/00207721.2021.1938279
- …