
    Motion Tracking for Minimally Invasive Robotic Surgery


    Image-based visual servoing using improved image moments in 6-DOF robot systems

    Visual servoing plays an important role in automated robotic manufacturing systems. This thesis focuses on this issue and proposes an improved method comprising an enhanced image pre-processing (IP) algorithm and a modified IBVS algorithm. As the first contribution, an improved IP algorithm based on morphological theory is presented to remove unwanted speckles and balance the illumination during image processing. After this enhancement, the useful information in the image becomes prominent and can be used to extract accurate image features. An improved IBVS algorithm is then introduced for an eye-in-hand system as the second contribution. This eye-in-hand system comprises a 6 Degree of Freedom (DOF) robot and a camera. The improved IBVS algorithm uses image moments as the image features, instead of detecting special points for feature extraction as in traditional IBVS. Compared with traditional IBVS, choosing image moments as the image features increases the stability of the system and extends the range of applicable objects. The obtained image features are then used to generate the control signals for the robot to track the target object. The Jacobian matrix describing the relationship between the motion of the camera and the velocity of the image features is also discussed, and a new, simple method is proposed for estimating the depth involved in the Jacobian matrix. To decouple the obtained Jacobian matrix so that the camera motion can be controlled with individual image features, a four-stage sequence control is also introduced to improve the control performance.
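As a sketch of the moment-based feature extraction described above (an illustrative minimal version, not the thesis's exact feature set), the area, centroid, and orientation of a segmented object can be computed from its low-order image moments:

```python
import numpy as np

def moment_features(binary):
    """Low-order moment features of a binary segmentation: area, centroid,
    and object orientation derived from the central second moments."""
    ys, xs = np.nonzero(binary)
    m00 = float(len(xs))               # area (zeroth moment)
    xc, yc = xs.mean(), ys.mean()      # centroid (first moments / area)
    mu20 = ((xs - xc) ** 2).sum()      # central second moments
    mu02 = ((ys - yc) ** 2).sum()
    mu11 = ((xs - xc) * (ys - yc)).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # principal-axis angle
    return m00, xc, yc, theta
```

Unlike point features, these quantities are averages over the whole segmented region, which is why moment-based IBVS tends to be less sensitive to per-pixel noise and applies to objects without distinctive corner points.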

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually enabled, touchless, and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope, and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.

    Smart Camera Robotic Assistant for Laparoscopic Surgery

    In recent decades, laparoscopic surgery has become daily practice in operating rooms worldwide, and its evolution is tending towards less invasive techniques. In this scenario, robotics has found a wide field of application, from slave robotic systems that replicate the movements of the surgeon to autonomous robots able to assist the surgeon in certain maneuvers or to perform autonomous surgical tasks. However, these systems require the direct supervision of the surgeon, and their capacity for making decisions and adapting to dynamic environments is very limited. This PhD dissertation presents the design and implementation of a smart camera robotic assistant to collaborate with the surgeon in a real surgical environment. First, it presents the design of a novel camera robotic assistant able to augment the capacities of current vision systems. This robotic assistant is based on an intra-abdominal camera robot, which is completely inserted into the patient's abdomen and can be moved freely along the abdominal cavity by means of magnetic interaction with an external magnet. To provide the camera with autonomy of motion, the external magnet is coupled to the end effector of a robotic arm, which controls the shift of the camera robot along the abdominal wall. In this way, the robotic assistant proposed in this dissertation has six degrees of freedom, which provide a wider field of view than traditional vision systems and allow different perspectives of the operating area. On the other hand, the intelligence of the system is based on a cognitive architecture specially designed for autonomous collaboration with the surgeon in real surgical environments. The proposed architecture simulates the behavior of a human assistant, with a natural and intuitive human-robot interface for communication between the robot and the surgeon. The cognitive architecture also includes learning mechanisms to adapt the behavior of the robot to the different ways of working of surgeons, and to improve the robot's behavior through experience, in a similar way as a human assistant would. The theoretical concepts of this dissertation have been validated both through in-vitro experimentation in the medical robotics labs of the University of Malaga and through in-vivo experimentation with pigs at the IACE Center (Instituto Andaluz de Cirugía Experimental), performed by expert surgeons.

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotic research, offering insights into the recent state of the art and prospects for improvement.

    Visual Servoing For Robotic Positioning And Tracking Systems

    Visual servoing is a robot control method in which camera sensors are used inside the control loop, introducing visual feedback into the robot control loop to enhance control performance when accomplishing tasks in unstructured environments. In general, visual servoing can be categorized into image-based visual servoing (IBVS), position-based visual servoing (PBVS), and hybrid approaches. To improve the performance and robustness of visual servoing systems, the research on IBVS for robotic positioning and tracking systems mainly focuses on camera configuration, image features, pose estimation, and depth determination. In the first part of this research, two novel multiple-camera configurations of visual servoing systems are proposed for robotic manufacturing systems positioning large-scale workpieces. The main advantage of these two multiple-camera configurations is that the depths of target objects or target features are constant or can be determined precisely by using computer vision. Hence the accuracy of the interaction matrix is guaranteed, and the positioning performance of the visual servoing systems can be improved remarkably. The simulation results show that the proposed multiple-camera configurations of visual servoing for large-scale manufacturing systems can satisfy the demand for high-precision positioning and assembly in the aerospace industry. In the second part of this research, two improved image features for planar centrally symmetric objects are proposed based on image moment invariants, which can represent the pose of target objects with respect to the camera frame. A visual servoing controller based on the proposed image moment features is designed, and the control performance of the robotic tracking system is thereby improved compared with the method based on the commonly used image moment features. Experimental results on a 6-DOF robot visual servoing system demonstrate the efficiency of the proposed method. 
Lastly, to address the challenge of choosing proper image features for planar objects that yield a maximally decoupled structure of the interaction matrix, a neural network (NN) is applied as an estimator of the target object pose with respect to the camera frame, based on the image moment invariants. Compared with previous methods, this scheme avoids image interaction matrix singularities and image local minima in IBVS. Furthermore, an analytical form of depth computation is given by using classical geometrical primitives and image moment invariants. A visual servoing controller is designed, and the tracking performance is enhanced for robotic tracking systems. Experimental results on a 6-DOF robot system are provided to illustrate the effectiveness of the proposed scheme.
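For reference, the classical interaction matrix for a normalized point feature, the textbook baseline that the moment-based features above improve on, can be sketched as follows together with the standard proportional velocity command v = -λ L⁺ (s − s*). The depth Z is assumed known here, which is precisely the quantity the depth-computation schemes above supply; the function names are illustrative.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point (x, y) at
    depth Z: it maps the camera twist [vx, vy, vz, wx, wy, wz] to the feature
    velocity [x_dot, y_dot]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_camera_twist(points, desired, Z, lam=0.5):
    """Stack the per-point interaction matrices and apply the proportional
    IBVS law v = -lam * pinv(L) @ (s - s*)."""
    L = np.vstack([point_interaction_matrix(x, y, Z) for x, y in points])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error
```

The pseudoinverse step is where the decoupling issue mentioned above bites: with poorly chosen features, L is ill-conditioned or singular at some poses, which motivates both the moment-invariant features and the NN-based pose estimator.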