
    Middle-Ear Microsurgery Simulation to Improve New Robotic Procedures


    A robotic microsurgical forceps for transoral laser microsurgery

    Purpose: In transoral laser microsurgery (TLM), the confined, curved cylindrical structure of the laryngeal region poses functional challenges to surgeons who operate on its malignancies with rigid, single degree-of-freedom (DOF) forceps. These challenges include surgeon hand tremors, poor reachability, poor tissue surface perception, and poor ergonomic design. The integrated robotic microsurgical forceps presented here addresses these challenges through tele-operated tissue manipulation in TLM. Methods: The proposed device is designed in compliance with the spatial constraints of TLM. It incorporates a novel 2-DOF motorized microsurgical forceps end-effector, which is integrated with a commercial 6-DOF serial robotic manipulator. The integrated device is tele-operated through a haptic master interface, the Omega.7, and is augmented with a force sensor to measure tissue gripping force. The device is called RMF-2F, i.e., robotic microsurgical forceps with a 2-DOF end-effector and force sensing. RMF-2F is evaluated through validation trials and pick-and-place experiments with subjects. Furthermore, the device is trialled with expert surgeons through preliminary tasks in a simulated surgical scenario. Results: RMF-2F shows a motion tracking error of less than 400 μm. User trials demonstrate the device's accuracy in task completion and ease of manoeuvrability using the Omega.7, through improved trajectory following and execution times. Tissue gripping force is better regulated with haptic feedback (1.624 N) than without it (2.116 N). Surgeons evaluated the device positively, with appreciation for the improved access in the larynx and the gripping force feedback. Conclusions: RMF-2F offers an ergonomic and intuitive interface for intraoperative tissue manipulation in TLM. The device's performance, usability, and haptic feedback capability were positively evaluated by users as well as expert surgeons. RMF-2F introduces the benefits of robotic teleoperation, including: (i) overcoming hand tremors and wrist excursions, (ii) improved reachability and accuracy, and (iii) tissue gripping feedback for safe tissue manipulation.
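The gripping-force regulation described above can be sketched in a few lines: the master command closes the gripper, the measured tissue force is reflected back to the operator, and the closure command is attenuated as the force approaches a limit. All names, gains, and the 2 N limit here are illustrative assumptions, not the RMF-2F control law.

```python
# Minimal sketch of haptic-feedback-assisted grip regulation (illustrative;
# not the actual RMF-2F controller or Omega.7 API).

def reflect_force(measured_force_n: float, stiffness: float = 0.5) -> float:
    """Force rendered on the haptic master, proportional to the
    measured tissue gripping force (hypothetical rendering gain)."""
    return stiffness * measured_force_n

def teleop_grip_step(master_cmd: float, measured_force_n: float,
                     force_limit_n: float = 2.0) -> float:
    """Attenuate the commanded grip closure as the measured force
    approaches the limit; stop closing entirely at the limit."""
    if measured_force_n >= force_limit_n:
        return 0.0
    # linear attenuation: full command at 0 N, zero at the limit
    return master_cmd * (1.0 - measured_force_n / force_limit_n)
```

Under this kind of scheme the operator both feels the rising force and is prevented from commanding further closure past the limit, which is one plausible reading of why gripping forces were lower with haptic feedback enabled.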

    Human-Robot Collaborative Force-Controlled Micro-Drilling for Advanced Manufacturing and Medical Applications

    Robotic drilling finds applications in diverse fields ranging from advanced manufacturing to the medical industry. Recent advances in low-cost, human-safe collaborative robots (e.g., Sawyer) are enabling us to rethink how robots can be deployed for such tedious and time-consuming tasks. This thesis presents a robotic drilling methodology that combines force-control-enabled micro-drilling with human-robot collaboration to reduce programming effort and enhance drilling performance. A Sawyer robot from Rethink Robotics, which offers safe physical interaction with a human co-worker, kinesthetic teaching, and force control, is used as the test bed. The robot's end-effector was equipped with a Dremel drill fitted into a custom-designed housing, 3D-printed on an Object Prime 3D printer. The proposed approach applies human-robot collaboration in two ways. First, a human kinesthetically teaches a set of drill coordinates by physically holding the robot and guiding it to those locations; the robot then executes the drilling task by moving to the recorded locations. This avoids specifying the drill coordinates with respect to a fixed reference frame, reducing programming effort and setup time when transitioning between drilling jobs. Second, drilled-hole quality is shown to be enhanced when a human provides nominal physical support to the robot during certain drilling tasks. An experimental analysis of the impact of force control on micro-drilling revealed that the proposed robotic system can drill holes with a 0.5 mm diameter drill bit to an accuracy of +/- 0.05 mm, without breaking the bit over more than 100 holes. The proposed robotic drilling was validated in the following application domain: micro-drilling for composite repairs based on the through-thickness reinforcement (TTR) technique. For this purpose, sandwich beam samples were prepared using pre-preg unidirectional carbon fabric face sheets with a honeycomb core, and were subjected to four-point static loading until de-bonding occurred between the face sheet and the core. The samples were then repaired using the TTR technique: the proposed robotic drilling was used to drill 0.75 mm diameter holes in the damaged area of each sample, and carbon fiber rods, together with low-viscosity epoxy, were manually inserted into the drilled holes. The results revealed that the sandwich beams regained effective compressive strength after the TTR repair. Experiments also reveal the potential of the proposed robotic drilling technique in aerospace and automotive manufacturing involving drilling in complex postures, and in micro-drilling for orthopedic applications.
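The record-then-execute workflow above can be sketched as follows. The `Pose` type and the drill callback are illustrative stand-ins, not the Sawyer/Intera API: the point is only that taught poses are stored during guidance and replayed at execution time, so no fixed-frame coordinates ever need to be programmed.

```python
# Hedged sketch of kinesthetic teach-and-repeat drilling, assuming a
# hypothetical Pose type and drill callback (not the real robot API).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pose:
    x: float
    y: float
    z: float

class KinestheticDrillProgram:
    def __init__(self) -> None:
        self.waypoints: List[Pose] = []

    def record(self, pose: Pose) -> None:
        """Store the robot's current pose while the human guides the arm."""
        self.waypoints.append(pose)

    def execute(self, drill_at: Callable[[Pose], None]) -> int:
        """Revisit every taught pose, triggering the drill at each one;
        returns the number of holes drilled."""
        for pose in self.waypoints:
            drill_at(pose)
        return len(self.waypoints)
```

In use, `record` would be called once per guided placement and `execute` handed a function that moves the arm and runs the spindle under force control.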

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential detrimental factors relate to suboptimal communication among staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. As technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arise for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that enhances operator ergonomics by allowing perceptually-enabled, touchless, and natural interaction with the environment. The main feature of the proposed framework is its ability to acquire and utilise the wealth of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope, and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
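One common way a gaze-contingent interface such as the one described above turns fixations into touchless selections is dwell time: an item is "selected" once gaze rests on it continuously for longer than a threshold. The thesis abstract does not state its selection mechanism, so the function below is a generic sketch with an illustrative 0.8 s threshold, not the framework's actual algorithm.

```python
# Hedged sketch of dwell-time gaze selection (illustrative mechanism and
# threshold; not the thesis's actual implementation).
from typing import Iterable, Optional, Tuple

def dwell_select(samples: Iterable[Tuple[float, str]],
                 dwell_s: float = 0.8) -> Optional[str]:
    """samples: (timestamp_s, target_id) pairs from a gaze tracker.
    Returns the first target fixated continuously for at least
    `dwell_s` seconds, or None if no selection occurs."""
    current: Optional[str] = None
    start = 0.0
    for t, target in samples:
        if target != current:
            # gaze moved to a new target: restart the dwell timer
            current, start = target, t
        elif t - start >= dwell_s:
            return current
    return None
```

A real system would add gaze-sample smoothing and visual feedback of the running dwell timer, but the core selection logic is this simple.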

    Proof of Concept: Wearable Augmented Reality Video See-Through Display for Neuro-Endoscopy

    In minimally invasive surgery and in endoscopic procedures, the surgeon operates without direct visualization of the patient's anatomy. In image-guided surgery, solutions based on wearable augmented reality (AR) are among the most promising. The authors describe the characteristics an ideal head-mounted display (HMD) must have to guarantee safety and accuracy in AR-guided neurosurgical interventions, and design the ideal virtual content for guiding crucial tasks in neuro-endoscopic surgery. The selected sequence of AR content for effective guidance during surgery is tested in a Microsoft HoloLens-based app.

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, though it remains subject to human factors and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented in three sections. The first section covers outdoor and mobile AR applications such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Ultralight smart patch with reduced sensing array based on reduced graphene oxide for hand gesture recognition

    Flexible sensors for hand gesture recognition and human–machine interface (HMI) applications have advanced tremendously over the last decades. Current state-of-the-art sensors placed on fingers or embedded into gloves cannot fully capture all hand gestures and are often uncomfortable for the wearer. Herein, a flake-sphere hybrid structure of reduced graphene oxide (rGO) doped with polystyrene (PS) spheres is fabricated to construct a highly sensitive, fast-response, flexible piezoresistive sensor array, which is ultralight (only 2.8 g) and conforms remarkably well to curved surfaces. The flexible wrist-worn device with a five-element sensing array measures the pressure distribution around the wrist for accurate and comfortable hand gesture recognition. The intelligent wristband is able to classify 12 hand gestures with 96.33% accuracy across five participants using a machine learning algorithm. To showcase the wristband, a real-time system is developed to control a robotic hand via the classification results, which further demonstrates the potential of this work for HMI applications.
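The classification step above maps each five-element pressure vector from the wrist array to a gesture label. The paper reports a machine learning classifier without naming it here, so the sketch below uses a plain nearest-neighbour rule for concreteness; the data, labels, and choice of k are all illustrative.

```python
# Hedged sketch: classify a 5-sensor pressure vector by k-nearest
# neighbours (illustrative classifier, not the paper's actual model).
import math
from typing import List, Sequence, Tuple

def knn_predict(train: List[Tuple[Sequence[float], str]],
                sample: Sequence[float], k: int = 3) -> str:
    """Return the majority label among the k training vectors
    closest (Euclidean distance) to `sample`."""
    dists = sorted((math.dist(x, sample), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

In practice each training vector would be a calibrated pressure reading per participant, and a model such as an SVM or neural network could replace the nearest-neighbour rule without changing the surrounding pipeline.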

    Wearable pressure sensing for intelligent gesture recognition

    The development of wearable sensors has become a major area of interest due to their wide range of promising applications, including health monitoring, human motion detection, human-machine interfaces, electronic skin, and soft robotics. Pressure sensors in particular have attracted considerable attention in wearable applications. However, traditional pressure-sensing systems use rigid sensors to detect human motion; lightweight and flexible pressure sensors are required to improve wearer comfort. Furthermore, in comparison with conventional sensing techniques without smart algorithms, machine learning-assisted wearable systems can intelligently analyse data for classification or prediction, making the system 'smarter' for more demanding tasks. Combining flexible pressure sensors with machine learning is therefore a promising approach to human motion recognition. This thesis focuses on fabricating flexible pressure sensors and developing wearable applications to recognize human gestures. Firstly, a comprehensive literature review was conducted, covering the current state of the art in pressure sensing techniques and machine learning algorithms. Secondly, a piezoelectric smart wristband was developed to distinguish finger typing movements. Three machine learning algorithms, K-Nearest Neighbours (KNN), Decision Tree (DT), and Support Vector Machine (SVM), were used to classify the movements of different fingers. The SVM algorithm outperformed the other classifiers, with an overall accuracy of 98.67% on raw data and 100% on extracted features. Thirdly, a piezoresistive wristband was fabricated based on a flake-sphere composite configuration, in which reduced graphene oxide fragments are doped with polystyrene spheres to achieve both high sensitivity and flexibility. The flexible wristband measured the pressure distribution around the wrist for accurate and comfortable hand gesture classification. The intelligent wristband was able to classify 12 hand gestures with 96.33% accuracy for five participants using a machine learning algorithm. Moreover, to demonstrate the practical applications of the proposed method, a real-time system was developed to control a robotic hand according to the classification results. Finally, this thesis also demonstrates an intelligent piezoresistive sensor that recognizes different throat movements during pronunciation. The piezoresistive sensor was fabricated from two polydimethylsiloxane (PDMS) layers coated with silver nanowires and reduced graphene oxide films, with microstructures formed by polystyrene spheres between the layers. The highly sensitive sensor was able to distinguish the throat vibrations of five different spoken words with an accuracy of 96% using an artificial neural network algorithm.
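The thesis above compares classification on raw data against extracted features but does not list the features in this abstract. The sketch below shows time-domain features of the kind commonly extracted from a piezoelectric or piezoresistive signal window before classification; the specific feature set is an assumption for illustration.

```python
# Hedged sketch of simple time-domain feature extraction from one sensor
# window (common choices; not necessarily the thesis's feature set).
import math
from typing import Dict, Sequence

def extract_features(signal: Sequence[float]) -> Dict[str, float]:
    """Mean, RMS, peak-to-peak amplitude, and mean-crossing count
    of a single windowed sensor signal."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(v * v for v in signal) / n)
    p2p = max(signal) - min(signal)
    # count sign changes of the mean-centred signal
    zc = sum(1 for a, b in zip(signal, signal[1:])
             if (a - mean) * (b - mean) < 0)
    return {"mean": mean, "rms": rms, "p2p": p2p,
            "zero_crossings": float(zc)}
```

Feeding such compact feature vectors to a classifier, rather than raw samples, is one plausible reason the reported feature-based accuracy exceeded the raw-data accuracy.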