
    Face, content, and construct validity of the EndoViS training system for objective assessment of psychomotor skills of laparoscopic surgeons

    Background The aim of this study is to present the face, content, and construct validity of the endoscopic orthogonal video system (EndoViS) training system and to determine its efficiency as a training and objective assessment tool for surgeons’ psychomotor skills. Methods Thirty-five surgeons and medical students participated in this study: 11 medical students, 19 residents, and 5 experts. All participants performed four basic skill tasks using conventional laparoscopic instruments and the EndoViS training system. Subsequently, participants filled out a questionnaire regarding the design, realism, and overall functionality of the system, and its capacity to train hand–eye coordination and depth perception, rated on a 5-point Likert scale. Motion data of the instruments were obtained by means of two webcams built into a laparoscopic physical trainer. To identify the surgical instruments in the images, colored markers were placed on each instrument. Thirteen motion-related metrics were used to assess the laparoscopic performance of the participants. Statistical analysis of performance was carried out between the novice, intermediate, and expert groups. Internal consistency of all metrics was analyzed with Cronbach’s α test. Results Overall scores for the features of the EndoViS system were positive. Participants agreed on the usefulness of the tasks and the training capacities of the EndoViS system (score >4). Results showed significant differences in the execution of three of the skill tasks performed by participants. Seven metrics showed construct validity for the assessment of performance, with high consistency levels. Conclusions The EndoViS training system has been successfully validated. Results showed that EndoViS was able to differentiate between participants of varying laparoscopic experience. This simulator is a useful and effective tool for the objective assessment of surgeons’ laparoscopic psychomotor skills.
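As a hedged illustration of the kind of analysis the abstract describes, the sketch below computes one motion-related metric (path length of an instrument-tip trajectory) and Cronbach’s α over a participants × metrics score matrix. The array shapes and the random placeholder data are assumptions for demonstration, not the study’s actual measurements.

```python
# Sketch: one motion metric (path length) and Cronbach's alpha for internal
# consistency across metrics. Data below are random placeholders.
import numpy as np

def path_length(tip_xyz: np.ndarray) -> float:
    """Total distance travelled by an instrument tip, given an (N, 3) trajectory."""
    return float(np.linalg.norm(np.diff(tip_xyz, axis=0), axis=1).sum())

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x metrics) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of per-metric variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the summed scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical example: 35 participants, 13 metrics.
rng = np.random.default_rng(0)
scores = rng.normal(size=(35, 13))
print(cronbach_alpha(scores))
```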

    Automatic Performance Level Assessment in Minimally Invasive Surgery Using Coordinated Sensors and Composite Metrics

    Skills assessment in Minimally Invasive Surgery (MIS) has been a challenge for training centers for a long time. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can, therefore, involve a margin of bias. The current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer from limitations and influence the surgeon’s motion. Thus, evaluating the skills of MIS surgeons and trainees objectively has become an increasing concern. In this work, we integrate and coordinate multiple camera sensors to assess the performance of MIS trainees and surgeons. This study aims to develop an objective, data-driven assessment that takes advantage of multiple coordinated sensors. The technical framework for the study is a synchronized network of sensors that captures large sets of measures from the training environment. The measures are then processed to produce a reliable set of individual and composite metrics, coordinated in time, that suggest patterns of skill development. The sensors are non-invasive, real-time, and coordinated over many cues, such as eye movement, external views of the body and instruments, and internal views of the operative field. The platform is validated by a case study of 17 subjects and 70 sessions. The results show that the platform output is highly accurate and reliable in detecting patterns of skill development and predicting the skill level of the trainees.
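The following sketch illustrates, under stated assumptions, two ingredients of such a platform: resampling two hypothetical sensor streams (gaze speed and tool speed) onto a shared clock and combining them into a composite metric. The stream names, sampling rates, and weights are illustrative only and are not taken from the thesis.

```python
# Sketch: time-aligning two sensor streams to a common clock and forming a
# composite metric from normalized individual metrics (hypothetical data).
import numpy as np

def resample(t_src, values, t_common):
    """Linearly interpolate a 1-D sensor signal onto a shared time base."""
    return np.interp(t_common, t_src, values)

# Two hypothetical streams: eye-tracker gaze speed at 60 Hz, tool-tip speed at 30 Hz.
t_eye = np.arange(0, 10, 1 / 60); gaze_speed = np.abs(np.sin(t_eye))
t_tool = np.arange(0, 10, 1 / 30); tool_speed = np.abs(np.cos(t_tool))

t_common = np.arange(0, 10, 1 / 30)                 # shared 30 Hz clock
g = resample(t_eye, gaze_speed, t_common)
s = resample(t_tool, tool_speed, t_common)

# Composite metric: equal-weight sum of z-scored individual metrics.
z = lambda x: (x - x.mean()) / x.std()
composite = 0.5 * z(g) + 0.5 * z(s)
print(composite.mean(), composite.std())
```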

    Real-time 3D tracking of laparoscopy training instruments for assessment and feedback

    Assessment of minimally invasive surgical skills is a non-trivial task: it usually requires the presence and time of expert observers, involves subjectivity, and requires special and expensive equipment and software. Although there are virtual simulators that provide self-assessment features, they are limited in that the trainee loses the immediate feedback of realistic physical interaction. Physical training boxes, on the other hand, preserve the immediate physical feedback but lack automated self-assessment facilities. This study develops an algorithm for real-time tracking of laparoscopy instruments in the video feed of a standard physical laparoscopy training box with a single fisheye camera. The developed visual tracking algorithm recovers the 3D positions of the laparoscopic instrument tips, to which simple colored tapes (markers) are attached. With such a system, the extracted instrument trajectories can be digitally processed, and automated self-assessment feedback can be provided. In this way, the physical interaction feedback is preserved and the need for observation by an expert is removed. Real-time instrument tracking with a suitable assessment criterion would constitute a significant step towards providing real-time (immediate) feedback to correct trainee actions and show them how an action should be performed. This study is a step towards achieving this with a low-cost, automated, and widely applicable laparoscopy training and assessment system using a standard physical training box equipped with a fisheye camera.
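A minimal sketch of the marker-detection step is given below, assuming OpenCV’s fisheye camera model: it finds the centroid of a colored-tape marker by HSV thresholding and converts it into a unit viewing ray in the camera frame. The HSV bounds, intrinsics K, and distortion D are placeholders for a real calibration; recovering a full 3D tip position would additionally need a depth cue (for example, apparent marker size), which is omitted here.

```python
# Sketch: locate a colored instrument-tip marker in a fisheye frame and recover
# its viewing ray. Intrinsics, distortion, and HSV bounds are placeholders.
import cv2
import numpy as np

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])   # placeholder intrinsics
D = np.zeros((4, 1))                                           # placeholder fisheye distortion

def marker_ray(frame_bgr):
    """Return the unit viewing ray of a green-tape marker, or None if not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))      # green tape; tune per setup
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None                                            # marker not visible
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]            # pixel centroid
    pt = np.array([[[u, v]]], dtype=np.float64)
    x, y = cv2.fisheye.undistortPoints(pt, K, D)[0, 0]         # normalized image coordinates
    ray = np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)                           # unit ray in the camera frame
```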

    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases in the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is visualized in a display spatially separate from the one that displays the laparoscopic video. Therefore, reasoning about the geometry of hidden targets requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets in space acquired through such cognitive mediation may be error-prone and may cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with the coordinate system centred on the camera, and to render the registered image information in the context of the camera view such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of the target locations in space.
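As a rough sketch of the registration-and-rendering idea, the code below maps an ultrasound pixel into the camera coordinate system with a rigid transform and projects it onto the laparoscopic image using the camera intrinsics. The transform, intrinsics, and pixel spacing are placeholder values, not the calibration described in the thesis.

```python
# Sketch: project an ultrasound (US) pixel into the laparoscopic video frame via
# a US-to-camera rigid transform. All numeric values are placeholders.
import numpy as np

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])  # assumed camera intrinsics
T_cam_us = np.eye(4)                                          # US-image-to-camera rigid transform
T_cam_us[:3, 3] = [0.02, 0.0, 0.10]                           # e.g. 2 cm right, 10 cm ahead of camera

def project_us_pixel(px, py, mm_per_px=0.1):
    """Project a US pixel (px, py) onto the laparoscopic image plane."""
    p_us = np.array([px * mm_per_px / 1000.0,                 # US pixel -> metres, z = 0 plane
                     py * mm_per_px / 1000.0, 0.0, 1.0])
    p_cam = T_cam_us @ p_us                                    # point in camera coordinates
    uvw = K @ p_cam[:3]                                        # perspective projection
    return uvw[:2] / uvw[2]                                    # pixel location in the video frame

print(project_us_pixel(200, 150))
```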

    Development of Augmented Reality Technologies for Simulation-Based Medical Training

    In this thesis we present what is, to the best of our knowledge, the first framework for training and assessment of fundamental psychomotor and procedural laparoscopic skills in an interactive Augmented Reality (AR) environment. The proposed system is a fully featured laparoscopic training platform, allowing surgeons to practice by manipulating real instruments while interacting with virtual objects within a real environment. It consists of a standard laparoscopic box trainer, real instruments, a camera, and a set of sensory devices for real-time tracking of the surgeons’ actions. The proposed framework has been used for the implementation of AR-based training scenarios similar to the drills of the FLS® program, focusing on fundamental laparoscopic skills such as depth perception, hand–eye coordination, and bimanual operation. Moreover, this framework allowed the implementation of a proof-of-concept procedural skills training scenario, which involved clipping and cutting of a virtual artery within an AR environment. Comparison studies between expert and novice surgeons conducted for the evaluation of the presented framework indicated high content and face validity. In addition, significant conclusions regarding the potential of introducing AR in laparoscopic simulation training and assessment were drawn. This technology provides an advanced sense of visual realism combined with great flexibility in training-task prototyping, with minimal hardware requirements compared to commercially available platforms. Thereby, it can be safely stated that AR is a promising technology which can indeed provide a valuable alternative to the training modalities currently used in MIS.
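A minimal sketch of one building block of such an AR scenario is shown below: testing whether a tracked instrument tip has reached a virtual target (for example, a clip site on a virtual artery). The poses, radii, and tracker interface are hypothetical and stand in for the sensor-specific framework described above.

```python
# Sketch: AR interaction test between a tracked instrument tip and a virtual
# target. Positions, radius, and the tracking interface are hypothetical.
import numpy as np

class VirtualTarget:
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)   # position in tracker/world frame (metres)
        self.radius = radius                             # activation radius (metres)

    def is_touched(self, tip_position):
        """True when the tracked instrument tip enters the target sphere."""
        return np.linalg.norm(np.asarray(tip_position) - self.center) < self.radius

clip_site = VirtualTarget(center=[0.05, 0.02, 0.12], radius=0.004)
tip = [0.051, 0.021, 0.119]                              # tip pose from the tracking sensors
if clip_site.is_touched(tip):
    print("clip applied to virtual artery")              # trigger the AR event / scoring logic
```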

    Advanced Augmented Reality Telestration Techniques With Applications In Laparoscopic And Robotic Surgery

    The teaching of laparoscopic or robotic surgery currently relies primarily on an expert surgeon tutoring a student during live surgery. During these operations, surgeons view the inside of the body through a manipulatable camera. Due to the viewpoint translation and narrow field of view, these techniques have a substantial learning curve before a student gains the mastery necessary to operate safely. In addition to moving and rotating the camera, the surgeon must also manipulate tools inserted into the body. These tools are only visible on camera and pass through a pivot point on the body that, in non-robotic cases, reverses their directions of motion relative to the surgeon's hands. These difficulties motivated this dissertation. The main hypothesis of this research is that advanced augmented reality techniques can improve telementoring between expert surgeons and surgical students. In addition, they can provide a better method of communication between the surgeon and the camera operator. This research has two specific aims: (1) Create a head-mounted direction-of-focus indicator to provide non-verbal assistance for camera operation. A system was created that tracks where the surgeon is looking and provides augmented reality cues to the camera operator conveying the surgeon's desired camera view. (2) Create a hardware/software environment for tracking a camera and an object, allowing the display of registered pre-operative imaging that can be manipulated during the procedure. A set of augmented reality cues describing the translation, zoom, and roll of a laparoscopic camera was developed for Aim 1. An experiment was run to determine whether augmented reality cues or verbal cues were faster and more efficient at acquiring targets on camera at a specific location, zoom level, and roll angle. The study found that in all instances the augmented reality cues resulted in faster completion of the task with better economy of movement than the verbal cues. A large number of environmentally registered augmented reality telestration and visualization features were added to a hardware/software platform for Aim 2. The implemented manipulation of pre-operative imaging and the ability to provide different types of registered annotation in the working environment provided numerous examples of improved utility in telementoring systems. The results of this work offer potential improvements to the utilization of pre-operative imaging in the operating room, to the effectiveness of telementoring as a surgical teaching tool, and to effective communication between the surgeon and the camera operator in laparoscopic surgery.
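A hedged sketch of how such non-verbal camera cues might be derived is shown below: it compares the current view with a desired view (for example, one centred on the surgeon's gaze target) and emits simple pan/tilt/zoom/roll suggestions. The pose representation, thresholds, and cue vocabulary are assumptions for illustration, not the dissertation's implementation.

```python
# Sketch: derive translation, zoom, and roll cues for the camera operator from
# the gap between the current and desired views. Thresholds are assumptions.
import numpy as np

def camera_cues(target_px, image_size, current_roll_deg, desired_roll_deg,
                target_area_frac, desired_area_frac=0.15):
    """Return simple directional cues for centering, zooming, and rolling the camera."""
    w, h = image_size
    dx = target_px[0] - w / 2                      # + means target lies right of centre
    dy = target_px[1] - h / 2                      # + means target lies below centre
    return {
        "pan":  "right" if dx > 20 else "left" if dx < -20 else "hold",
        "tilt": "down" if dy > 20 else "up" if dy < -20 else "hold",
        "zoom": "in" if target_area_frac < desired_area_frac else "out",
        "roll": f"{desired_roll_deg - current_roll_deg:+.0f} deg",
    }

print(camera_cues(target_px=(700, 300), image_size=(1280, 720),
                  current_roll_deg=10, desired_roll_deg=0, target_area_frac=0.08))
```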