
    Perception and Orientation in Minimally Invasive Surgery

    During the last two decades we have seen a revolution in the way abdominal surgery is performed, with increased reliance on minimally invasive techniques. This paradigm shift has come at a rapid pace: laparoscopic surgery now represents the gold standard for many procedures, and invasiveness has been reduced further with the recent clinical introduction of novel techniques such as single-incision laparoscopic surgery and natural orifice translumenal endoscopic surgery. Despite the obvious benefits conferred on the patient in terms of morbidity, length of hospital stay and post-operative pain, this shift places significantly higher demands on the surgeon in terms of both perception and manual dexterity. The issues involved include degradation of sensory input to the operator compared with conventional open surgery, owing to the loss of three-dimensional vision through the two-dimensional operative interface and to decreased haptic feedback from the instruments. These changes lead to a much higher cognitive load on the surgeon and a greater risk of operator disorientation, with the potential for surgical errors. This thesis is a detailed investigation of disorientation in minimally invasive surgery. Eye tracking is identified as the method of choice for evaluating behavioural patterns during orientation, and an analysis framework is proposed to profile orientation behaviour using eye tracking data, validated in a laboratory model. This framework is used to characterise and quantify successful orientation strategies at critical stages of laparoscopic cholecystectomy, and these strategies are then used to show that focused teaching of this behaviour to novices can significantly increase performance in this task. Orientation strategies are also characterised for common clinical scenarios in natural orifice translumenal endoscopic surgery, and the concept of image saliency is introduced to further investigate the importance of specific visual cues associated with effective orientation. Profiling of behavioural patterns is related to orientation performance, and implications for education and for the construction of smart surgical robots are drawn. Finally, a method for potentially decreasing operator disorientation, endoscopic horizon stabilization, is investigated in a simulated operative model for transgastric surgery. The major original contributions of this thesis are: (i) validation of a profiling methodology and framework to characterise orientation behaviour; (ii) identification of high-performance orientation strategies in specific clinical scenarios, including laparoscopic cholecystectomy and natural orifice translumenal endoscopic surgery; (iii) evaluation of the efficacy of teaching orientation strategies; and (iv) evaluation of automatic endoscopic horizon stabilization in natural orifice translumenal endoscopic surgery. The impact of the results presented in this thesis, as well as the potential for further high-impact research, is discussed in the context of eye tracking as an evaluation tool in minimally invasive surgery and of implementing means to combat operator disorientation in a surgical platform. The work also provides further insight into the practical implementation of computer assistance and technological innovation in future flexible access surgical platforms.
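    A minimal illustrative sketch of the horizon-stabilization idea investigated above, assuming the scope's roll angle is available from an external sensor (e.g. an IMU); the function and variable names are not taken from the thesis:

```python
import cv2

def stabilise_horizon(frame, roll_deg):
    """frame: endoscopic video frame (BGR); roll_deg: scope roll measured e.g. by an IMU."""
    h, w = frame.shape[:2]
    # Counter-rotate about the image centre; the sign depends on the roll convention used.
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -roll_deg, 1.0)
    return cv2.warpAffine(frame, rot, (w, h))
```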

    Gaze patterns hold key to unlocking successful search strategies and increasing polyp detection rate in colonoscopy

    BACKGROUND: The adenoma detection rate (ADR) is an important quality indicator in colonoscopy. The aim of this study was to evaluate the changes in visual gaze patterns (VGPs) with increasing polyp detection rate (PDR), a surrogate marker of ADR. METHODS: 18 endoscopists participated in the study. VGPs were measured using eye-tracking technology during the withdrawal phase of colonoscopy. VGPs were characterized using two analyses: screen-based and anatomy-based. Eye-tracking parameters were used to characterize performance, which was further substantiated using hidden Markov model (HMM) analysis. RESULTS: Subjects with higher PDRs spent more time viewing the outer ring of the 3 × 3 grid for both analyses (screen-based: r = 0.56, P = 0.02; anatomy: r = 0.62, P < 0.01). Fixation distribution to the "bottom U" of the screen in screen-based analysis was positively correlated with PDR (r = 0.62, P = 0.01). HMM demarcated the VGPs into three PDR groups. CONCLUSION: This study defined distinct VGPs that are associated with expert behavior. These data may allow introduction of visual gaze training within structured training programs, and have implications for adoption in higher-level assessment.
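    A minimal sketch of the screen-based grid analysis described above, assuming fixations are available as (x, y, duration) samples; the grid construction and the use of a Pearson correlation are illustrative assumptions, not the study's published pipeline:

```python
import numpy as np
from scipy.stats import pearsonr

def outer_ring_dwell_fraction(fixations, screen_w, screen_h):
    """fixations: iterable of (x, y, duration) samples in screen coordinates."""
    fx = np.asarray(fixations, dtype=float)
    col = np.clip((fx[:, 0] / screen_w * 3).astype(int), 0, 2)
    row = np.clip((fx[:, 1] / screen_h * 3).astype(int), 0, 2)
    outer = (row != 1) | (col != 1)              # every cell of the 3x3 grid except the centre
    return fx[outer, 2].sum() / fx[:, 2].sum()

# One value per endoscopist, correlated against their polyp detection rate (PDR):
# r, p = pearsonr(outer_ring_fractions, pdrs)
```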

    Eye movements in surgery: A literature review

    With recent advances in eye tracking technology, it is now possible to track surgeons' eye movements while they are engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared the eye movements of surgical experts and novices, developed techniques to assess surgical skill on the basis of eye movements, and examined the role of eye movements in surgical training. Here we provide an overview of these studies, with a focus on methodological aspects. We conclude that these studies suggest the recording of eye movements may be beneficial for both skill assessment and training purposes, although more research is needed in this field.

    Visual gaze patterns reveal surgeons' ability to identify risk of bile duct injury during laparoscopic cholecystectomy

    BACKGROUND: Bile duct injury is a serious surgical complication of laparoscopic cholecystectomy. The aim of this study was to identify distinct visual gaze patterns associated with prompt detection of the risk of bile duct injury during laparoscopic cholecystectomy. METHODS: Twenty-nine participants viewed a laparoscopic cholecystectomy that led to a serious bile duct injury ('BDI video') and an uneventful procedure ('control video'), and reported when they perceived an error that could result in bile duct injury. Outcome parameters included fixation sequences on anatomical structures and eye tracking metrics. Surgeons were stratified into two groups based on performance and compared. RESULTS: The 'early detector' group displayed reduced common bile duct dwell time in the first half of the BDI video, as well as increased cystic duct dwell time and Calot's triangle glance count during Calot's triangle dissection in the control video. Machine learning based classification of fixation sequences demonstrated clear separability between the early and late detector groups. CONCLUSION: There are discernible differences in gaze patterns associated with early recognition of impending bile duct injury. The results could be applied in real time as an intraoperative early warning system, and in an educational setting to improve surgical safety and performance.
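    A hypothetical sketch of how fixation sequences over anatomical areas of interest might be turned into features for a classifier; the AOI labels, the transition-count featurisation and the SVM are assumptions for illustration, not the study's actual method:

```python
import numpy as np
from sklearn.svm import SVC

AOIS = ["cystic_duct", "common_bile_duct", "calots_triangle", "gallbladder", "other"]

def transition_features(aoi_sequence):
    """aoi_sequence: list of AOI labels, one per fixation, in temporal order."""
    idx = {a: i for i, a in enumerate(AOIS)}
    counts = np.zeros((len(AOIS), len(AOIS)))
    for a, b in zip(aoi_sequence, aoi_sequence[1:]):
        counts[idx[a], idx[b]] += 1
    total = counts.sum()
    return (counts / total).ravel() if total else counts.ravel()

# X = np.array([transition_features(seq) for seq in fixation_sequences])
# clf = SVC(kernel="rbf").fit(X, early_vs_late_labels)
```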

    Recognition, Analysis, and Assessments of Human Skills using Wearable Sensors

    One of the biggest social issues in mature societies, such as those of Europe and Japan, is the ageing population and declining birth rate. These societies face a serious problem with the retirement of expert workers such as doctors and engineers, especially in sectors where developing expertise takes a long time, as in medicine and industry, and where the retirement or injury of experts is particularly damaging. Technology to support the training and assessment of skilled workers (such as doctors and manufacturing workers) is therefore strongly needed. Although some solutions exist, most are video-based, which violates the privacy of the subjects, and they are not easy to deploy because they require large amounts of training data. This thesis provides a novel framework to recognize, analyze, and assess human skills with minimum customization cost. The framework tackles this problem in two different domains: an industrial setup and the medical operations of catheter-based cardiovascular interventions (CBCVI). In particular, the contributions of this thesis are four-fold. First, it proposes an easy-to-deploy framework for human activity recognition based on a zero-shot learning approach built on learning basic actions and objects. The model recognizes unseen activities as combinations of basic actions, learned in a preliminary stage, and the objects involved; it is therefore completely configurable by the user and can be used to detect entirely new activities. Second, a novel gaze-estimation model for an attention-driven object detection task is presented. The key features of the model are: (i) the use of deformable convolutional layers to better incorporate the spatial dependencies of different shapes of objects and backgrounds, and (ii) the formulation of gaze estimation in two different ways, as a classification problem and as a regression problem. Both formulations are combined through a joint loss that incorporates the cross-entropy as well as the mean-squared error, which improved the model's result from 6.8 using only the cross-entropy loss to 6.4 with the joint loss. The third contribution targets the quantification of the quality of actions using wearable sensors. To address the variety of scenarios, two possibilities are covered: (a) both expert and novice data are available, and (b) only expert data are available, a quite common case in safety-critical scenarios. Both of the developed methods are deep learning based: the first uses autoencoders with a one-class SVM, and the second uses Siamese networks. These methods encode the expert's expertise and learn the differences between novice and expert workers, enabling quantification of a novice's performance in comparison with an expert. The fourth contribution explicitly targets medical practitioners and provides a methodology for novel gaze-based temporal and spatial analysis of CBCVI data. The developed methodology allows continuous registration and analysis of gaze data for studying the visual X-ray image processing (XRIP) strategies of expert operators in live-case scenarios, and may assist in transferring experts' reading skills to novices.
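    A minimal sketch of the expert-only scenario described above (autoencoder plus one-class SVM); the network sizes, data layout and names are illustrative assumptions rather than the thesis implementation:

```python
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

class SkillAutoencoder(nn.Module):
    def __init__(self, n_features=64, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Train the autoencoder on expert sensor windows with an MSE reconstruction loss, then:
# _, z_expert = model(expert_windows)
# ocsvm = OneClassSVM(nu=0.05).fit(z_expert.detach().numpy())
# Lower decision_function scores on novice windows indicate larger deviation from expert behaviour.
```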

    Motor learning induced neuroplasticity in minimally invasive surgery

    Technical skills in surgery have become more complex and challenging to acquire since the introduction of technological aids, particularly in the arena of minimally invasive surgery (MIS). Additional challenges posed by reforms to surgical careers and increased public scrutiny have driven the identification of methods to assess and acquire MIS technical skills. Although validated objective assessments have been developed for the motor skills requisite for MIS, they offer limited insight into how expertise develops. Motor skill learning is an internal process, only indirectly observable, that leads to relatively permanent changes in the central nervous system. Owing to neuroplasticity, advances in functional neuroimaging permit direct interrogation of the evolving patterns of brain function associated with motor learning, and such methods have been applied to surgeons to identify the neural correlates of technical skill acquisition and the impact of new technology. However, significant gaps remain in understanding the neuroplasticity underlying the learning of complex bimanual MIS skills. In this thesis, the available evidence on applying functional neuroimaging to the assessment and enhancement of operative performance in surgery is synthesised. The purpose of this thesis was to evaluate frontal lobe neuroplasticity associated with learning a complex bimanual MIS skill using functional near-infrared spectroscopy, an indirect neuroimaging technique. Laparoscopic suturing and knot-tying, a technically challenging bimanual skill, is selected to demonstrate learning-related reorganisation of cortical behaviour within the frontal lobe, namely shifts in activation from the prefrontal cortex (PFC), which subserves attention, to the primary and secondary motor centres (premotor cortex, supplementary motor area and primary motor cortex), in which motor sequences are encoded and executed. In the cross-sectional study, participants of varying expertise demonstrate frontal lobe neuroplasticity commensurate with motor learning. The longitudinal study tracks the evolution of cortical behaviour in novices in response to eight hours of distributed training over a fortnight. Despite the novices achieving expert-like performance and stabilisation on the technical task, they displayed persistent PFC activity, establishing that for complex bimanual tasks improvements in technical performance are not accompanied by reduced reliance on attention to support performance. Finally, a least-squares support vector machine is used to classify expertise based on frontal lobe functional connectivity. The findings of this thesis demonstrate the value of interrogating cortical behaviour for assessing MIS skill development and for credentialing.
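    An illustrative sketch of expertise classification from frontal lobe functional connectivity: pairwise correlations between channel time series serve as features. The thesis uses a least-squares SVM, for which a standard SVM stands in here, and the data layout is an assumption:

```python
import numpy as np
from sklearn.svm import SVC

def connectivity_features(channel_timeseries):
    """channel_timeseries: array of shape (n_channels, n_samples) of fNIRS signals."""
    corr = np.corrcoef(channel_timeseries)
    iu = np.triu_indices_from(corr, k=1)      # unique channel pairs only
    return corr[iu]

# X = np.array([connectivity_features(ts) for ts in subject_recordings])
# clf = SVC(kernel="linear").fit(X, expertise_labels)   # expert vs. novice
```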

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased quality of life and average life span. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity to develop new autonomous assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment, with the aim of assisting hospital staff with daily tasks such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions are examined in detail. Secondly, a new method is proposed for assessing the intrinsic properties of different flooring types using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment is addressed, whereby leg detection is introduced so that, whilst mapping, the robot can distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, optimising the robot's ability to update its path planning in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances is addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing. The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions and to address some of the inherent challenges in introducing autonomous assistive robots into hospital environments.
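    A hedged sketch of the flooring-classification idea (spectral features from drive-time vibration fed to a standard classifier); the window length, features and choice of classifier are assumptions, not the thesis design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vibration_features(window, n_bins=16):
    """window: 1-D accelerometer signal recorded while the robot drives over the floor."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    bands = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bands])    # coarse spectral profile

# X = np.array([vibration_features(w) for w in windows])
# clf = RandomForestClassifier().fit(X, floor_type_labels)   # e.g. carpet, vinyl, tile
```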

    Eye Tracking: A Perceptual Interface for Content Based Image Retrieval

    In this thesis, visual search experiments are devised to explore the feasibility of an eye-gaze-driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Gaze was predominantly attracted to salient locations, but also made frequent reference to non-salient background regions, which indicated that information from scan paths might prove useful for image search. The thesis then investigates the benefits of eye tracking as an image retrieval interface in terms of speed relative to selection by mouse, and in terms of the efficiency of eye tracking mechanisms in the task of retrieving target images. Results are analysed using ANOVA and significant findings are discussed. The results show that eye selection was faster than selection with a computer mouse, and that experience gained during visual tasks carried out with a mouse would benefit users subsequently transferred to an eye tracking system. Results from the image retrieval experiments show that users are able to navigate to a target image within a database, confirming the feasibility of an eye-gaze-driven search mechanism. Additional histogram analysis of the fixations, saccades and pupil diameters in the eye movement data revealed a new way of extracting search intentions from gaze behaviour, of which the user is not aware, and which promises even quicker search performance. The research has two implications for Content Based Image Retrieval: (i) improvements in query formulation for visual search and (ii) new methods for visual search using attentional weighting. Furthermore, it was demonstrated that users are able to find target images at sufficient speeds, indicating that pre-attentive activity plays a role in visual search. A review of eye tracking technology, current applications, visual perception research, and models of visual attention is included, together with a review of the potential of the technology for commercial exploitation.
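    A brief sketch of the kind of histogram analysis mentioned above, assuming per-trial lists of fixation durations, saccade amplitudes and pupil diameters are available; bin edges and units are illustrative:

```python
import numpy as np

def gaze_histograms(fix_durations_ms, saccade_amps_deg, pupil_diams_mm):
    """Returns one feature vector per trial from three gaze-derived signals."""
    h_fix, _ = np.histogram(fix_durations_ms, bins=np.arange(0, 1001, 100), density=True)
    h_sac, _ = np.histogram(saccade_amps_deg, bins=np.arange(0, 21, 2), density=True)
    h_pup, _ = np.histogram(pupil_diams_mm, bins=np.arange(2, 8.5, 0.5), density=True)
    return np.concatenate([h_fix, h_sac, h_pup])
```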