
    Sensory-Glove-Based Open Surgery Skill Evaluation

    Manual dexterity is one of the most important surgical skills, yet there are few instruments to evaluate this ability objectively. In this paper, we propose a system designed to track surgeons' hand movements during simulated open surgery tasks and to evaluate their manual expertise. Eighteen participants, grouped according to their surgical experience, performed repetitions of two basic surgical tasks, namely the single interrupted suture and the simple running suture. Subjects' hand movements were measured with a sensory glove equipped with flex and inertial sensors, tracking flexion/extension of the hand joints and wrist movement. The participants' level of experience was evaluated by discriminating manual performances using linear discriminant analysis, support vector machine, and artificial neural network classifiers. Artificial neural networks showed the best performance, with a median error rate of 0.61% on the classification of single interrupted sutures and of 0.57% on simple running sutures. Strategies to reduce the sensory glove's complexity and increase its comfort did not substantially affect system performance.
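The classification pipeline described above (glove sensor features in, experience level out) can be sketched in a few lines. The feature vectors below are synthetic stand-ins for the glove's flex/inertial measurements, and a simple nearest-centroid rule stands in for the LDA/SVM/ANN classifiers actually compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(center, n, dim=8):
    # Synthetic stand-in for per-trial flex/inertial glove feature vectors
    return center + 0.1 * rng.standard_normal((n, dim))

novice = make_features(0.0, 30)   # experience label 0
expert = make_features(1.0, 30)   # experience label 1
X = np.vstack([novice, expert])
y = np.array([0] * 30 + [1] * 30)

def nearest_centroid_predict(X_train, y_train, X_test):
    # Assign each sample to the class whose feature centroid is closer
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

pred = nearest_centroid_predict(X, y, X)
error_rate = float((pred != y).mean())
```

With well-separated synthetic classes the error rate is near zero; the paper's sub-1% figures come from real glove data and stronger classifiers.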

    Multimodal human hand motion sensing and analysis - a review


    Perception and Orientation in Minimally Invasive Surgery

    During the last two decades, we have seen a revolution in the way that we perform abdominal surgery, with increased reliance on minimally invasive techniques. This paradigm shift has come at a rapid pace: laparoscopic surgery now represents the gold standard for many surgical procedures, and invasiveness is being reduced further with the recent clinical introduction of novel techniques such as single-incision laparoscopic surgery and natural orifice translumenal endoscopic surgery. Despite the obvious benefits conferred on the patient in terms of morbidity, length of hospital stay and post-operative pain, this paradigm shift places significantly higher demands on the surgeon, in terms of both perception and manual dexterity. The issues involved include degradation of sensory input to the operator compared with conventional open surgery, owing to the loss of three-dimensional vision through the two-dimensional operative interface and to decreased haptic feedback from the instruments. These changes have led to a much higher cognitive load on the surgeon and a greater risk of operator disorientation, leading to potential surgical errors. This thesis represents a detailed investigation of disorientation in minimally invasive surgery. Eye tracking methodology is identified as the method of choice for evaluating behavioural patterns during orientation. An analysis framework is proposed to profile orientation behaviour using eye tracking data, validated in a laboratory model. This framework is used to characterise and quantify successful orientation strategies at critical stages of laparoscopic cholecystectomy, and these strategies are then used to show that focused teaching of this behaviour to novices can significantly improve performance in this task.
Orientation strategies are then characterised for common clinical scenarios in natural orifice translumenal endoscopic surgery, and the concept of image saliency is introduced to further investigate the importance of specific visual cues associated with effective orientation. Profiling of behavioural patterns is related to orientation performance, and implications for education and for the construction of smart surgical robots are drawn. Finally, a method for potentially decreasing operator disorientation, endoscopic horizon stabilization, is investigated in a simulated operative model for transgastric surgery. The major original contributions of this thesis include: validation of a profiling methodology/framework to characterise orientation behaviour; identification of high-performance orientation strategies in specific clinical scenarios, including laparoscopic cholecystectomy and natural orifice translumenal endoscopic surgery; evaluation of the efficacy of teaching orientation strategies; and evaluation of automatic endoscopic horizon stabilization in natural orifice translumenal endoscopic surgery. The impact of the results presented in this thesis, as well as the potential for further high-impact research, is discussed in the context of eye tracking as an evaluation tool in minimally invasive surgery and of means to combat operator disorientation in a surgical platform. The work also provides further insight into the practical implementation of computer assistance and technological innovation in future flexible access surgical platforms.
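The image-saliency idea mentioned above can be illustrated with a deliberately minimal contrast-based measure; the synthetic image and the saliency definition below are illustrative assumptions, not the saliency model used in the thesis.

```python
import numpy as np

def saliency(img):
    """Contrast saliency: absolute deviation from mean intensity, scaled to [0, 1]."""
    s = np.abs(img - img.mean())
    return s / s.max()

# Synthetic stand-in for an endoscopic view: dark field with one bright landmark
img = np.zeros((32, 32))
img[10:14, 10:14] = 1.0

sal = saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)  # location of the most salient pixel
```

The bright landmark dominates the saliency map, which is the intuition behind relating salient visual cues to fixation behaviour during orientation.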

    Sensorimotor experience in virtual environments

    The goal of rehabilitation is to reduce impairment and provide functional improvements resulting in quality participation in activities of life. Plasticity and motor learning principles provide inspiration for therapeutic interventions, including movement repetition in a virtual reality environment. The objective of this research work was to investigate function-specific measurements (kinematic, behavioral) and neural correlates of the motor experience of hand gesture activities in virtual environments stimulating sensory experience (VE), using a hand agent model. The fMRI-compatible Virtual Environment Sign Language Instruction (VESLI) system was designed and developed to provide a number of rehabilitation and measurement features, to identify optimal learning conditions for individuals, and to track changes in performance over time. The therapies and measurements incorporated into VESLI target and track specific impairments underlying dysfunction. The goal of improved measurement is to develop targeted interventions embedded in higher-level tasks and to accurately track specific gains, in order to understand responses to treatment and the impact a response may have upon higher-level function, such as participation in life. To further clarify the biological model of motor experience, and to understand the added value and role of virtual sensory stimulation and feedback, which includes seeing one's own hand movement, functional brain mapping was conducted with simultaneous kinematic analysis in healthy controls and in stroke subjects. It is believed that, through an understanding of these neural activations, rehabilitation strategies exploiting the principles of plasticity and motor learning will become possible. The present research assessed successful practice conditions promoting gesture learning behavior in the individual.
For the first time, functional imaging experiments mapped neural correlates of human interactions with complex virtual-reality hand avatars moving synchronously with the subject's own hands. Findings indicate that healthy control subjects learned intransitive gestures in virtual environments using first- and third-person avatars, picture and text definitions, and while viewing visual feedback of their own hands, virtual hand avatars, or, in the control condition, hidden hands. Moreover, exercise in a virtual environment with a first-person avatar of the hands recruited insular cortex activation over time, which might indicate that this activation is associated with a sense of agency. Sensory augmentation in virtual environments modulated activations of important brain regions associated with action observation and action execution. The quality of the visual feedback was modulated, and brain areas were identified in which the amount of activation was positively or negatively correlated with the visual feedback. When subjects moved the right hand and saw an unexpected response (the left virtual avatar hand moved), neural activation increased in the motor cortex ipsilateral to the moving hand. This visual modulation might provide a helpful rehabilitation therapy for people with paralysis of a limb, through visual augmentation of skills. A model was developed to study the effects of sensorimotor experience in virtual environments, and findings on the effect of this experience upon brain activity and related behavioral measures are reported. The research model represents a significant contribution to neuroscience research and translational engineering practice. A model of neural activations correlated with kinematics and behavior can profoundly influence the delivery of rehabilitative services in the coming years by giving clinicians a framework for engaging patients in a sensorimotor environment that can optimally facilitate neural reorganization.

    POV-Surgery: A Dataset for Egocentric Hand and Tool Pose Estimation During Surgical Activities

    The surgical usage of Mixed Reality (MR) has received growing attention in areas such as surgical navigation systems, skill assessment, and robot-assisted surgeries. For such applications, pose estimation for hands and surgical instruments from an egocentric perspective is a fundamental task and has been studied extensively in the computer vision field in recent years. However, the development of this field has been impeded by a lack of datasets, especially in the surgical field, where bloody gloves and reflective metallic tools make it hard to obtain 3D pose annotations for hands and objects using conventional methods. To address this issue, we propose POV-Surgery, a large-scale, synthetic, egocentric dataset focusing on pose estimation for hands wearing different surgical gloves and for three orthopedic surgical instruments, namely the scalpel, friem, and diskplacer. Our dataset consists of 53 sequences and 88,329 frames, featuring high-resolution RGB-D video streams with activity annotations, accurate 3D and 2D annotations for hand-object pose, and 2D hand-object segmentation masks. We fine-tune current SOTA methods on POV-Surgery and, through extensive evaluation, demonstrate their generalizability to real-life cases with surgical gloves and tools. The code and the dataset are publicly available at batfacewayne.github.io/POV_Surgery_io/.
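A dataset of this kind pairs 3D hand-joint annotations with 2D ones, and in a synthetic pipeline the 2D labels follow from the 3D ones by pinhole-camera projection. The sketch below illustrates that projection step; the intrinsics and joint positions are made-up example values, not POV-Surgery's actual camera parameters.

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy are assumptions)
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# 21 synthetic hand joints in camera coordinates (metres), in front of the camera
rng = np.random.default_rng(1)
joints_3d = rng.uniform([-0.1, -0.1, 0.4], [0.1, 0.1, 0.6], size=(21, 3))

def project(points, K):
    """Project Nx3 camera-space points to Nx2 pixel coordinates."""
    uvw = points @ K.T              # homogeneous image coordinates [u*z, v*z, z]
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

joints_2d = project(joints_3d, K)
```

The same projection, applied per frame, turns a 3D pose annotation track into the 2D keypoint labels used to supervise image-space pose estimators.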

    Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators

    This article presents a state-of-the-art survey on the robotic systems, sensors, actuators, and collaborative strategies for physical human-robot collaboration (pHRC). It starts with an overview of robotic systems with cutting-edge technologies (sensors and actuators) suitable for pHRC operations and of the intelligent assist devices employed in pHRC. Sensors, being among the essential components for establishing communication between a human and a robotic system, are surveyed; they supply the signals needed to drive the robotic actuators. The survey reveals that the design of new-generation collaborative robots and other intelligent robotic systems has paved the way for sophisticated learning techniques and control algorithms to be deployed in pHRC, and it identifies the components that need to be considered for effective pHRC. Finally, the major advances are discussed, and some research directions and future challenges are presented.

    Biomechatronics: Harmonizing Mechatronic Systems with Human Beings

    This eBook provides a comprehensive treatise on modern biomechatronic systems centred around human applications. Particular emphasis is given to exoskeleton designs for assistance and training, with advanced interfaces for human-machine interaction. Some of these designs are validated with experimental results, which the reader will find very informative as building blocks for designing such systems. This eBook is ideally suited to those researching in the biomechatronics area with bio-feedback applications, or to those involved in high-end research on man-machine interfaces. It may also serve as a textbook for biomechatronic design at the post-graduate level.

    Human to robot hand motion mapping methods: review and classification

    In this article, the variety of approaches proposed in the literature to address the problem of mapping human to robot hand motions is summarized and discussed. We particularly attempt to organize under macro-categories the great number of presented methods, which are often difficult to view from a general standpoint owing to their different fields of application, specific uses of algorithms, terminology, and declared goals of the mappings. Firstly, a brief historical overview is given, showing the emergence of the human to robot hand mapping problem as both a conceptual and an analytical challenge that remains open today. Thereafter, the survey focuses on a classification of modern mapping methods under six categories: direct joint, direct Cartesian, task-oriented, dimensionality-reduction-based, pose-recognition-based, and hybrid mappings. For each of these categories, the general view that links the related studies is provided, and representative references are highlighted. Finally, a concluding discussion, along with the authors' point of view regarding desirable future trends, is given. This work was supported in part by the European Commission's Horizon 2020 Framework Programme with the project REMODEL under Grant 870133 and in part by the Spanish Government under Grant PID2020-114819GB-I00.
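The simplest of the six categories, direct joint mapping, can be sketched as a per-joint linear rescaling from the human joint range into the robot joint range. The joint limits below are illustrative, not taken from any real hand or robot.

```python
import numpy as np

# Illustrative joint limits in radians (assumed values, four flexion joints)
human_limits = np.array([[0.0, 1.6]] * 4)   # human finger flexion ranges
robot_limits = np.array([[0.0, 1.2]] * 4)   # robot joint ranges

def direct_joint_map(q_human, human_limits, robot_limits):
    """Linearly rescale each human joint angle into the robot joint range, clipping."""
    h_lo, h_hi = human_limits[:, 0], human_limits[:, 1]
    r_lo, r_hi = robot_limits[:, 0], robot_limits[:, 1]
    t = np.clip((q_human - h_lo) / (h_hi - h_lo), 0.0, 1.0)  # normalized position
    return r_lo + t * (r_hi - r_lo)

q_robot = direct_joint_map(np.array([0.0, 0.8, 1.6, 2.0]), human_limits, robot_limits)
# -> [0.0, 0.6, 1.2, 1.2]: mid-range maps to mid-range, out-of-range input is clipped
```

Direct Cartesian, task-oriented, and the other categories replace this per-joint rule with fingertip-position, task-frame, or learned correspondences respectively.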

    Grip force as a functional window to somatosensory cognition

    Analysis of grip force signals tailored to hand and finger movement evolution, and of changes in grip force control during task execution, provides unprecedented functional insight into somatosensory cognition. Somatosensory cognition is the basis of our ability to manipulate, move, and transform objects in the physical world around us, to recognize them on the basis of touch alone, and to grasp them with the right amount of force for lifting and manipulation. Recent technology has permitted the wireless monitoring of grip force signals recorded from biosensors in the palm of the human hand, tracking and tracing the grip forces deployed in cognitive tasks executed under conditions of variable sensory (visual, auditory) input. Non-invasive multi-finger grip force sensor technology can be exploited to explore functional interactions between somatosensory brain mechanisms and motor control, in particular during the learning of a cognitive task where the planning and strategic execution of hand movements is essential. The sensorial and cognitive processes underlying manual skills and/or hand-specific (dominant versus non-dominant hand) behaviors can be studied in a variety of contexts by probing selected measurement loci in the fingers and palm of the human hand. Thousands of sensor data points recorded from multiple spatial locations can be approached statistically to breathe functional sense into the forces measured under specific task constraints. Grip force patterns in individual performance profiling may reveal the evolution of grip force control as a direct result of cognitive changes during task learning. Grip forces can be functionally mapped to from-global-to-local coding principles in the brain networks governing somatosensory processes for motor control in cognitive tasks that lead to a specific task expertise or skill.
In the light of a comprehensive overview of recent discoveries on the functional significance of human grip force variations, perspectives for future studies in cognition, in particular the cognitive control of strategic and task-relevant hand movements in complex real-world precision tasks, are pointed out.
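The kind of per-sensor grip force profiling described above can be sketched as a handful of summary statistics over a multi-sensor recording. The signal below is synthetic, not real biosensor data, and the chosen features (mean, peak, coefficient of variation) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100                                   # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)                # one 5-second trial
n_sensors = 4                              # e.g. sensors at four finger/palm loci
base = 2.0 + np.sin(2 * np.pi * 0.5 * t)   # slow grip/release cycle (newtons)
forces = base[:, None] + 0.1 * rng.standard_normal((t.size, n_sensors))

def profile(forces):
    """Per-sensor mean, peak, and coefficient of variation of a force recording."""
    mean = forces.mean(axis=0)
    peak = forces.max(axis=0)
    cv = forces.std(axis=0) / mean         # variability relative to mean force
    return mean, peak, cv

mean, peak, cv = profile(forces)
```

Comparing such per-sensor profiles across trials or sessions is one way to track how grip force control evolves with task learning.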