
    Implicit active constraints for safe and effective guidance of unstable concentric tube robots

    Safe and effective telemanipulation of concentric tube robots is hindered by their complex, non-intuitive kinematics. In order for clinicians to operate these robots naturally, guidance schemes in the form of attractive and repulsive constraints can simplify task execution. The real-time, seamless calculation and application of guidance, however, requires computationally efficient algorithms that solve the non-linear inverse kinematics of the robot and guarantee that the commanded robot configuration is stable and sufficiently far from the anatomy. This paper presents a multi-processor framework that allows on-the-fly calculation of optimal safe paths based on rapid workspace and roadmap precomputation. The real-time nature of the developed software enables complex guidance constraints to be implemented with minimal computational overhead. A clinically challenging user study demonstrates that the incorporated guiding constraints are highly beneficial for fast and accurate navigation with concentric tube robots.
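
    To make the precompute-then-lookup pattern described above concrete, here is a minimal Python sketch: a sampled configuration space is filtered by a stability test and indexed by tip position, so a commanded target can be mapped on-the-fly to the nearest stable configuration. The `tip_position` and `is_stable` functions are toy stand-ins (assumptions), not the paper's concentric tube kinematics or its elastic-stability test.

```python
# Minimal sketch of rapid workspace/roadmap precomputation followed by
# on-the-fly lookup. tip_position() and is_stable() are toy stand-ins.
import numpy as np
from scipy.spatial import cKDTree

def tip_position(q):
    """Toy forward kinematics: maps a configuration to a tip point."""
    a, b, c = q
    return np.array([np.cos(a) * (1 + 0.5 * c), np.sin(a) * (1 + 0.5 * c), b])

def is_stable(q):
    """Toy stability test, e.g. limiting relative tube rotation."""
    return abs(q[0] - q[1]) < 2.0

# Offline precomputation: sample configurations, keep the stable ones,
# and index their tip positions for fast nearest-neighbour queries.
rng = np.random.default_rng(0)
samples = rng.uniform(-np.pi, np.pi, size=(20000, 3))
stable = np.array([q for q in samples if is_stable(q)])
tree = cKDTree(np.array([tip_position(q) for q in stable]))

def nearest_stable_config(target_tip):
    """Online lookup: snap the commanded target to the nearest stable
    configuration; an anatomy-distance check would additionally reject
    targets inside a safety margin."""
    _, idx = tree.query(target_tip)
    return stable[idx]

q_cmd = nearest_stable_config(np.array([0.8, 0.2, 0.5]))
```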

    Impact of Ear Occlusion on In-Ear Sounds Generated by Intra-oral Behaviors

    We conducted a case study with one volunteer and a recording setup to detect sounds induced by the actions of jaw clenching, tooth grinding, reading, eating, and drinking. The setup consisted of two in-ear microphones, where the left ear was semi-occluded with a commercially available earpiece and the right ear was occluded with a mouldable silicone earpiece. Investigations in the time and frequency domains demonstrated that for behaviors such as eating, tooth grinding, and reading, sounds could be recorded with both sensors. For jaw clenching, however, occluding the ear with a mouldable earpiece was necessary to enable its detection. This can be attributed to the fact that the mouldable earpiece sealed the ear canal and isolated it from the environment, resulting in a detectable change in pressure. In conclusion, our work suggests that detecting behaviors such as eating, grinding, and reading is possible with a semi-occluded ear, whereas behaviors such as clenching require complete occlusion of the ear to be easily detectable. The latter approach, however, may limit real-world applicability because it hinders hearing.
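
    A minimal sketch of the kind of time/frequency-domain inspection described above, assuming a mono in-ear recording in a hypothetical file `in_ear_recording.wav`; the 100 Hz band edge and the event threshold are illustrative assumptions, not values from the study.

```python
# Sketch of a time/frequency-domain look at an in-ear recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("in_ear_recording.wav")  # hypothetical mono file
audio = audio.astype(np.float64)

f, t, Sxx = spectrogram(audio, fs=rate, nperseg=1024)

# With a fully occluded canal, jaw clenching shows up as low-frequency
# pressure changes; eating and grinding produce broadband bursts.
low_band_energy = Sxx[f < 100].sum(axis=0)
event_times = t[low_band_energy > 10 * np.median(low_band_energy)]
print(event_times)
```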

    Research on real-time physics-based deformation for haptic-enabled medical simulation

    This study developed an effective visuo-haptic surgical engine that handles a variety of surgical manipulations in real time. Soft tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room.

    An Introduction to Robotically Assisted Surgical Systems: Current Developments and Focus Areas of Research

    Robotic assistance systems for diagnosis and therapy have become technically mature and widely available, and they play an increasingly important role in patient care. This paper provides an overview of the general concepts of robotically assisted surgical systems, briefly revisiting historical and current developments in the surgical robotics market and discussing current focus areas of research. Comprehensiveness cannot be achieved in this format, but alongside the general overview, references to further reading and more comprehensive reviews of particular aspects are given. The work at hand is therefore intended as an introduction to the topic and especially addresses investigators, researchers, medical device manufacturers, and clinicians who are new to this field.

    Dynamic testing of total hip and knee replacements under physiological conditions

    Instability of total hip and knee replacements remains a major complication. As measurements in patients raise ethical objections, this work presents a hardware-in-the-loop (HiL) approach that is capable of testing total joint stability under dynamic, reproducible, and physiological conditions. An essential aspect is its validation, which includes the development of specific multibody models. In this sense, the HiL test system extends the repertoire of common approaches in orthopedic research by combining the advantages of real implant testing and model-based simulation.
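
    As a rough illustration of the hardware-in-the-loop idea, the sketch below closes a loop between a stubbed physical test rig and a one-degree-of-freedom stand-in for the multibody patient model; all interfaces, dynamics, and parameters here are hypothetical placeholders, not the actual test system.

```python
# Conceptual HiL cycle: a multibody model of the patient drives a
# physical rig loading the real implant, and the measured reaction
# force is fed back into the model.
import numpy as np

DT = 1e-3  # control/simulation step in seconds (assumption)

def multibody_step(pos, vel, implant_force, dt):
    """Placeholder one-step integration of the patient model."""
    acc = implant_force / 70.0 - 9.81   # toy 1-DOF dynamics, 70 kg mass
    return pos + vel * dt, vel + acc * dt

class Rig:
    """Stand-in for the physical robot and its force sensor."""
    def move_to(self, pos):
        self.pos = pos
    def read_force(self):
        return 700.0 + 50.0 * np.sin(self.pos)  # fake implant reaction

rig = Rig()
pos, vel = 0.0, 0.0
for _ in range(1000):
    rig.move_to(pos)                 # impose model kinematics on the implant
    force = rig.read_force()         # measure the real implant's reaction
    pos, vel = multibody_step(pos, vel, force, DT)  # close the loop
```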

    Machine learning and interactive real-time simulation for training on relevant total hip replacement skills.

    Virtual Reality simulators have proven to be an excellent tool in the medical sector, helping trainees master surgical skills by providing unlimited training opportunities. Total Hip Replacement (THR) is a procedure that can benefit significantly from VR/AR training, given its non-reversible nature. Of all the steps in a THR, doctors agree that correct fitting of the acetabular component of the implant has the highest relevance to ensuring successful outcomes. Acetabular reaming is the step during which the acetabulum is resurfaced and prepared to receive the acetabular implant, and its success is directly related to the success of fitting the acetabular component. This thesis therefore focuses on developing digital tools to assist the training of acetabular reaming.

    Devices such as navigation systems and robotic arms have proven to improve the final accuracy of the procedure. However, surgeons must learn to adapt their instrument movements to be recognised by infrared cameras. When surgeons are first introduced to these systems, surgical times can be extended by up to 20 minutes, increasing surgical risk. Training opportunities are sparse, given the high investment required to purchase these devices. As a cheaper alternative, we developed an Augmented Reality (AR) simulator for training on the calibration of Imageless Navigation Systems (INS). At the time, no alternative simulator used head-mounted displays to train users in the steps required to calibrate such systems. Our simulator replicates the presence of an infrared camera and its interaction with the reflective markers located on the surgical tools. A group of six hip surgeons was invited to test the simulator. All of them expressed satisfaction with its ease of use and attractiveness, as well as the similarity of the interaction to the real procedure. The study confirmed that our simulator is a cheaper and faster option for training multiple surgeons simultaneously in the use of INS than learning exclusively in the surgical theatre.

    Current reviews of simulators for orthopaedic surgical procedures lack objective metrics of assessment against a standard set of design requirements; most rely exclusively on the level of interaction and functionality provided. We propose a comparative assessment rubric based on three evaluation criteria: immersion, interaction fidelity, and applied learning theories. Our assessment found that none of the simulators available for THR provides an accurate interactive representation of resurfacing procedures such as acetabular reaming based on the force inputs exerted by the user. This feature is indispensable for an orthopaedics simulator, given that hand-eye coordination is an essential skill to be trained before performing non-reversible bone removal on real patients.

    Based on the findings of our comparative assessment, we developed a model to simulate the physically-based deformation expected during traditional acetabular reaming, given the user's interaction with a volumetric mesh. Current interactive deformation methods on high-resolution meshes are based on geometric collision detection and do not consider the contribution of the materials' physical properties. By ignoring the effect of material mechanics and the force exerted by the user, they are inadequate for training hand-eye coordination skills transferable to the surgical theatre. Volumetric meshes are preferred in surgical simulation over geometric ones because they can represent the internal evolution of deformable solids resulting from cutting and shearing operations. Existing numerical methods for representing linear and corotational FEM cuts can only maintain interactive framerates at low mesh resolutions. We therefore trained a machine-learning model to learn the continuum-mechanics laws relevant to acetabular reaming and predict deformations at interactive framerates. To the best of our knowledge, no previous research has trained a machine-learning model on non-elastic FEM data to achieve results at interactive framerates.

    As training data, we used the results of XFEM simulations precomputed over 5000 frames for plastic deformations on tetrahedral meshes with 20406 elements each. XFEM simulation was selected as the physically-based ground truth given its accuracy and fast convergence in representing cuts, discontinuities, and large strain rates. Our interactive model was trained using Graph Neural Network (GNN) blocks. GNNs were selected for learning on tetrahedral meshes because other supervised-learning architectures, such as the multilayer perceptron (MLP) and convolutional neural networks (CNNs), are unable to learn relationships between entities with an arbitrary number of neighbours. The learned simulator identifies the elements to be removed in each frame and describes the evolution of accumulated stress in the whole machined piece. Using data generated from XFEM results allowed us to embed the effects of non-linearities in our interactive simulations without extra processing time.

    The trained model executes the prediction task on our tetrahedral mesh with unseen reamer orientations faster per frame than the time required to generate the training FEM dataset. Given an unseen reamer orientation, the trained GNN model updates the accumulated stress on each of the 20406 tetrahedral elements of the mesh; the tetrahedra to be removed are then identified by a threshold condition. After repeatedly feeding each single-frame output back as input for the next prediction for up to 60 iterations, our model maintains an accuracy of up to 90.8% in identifying the status of each element from its accumulated stress. Finally, we demonstrate how the developed estimator can easily be connected to any game engine and used to build a fully functional hip arthroplasty simulator.
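
    The following Python sketch illustrates the rollout scheme described above: a stub predictor stands in for the trained GNN, per-element accumulated stress is updated each frame, tetrahedra crossing a threshold are flagged as removed, and each single-frame output feeds the next prediction. The 1-D neighbour structure, contact term, and threshold value are illustrative assumptions.

```python
# Sketch of the learned-simulator rollout with a stub in place of the
# trained GNN; only the accumulate-threshold-remove loop is faithful.
import numpy as np

N_ELEMS = 20406
THRESHOLD = 1.0  # removal criterion on accumulated stress (assumption)

def gnn_step(stress, reamer_drive):
    """Stand-in for one learned message-passing pass: neighbour
    smoothing mimics stress diffusion between adjacent tetrahedra,
    plus a contact term concentrated where the reamer engages bone."""
    smoothed = 0.5 * stress + 0.25 * (np.roll(stress, 1) + np.roll(stress, -1))
    contact = 0.05 * np.exp(-np.arange(N_ELEMS) / 50.0) * reamer_drive
    return smoothed + contact

stress = np.zeros(N_ELEMS)
alive = np.ones(N_ELEMS, dtype=bool)
for frame in range(60):              # autoregressive 60-frame rollout
    stress = gnn_step(stress, reamer_drive=1.0)
    alive &= stress < THRESHOLD      # threshold-based element removal
print(int((~alive).sum()), "elements removed")
```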

    Robotic Assisted Fracture Surgery


    Realistic tool-tissue interaction models for surgical simulation and planning

    Surgical simulators present a safe and potentially effective method for surgical training, and can also be used in pre- and intra-operative surgical planning. Realistic modeling of medical interventions involving tool-tissue interactions is considered a key requirement in the development of high-fidelity simulators and planners. The soft-tissue constitutive laws, the organ geometry and boundary conditions imposed by the connective tissues surrounding the organ, and the shape of the surgical tool interacting with the organ are some of the factors that govern the accuracy of medical intervention planning.

    This thesis is divided into three parts. First, we compare the accuracy of linear and nonlinear constitutive laws for tissue. An important consequence of nonlinear models is the Poynting effect, in which shearing of tissue results in normal force; this effect is not seen in a linear elastic model. The magnitude of the normal force for myocardial tissue is shown to be larger than the human contact-force discrimination threshold. Further, in order to investigate and quantify the role of the Poynting effect in material discrimination, we perform a multidimensional scaling study.

    Second, we consider the effects of organ geometry and boundary constraints in needle path planning. Using medical images and tissue mechanical properties, we develop a model of the prostate and surrounding organs. We show that, for needle procedures such as biopsy or brachytherapy, organ geometry and boundary constraints have more impact on target motion than tissue material parameters.

    Finally, we investigate the effects of surgical tool shape on the accuracy of medical intervention planning. We consider the specific case of robotic needle steering, in which the asymmetry of a bevel-tip needle causes the needle to bend naturally when inserted into soft tissue. We present an analytical and a finite element (FE) model for the loads developed at the bevel tip during needle-tissue interaction. The analytical model explains trends observed in the experiments. We incorporated physical parameters (rupture toughness and nonlinear material elasticity) into the FE model, which included both contact and cohesive-zone models to simulate tissue cleavage. The model shows that the tip forces are sensitive to the rupture toughness. To model the mechanics of needle deflection, we use an energy-based formulation that incorporates tissue-specific parameters (rupture toughness, nonlinear material elasticity, and interaction stiffness) as well as the needle's geometric and material properties. Simulation results follow trends (deflection and radius of curvature) similar to those observed in macroscopic experimental studies of a robot-driven needle interacting with gels.
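
    The Poynting effect mentioned in the first part can be made concrete with a short worked example: for an incompressible neo-Hookean solid under simple shear, the Cauchy stress sigma = -p*I + mu*B develops a normal-stress difference mu*gamma^2 alongside the shear stress mu*gamma, whereas a linear elastic model predicts no normal stress. The shear modulus below is illustrative, not a value from the thesis.

```python
# Worked example of the Poynting effect for an incompressible
# neo-Hookean solid in simple shear.
import numpy as np

mu = 10e3     # shear modulus, Pa (assumption)
gamma = 0.2   # amount of shear

F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])  # simple-shear deformation gradient
B = F @ F.T                        # left Cauchy-Green tensor

# The pressure term -p*I drops out of stress *differences*:
shear_stress = mu * B[0, 1]             # mu*gamma    = 2000 Pa
normal_diff = mu * (B[0, 0] - B[1, 1])  # mu*gamma**2 =  400 Pa
print(shear_stress, normal_diff)
```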

    Hand eye coordination in surgery

    The coordination of the hand in response to visual target selection has always been regarded as an essential quality in a range of professional activities. This quality has thus far been elusive to objective scientific measurement and is usually engulfed in the overall performance of the individual. Parallels can be drawn to surgery, especially Minimally Invasive Surgery (MIS), where the physical constraints imposed by the arrangement of the instruments and visualisation methods require coordination skills that are unprecedented. With the current paradigm shift towards early specialisation in surgical training and shortened, focused training time, the selection process should identify trainees with the highest potential in specific skills. Although significant effort has been made in the objective assessment of surgical skills, it is currently only possible to measure surgeons' abilities at the time of assessment. It has been particularly difficult to quantify specific details of hand-eye coordination and to assess the innate ability that underlies future skills development. The purpose of this thesis is to examine hand-eye coordination in laboratory-based simulations, with a particular emphasis on details that are important to MIS.

    In order to understand the challenges of visuomotor coordination, movement trajectory errors have been used to provide an insight into the innate coordinate mapping of the brain. In MIS, novel spatial transformations, due to a combination of distorted endoscopic image projections and the "fulcrum" effect of the instruments, accentuate movement generation errors. Obvious differences in the quality of movement trajectories have been observed between novices and experts in MIS; however, these are difficult to measure quantitatively. A Hidden Markov Model (HMM) is used in this thesis to reveal the underlying characteristic movement details of a particular MIS manoeuvre and how such features are exaggerated by the introduction of rotation in the endoscopic camera. The proposed method demonstrates the feasibility of measuring movement trajectory quality by machine learning techniques without prior arbitrary classification of expertise. Experimental results have highlighted these changes in novice laparoscopic surgeons, even after a short period of training.

    How the intricate relationship between the hands and the eyes changes when learning a skilled visuomotor task has been previously studied. Reactive eye movement, when visual input is used primarily as a feedback mechanism for error correction, implies difficulties in hand-eye coordination; as the brain learns to adapt to the new coordinate map, eye movements become predictive of the action generated. The concept of measuring this spatiotemporal relationship is introduced as a measure of hand-eye coordination in MIS, by comparing the Target Distance Function (TDF) between the eye fixation and the instrument tip position on the laparoscopic screen. Further validation of this concept using high-fidelity experimental tasks is presented, where higher cognitive influence and multiple target selection increase the complexity of the data analysis. To this end, Granger-causality is presented as a measure of the predictability of the instrument movement from the eye fixation pattern. Partial Directed Coherence (PDC), a frequency-domain variation of Granger-causality, is used for the first time to measure hand-eye coordination. Experimental results are used to establish the strengths and potential pitfalls of the technique.

    To further enhance the accuracy of this measurement, a modified Jensen-Shannon Divergence (JSD) measure has been developed to improve the signal-matching algorithm and trajectory segmentation. The proposed framework incorporates filtering of high-frequency noise, which represents non-purposeful hand and eye movements. The accuracy of the technique has been demonstrated by quantitative measurement of multiple laparoscopic tasks performed by expert and novice surgeons. Experimental results supporting visual search behavioural theory are presented, as this underpins the target selection process immediately prior to visuomotor action generation. The effects of specialisation and experience on visual search patterns are also examined. Finally, pilot results from functional brain imaging are presented, in which activation of the Posterior Parietal Cortex (PPC) is measured using optical spectroscopy techniques. The PPC has been shown to be involved in the calculation of coordinate transformations between the visual and motor systems, which opens up exciting possibilities for future studies of hand-eye coordination.
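
    As an illustration of the Granger-causality measure described above, the sketch below asks whether the past of an eye-fixation signal improves prediction of the instrument-tip signal beyond the tip's own past, using ordinary least squares and an F-style variance ratio; the synthetic signals and lag order are assumptions for demonstration only, and PDC would apply the same idea per frequency.

```python
# Time-domain Granger-causality sketch: restricted vs. full regression.
import numpy as np

def lagged(x, p):
    """Design matrix whose columns are lags 1..p of x."""
    return np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])

def granger_stat(hand, eye, p=5):
    y = hand[p:]
    X_r = lagged(hand, p)                    # restricted: hand past only
    X_f = np.hstack([X_r, lagged(eye, p)])   # full: hand past + eye past
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    n = len(y)
    return ((rss(X_r) - rss(X_f)) / p) / (rss(X_f) / (n - 2 * p))

# Toy data where gaze leads the instrument by three samples: the
# F-style ratio comes out large, i.e. the eye signal is predictive.
rng = np.random.default_rng(1)
eye = rng.standard_normal(500).cumsum()
hand = np.concatenate([np.zeros(3), eye[:-3]]) + 0.1 * rng.standard_normal(500)
print(granger_stat(hand, eye))
```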