
    Development and Validation Methodology of the Nuss Procedure Surgical Planner

    Pectus excavatum (PE) is a congenital chest wall deformity characterized, in most cases, by a deep depression of the sternum. A minimally invasive technique for the repair of PE (MIRPE), often referred to as the Nuss procedure, has proven to be more advantageous than many other PE treatment techniques. The Nuss procedure consists of placing one or more metal bars underneath the sternum, thereby forcibly changing the geometry of the ribcage. Because of the prevalence of PE and the popularity of the Nuss procedure, the demand for this surgery is greater than ever, and a Nuss procedure surgical planner would therefore be an invaluable planning tool for ensuring an optimal physiological and aesthetic outcome. In this dissertation, the development and validation of such a planner are investigated. First, a generic model of the ribcage is developed to overcome the problem of missing cartilage when PE ribcages are segmented and to give the model the flexibility to accommodate a range of deformities. CT data collected from actual patients with PE are then used to create a set of patient-specific finite element models. Based on finite element analyses performed on those models, a force-displacement data set is created and used to train an artificial neural network that generalizes the data. To evaluate the planning process, a methodology is developed that compares the planner's results with an average chest shape. This method is based on a sample of normal chests obtained from the ODU male population using laser surface scanning and overcomes challenging issues such as hole filling, scan registration, and consistency. Additionally, the planning simulator is optimized for training purposes: haptic feedback and inertial tracking are implemented, and the force-displacement model is approximated with a neural network and evaluated for real-time performance. The results show that this approximation of the force-displacement model can be used in the Nuss procedure simulator and that the detailed ribcage model achieves real-time performance.
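    The abstract gives no implementation details; purely as a hedged illustration of the neural-network step it describes, the sketch below fits a small multilayer perceptron to a toy force-displacement table with scikit-learn. The data, network size, and variable names are assumptions made for illustration, not the dissertation's actual model.

```python
# Minimal sketch: approximate a finite-element force-displacement relation
# with a small neural network so it can be queried in real time.
# The data layout (displacement -> force) and all values are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the FE-derived data set: bar displacement (mm) vs. reaction force (N).
displacement = rng.uniform(0.0, 40.0, size=(2000, 1))
force = 12.0 * displacement[:, 0] + 0.4 * displacement[:, 0] ** 2  # toy nonlinear response
force += rng.normal(0.0, 5.0, size=force.shape)                    # noise stand-in

X_train, X_test, y_train, y_test = train_test_split(displacement, force, random_state=0)

# A small network is cheap to evaluate, which is what real-time rendering needs.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("test R^2:", net.score(X_test, y_test))
print("predicted force at 25 mm:", net.predict([[25.0]])[0])
```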

    Evaluation of haptic virtual reality user interfaces for medical marking on 3D models

    Three-dimensional (3D) visualization is widely used in computer-aided medical diagnosis and planning. To interact with 3D models, current user interfaces in medical systems rely mainly on traditional 2D interaction techniques, employing a mouse and a 2D display. Haptic virtual reality (VR) interfaces promise intuitive and realistic 3D interaction using VR equipment and haptic devices, but their practical usability in this medical field remains unexplored. In this study, we propose two haptic VR interfaces, a vibrotactile VR interface and a kinesthetic VR interface, for medical diagnosis and planning on volumetric medical images. The vibrotactile VR interface uses a head-mounted VR display as the visual output channel and a VR controller with vibrotactile feedback as the manipulation tool; the kinesthetic VR interface likewise uses a head-mounted VR display but pairs it with a kinesthetic force-feedback device as the manipulation tool. We evaluated the two VR interfaces in an experiment involving medical marking on 3D models, comparing them with a state-of-the-art 2D interface as the baseline. The results showed that the kinesthetic VR interface performed best in terms of marking accuracy, whereas the vibrotactile VR interface performed best in terms of task completion time. Overall, the participants preferred the kinesthetic VR interface for the medical task.
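    The study's outcome measures are marking accuracy and task completion time; the sketch below shows one conventional way such metrics can be computed (mean Euclidean error of marked points and elapsed time). The numbers are entirely made up, and this is not the authors' analysis code.

```python
# Illustrative evaluation metrics for 3D marking: positional error per mark
# and overall completion time. Data values are placeholders, not study data.
import numpy as np

def marking_error(marked: np.ndarray, reference: np.ndarray) -> float:
    """Mean Euclidean distance (same unit as the model, e.g. mm) between
    marked points and their reference targets."""
    return float(np.linalg.norm(marked - reference, axis=1).mean())

# Hypothetical marks placed with one interface on a 3D model (x, y, z in mm).
reference = np.array([[10.0, 4.0, 2.0], [15.5, 7.2, 3.1], [20.0, 9.8, 4.4]])
marked    = np.array([[10.6, 4.3, 1.8], [15.1, 7.9, 3.0], [20.4, 9.5, 4.9]])

timestamps = [0.0, 8.2, 15.9, 21.4]          # seconds at which each mark was confirmed
completion_time = timestamps[-1] - timestamps[0]

print(f"mean marking error: {marking_error(marked, reference):.2f} mm")
print(f"task completion time: {completion_time:.1f} s")
```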

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patient safety, but it requires refined techniques aimed at minimally invasive and minimally traumatic treatments, since intra-operative false movements can be devastating and result in patient death. The precision of the surgical gesture depends both on the accuracy of the available technological instruments and on the surgeon's experience. In this context, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatment offers the best results. Traditional techniques for surgical training include the use of animals, phantoms and cadavers; their main limitation is that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope, so the tactile appreciation of the different consistency of the tumour compared with normal brain is a vital point and requires considerable experience on the part of the neurosurgeon. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system for neurosurgery that combines VR, haptics and a real microscope. The architecture can also be adapted for intra-operative purposes: in that case the surgeon needs the basic setup for Image Guided Therapy (IGT) interventions, namely microscope, monitors and navigated surgical instruments, and the same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's intra-operative orientation by providing a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, which was devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions developed in our institute in previous work. Completely new software was developed in order to reuse the microscope hardware while enhancing both rendering performance and usability. Since AR and VR share the same platform, the system can be referred to as a Mixed Reality System for neurosurgery. All components are open source or at least based on a GPL license.
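    The thesis abstract mentions haptic rendering of spatula palpation in which tumour tissue feels different from normal parenchyma. A common generic way to convey that is a penalty (spring) force proportional to penetration depth with tissue-dependent stiffness; the sketch below illustrates only that generic idea with made-up stiffness values, and is not the system described in the thesis.

```python
# Minimal penalty-based haptic force: F = k * penetration, with a different
# stiffness for tumour vs. normal parenchyma. Illustrative values only.
import numpy as np

STIFFNESS_N_PER_MM = {"normal": 0.6, "tumour": 1.4}   # made-up stiffnesses

def palpation_force(tool_tip: np.ndarray, surface_point: np.ndarray,
                    surface_normal: np.ndarray, tissue: str) -> np.ndarray:
    """Return the feedback force (N) pushing the tool out of the tissue.

    Penetration is the distance the tool tip lies below the surface along
    the (unit) surface normal; the force is zero when not in contact.
    """
    penetration = float(np.dot(surface_point - tool_tip, surface_normal))
    if penetration <= 0.0:
        return np.zeros(3)
    return STIFFNESS_N_PER_MM[tissue] * penetration * surface_normal

tip = np.array([0.0, 0.0, -1.5])                 # tool tip 1.5 mm below the surface
force = palpation_force(tip, np.zeros(3), np.array([0.0, 0.0, 1.0]), "tumour")
print(force)                                      # stiffer response over tumour tissue
```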

    Study of medical image data transformation techniques and compatibility analysis for 3D printing

    Various applications exist for additive manufacturing (AM) and reverse engineering (RE) within the medical sector. One of the significant challenges identified in the literature is the accuracy of 3D printed medical models compared with their original CAD models: some studies have reported that 3D printed models are accurate, while others claim the opposite. This thesis aims to highlight the medical applications of AM and RE, to study techniques for reconstructing medical image data into a 3D-printable file format, and to analyse the deviations of a 3D printed model using RE. A case study on a human femur bone was conducted through medical imaging, 3D printing, and RE for a comparative deviation analysis. In addition, another medical application of RE, solid modelling, is presented. Segmentation was done using open-source software for trial and training purposes, while the experiment was performed with commercial software. The femur model was 3D printed on an industrial FDM printer. Three different non-contact 3D scanners were investigated for the RE process. Post-processing of the point cloud was done in the VX Elements software environment, while mesh analysis was conducted in MeshLab. Scanning performance was measured using the VX Inspect environment and MeshLab. Both relative and absolute metrics were used to determine the deviation of the scanned models from the reference mesh. The scanners' deviations ranged approximately from -0.375 mm to 0.388 mm (a range of about 0.763 mm) with an average RMS of about 0.22 mm. The results showed that the mean deviation of the 3D printed model (based on 3D scanning) has an average range of about 0.46 mm, with an average mean value of about 0.16 mm.
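    As a hedged illustration of the kind of deviation analysis reported (range, mean, and RMS of signed scan-to-reference distances), the sketch below computes those statistics from an array of signed deviations. The input values are placeholders, not the thesis measurements.

```python
# Illustrative deviation statistics for a scanned model vs. its reference mesh.
# Input: signed point-to-surface distances in mm (placeholder values below).
import numpy as np

def deviation_report(signed_dev_mm: np.ndarray) -> dict:
    return {
        "min_mm":   float(signed_dev_mm.min()),
        "max_mm":   float(signed_dev_mm.max()),
        "range_mm": float(signed_dev_mm.max() - signed_dev_mm.min()),
        "mean_mm":  float(signed_dev_mm.mean()),
        "rms_mm":   float(np.sqrt(np.mean(signed_dev_mm ** 2))),
    }

rng = np.random.default_rng(1)
deviations = rng.normal(loc=0.16, scale=0.12, size=50_000)   # fake scan deviations
print(deviation_report(deviations))
```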

    A biomechanics-based articulation model for medical applications

    Computer graphics entered the medical world especially after the arrival of 3D medical imaging. Computer graphics techniques are already integrated in diagnostic procedures through the visual three-dimensional analysis of computed tomography, magnetic resonance and even ultrasound data. The representations they provide, nevertheless, are static pictures of the patient's body, lacking functional information. We believe that the next step in computer-assisted diagnosis and surgery planning depends on the development of functional 3D models of the human body. It is in this context that we propose a biomechanics-based model of articulations. Such a model can simulate joint functionality and thereby enable a number of medical applications. It was developed with the following requirements in mind: it must be simple enough to be implemented on a computer yet realistic enough to support medical applications, and it must be visual so that applications can explore the joint in a 3D simulation environment. We therefore combine kinematic motion for the parts that can be considered rigid, such as bones, with physical simulation of the soft tissues. We also deal with the interaction between the different elements of the joint, for which we propose a specific contact management model. Our kinematic skeleton is based on anatomy; special care has been taken to include anatomical features such as axis displacements, range-of-motion control, and joint coupling. Once a 3D model of the skeleton is built, it can be driven by motion capture data or specified by a specialist, a clinician for instance. Our deformation model is an extension of the classical mass-spring system: a spherical volume is considered around each mass point, and mechanical properties of real materials can be used to parameterize the model. Viscoelasticity, anisotropy and non-linearity of the tissues are simulated. In particular, we proposed a method to configure the mass-spring matrix such that the objects behave according to a predefined Young's modulus. A contact management model is also proposed to handle the geometric interactions between the elements inside the joint. After testing several approaches, we proposed a new collision detection method which measures, in constant time, the signed distance to the closest point for each point of two potentially colliding meshes. We also proposed a collision response method which acts directly on the surface geometry, such that the physical behaviour relies on the propagation of reaction forces produced inside the tissue. Finally, we proposed a 3D model of a joint combining the three elements: anatomical skeleton motion, biomechanical soft-tissue deformation, and contact management. On top of that we built a virtual hip joint and implemented a set of medical application prototypes. These applications allow for assessment of stress distribution on the articular surfaces, range-of-motion estimation based on ligament constraints, ligament elasticity estimation from clinically measured range of motion, and pre- and post-operative evaluation of stress distribution. Although our model provides physicians with a number of useful variables for diagnosis and surgery planning, it still needs to be improved for effective clinical use: validation has been done only partially, and a global clinical validation is necessary. Patient-specific data are still difficult to obtain, especially individualized mechanical properties of tissues. The characterization of material properties in our soft-tissue model could also be improved by including control over the shear modulus.
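    The abstract states that the mass-spring matrix is configured so that the object behaves according to a predefined Young's modulus. One standard textbook relation for an axial spring standing in for a bar of cross-section A and rest length L is k = EA/L; the sketch below applies that relation and runs a single explicit Euler step, purely as a generic illustration and not the thesis' configuration method. All numerical values are assumptions.

```python
# Generic illustration: derive a spring constant from a target Young's modulus
# using k = E * A / L for one edge (bar analogy), then one explicit Euler step.
import numpy as np

def spring_stiffness(youngs_modulus_pa: float, cross_section_m2: float,
                     rest_length_m: float) -> float:
    """Axial stiffness (N/m) of a bar-like spring: k = E * A / L."""
    return youngs_modulus_pa * cross_section_m2 / rest_length_m

# Two mass points connected by one spring (all values illustrative).
E = 5.0e4          # Pa, soft-tissue-scale Young's modulus
A = 1.0e-4         # m^2, effective cross-section assigned to the edge
L0 = 0.01          # m, rest length
k = spring_stiffness(E, A, L0)

x = np.array([0.0, 0.012])   # current 1-D positions of the two masses (m)
v = np.zeros(2)
m = 0.001                    # kg per mass point
dt = 1.0e-4                  # s, simulation time step

stretch = (x[1] - x[0]) - L0           # positive when the spring is extended
f = k * stretch                         # force pulling the masses together
v += dt * np.array([f, -f]) / m         # explicit Euler velocity update
x += dt * v                             # explicit Euler position update
print(f"k = {k:.1f} N/m, new positions: {x}")
```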

    Exploring Cognitive Processes with Virtual Environments

    The scope of this thesis is the study of cognitive processes with, and within, Virtual Environments (VEs). Specifically, the presented work has two main objectives: (1) to outline a framework for situating the applications of VEs in the cognitive sciences, especially those interfacing with the medical domain; and (2) to empirically illustrate the potential of VEs for studying specific aspects of cognitive processes. For the first objective, the framework was built by proposing classifications and discussing several examples of VEs used for assessing and treating disorders of attention, memory, executive functions, visual-spatial skills, and language. Virtual Reality Exposure Therapy was briefly discussed as well, and applications to autism spectrum disorders, schizophrenia, and pain control were touched on; these applications underscore prerogatives that may extend to non-medical applications in the cognitive sciences. The second objective was pursued by studying the time course of attention. Two experiments were undertaken, both relying on dual-target paradigms that cause an attentional blink (AB). The first experiment evaluated the effect of a 7-week Tibetan Yoga training on the performance of habitual meditators in an AB paradigm using letters as distractors and single-digit numbers as targets. The results confirm the evidence that meditation improves the allocation of attentional resources, and extend this conclusion to Yoga, which also incorporates physical exercise. The second experiment compared the AB performance of adult participants using rapid serial presentations of road signs, hence less abstract stimuli, under three display conditions: as 2-D images on a computer screen, with or without a concurrent auditory distraction, and in a 3-D immersive virtual environment depicting a motorway junction. The results showed a generally weak AB, maximal in the virtual environment and minimal in the 2-D condition with the concurrent auditory distraction; no lag-1 sparing effect was observed. == This thesis concerns the study of cognitive processes in virtual environments (VEs). In particular, the presented work has two main objectives: (1) to propose a reference framework for the applications of VEs in the cognitive sciences, especially when these fall within the medical field; and (2) to empirically illustrate the potential of VEs for studying specific cognitive aspects. The framework of the first objective was built by discussing classifications and examples of VEs used to assess and treat disorders of attention, memory, executive functions, visual-spatial abilities, and language. Examples of applications of Virtual Reality Exposure Therapy, and of applications to autism spectrum disorders, schizophrenia, and analgesia, were also briefly discussed; they are indeed representative of the prerogatives of VEs that can be transferred to non-medical aspects of the cognitive sciences. For the second objective, attention was studied in the temporal domain. Two experiments were carried out, both based on a dual-task experimental paradigm inducing the attentional blink (AB) phenomenon. The first experiment evaluated the effect of seven weeks of Tibetan Yoga on the AB of a group of meditators; the rapid serial visual presentation comprised letters as distractors and single-digit numbers as targets. The results confirm that meditation reduces the AB, and extend this conclusion to Yoga, which also includes physical exercise. The second experiment compared the AB of a group of adults using rapid serial visual presentations of road signs, hence less abstract stimuli, under three conditions: 2-D images on a computer screen, with a concurrent auditory distraction either present or absent, and a 3-D presentation in an immersive VE simulating a motorway junction. The results show a mild AB, which is maximal in the VE and minimal in the 2-D condition with the auditory distraction; no lag-1 sparing effect was present.
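    The attentional blink is conventionally quantified as the accuracy of reporting the second target (T2) conditional on a correct first-target (T1) report, plotted against the lag between the two targets. The sketch below computes that conditional accuracy per lag from hypothetical trial records; it is not the analysis used in these experiments.

```python
# Illustrative AB analysis: T2|T1 report accuracy per lag from trial records.
# Each trial tuple is (lag, t1_correct, t2_correct). Values are made up.
from collections import defaultdict

trials = [
    (1, True, True), (1, True, True), (1, True, False),
    (3, True, False), (3, True, False), (3, True, True),
    (7, True, True), (7, True, True), (7, False, True),
]

hits = defaultdict(int)      # trials with both T1 and T2 reported correctly
valid = defaultdict(int)     # trials with T1 correct (the conditioning event)

for lag, t1_ok, t2_ok in trials:
    if t1_ok:
        valid[lag] += 1
        if t2_ok:
            hits[lag] += 1

for lag in sorted(valid):
    print(f"lag {lag}: T2|T1 accuracy = {hits[lag] / valid[lag]:.2f}")
```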

    Dagstuhl News January - December 2000

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Accuracy of image guided robotic assistance in cochlear implant surgery

    [no abstract]

    Advanced Applications of Rapid Prototyping Technology in Modern Engineering

    Rapid prototyping (RP) technology is widely known and appreciated for its flexible and customized manufacturing capabilities. Widely studied RP techniques include stereolithography apparatus (SLA), selective laser sintering (SLS), three-dimensional printing (3DP), fused deposition modeling (FDM), 3D plotting, solid ground curing (SGC), multiphase jet solidification (MJS), and laminated object manufacturing (LOM). Different techniques rely on different materials and/or processing principles and are therefore suited to specific applications. RP technology is no longer used only for prototype building; it has been extended to real industrial manufacturing solutions. Today, RP technology contributes to almost all engineering areas, including mechanical, materials, industrial, aerospace, electrical and, most recently, biomedical engineering. This book aims to present advanced developments of RP technologies in various engineering areas as solutions to real-world engineering problems.

    Intraoperative process monitoring using generalized surgical process models

    The surgeon in a modern operating room has access to the functions of a large number of technical devices that support the surgical work. These devices, and thus the functions they provide, are only insufficiently interconnected. This lack of interoperability concerns not only the exchange of data between devices but also the absence of central knowledge about the overall course of the surgical process. Systems are therefore needed that can process process models and thereby provide global knowledge about the process. In contrast to most processes supported by workflow management systems (WfMS) in industry, the surgical process is characterized by high variability. Meanwhile, there are many approaches to creating fine-grained, highly formalized models of the surgical process. This work examines, on the one hand, the quality of a generalized model based on patient-individual interventions with respect to its execution by a WfMS and, on the other hand, the requirements that the upstream systems must fulfil. A statement is made about the abort rate of path tracking in generalized models built from different numbers of patient-individual models. In addition, the success rate of recovering the process path in the model is determined, and the number of steps required to recover the process path is considered (a loose sketch of such path tracking follows the chapter outline below).
    Contents:
    List of Figures
    List of Tables
    1 Introduction
      1.1 Motivation
      1.2 Problems and objectives
    2 State of research
      2.1 Definitions of terms
        2.1.1 Surgical process
        2.1.2 Surgical Process Model
        2.1.3 gSPM and surgical workflow
        2.1.4 Surgical workflow management system
        2.1.5 Summary
      2.2 Workflow Management Systems
        2.2.1 Agfa HealthCare - ORBIS
        2.2.2 Siemens Clinical Solutions - Soarian
        2.2.3 Karl Storz - ORchestrion
        2.2.4 YAWL BPM
      2.3 Sensor systems
        2.3.1 Sensors according to DIN 1319
        2.3.2 Video-based sensor technology
        2.3.3 Human-based sensor technology
        2.3.4 Summary
      2.4 Process model
        2.4.1 Top-Down
        2.4.2 Bottom-Up
        2.4.3 Summary
      2.5 Methods for creating the ICCAS process model
        2.5.1 Recording of the iSPMs
        2.5.2 Creation of the gSPMs
      2.6 Summary
    3 Model-based design of workflow schemas
      3.1 Abstract
      3.2 Introduction
      3.3 Model driven design of surgical workflow schemata
        3.3.1 Recording of patient individual surgical process models
        3.3.2 Generating generalized SPM from iSPMs
        3.3.3 Transforming gSPM into workflow schemata
      3.4 Summary and Outlook
    4 Model-based validation of workflow schemas
      4.1 Abstract
      4.2 Introduction
      4.3 Methods
        4.3.1 Surgical Process Modeling
        4.3.2 Workflow Schema Generation
        4.3.3 The Surgical Workflow Management and Simulation System
        4.3.4 System Validation Study Design
      4.4 Results
      4.5 Discussion
      4.6 Conclusion
      4.7 Acknowledgments
    5 Influence of missing sensor information
      5.1 Abstract
      5.2 Introduction
      5.3 Methodology
        5.3.1 Surgical process modeling
        5.3.2 Test system
        5.3.3 System evaluation study design
      5.4 Results
      5.5 Discussion
      5.6 Conclusion
      5.7 Acknowledgments
      5.8 Conflict of interest
    6 Summary and outlook
      6.1 Summary
      6.2 Outlook
    Bibliography
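    The abstract evaluates how often path tracking in the generalized model breaks down and how reliably the path can be recovered. As a loose, hypothetical illustration (not the ICCAS system), the sketch below walks an observed activity sequence through a toy generalized process model stored as a directed graph and counts the steps at which tracking is lost.

```python
# Loose illustration: follow an observed activity sequence through a
# generalized surgical process model (directed graph of activities) and
# count where tracking is lost. Model and observations are made up.
gspm = {                       # activity -> allowed next activities
    "incision":   {"dissection"},
    "dissection": {"dissection", "resection"},
    "resection":  {"hemostasis"},
    "hemostasis": {"closure"},
    "closure":    set(),
}

observed = ["incision", "dissection", "dissection", "hemostasis", "closure"]

current = observed[0]
lost_at = []
for step, activity in enumerate(observed[1:], start=1):
    if activity in gspm.get(current, set()):
        current = activity                 # transition exists in the model
    else:
        lost_at.append(step)               # tracking lost: observation not allowed here
        current = activity                 # naive recovery: re-anchor on the observation

print("tracking lost at steps:", lost_at)  # here: step 3 (resection was skipped)
print(f"abort rate: {len(lost_at) / (len(observed) - 1):.2f}")
```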