246 research outputs found

    Complexity Reduction in Image-Based Breast Cancer Care

    The diversity of malignancies of the breast requires personalized diagnostic and therapeutic decision making in a complex situation. This thesis contributes to three clinical areas: (1) For clinical diagnostic image evaluation, computer-aided detection and diagnosis of mass and non-mass lesions in breast MRI is developed. 4D texture features characterize mass lesions. For non-mass lesions, a combined detection/characterisation method utilizes the bilateral symmetry of the breast's contrast agent uptake. (2) To improve clinical workflows, a breast MRI reading paradigm is proposed, exemplified by a breast MRI reading workstation prototype. Instead of mouse and keyboard, it is operated using multi-touch gestures. The concept is extended to mammography screening, introducing efficient navigation aids. (3) Contributions to finite element modeling of breast tissue deformations tackle two clinical problems: surgery planning and the prediction of breast deformation in an MRI biopsy device.
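
To illustrate the bilateral-symmetry idea, the sketch below mirrors a contrast-uptake volume about the patient's midline and flags voxels whose uptake differs strongly from the contralateral side. It is a minimal NumPy sketch under assumed conventions (array layout, mirror axis, threshold) and is not the detection/characterisation method developed in the thesis.

```python
# Minimal sketch of bilateral-asymmetry screening on a contrast-uptake volume.
# Assumes a NumPy array shaped (z, y, x) with the left and right breasts split
# along the last axis; threshold and layout are illustrative assumptions.
import numpy as np

def asymmetry_map(uptake: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Return a boolean mask of voxels whose uptake differs strongly
    from the mirrored position in the contralateral breast."""
    mirrored = uptake[..., ::-1]            # reflect left/right about the midline
    diff = np.abs(uptake - mirrored)        # voxel-wise uptake difference
    return diff > threshold * uptake.max()  # flag large, one-sided enhancement

# Toy example: a synthetic volume with one unilateral focus of enhancement.
volume = np.random.rand(8, 64, 128) * 0.1
volume[4, 30:34, 20:24] += 1.0
candidates = asymmetry_map(volume)
print("suspicious voxels:", int(candidates.sum()))
```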

    Interfaces for Modular Surgical Planning and Assistance Systems

    Modern surgery of the 21st century relies in many aspects on computers or, in a wider sense, digital data processing. Department administration, OR scheduling, billing, and - with increasing pervasion - patient data management are performed with the aid of so-called Surgical Information Systems (SIS) or, more generally, Hospital Information Systems (HIS). Computer Assisted Surgery (CAS) summarizes techniques which assist a surgeon in the preparation and conduct of surgical interventions. Today still predominantly based on radiology images, these techniques include the preoperative determination of an optimal surgical strategy and intraoperative systems which aim at increasing the accuracy of surgical manipulations. CAS is a relatively young field of computer science. One of the unsolved "teething troubles" of CAS is the absence of technical standards for the interconnectivity of CAS systems. Current CAS systems are usually "islands of information" with no connection to other devices within the operating room or hospital-wide information systems. Several workshop reports and individual publications point out that this situation leads to ergonomic, logistic, and economic limitations in hospital work. Perioperative processes are prolonged by the manual installation and configuration of an increasing number of technical devices. Intraoperatively, a large share of the surgeons' attention is absorbed by the requirement to monitor and operate these systems. The need for open infrastructures which enable the integration of CAS devices from different vendors, in order to exchange information as well as commands among these devices through a network, has been identified by numerous experts with backgrounds in medicine as well as engineering. This thesis contains two approaches to the integration of CAS systems: - For perioperative data exchange, the specification of new data structures as an amendment to the existing DICOM standard for radiology image management is presented. The extension of DICOM towards surgical applications allows for the seamless integration of surgical planning and reporting systems into DICOM-based Picture Archiving and Communication Systems (PACS) as they are installed in most hospitals for the exchange and long-term archival of patient images and image-related patient data. - For the integration of intraoperatively used CAS devices, such as navigation systems, video image sources, or biosensors, the concept of a surgical middleware is presented. A C++ class library, the TiCoLi, is presented which facilitates the configuration of ad-hoc networks among the modules of a distributed CAS system as well as the exchange of data streams, singular data objects, and commands between these modules. The TiCoLi is the first software library for a surgical field of application to implement all of these services. To demonstrate the suitability of the presented specifications and their implementation, two modular CAS applications are presented which utilize the proposed DICOM extensions for perioperative exchange of surgical planning data as well as the TiCoLi for establishing an intraoperative network of autonomous, yet not independent, CAS modules.
Modern high-performance surgery of the 21st century depends in many ways on computers or, in a broader sense, on digital data processing. Administrative workflows, such as scheduling the available technical, spatial and human resources, billing and, to an increasing degree, the management and archival of patient data, are carried out rationally and efficiently with the help of digital information systems. Within hospital information systems (HIS), specific information systems are often available for the particular needs of the individual departments. Surgical information systems (SIS) cover above all operation planning and materials management for specifically surgical consumables. While the aforementioned HIS and SIS primarily serve to optimize administrative tasks, the systems of computer-assisted surgery (CAS) serve the actual surgical treatment planning and therapy much more directly. CAS uses methods from robotics, digital image and signal processing, artificial intelligence and numerical simulation, to name only a few, for patient-specific treatment planning and for the intraoperative support of the OR team, above all the surgeon. In particular, advances in the spatial tracking of tools and patients ("tracking"), the availability of three-dimensional radiological images (CT, MRI, ...) and the use of various robot systems have, over the past decades, enabled the computer's much-publicized entry into the operating room. Less prominent, but by no means of lesser practical value, are examples of the automated monitoring of clinical measurements such as blood pressure or oxygen saturation. In contrast to the usually highly distributed and well-interconnected information systems for hospital administration and patient data management, today's CAS systems are mostly networked little or not at all with one another or with background data stores. A number of scientific publications and interdisciplinary workshops have addressed the problems of everyday CAS use over the past one to two decades. With increasing intensity, they have pointed to the lack of infrastructural foundations for networking intraoperatively used CAS systems with one another and with the perioperatively used planning, documentation and archival systems. The resulting negative effects on the efficiency of perioperative workflows - every device has to be put into operation manually and fed with the specific data of the next patient - as well as the growing attention that the surgeon and the surgical team must devote to monitoring and operating the individual devices are regarded as "teething troubles" of this relatively young technology and stand in the way of its adoption beyond a committed, technophile user group. The present work presents two approaches to the integration of CAS systems, to be pursued in parallel (though, with regard to interface compatibility, not entirely independently of one another).
- For perioperative data exchange, the specification of additional data structures for the transfer of surgical planning data within the DICOM standard, which is widely used in radiological image processing systems, is proposed and demonstrated with two examples. Extending the DICOM standard for perioperative use enables the seamless integration of surgical planning systems into existing Picture Archiving and Communication Systems (PACS), which in most cases are based on, or at least compatible with, the DICOM standard. This accounts for the fact that patient-specific surgical planning relies to a large extent on radiological images, and it ensures that planning results are archived long-term in accordance with the applicable regulations and protected against unauthorized access - PACS servers already provide well-proven solutions for this.
- For the integration of intraoperative CAS systems, such as navigation systems, video image sources or sensors for monitoring vital parameters, the concept of a "surgical middleware" is presented. Under the name TiCoLi, a C++ class library was developed which eases the configuration of ad-hoc networks during OR preparation through plug-and-play mechanisms. Once configured, the TiCoLi enables the exchange of continuous data streams as well as individual data packets and commands between the modules of a distributed CAS application over an Ethernet-based network. The TiCoLi is the first freely available class library that combines these functionalities specifically for use in a surgical environment. To demonstrate the suitability of the presented specifications and their implementations, two modular CAS applications are presented which use the proposed DICOM extensions for the perioperative exchange of planning results and the TiCoLi for the intraoperative exchange of measurement data under near-real-time requirements.
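
To make the perioperative exchange path concrete, the sketch below packages a made-up surgical planning result as a DICOM dataset with the pydicom library, so that a DICOM-based PACS could archive it next to the patient's images. It is a minimal illustration only: the Secondary Capture SOP class, the private tag group, the creator string and the JSON payload are assumptions for the example, not the DICOM supplement or the TiCoLi interfaces developed in the thesis.

```python
# Minimal sketch: packaging surgical planning results as a DICOM dataset so a
# DICOM-based PACS could archive them alongside the patient's images. The SOP
# class, private group, and JSON payload are illustrative assumptions only.
import json

from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

def build_planning_dataset(patient_id: str, plan: dict) -> Dataset:
    ds = Dataset()
    ds.SOPClassUID = "1.2.840.10008.5.1.4.1.1.7"   # Secondary Capture (placeholder SOP class)
    ds.SOPInstanceUID = generate_uid()
    ds.PatientID = patient_id
    ds.Modality = "OT"                              # "other"; a real supplement defines its own IOD
    ds.SeriesDescription = "Surgical planning result (illustration)"

    # Attach the planning payload in a private block; group 0x000b and the
    # creator string are assumptions made for this sketch.
    block = ds.private_block(0x000b, "EXAMPLE PLANNING", create=True)
    block.add_new(0x01, "LT", json.dumps(plan))
    return ds

plan = {"target": "lesion A", "approach": "lateral", "margin_mm": 5}
ds = build_planning_dataset("PAT-0001", plan)
print(ds)   # in practice the dataset would be completed with file meta
            # information and stored to the PACS, e.g. via ds.save_as(...)
```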

    Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology

    The volume 2 proceedings from the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology are presented. Topics discussed include intelligent computer assisted training (ICAT) system architectures, ICAT educational and medical applications, virtual environment (VE) training and assessment, human factors engineering and VE, ICAT theory and natural language processing, ICAT military applications, VE engineering applications, ICAT knowledge acquisition processes and applications, and ICAT aerospace applications.

    MIXR: A Standard Architecture for Medical Image Analysis in Augmented and Mixed Reality

    Medical image analysis is evolving into a new dimension, combining the power of AI and machine learning with real-time, real-space displays, namely Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) - known collectively as Extended Reality (XR). These devices, typically available as head-mounted displays, are enabling the move towards a complete transformation of how medical data is viewed, processed and analysed in clinical practice. There have been recent attempts to use XR devices for surgical planning and for training medics. However, the radiological front, from detection and diagnosis to prognosis, remains unexplored. In this paper we propose a standard framework, or architecture, called Medical Imaging in Extended Reality (MIXR) for building medical image analysis applications in XR. MIXR consists of several components used in the literature, tied together here for reconstructing volume data in 3D space. Our focus is on the reconstruction mechanism for CT and MRI data in XR; nevertheless, the framework we propose has applications beyond these modalities.
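
As a minimal illustration of the kind of volume reconstruction such a framework performs before CT or MRI data can be displayed in XR, the sketch below extracts an isosurface mesh from a scalar volume using scikit-image's marching cubes and writes it to an OBJ file that an XR engine could load. The synthetic volume, iso-level and export format are assumptions for the example; the actual MIXR reconstruction mechanism is the subject of the paper itself.

```python
# Minimal sketch: turn a scalar CT/MRI-like volume into a triangle mesh that an
# XR engine (Unity, Unreal, WebXR, ...) could load. The synthetic volume and
# iso-level are assumptions for illustration, not the MIXR pipeline.
import numpy as np
from skimage import measure

# Synthetic "scan": a bright blob at the centre of a 64^3 volume.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-(x**2 + y**2 + z**2) * 8)

# Extract an isosurface at half of the peak intensity.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Write a Wavefront OBJ file, a format most XR engines can import.
with open("isosurface.obj", "w") as f:
    for vx, vy, vz in verts:
        f.write(f"v {vx} {vy} {vz}\n")
    for a, b, c in faces:
        f.write(f"f {a + 1} {b + 1} {c + 1}\n")   # OBJ indices are 1-based
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
```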

    Intelligent computing applications to assist perceptual training in medical imaging

    The research presented in this thesis represents a body of work which addresses issues in medical imaging, primarily as it applies to breast cancer screening and laparoscopic surgery. The concern here is how computer-based methods can aid medical practitioners in these tasks. Thus, research is presented which develops both new techniques for analysing radiologists' performance data and new approaches for examining surgeons' visual behaviour while they undertake laparoscopic training. Initially, a new chest X-ray self-assessment application is described which has been developed to assess and improve radiologists' performance in detecting lung cancer. Then, in breast cancer screening, a method of identifying potential poor-performance outliers at an early stage in a national self-assessment scheme is demonstrated. Additionally, a method is presented to determine whether a radiologist, in using this scheme, has correctly localised and identified an abnormality or made an error. One issue in appropriately measuring radiological performance in breast screening is that both the size of the clinical monitors used and the difficulty of linking the medical image to the observer's line of sight hinder suitable eye tracking. Consequently, a new method is presented which links these two items. Laparoscopic surgeons face similar issues to radiologists in interpreting a medical display, but with the added complication of hand-eye co-ordination. Work is presented which examines whether visual search feedback on surgeons' operations can be a useful training aid.
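
As one simple illustration of how poor-performance outliers might be flagged early in a self-assessment scheme, the sketch below z-scores each reader's sensitivity against the cohort and flags readers falling far below the mean. The scores, the metric and the two-standard-deviation cut-off are assumptions for the example, not the analysis developed in the thesis.

```python
# Minimal sketch: flag potential low-performance outliers among readers in a
# self-assessment scheme by z-scoring their sensitivity. The scores and the
# 2-standard-deviation cut-off are illustrative assumptions.
import numpy as np

def flag_outliers(sensitivity: dict, z_cutoff: float = 2.0) -> list:
    names = list(sensitivity)
    scores = np.array([sensitivity[n] for n in names])
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return [n for n, zi in zip(names, z) if zi < -z_cutoff]  # only low performers

readers = {"reader_01": 0.91, "reader_02": 0.88, "reader_03": 0.90,
           "reader_04": 0.62, "reader_05": 0.89, "reader_06": 0.93}
print("flag for review:", flag_outliers(readers))
```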

    Telemedicine

    Image processing techniques for mixed reality and biometry

    2013 - 2014. This thesis work is focused on two applicative fields of image processing research which, for different reasons, have become particularly active in the last decade: Mixed Reality and Biometry. Though the image processing techniques involved in these two research areas are often different, they share the key objective of recognizing salient features typically captured through imaging devices. Enabling technologies for augmented/mixed reality have been improved and refined throughout the last years, and more recently they seem to have finally passed the demo stage and become ready for practical industrial and commercial applications. In this regard, a crucial role will likely be played by the new generation of smartphones and tablets, equipped with an arsenal of sensors, connections, and enough processing power to become the most portable and affordable AR platform ever. Within this context, techniques like gesture recognition, by means of simple, light and robust capturing hardware and advanced computer vision techniques, may play an important role in providing a natural and robust way to control software applications and to enhance on-the-field operational capabilities. The research described in this thesis is targeted toward advanced visualization and interaction strategies aimed at improving the operative range and robustness of mixed reality applications, particularly for demanding industrial environments... [edited by Author]

    Teaching Introductory Programming Concepts through a Gesture-Based Interface

    Computer programming is an integral part of a technology driven society, so there is a tremendous need to teach programming to a wider audience. One of the challenges in meeting this demand for programmers is that most traditional computer programming classes are targeted to university/college students with strong math backgrounds. To expand the computer programming workforce, we need to encourage a wider range of students to learn about programming. The goal of this research is to design and implement a gesture-driven interface to teach computer programming to young and non-traditional students. We designed our user interface based on feedback from students attending the College of Engineering summer camps at the University of Arkansas. Our system uses the Microsoft Xbox Kinect to capture the movements of new programmers as they use our system. Our software then tracks and interprets student hand movements in order to recognize specific gestures which correspond to different programming constructs, and uses this information to create and execute programs using the Google Blockly visual programming framework. We focus on various gesture recognition algorithms to interpret user data as specific gestures, including template matching, sector quantization, and supervised machine learning clustering algorithms.
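
The sketch below illustrates the template-matching approach mentioned above: a recorded hand trajectory is resampled to a fixed length, normalised for position and scale, and assigned the label of the nearest stored gesture template. The toy templates, resampling length and Euclidean distance are assumptions chosen for the example rather than the parameters of the Kinect-based system described in the abstract.

```python
# Minimal sketch of template matching for gesture recognition: resample a hand
# trajectory to a fixed length, normalise position/scale, and pick the nearest
# stored template. Templates and parameters are illustrative assumptions.
import numpy as np

def normalise(path: np.ndarray, n_points: int = 32) -> np.ndarray:
    """Resample a (k, 2) trajectory to n_points and remove position/scale."""
    t_old = np.linspace(0.0, 1.0, len(path))
    t_new = np.linspace(0.0, 1.0, n_points)
    resampled = np.column_stack([np.interp(t_new, t_old, path[:, d]) for d in (0, 1)])
    centred = resampled - resampled.mean(axis=0)
    return centred / (np.linalg.norm(centred) + 1e-9)

def classify(path: np.ndarray, templates: dict) -> str:
    """Return the label of the template closest to the observed trajectory."""
    q = normalise(path)
    return min(templates, key=lambda name: np.linalg.norm(q - normalise(templates[name])))

# Toy templates standing in for gestures mapped to programming constructs.
templates = {
    "loop": np.column_stack([np.cos(np.linspace(0, 2 * np.pi, 40)),
                             np.sin(np.linspace(0, 2 * np.pi, 40))]),
    "assign": np.column_stack([np.linspace(0, 1, 40), np.zeros(40)]),
}
observed = templates["loop"] + np.random.normal(scale=0.05, size=(40, 2))
print("recognised gesture:", classify(observed, templates))
```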