
    Comparison of segmentation software packages for in-hospital 3D print workflow

    Purpose: In-hospital three-dimensional (3D) printing of patient-specific pathologies is increasingly used in daily care. However, the efficiency of the current conversion from image to print is often hampered by limitations of segmentation software, so a comparison of clinically available tools is needed. A comparative study was conducted to compare the segmentation performance of Philips IntelliSpace Portal® (PISP), Mimics Innovation Suite (MIS), and DICOM to PRINT® (D2P). Approach: These tools were compared with respect to segmentation time and 3D mesh quality. The dataset consisted of three computed tomography (CT) scans of acetabular fractures (ACs), three CT scans of tibia plateau fractures (TPs), and three CTA scans of abdominal aortic aneurysms (AAAs). Independent-samples t-tests were performed to compare the measured segmentation times. Furthermore, 3D mesh quality was assessed and compared according to representativeness and usability for the surgeon. Results: Statistically significant differences in segmentation time were found between PISP and MIS for the segmentation of ACs and AAAs, and between PISP and D2P for segmentations of AAAs. There were no statistically significant differences in segmentation time for TPs. The accumulated mesh quality scores were highest for segmentations performed in MIS, followed by D2P. Conclusion: Based on segmentation time and mesh quality, MIS and D2P are capable of enhancing the in-hospital 3D print workflow. However, they should be integrated with the picture archiving and communication system to truly improve the workflow. In addition, these software packages are not open source and incur additional costs.
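    To illustrate the statistical comparison described above, the following sketch runs an independent-samples t-test on two hypothetical sets of segmentation times; the timings, case assignment, and resulting p-value are invented for illustration and are not the study's data.

```python
# Hypothetical segmentation times in minutes; not data from the study.
from scipy import stats

pisp_times = [42.0, 47.5, 39.0]  # e.g. three acetabular fracture cases segmented in PISP
mis_times = [21.0, 25.5, 23.0]   # the same cases segmented in MIS

# Independent-samples t-test, the test used in the study to compare packages.
t_stat, p_value = stats.ttest_ind(pisp_times, mis_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p below the chosen alpha would indicate a significant difference
```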

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most widely used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific as opposed to a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform.
The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device. Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges will be addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications were built for the different platforms to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations were required to be different between the three platforms, the raycasting functionality and features were identical. Therefore, the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4. The iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two different fMRI brain activity datasets with different voxel resolutions were used as test datasets.
Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement over previous 3D mobile volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
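The custom NIfTI reader described above was implemented in C++; purely as an illustrative sketch of what parsing the fixed 348-byte NIfTI-1 header involves, the following Python snippet extracts the fields needed to size a 4D fMRI volume (the field selection is minimal, and extensions, .nii.gz compression, and the separate .hdr/.img layout are ignored).

```python
import struct

def read_nifti1_header(path):
    """Parse a few key fields of the fixed 348-byte NIfTI-1 header."""
    with open(path, "rb") as f:
        hdr = f.read(348)
    # sizeof_hdr must equal 348; if it does not, the file is byte-swapped relative to this machine.
    endian = "<" if struct.unpack("<i", hdr[0:4])[0] == 348 else ">"
    dim = struct.unpack(endian + "8h", hdr[40:56])          # dim[0] = number of dimensions
    datatype, bitpix = struct.unpack(endian + "2h", hdr[70:74])
    pixdim = struct.unpack(endian + "8f", hdr[76:108])      # voxel spacing (and time step for 4D data)
    vox_offset = struct.unpack(endian + "f", hdr[108:112])[0]
    return {
        "shape": dim[1:1 + dim[0]],          # e.g. (x, y, z, t) for a 4D fMRI run
        "datatype": datatype,
        "bitpix": bitpix,
        "spacing": pixdim[1:1 + dim[0]],
        "vox_offset": int(vox_offset),       # byte offset where voxel data starts in a .nii file
        "magic": hdr[344:348],               # b"n+1\x00" for single-file NIfTI-1
    }

# Usage sketch: hdr = read_nifti1_header("run1.nii"); print(hdr["shape"])
```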

    A Systematic Review of Three-Dimensional Printing in Liver Disease

    The purpose of this review is to analyse current literature related to the clinical applications of 3D printed models in liver disease. A search of the literature was conducted to source studies from databases with the aim of determining the applications and feasibility of 3D printed models in liver disease. 3D printed model accuracy and costs associated with 3D printing, the ability to replicate anatomical structures and delineate important characteristics of hepatic tumours, and the potential for 3D printed liver models to guide surgical planning are analysed. Nineteen studies met the selection criteria for inclusion in the analysis. Seventeen of them were case reports and two were original studies. Quantitative assessment measuring the accuracy of 3D printed liver models was analysed in five studies with mean difference between 3D printed models and original source images ranging from 0.2 to 20%. Fifteen studies provided qualitative assessment with results showing the usefulness of 3D printed models when used as clinical tools in preoperative planning, simulation of surgical or interventional procedures, medical education, and training. The cost and time associated with 3D printed liver model production was reported in 11 studies, with costs ranging from US$13 to US$2000 and duration of production up to 100 h. This systematic review shows that 3D printed liver models demonstrate hepatic anatomy and tumours with high accuracy. The models can assist with preoperative planning and may be used in the simulation of surgical procedures for the treatment of malignant hepatic tumours.

    Segmentation of Medical Image Data and Image-Guided Intraoperative Navigation

    The development of algorithms for the automatic or semi-automatic processing of medical image data has become increasingly important in recent years. This is due, on the one hand, to steadily improving medical imaging modalities, which can depict the human body virtually in ever finer detail, and, on the other hand, to improved computer hardware, which allows data volumes that are sometimes in the gigabyte range to be processed algorithmically in a reasonable time. The goal of this habilitation thesis is the development and evaluation of algorithms for medical image processing. The thesis consists of a series of publications organized into three overarching topic areas: segmentation of medical image data using template-based algorithms; experimental evaluation of open-source segmentation methods under clinical conditions; and navigation to support intraoperative therapies. In the area of segmentation of medical image data using template-based algorithms, several graph-based algorithms in 2D and 3D were developed that construct a directed graph by means of a template. This includes an algorithm for the segmentation of vertebrae in 2D and 3D, which uses a rectangular template in 2D and a cube-shaped template in 3D to build the graph and compute the segmentation result. In addition, a graph-based segmentation of prostate glands using a spherical template is presented for the automatic determination of the boundaries between the prostate glands and the surrounding organs. Building on the template-based algorithms, an interactive segmentation algorithm that displays the segmentation result to the user in real time was designed and implemented. The algorithm uses the various templates for segmentation but requires only a single seed point from the user. In a further approach, the user can refine the segmentation interactively with additional seed points, making it possible to bring a semi-automatic segmentation to a satisfactory result even in difficult cases. In the area of evaluation of open-source segmentation methods under clinical conditions, several freely available segmentation algorithms were tested on patient data from clinical routine. This included the evaluation of the semi-automatic segmentation of brain tumours, for example pituitary adenomas and glioblastomas, with the freely available open-source platform 3D Slicer. This showed how a purely manual slice-by-slice measurement of tumour volume can be supported and accelerated in practice. Furthermore, the segmentation of language pathways in medical scans of brain tumour patients was evaluated on different platforms. In the area of navigation to support intraoperative therapies, software modules were developed to accompany intraoperative interventions in the different phases of a treatment (therapy planning, execution, and control). This includes the first integration of the OpenIGTLink network protocol into the medical prototyping platform MeVisLab, which was evaluated with an NDI navigation system.
In addition, the design and implementation of a medical software prototype to support intraoperative gynaecological brachytherapy was presented here for the first time. The software prototype also contained a module for advanced visualization in MR-guided interstitial gynaecological brachytherapy, which, among other things, enabled the registration of a gynaecological brachytherapy instrument into an intraoperative dataset of a patient. The individual modules led to the presentation of a comprehensive image-guided system for gynaecological brachytherapy in a multimodal operating room. This system covers the pre-, intra-, and postoperative treatment phases of interstitial gynaecological brachytherapy.

    Augmented Reality and Virtual Reality Platforms for Medical Training and Practice

    Thesis by compendium of publications. This thesis investigates the application of Augmented Reality and Virtual Reality technologies in medicine, both for training and for medical practice. We focus on the implementation of different software systems, and also include studies of existing systems, analysis of the results obtained, and their evaluation. Augmented Reality and Virtual Reality can be applied in many fields, but this thesis focuses on their application to medicine through the design, implementation, and study of different applications of these technologies in medical training and clinical practice, the latter oriented specifically towards advanced medical visualization of radiological images. It analyses how the interactive recording and viewing of 360° content can considerably improve student learning. Several Virtual Reality simulators were implemented with the aim of analysing how they can improve the practical training of medical students. A system for distance training using Virtual Reality was also designed, which is of great interest today, considering how the pandemic caused by COVID-19 is changing training procedures. Another objective was the design and study of an Augmented Reality application for medical training in human anatomy. Finally, the project that took up the most time in the preparation of this thesis is analysed: Nextmed, a collaboratively developed in-house project whose main objective is to change the way professionals work with medical images, exploiting the potential of Augmented Reality, Virtual Reality, Computer Vision, and Artificial Intelligence. It is important to emphasize that a large part of the work of this thesis is the implementation of the software projects indicated above, which are explained in the different publications. The INTRODUCTION section justifies the preparation of this thesis with a brief retrospective analysing needs identified by different key stakeholders regarding advanced techniques for visualizing concepts; it also states the main objectives of the thesis and explains how the document is organized. Next, the THEORETICAL FRAMEWORK analyses the main concepts that must be assimilated to understand the content of this thesis, together with a study of the state of the art and a brief historical introduction needed to put all the work carried out in context. The usual structure of a scientific article has been retained for this thesis, making it easier to read in parallel with the accompanying scientific articles, which extend or complement the information presented. With this in mind, the next chapter is MATERIALS USED AND METHODOLOGY. It should be remembered that a large part of the work carried out for this thesis is the implementation of different Augmented Reality and Virtual Reality systems, employing Computer Vision and Artificial Intelligence, created over the last four years. This chapter analyses the different software libraries and applications employed, as well as the hardware used, for the implementation of the projects.
Information on the methodology is also included, indicating how the implementation phase of the projects, very similar in all of them, was carried out, and incorporating the class diagram of each project, which shows the complexity of the source code with the different scripts designed. The RESULTS OBTAINED chapter focuses on presenting the systems designed. It also includes the results of the evaluations carried out, although, as with the other chapters, the main content is found in the articles themselves, even though the thesis chapters extend or complement that content. In the final chapters, the DISCUSSION analyses the results of the research carried out in this thesis, including the current state of these technologies in medicine and the results obtained. The FUTURE LINES OF WORK chapter takes a detailed look at how the implemented projects can be improved and how these technologies could advance in the future. Finally, the conclusions of the work carried out are presented in the CONCLUSIONS chapter. As this thesis is presented as a compendium of articles, its main content is reflected in the published scientific articles and book chapters found in the annexes. ANNEX XII includes an award obtained for one of the presentations, while ANNEX XIII contains some of the main press coverage of the work carried out for this thesis. Owing to the innovative nature of the projects implemented and the results obtained, as well as their successful evaluation and good reception by society, there have been numerous appearances in newspapers and even on radio and television. This doctoral thesis is presented as a compendium of articles in accordance with the regulations of the Universidad de Salamanca, cited in Chapter II of the Doctoral Regulations, Article 14.1, on the preparation and defence of the Doctoral Thesis.

    Ubiquitous volume rendering in the web platform

    The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that should be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that provide specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene. Therefore, content developers do not need to know about the GPU. The thesis extends previous research on web-compatible volume data structures for WebGL, ray-casting hybrid surface and volumetric rendering, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve the volume rendering component. The proposals are at an advanced stage towards acceptance by the Web3D Consortium.

    Cardiac Regeneration and Disease Modeling Using Biomaterial-Free Three-Dimensional (3D) Bioprinted Cardiac Tissue

    Biomaterial-free 3D bioprinting is a relatively new field within 3D bioprinting. In this implementation, 3D tissues are created from the fusion of 3D multicellular spheroids, without requiring biomaterials. This is in contrast to traditional 3D bioprinting, which requires biomaterials, such as a hydrogel or decellularized extracellular matrix, to carry the cells to be bioprinted. In this dissertation, we discuss the generation of biomaterial-free 3D cardiac spheroids and 3D bioprinted cardiac patches, and their mechanical, electrical, and biological functional characterization in vitro and in vivo for cardiac regeneration. Additionally, we investigated their potential use as a disease model, using congenital long QT syndrome due to an inherited calmodulinopathy as an example and utilizing CRISPR interference to shorten abnormally prolonged cardiac action potentials. At the same time, we demonstrate the use of virtual reality for the interactive visualization of cardiovascular structures and its potential use in pre-surgical planning and patient-specific precision medicine.

    Exploration of Three-dimensional Morphometrics of the Hip Joint and Reconstructive Technologies

    This dissertation is an exploration of three-dimensional (3D) anatomy using the hip joint as the model of study. Very few studies have taken advantage of 3D modeling to assess the features of commercially available software, or to assess the validity and reliability of 3D morphometrics. This dissertation compared three reconstructive software programs to survey user appreciation of how 3D anatomical reconstructive software can be utilized, and then established the advantages and limitations of 3D measurements in the hip joint. Three main studies are presented. The first is a comparison of three widely available 3D reconstructive software programs: Amira, OsiriX, and Mimics. This comparison used a decision matrix to outline which software is best suited for construction of 3D anatomical models, morphometric analysis, and building 3D visualization and learning tools. Mimics was the best-suited program for construction of 3D anatomical models and morphometric analysis. For creating a learning tool the results were less clear. OsiriX was very user-friendly; however, it had limited capabilities. Conversely, although Amira had endless potential and could create complex dynamic videos, it had a challenging interface. Based on the overall results of study one, Mimics was used in the second and third studies to quantify 3D surface morphology of the hip joint. The second study assessed the validity and reliability of a novel 3D measurement approach of the femoral head (n=45). Study two highlighted the advantages of modeling a convex shape and the advantages of quantifying the proximal femur in 3D. This measurement approach proved to be valid and reliable. The third study assessed the validity and reliability of a similar 3D measurement approach applied to the acetabulum (n=45). This study illustrated the limitations and challenges encountered when quantifying the complex geometry of the concave acetabulum. This measurement approach was reliable, yet the differences between the digital and cadaveric measurements were large and clinically significant. The hip joint is a complex joint that benefits from 3D visualization and quantification; however, challenges surrounding measuring the acetabulum remain.

    Design of a secure architecture for the exchange of biomedical information in m-Health scenarios

    The paradigm of m-Health (mobile health) advocates for the massive integration of advanced mobile communications, network and sensor technologies in healthcare applications and systems to foster the deployment of a new, user/patient-centered healthcare model enabling the empowerment of users in the management of their health (e.g. by increasing their health literacy, promoting healthy lifestyles and the prevention of diseases), better home-based healthcare delivery for elderly and chronic patients, and important savings for healthcare systems due to the reduction of hospitalizations in number and duration. It is a fact that many m-Health applications demand high availability of biomedical information from their users (for further accurate analysis, e.g. by fusion of various signals) to guarantee high quality of service, which on the other hand entails increasing the potential surfaces for attacks. Therefore, it is not surprising that security (and privacy) is commonly included among the most important barriers for the success of m-Health. As a non-functional requirement for m-Health applications, security has received less attention than other technical issues that were more pressing at earlier development stages, such as reliability, efficiency, interoperability or usability. Another fact that has contributed to delaying the enforcement of robust security policies is that guaranteeing a certain security level implies costs that can be very relevant and that span different dimensions. These include budgeting (e.g. the demand for extra hardware for user authentication), performance (e.g. lower efficiency and interoperability due to the addition of security elements) and usability (e.g. cumbersome configuration of devices and applications due to security options). Therefore, security solutions that aim to satisfy all the stakeholders in the m-Health context (users/patients, medical staff, technical staff, systems and devices manufacturers, regulators, etc.) shall be robust and, at the same time, minimize their associated costs. This Thesis details a proposal, composed of four interrelated blocks, to integrate appropriate levels of security into m-Health architectures in a cost-efficient manner. The first block designs a global scheme that provides different security and interoperability levels according to how critical the m-Health applications to be implemented are. This scheme consists of three layers tailored to the m-Health domains and their constraints, whose security countermeasures defend against the threats of their associated m-Health applications. Next, the second block addresses the security extension of those standard protocols that enable the acquisition, exchange and/or management of biomedical information, and are thus used by many m-Health applications, but do not meet the security levels described in the former scheme. These extensions are materialized for the biomedical standards ISO/IEEE 11073 PHD and SCP-ECG.
Then, the third block proposes new ways of enhancing the security of biomedical standards, which are the centerpiece of many clinical m-Health applications, by means of novel codings. Finally, the fourth block, which is parallel to the others, selects generic security methods (for user authentication and cryptographic protection) whose integration into the other blocks is optimal, and also develops novel signal-based methods (embedding and keytagging) for strengthening the security of biomedical tests. The layer-based extensions of the standards ISO/IEEE 11073 PHD and SCP-ECG can be considered robust, cost-efficient and respectful of their original features and contents. The former adds no attributes to its data information model, four new frames to the service model (and extends four with new sub-frames), and only one new sub-state to the communication model. Furthermore, a lightweight architecture consisting of a personal health device mounting a 9 MHz processor and an aggregator mounting a 1 GHz processor is enough to transmit a 3-lead electrocardiogram in real time while implementing the top security layer. The extra requirements associated with this extension are an initial configuration of the health device and the aggregator, tokens for identification/authentication of users if these devices are to be shared, and the implementation of certain IHE profiles in the aggregator to enable the integration of measurements in healthcare systems. As regards the extension of SCP-ECG, it only adds a new section with selected security elements and syntax in order to protect the rest of the file contents and provide proper role-based access control. The overhead introduced in the protected SCP-ECG is typically 2–13% of the regular file size, and the extra delays to protect a newly generated SCP-ECG file and to access it for interpretation are, respectively, 2–10% and 5% of the regular delays. As regards the signal-based security techniques developed, the embedding method is the basis for the proposal of a generic coding for tests composed of biomedical signals, periodic measurements and contextual information. This has been adjusted and evaluated with electrocardiogram- and electroencephalogram-based tests, proving the objective clinical quality of the coded tests, the capacity of the coding-access system to operate in real time (overall delays of 2 s for electrocardiograms and 3.3 s for electroencephalograms) and its high usability. Despite the embedding of security and metadata to enable m-Health services, the compression ratios obtained by this coding range from approximately 3 in real-time transmission to approximately 5 in offline operation. Complementarily, keytagging permits associating information with images (and other signals) by means of keys in a secure and non-distorting fashion, which has been used to implement security measures such as image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. The tests conducted indicate a remarkable robustness-capacity tradeoff that permits implementing all these measures simultaneously, and the compatibility of keytagging with JPEG2000 compression, maintaining this tradeoff while keeping the overall keytagging delay at only approximately 120 ms for any image size, evidencing the scalability of this technique.
As a general conclusion, it has been demonstrated and illustrated with examples that there are various, complementary and structured ways to contribute to the implementation of suitable security levels for m-Health architectures with a moderate cost in budget, performance, interoperability and usability. The m-Health landscape is evolving permanently along all its dimensions, and this Thesis aims to do so with its security. Furthermore, the lessons learned herein may offer further guidance for the elaboration of more comprehensive and updated security schemes, for the extension of other biomedical standards that place little emphasis on security or privacy, and for the improvement of the state of the art regarding signal-based protection methods and applications.
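The SCP-ECG extension summarized above adds a dedicated security section for integrity protection and role-based access control; its exact structure is defined in the Thesis, so the sketch below only illustrates, generically, the kind of cryptographic building block such a section relies on: computing and verifying an HMAC integrity tag over a record payload (the key handling, tag placement, and payload are hypothetical).

```python
import hashlib
import hmac

TAG_LEN = 32  # length in bytes of an HMAC-SHA256 tag

def protect(payload: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 integrity tag to a record payload (illustrative layout)."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify(protected: bytes, key: bytes) -> bytes:
    """Check the trailing tag and return the original payload; raise if the record was altered."""
    payload, tag = protected[:-TAG_LEN], protected[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: record was modified")
    return payload

# Usage sketch with dummy values (not a real SCP-ECG file or key-distribution scheme).
key = b"per-device-secret-key"
record = b"SCP-ECG-like payload bytes ..."
assert verify(protect(record, key), key) == record
```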

    3rd EGEE User Forum

    We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of its contents and a summary of the main conclusions reached at the Forum on that topic. The first chapter gathers all the plenary session keynote addresses, and following this there is a sequence of chapters covering the application-flavoured sessions. These are followed by chapters with the flavour of Computer Science and Grid Technology. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of Science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.