
    Hardware acceleration using FPGAs for adaptive radiotherapy

    Adaptive radiotherapy (ART) seeks to improve the accuracy of radiotherapy by adapting the treatment based on up-to-date images of the patient's anatomy captured at the time of treatment delivery. The amount of image data, combined with the clinical time requirements for ART, necessitates automatic image analysis to adapt the treatment plan. Currently, the computational effort of the image processing and plan adaptation means they cannot be completed in a clinically acceptable timeframe. This thesis investigates the use of hardware acceleration on Field Programmable Gate Arrays (FPGAs) to accelerate algorithms for segmenting bony anatomy in Computed Tomography (CT) scans, in order to reduce the plan adaptation time for ART. An assessment was made of the overhead incurred by transferring image data to an FPGA-based hardware accelerator using the industry-standard DICOM protocol over an Ethernet connection. The transfer rate was found to be likely to limit the performance of hardware accelerators for ART, highlighting the need for an alternative method of integrating hardware accelerators with existing radiotherapy equipment. A clinically validated segmentation algorithm was adapted for implementation in hardware. This was shown to process three-dimensional CT images up to 13.81 times faster than the original software implementation, and the segmentations produced by the two implementations showed strong agreement. Modifications to the hardware implementation were proposed for segmenting four-dimensional CT scans. This version processed image volumes 14.96 times faster than the original software implementation, and the segmentations produced by the two implementations showed strong agreement in most cases. A second, novel method for segmenting four-dimensional CT data was also proposed. Its hardware implementation executed 1.95 times faster than the software implementation. However, the algorithm was found to be unsuitable for the global segmentation task examined here, although it may be suitable as a refining segmentation within a larger ART algorithm.
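As a rough illustration of why the DICOM transfer rate matters, an Amdahl-style estimate shows how an unaccelerated transfer stage caps the end-to-end gain of a hardware kernel. The timings below are invented for illustration; only the 13.81x kernel speedup comes from the abstract.

```python
# Hypothetical figures for illustration; the thesis does not publish these exact timings.
def effective_speedup(compute_s, kernel_speedup, transfer_s):
    """Overall speedup when the data-transfer stage is not accelerated (Amdahl-style)."""
    accelerated_total = compute_s / kernel_speedup + transfer_s
    return (compute_s + transfer_s) / accelerated_total

# e.g. 60 s of software segmentation accelerated 13.81x, but 20 s spent
# moving the CT volume over DICOM/Ethernet:
print(round(effective_speedup(60.0, 13.81, 20.0), 2))  # -> 3.29
```

Even a 13.81x kernel collapses to a ~3x end-to-end gain under these assumed numbers, which is the motivation for a faster integration path than DICOM over Ethernet.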

    Métodos computacionais para otimização de desempenho em redes de imagem médica

    Over the last few years, medical imaging has consolidated its position as a major means of clinical diagnosis. The amount of data generated by medical imaging practice is increasing tremendously. As a result, repositories are turning into rich databanks of semi-structured data related to patients, ailments, equipment and other stakeholders involved in the medical imaging panorama. The exploration of these repositories for secondary uses of data promises to elevate the quality standards and efficiency of medical practice. However, supporting these advanced usage scenarios in traditional institutional systems raises many technical challenges that are yet to be overcome. Moreover, the reported poor performance of standard protocols has opened the door to the general usage of proprietary solutions, compromising the interoperability necessary to support these advanced scenarios. This thesis has researched, developed, and now proposes a series of computational methods and architectures intended to maximize the performance of multi-institutional medical imaging environments. The methods are intended to improve the performance of standard protocols for medical imaging content discovery and retrieval, with the main goal of increasing the acceptance of vendor-neutral solutions by improving their performance. Moreover, it intends to promote the adoption of such standard technologies in advanced scenarios that are still a mirage nowadays, such as clinical research or data analytics performed directly on top of live institutional repositories. Finally, these achievements will facilitate cooperation between healthcare institutions and researchers, resulting in an increase in healthcare quality and institutional efficiency.
    [Translated from Portuguese] The various medical imaging modalities have consolidated their dominant position as a complementary means of diagnosis. The number of procedures performed and the volume of data generated have increased significantly in recent years, putting pressure on the networks and systems that archive and distribute these studies. Imaging study repositories are rich data sources containing semi-structured data related to patients, pathologies, procedures and equipment. Exploring these repositories for research and business intelligence purposes has the potential to improve the quality standards and efficiency of clinical practice. However, these advanced scenarios are difficult to accommodate in the current reality of institutional systems and networks. The poor performance of some standard protocols used in production environments has led to the use of proprietary solutions in these application niches, limiting system interoperability and the integration of data sources. This doctoral work researched, developed and proposes a set of computational methods whose objective is to maximize the performance of current medical imaging networks in content search and retrieval services, promoting their use in environments with demanding application requirements. The proposals were instantiated on an open-source platform and are expected to help promote its widespread use as a vendor-neutral solution. The methodologies were also instantiated and validated in advanced usage scenarios. Finally, this work is expected to facilitate research in production hospital environments, thereby promoting an increase in service quality and efficiency. (Programa Doutoral em Engenharia Informática)

    Plataforma de análise de dados para o Dicoogle

    [Master's degree in Computer and Telematics Engineering; translated from Portuguese] Recent decades have been characterized by an increase in the number of imaging studies produced, which are fundamental elements in medical diagnosis and treatment. These are stored in dedicated repositories and consumed at visualization workstations using standardized communication processes. Medical imaging repositories store not only medical images but also a wide variety of metadata of great interest in clinical research scenarios and in auditing processes aimed at improving the quality of the service provided. Given the tremendous number of studies currently produced in healthcare institutions, conventional methods are inefficient for exploring these data, forcing institutions to resort to Business Intelligence platforms and applied analytics techniques. In this context, this dissertation aimed to develop a platform for exploring all the data stored in a medical imaging repository. The solution works in real time over the repositories and does not disturb the established workflows. In functional terms, it offers a set of statistical analysis and business intelligence techniques accessible to the user through a Web application, which provides an extensive visualization dashboard, with charts and reports that can be complemented with data mining components. The system also allows a multitude of queries, filters and operands to be defined through an intuitive graphical interface.
    In the last decades, the amount of medical imaging studies and associated metadata available has been rapidly increasing. These are mostly used to support medical diagnosis and treatment. Nonetheless, recent initiatives claim the usefulness of these studies to support research scenarios and to improve the business practices of medical institutions. However, their continuous production, as well as the tremendous amount of associated data, makes their analysis difficult with the conventional workflows devised up until this point. Current medical imaging repositories contain not only the images themselves but also a wide range of valuable metadata. This creates an opportunity for the development of Business Intelligence and analytics techniques applied to this Big Data scenario. The exploration of such technologies has the potential to further increase the efficiency and quality of medical practice. This thesis developed a novel automated methodology to derive knowledge from multimodal medical imaging repositories without disrupting regular medical practice. The developed methods enable the application of statistical analysis and business intelligence techniques directly on top of live institutional repositories. The resulting application is a Web-based solution that provides an extensive dashboard, including complete charting and reporting options, combined with data mining components. Furthermore, the system enables the operator to set a multitude of queries, filters and operands through an intuitive graphical interface.
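The kind of repository-wide question such a platform answers can be sketched in plain Python over indexed metadata. The records and values below are invented; the field names merely mirror common DICOM attributes.

```python
from collections import Counter

# Toy stand-in for metadata indexed from a medical imaging repository;
# every record here is invented for illustration.
studies = [
    {"Modality": "CT", "InstitutionName": "Hosp A", "StudyDate": "20150110"},
    {"Modality": "MR", "InstitutionName": "Hosp A", "StudyDate": "20150112"},
    {"Modality": "CT", "InstitutionName": "Hosp B", "StudyDate": "20150201"},
]

# A typical BI-style question: how many studies per modality?
per_modality = Counter(s["Modality"] for s in studies)
print(per_modality)  # Counter({'CT': 2, 'MR': 1})
```

A real deployment would run such aggregations against the live repository index rather than an in-memory list, but the query/filter/aggregate pattern is the same.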

    Arquiteturas federadas para integração de dados biomédicos

    [Doctorate in Computer Science] The last decades have been characterized by a continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data over heterogeneous systems. Distinct data types are currently generated, manipulated, and stored in the several institutions where patients are treated. Data sharing and integrated access to this information will allow the extraction of relevant knowledge that can lead to better diagnostics and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple and heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to data type and usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution to search and retrieve data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from disperse and heterogeneous databases. All the proposed solutions were deployed and validated in real-world use cases.
    [Translated from Portuguese] The successive adoption of information and communication technologies in healthcare has allowed an increase in the diversity and quality of the services provided but, at the same time, has generated an enormous amount of data whose scientific value remains unexplored. Sharing and integrated access to this information may enable new findings that lead to better diagnoses and better clinical treatments. This thesis proposes new models for data integration and exploration aimed at extracting biomedical knowledge from multiple data sources. The first contribution is a cloud-based architecture for sharing medical imaging services. This solution offers a simplified registration mechanism for providers and services, allowing remote access and facilitating the integration of different data sources. The second proposal is a sensor-based architecture for the integration of electronic patient records. This strategy follows a federated integration model and aims to provide a scalable solution for searching across multiple information systems. Finally, the third contribution is an open system for making patient data available in a European context. All solutions were implemented and validated in real-world scenarios.

    A formal architecture-centric and model driven approach for the engineering of science gateways

    From n-tier client/server applications, to more complex academic Grids, or even the most recent and promising industrial Clouds, the last decade has witnessed significant developments in distributed computing. In spite of this conceptual heterogeneity, Service-Oriented Architecture (SOA) seems to have emerged as the common underlying abstraction paradigm, even though different standards and technologies are applied across application domains. Suitable access to data and algorithms resident in SOAs via so-called ‘Science Gateways’ has thus become a pressing need in order to realize the benefits of distributed computing infrastructures. In an attempt to inform service-oriented systems design and development in Grid-based biomedical research infrastructures, the applicant has consolidated work from three complementary experiences in European projects, which have developed and deployed large-scale, production-quality infrastructures and, more recently, Science Gateways to support research in breast cancer, pediatric diseases and neurodegenerative pathologies respectively. In analyzing the requirements of these biomedical applications, the applicant was able to elaborate on commonly faced issues in Grid development and deployment, while proposing an adapted and extensible engineering framework. Grids implement a number of protocols, applications and standards, and attempt to virtualize and harmonize access to them.
    Most Grid implementations are therefore instantiated as superposed software layers, often resulting in a low quality of services and applications, making design and development increasingly complex and rendering classical software engineering approaches unsuitable for Grid development. The applicant proposes the application of a formal Model-Driven Engineering (MDE) approach to service-oriented development, making it possible to define Grid-based architectures and Science Gateways that satisfy quality-of-service requirements and execution-platform and distribution criteria at design time. A novel investigation is thus presented on the applicability of the resulting grid MDE (gMDE) approach to specific examples, and conclusions are drawn on the benefits of this approach and its possible application to other areas, in particular Distributed Computing Infrastructure (DCI) interoperability, Science Gateways and Cloud architecture development.

    Repositório de registos electrónicos de saúde baseado em OpenEHR

    [Master's degree in Computer and Telematics Engineering] An Electronic Health Record (EHR) aggregates all relevant medical information regarding a single patient, allowing a patient-centric storage approach. This way, the complete medical history of a patient is stored together in one record, making it possible to save time and work by allowing the sharing of information between healthcare institutions. To make this sharing possible, the format in which the information is saved has to be agreed upon. There are many standards that define the way health information is stored, exchanged and retrieved. One of these standards is the Open Electronic Health Record (OpenEHR). The goal of this thesis is to create a repository that allows patient records following the OpenEHR standard to be stored and managed. The result of the implementation consists of three software components: an Extensible Markup Language (XML) repository to store health information, a set of services to manage and query the stored information, and a web interface to demonstrate the implemented functionalities.
    [Translated from Portuguese] An electronic health record aggregates all of a patient's relevant medical information, enabling a patient-oriented storage philosophy. In this way, the patient's entire medical history is stored in a single record, allowing the cost and time spent on different tasks to be optimized through the sharing of information between medical institutions. To enable this sharing, a common format in which the information is stored must be defined. To this end, several standards have been defined that dictate the rules for storing, exchanging and retrieving medical information. One of these standards is the Open Electronic Health Record (OpenEHR). The goal of this dissertation is to create a repository for storing medical records that follow the OpenEHR standard. The implementation gives rise to three software components: an Extensible Markup Language (XML) database for storing medical records, a set of services for managing and searching the stored information, and a web interface for demonstrating the implemented functionalities.

    A Learning Health System for Radiation Oncology

    The proposed research aims to address the challenges faced by clinical data science researchers in radiation oncology in accessing, integrating, and analyzing heterogeneous data from various sources. The research presents a scalable intelligent infrastructure, called the Health Information Gateway and Exchange (HINGE), which captures and structures data from multiple sources into a knowledge base with semantically interlinked entities. This infrastructure enables researchers to mine novel associations and gather relevant knowledge for personalized clinical outcomes. The dissertation discusses the design framework and implementation of HINGE, which abstracts structured data from treatment planning systems, treatment management systems, and electronic health records. It utilizes disease-specific smart templates for capturing clinical information in a discrete manner. HINGE performs data extraction, aggregation, and quality and outcome assessment functions automatically, connecting seamlessly with the local IT/medical infrastructure. Furthermore, the research presents a knowledge graph-based approach to map radiotherapy data to an ontology-based data repository using FAIR (Findable, Accessible, Interoperable, Reusable) concepts. This approach ensures that the data is easily discoverable and accessible to clinical decision support systems. The dissertation explores the ETL (Extract, Transform, Load) process, data model frameworks and ontologies, and provides a real-world clinical use case for this data mapping. To improve the efficiency of retrieving information from large clinical datasets, a search engine based on ontology-based keyword searching and synonym-based term matching was developed. The hierarchical nature of ontologies is leveraged to retrieve patient records based on parent and child classes.
    Additionally, patient similarity analysis is conducted using vector embedding models (Word2Vec, Doc2Vec, GloVe, and FastText) to identify similar patients based on different text-corpus creation methods, and results from the analysis using these models are presented. The implementation of a learning health system for predicting radiation pneumonitis following stereotactic body radiotherapy is also discussed. 3D convolutional neural networks (CNNs) are utilized with radiographic and dosimetric datasets to predict the likelihood of radiation pneumonitis. DenseNet-121 and ResNet-50 models are employed for this study, along with integrated-gradient techniques to identify salient regions within the input 3D image dataset. The predictive performance of the 3D CNN models is evaluated based on clinical outcomes. Overall, the proposed Learning Health System (LHS) provides a comprehensive solution for capturing, integrating, and analyzing heterogeneous data in a knowledge base. It offers researchers the ability to extract valuable insights and associations from diverse sources, ultimately leading to improved clinical outcomes. This work can serve as a model for implementing an LHS in other medical specialties, advancing personalized and data-driven medicine.
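The patient-similarity step can be illustrated with a minimal cosine-similarity sketch over embedding vectors. The vectors below are toy stand-ins, not output of the Word2Vec/Doc2Vec models used in the dissertation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for Doc2Vec-style patient embeddings.
patient_a = [0.9, 0.1, 0.0]
patient_b = [0.8, 0.2, 0.1]
patient_c = [0.0, 0.1, 0.9]

# Rank candidates by similarity to patient_a; b should come out ahead of c.
print(cosine(patient_a, patient_b) > cosine(patient_a, patient_c))  # True
```

In practice the embeddings would come from a model trained on the clinical text corpus, and the nearest neighbours under this metric are the "similar patients" returned to the researcher.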

    Sharing and viewing segments of electronic patient records service (SVSEPRS) using multidimensional database model

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The focus on healthcare information technology has never been greater than it is today. This awareness arises from efforts to achieve the fullest utilization of the Electronic Health Record (EHR). Due to the greater mobility of the population, an EHR will be constructed and continuously updated from the contributions of one or many EPRs created and stored at different healthcare locations such as acute hospitals, community services, mental health and social services. The challenge is to provide healthcare professionals, remotely and across heterogeneous interoperable systems, with a complete view of the selected relevant and vital EPR fragments of each patient during their care. Obtaining extensive EPRs at the point of delivery, together with the ability to search for and view vital, valuable, accurate and relevant EPR fragments, can still be challenging. There is a need to reduce redundancy, enhance the quality of medical decision making, and decrease the time needed to navigate through a very high number of EPRs, which consequently improves workflow and eases the extra work required of clinicians. These demands were addressed by introducing a system model named SVSEPRS (Searching and Viewing Segments of Electronic Patient Records Service) to enable healthcare providers to supply high-quality, more efficient services and avoid redundant clinical diagnostic tests. Inappropriate medical decision-making processes should also be avoided by allowing all patients' previous clinical tests and healthcare information to be shared between various healthcare organizations. The multidimensional data model, which lies at the core of On-Line Analytical Processing (OLAP) systems, can handle this duplication of healthcare services.
    This is done by allowing quick search of, and access to, vital and relevant fragments from scattered EPRs to build a more comprehensive picture and promote advances in the diagnosis and treatment of illnesses. SVSEPRS is a web-based system model that helps participants search for and view virtual EPR segments using a well-structured Centralised Multidimensional Search Mapping (CMDSM). This defines quantitative values (measures) and descriptive categories (dimensions), and allows clinicians to slice and dice or drill down to more detailed levels, or roll up to higher levels, to reach the required fragment.
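The slice-and-dice and roll-up operations described above can be sketched over a toy fact table. The dimensions, measure, and every record below are invented for illustration; a real OLAP engine operates on a cube built from the EPR repositories.

```python
from collections import defaultdict

# Toy fact table: (institution, specialty, year, test_count).
# All records are invented for illustration.
facts = [
    ("Hosp A", "Cardiology", 2010, 120),
    ("Hosp A", "Oncology",   2010,  80),
    ("Hosp B", "Cardiology", 2011,  60),
]

def roll_up(rows, dims):
    """Aggregate the measure over the dimension positions named in `dims`."""
    totals = defaultdict(int)
    for *coords, measure in rows:
        key = tuple(coords[d] for d in dims)
        totals[key] += measure
    return dict(totals)

# 'Slice' on year 2010, then roll up to the institution level:
slice_2010 = [f for f in facts if f[2] == 2010]
print(roll_up(slice_2010, dims=[0]))  # {('Hosp A',): 200}
```

Drilling down is the inverse move: adding a dimension index to `dims` splits each total into finer-grained cells.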

    Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust

    Due to the severity and speed of spread of the ongoing Covid-19 pandemic, fast but accurate diagnosis of Covid-19 patients has become a crucial task. Achievements in this respect may inform future efforts to contain other possible pandemics. Researchers from various fields have been trying to provide novel ideas for models or systems to identify Covid-19 patients from different medical and non-medical data. AI researchers have also been contributing to this area, mostly by proposing automated detection and diagnosis systems based on convolutional neural networks (CNNs) and deep neural networks (DNNs). Owing to the efficiency of deep learning (DL) and transfer learning (TL) models in classification and segmentation tasks, most recent AI-based research has proposed various DL and TL models for Covid-19 detection and infected-region segmentation from chest medical images such as X-ray or CT images. This paper describes a web-based application framework for Covid-19 lung infection detection and segmentation. The proposed framework is characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated and tested using CT images of Covid-19 patients collected from two different sources. The web application provides a simple, user-friendly interface for processing CT images from various sources using the chosen models, thresholds and other parameters to generate decisions on detection and segmentation. The models achieve high performance scores for Dice similarity, Jaccard similarity, accuracy, loss, and precision. The U-Net model outperformed the other models with more than 98% accuracy.
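The Dice and Jaccard scores reported above are standard overlap measures between a predicted and a reference segmentation mask. A minimal sketch on flattened binary masks (the masks below are toy examples, not Covid-19 data):

```python
def dice_jaccard(pred, truth):
    """Dice and Jaccard overlap between two binary masks (flat 0/1 sequences)."""
    inter = sum(p and t for p, t in zip(pred, truth))   # |P ∩ T|
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)                  # 2|P∩T| / (|P|+|T|)
    jaccard = inter / (p_sum + t_sum - inter)           # |P∩T| / |P∪T|
    return dice, jaccard

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice_jaccard(pred, truth))  # dice = 4/6 ≈ 0.667, jaccard = 2/4 = 0.5
```

For 2D or 3D masks the arrays are flattened first; Dice is always at least as large as Jaccard, which is why segmentation papers typically report both.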