    Concept Graph Neural Networks for Surgical Video Understanding

    We constantly integrate our knowledge and understanding of the world to enhance our interpretation of what we see. This ability is crucial in application domains that entail reasoning about multiple entities and concepts, such as AI-augmented surgery. In this paper, we propose a novel way of integrating conceptual knowledge into temporal analysis tasks via temporal concept graph networks. In the proposed networks, a global knowledge graph is incorporated into the temporal analysis of surgical instances, learning the meaning of concepts and relations as they apply to the data. We demonstrate our results on surgical video data for tasks such as verification of the critical view of safety and estimation of the Parkland grading scale. The results show that our method improves recognition and detection on complex benchmarks and enables other analytic applications of interest.
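The core idea of grounding per-frame visual evidence in a concept knowledge graph can be sketched as below. Everything here is illustrative: the tiny graph, the random weights, and the readout are placeholders standing in for the paper's learned components, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_concepts, d, feat_dim, n_frames = 4, 8, 16, 6

# Toy directed concept relations (e.g. "cystic duct" -> "critical view of
# safety"), with self-loops added and rows normalised for message passing.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], float)
A_hat = A + np.eye(n_concepts)
A_hat = A_hat / A_hat.sum(1, keepdims=True)

E = rng.normal(size=(n_concepts, d))          # concept embeddings (random stand-ins)
W_in = rng.normal(size=(feat_dim, d)) * 0.1   # grounds frame features in concept space
W_msg = rng.normal(size=(d, d)) * 0.1         # message transform along relations
w_out = rng.normal(size=d)                    # readout for a binary frame-level task

def frame_logit(frame_feat):
    h = E + frame_feat @ W_in        # inject frame evidence into every concept node
    h = np.tanh(A_hat @ h @ W_msg)   # one round of message passing over the graph
    return float(h.mean(0) @ w_out)  # pool concept states -> frame-level score

video = rng.normal(size=(n_frames, feat_dim))  # stand-in for per-frame CNN features
probs = 1 / (1 + np.exp(-np.array([frame_logit(f) for f in video])))
print(probs.shape)  # per-frame probabilities, shape (6,)
```

In a trained model the embeddings and weight matrices would be learned jointly with the temporal backbone; this sketch only shows how graph structure enters the per-frame computation.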

    Automatic grading of ocular hyperaemia using image processing techniques

    The human eye is affected by a number of high-prevalence pathologies, such as Dry Eye Syndrome or allergic conjunctivitis. One symptom these health problems have in common is the occurrence of hyperaemia in the bulbar conjunctiva as a consequence of blood vessels getting clogged. Blood is trapped in the affected area and visible signs appear, such as an increase in the redness of the area. This work proposes an automatic methodology for bulbar hyperaemia grading based on image processing and machine learning techniques. The methodology receives a video as input, chooses the best frame of the sequence, isolates the conjunctiva, computes several image features and, finally, transforms these features to the ranges that optometrists use to evaluate the parameter. Moreover, several tests have been conducted to analyse how the methodology reacts to unfavourable situations, the goal being to cover common issues that assisted-diagnosis methodologies face in real-world environments. The proposed methodology achieves a significant reduction in the time that specialists have to invest in the evaluation, with a direct impact on reaching a fast diagnosis. Moreover, it removes the inherent subjectivity of the manual process and ensures its repeatability. As a consequence, experts can gain insight into the parameters that influence hyperaemia evaluation.
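The four-step pipeline (best-frame selection, conjunctiva isolation, feature computation, mapping to a clinical range) can be outlined as follows. The frame-quality measure, the crude red-dominance mask, the features, and the feature-to-grade weights are all hypothetical placeholders, not the thesis's trained components.

```python
import numpy as np

def sharpness(frame):
    """Variance of a simple discrete Laplacian as a frame-quality score."""
    g = frame.mean(axis=2)  # grey-level image
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return lap.var()

def grade_hyperaemia(video, lo=0.0, hi=4.0):
    """video: (n_frames, H, W, 3) RGB array in [0, 1]; returns a grade in [lo, hi]."""
    best = max(video, key=sharpness)    # 1. choose the best frame of the sequence
    mask = best[..., 0] > best[..., 2]  # 2. placeholder mask: red-dominant pixels
    region = best[mask] if mask.any() else best.reshape(-1, 3)
    redness = region[:, 0].mean() - region[:, 1:].mean()  # 3. redness feature
    vessel_density = mask.mean()                          #    area-based feature
    score = 0.7 * redness + 0.3 * vessel_density          # 4. hypothetical mapping
    return float(np.clip(lo + score * (hi - lo), lo, hi)) #    clipped to the scale

rng = np.random.default_rng(1)
video = rng.random((5, 32, 32, 3))  # stand-in for a recorded eye video
grade = grade_hyperaemia(video)
print(0.0 <= grade <= 4.0)  # True
```

In the actual methodology the conjunctiva segmentation and the feature-to-grade transformation are learned from annotated data; the fixed weights here only make the data flow concrete.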

    Physiology-guided treatment of complex coronary artery disease


    Machine Learning/Deep Learning in Medical Image Processing

    Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL). This special issue, “Machine Learning/Deep Learning in Medical Image Processing”, was launched to give researchers in medical image processing an opportunity to highlight recent developments made in their fields with ML/DL. Seven excellent papers covering a wide variety of medical and clinical aspects have been selected for this special issue.

    Integrated Study of Liver Fibrosis: Modeling and Clinical Detection

    The liver is a vital organ that carries out over 500 essential tasks, including fat metabolism, blood filtering, bile production, and some protein production. Although the structure of the liver and the role of each type of cell in the liver are well known, the biomedical and mechanical interplays within liver tissues remain unclear. Chronic liver diseases are a significant public health challenge. All chronic liver diseases lead to liver fibrosis due to excessive fiber accumulation, resulting in cirrhosis and loss of liver function. Only early-stage liver fibrosis is reversible; however, it is difficult to diagnose. How the progression of fibrosis changes the mechanical properties of the liver tissue and alters the dynamics of blood flow is still not well understood. The objective of this dissertation is to integrate the understanding of liver diseases and mechanical modeling to develop several models relating liver fibrosis to blood flow. In collaboration with clinicians specialized in hepatic fibrosis, we integrated computational modeling and clinicopathologic image analysis and proposed a new technology for early-stage fibrosis detection. The key results of this research include: (1) a mathematical model of liver fibrosis progression connecting the cellular and molecular mechanisms of fibrosis to tissue rigidity; (2) a novel machine learning-based algorithm to automatically stage liver fibrosis based on pathology images; (3) a physics model illustrating how liver stiffness affects the blood flow pattern, predicting a direct relationship between fibrosis stage and ultrasound Doppler measurement of liver blood flow; (4) statistical analysis of clinical ultrasound Doppler data from fibrosis patients confirming our model prediction. These results lead to a novel noninvasive technology for detecting early stages of liver fibrosis with high accuracy.
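Result (2), staging fibrosis from pathology images, reduces at its simplest to mapping an image-derived feature onto discrete stages. A minimal sketch: here the single feature is the collagen proportionate area (fraction of pixels segmented as fibrous collagen), and the stage thresholds are illustrative placeholders, not the dissertation's trained model.

```python
import numpy as np

def collagen_proportionate_area(stain_mask):
    """Fraction of pixels flagged as fibrous collagen in a segmented image."""
    return stain_mask.mean()

# Hypothetical CPA thresholds separating METAVIR-like stages F0..F4.
STAGE_EDGES = np.array([0.03, 0.07, 0.15, 0.30])

def stage_fibrosis(stain_mask):
    """Map a binary collagen mask to an integer stage 0..4."""
    cpa = collagen_proportionate_area(stain_mask)
    return int(np.searchsorted(STAGE_EDGES, cpa))

# Simulated segmentation mask with roughly 10% collagen coverage.
rng = np.random.default_rng(2)
mask = rng.random((64, 64)) < 0.10
print(stage_fibrosis(mask))
```

The actual algorithm learns the segmentation and the feature-to-stage mapping from annotated slides; a threshold rule like this only makes the input/output contract of the staging step concrete.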

    Clinical quantitative cardiac imaging for the assessment of myocardial ischaemia

    Cardiac imaging has a pivotal role in the prevention, diagnosis and treatment of ischaemic heart disease. SPECT is most commonly used for clinical myocardial perfusion imaging, whereas PET is the clinical reference standard for the quantification of myocardial perfusion. MRI does not involve exposure to ionizing radiation, similar to echocardiography, which can be performed at the bedside. CT perfusion imaging is not frequently used, but CT offers coronary angiography data, and invasive catheter-based methods can measure coronary flow and pressure. Technical improvements to the quantification of pathophysiological parameters of myocardial ischaemia can be achieved. Clinical consensus recommendations on the appropriateness of each technique were derived following a European quantitative cardiac imaging meeting and using a real-time Delphi process. SPECT using new detectors allows the quantification of myocardial blood flow and is now also suited to patients with a high BMI. PET is well suited to patients with multivessel disease to confirm or exclude balanced ischaemia. MRI allows the evaluation of patients with complex disease who would benefit from imaging of function and fibrosis in addition to perfusion. Echocardiography remains the preferred technique for assessing ischaemia in bedside situations, whereas CT has the greatest value for combined quantification of stenosis and characterization of atherosclerosis in relation to myocardial ischaemia. In patients with a high probability of needing invasive treatment, invasive coronary flow and pressure measurement is well suited to guide treatment decisions. In this Consensus Statement, we summarize the strengths and weaknesses as well as the future technological potential of each imaging modality.

    Non-communicable Diseases, Big Data and Artificial Intelligence

    This reprint includes 15 articles in the field of non-communicable diseases, big data, and artificial intelligence, reviewing the most recent advances in AI and their application potential in 3P medicine.

    What scans we will read: imaging instrumentation trends in clinical oncology

    Oncological diseases account for a significant portion of the burden on public healthcare systems, with associated costs driven primarily by complex and long-lasting therapies. Through the visualization of patient-specific morphology and functional-molecular pathways, cancerous tissue can be detected and characterized non-invasively, so as to provide referring oncologists with essential information to support therapy management decisions. Following the onset of stand-alone anatomical and functional imaging, we witness a push towards integrating molecular image information through various methods, including anato-metabolic imaging (e.g., PET/CT), advanced MRI, optical or ultrasound imaging. This perspective paper highlights a number of key technological and methodological advances in imaging instrumentation related to anatomical, functional, molecular medicine and hybrid imaging, understood here as the hardware-based combination of complementary anatomical and molecular imaging. These include novel detector technologies for ionizing radiation used in CT and nuclear medicine imaging, and novel system developments in MRI and optical as well as opto-acoustic imaging. We also highlight new data processing methods for improved non-invasive tissue characterization. Following a general introduction to the role of imaging in oncology patient management, we introduce imaging methods with well-defined clinical applications and potential for clinical translation. For each modality, we report first on the status quo and then point to perceived technological and methodological advances in a subsequent "status go" section. Considering the breadth and dynamics of these developments, this perspective ends with a critical reflection on where the authors, the majority of them imaging experts with a background in physics and engineering, believe imaging methods will be a few years from now.
Overall, methodological and technological medical imaging advances are geared towards increased image contrast, the derivation of reproducible quantitative parameters, an increase in volume sensitivity and a reduction in overall examination time. To ensure full translation to the clinic, this progress in technologies and instrumentation must be complemented by progress in relevant acquisition and image-processing protocols and improved data analysis. To this end, we should accept diagnostic images as “data”, and, through the wider adoption of advanced analysis, including machine learning approaches and a “big data” concept, move to the next stage of non-invasive tumor phenotyping. The scans we will be reading 10 years from now will likely be composed of highly diverse multi-dimensional data from multiple sources, which mandate the use of advanced and interactive visualization and analysis platforms powered by Artificial Intelligence (AI) for real-time data handling by cross-specialty clinical experts with a domain knowledge that will need to go beyond that of plain imaging.