
    Myocardial Infarction Quantification From Late Gadolinium Enhancement MRI Using Top-hat Transforms and Neural Networks

    Significance: Late gadolinium enhanced magnetic resonance imaging (LGE-MRI) is the gold-standard technique for myocardial viability assessment. Although the technique accurately reflects the damaged tissue, there is no clinical standard for quantifying myocardial infarction (MI), leaving most algorithms expert-dependent. Objectives and Methods: In this work, a new automatic method for MI quantification from LGE-MRI is proposed. Our novel segmentation approach is devised to accurately detect not only hyper-enhanced lesions but also microvascular-obstructed areas. Moreover, it includes a myocardial disease detection step that extends the algorithm to handle healthy scans. The method is based on a cascade approach: first, diseased slices are identified by a convolutional neural network (CNN); second, a fast coarse scar segmentation is obtained by means of morphological operations; third, the segmentation is refined by a boundary-voxel reclassification strategy using an ensemble of CNNs. For validation, reproducibility assessment and comparison against other methods, we tested the method on a large multi-field, expert-annotated LGE-MRI database including healthy and diseased cases. Results and Conclusion: In an exhaustive comparison against nine reference algorithms, the proposed method achieved state-of-the-art segmentation performance and was the only method whose volumetric scar quantification agreed with the expert delineations. Moreover, the method was able to reproduce the intra- and inter-observer variability ranges. It is concluded that the method could suitably be transferred to clinical scenarios.
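
    The exact pipeline is not given in the abstract; as a rough sketch of the coarse segmentation step, the following assumes scikit-image and shows how a white top-hat transform can flag hyper-enhanced voxels inside a given myocardium mask (the structuring-element radius and threshold rule are illustrative assumptions, not published values):

        # Minimal sketch, not the authors' code: coarse scar candidates via a
        # white top-hat transform on a 2D LGE slice (float array `image`,
        # binary array `myo_mask`).
        import numpy as np
        from skimage.morphology import disk, white_tophat

        def coarse_scar_mask(image, myo_mask, radius=5, k=1.0):
            """Flag hyper-enhanced voxels inside the myocardium.

            The white top-hat keeps bright structures smaller than the
            structuring element; thresholding its response against the
            in-mask statistics gives a fast, coarse candidate segmentation.
            """
            tophat = white_tophat(image, disk(radius))
            vals = tophat[myo_mask > 0]
            thresh = vals.mean() + k * vals.std()  # simple adaptive cutoff
            return (tophat > thresh) & (myo_mask > 0)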

    Artificial intelligence and cardiovascular magnetic resonance imaging in myocardial infarction patients.

    Cardiovascular magnetic resonance (CMR) is an important cardiac imaging tool for assessing the prognostic extent of myocardial injury after myocardial infarction (MI). Within the context of clinical trials, CMR is also useful for assessing the efficacy of potential cardioprotective therapies in reducing MI size and preventing adverse left ventricular (LV) remodelling in reperfused MI. However, manual contouring and analysis can be time-consuming, with interobserver and intraobserver variability that reduces the accuracy and precision of analysis. There is thus a need to automate CMR scan analysis in MI patients to save time and to increase accuracy, reproducibility and precision. In this regard, automated image analysis techniques based on artificial intelligence (AI) and developed with machine learning (ML), and more specifically deep learning (DL), strategies can yield efficient, robust, accurate and clinician-friendly tools that improve both clinician productivity and the quality of patient care. In this review, we discuss basic concepts of ML in CMR, important prognostic CMR imaging biomarkers in MI and the utility of current ML applications in their analysis as assessed in research studies. We highlight potential barriers to the mainstream implementation of these automated strategies and discuss related governance and quality-control issues. Lastly, we discuss the future role of ML applications in clinical trials and the need for global collaboration in growing this field.

    Diagnostic utility of artificial intelligence for left ventricular scar identification using cardiac magnetic resonance imaging—A systematic review

    BACKGROUND: Accurate, rapid quantification of ventricular scar using cardiac magnetic resonance imaging (CMR) is important in arrhythmia management and patient prognosis. Artificial intelligence (AI) has been applied to other radiological challenges with success. OBJECTIVE: We aimed to assess the AI methodologies used for left ventricular scar identification in CMR, the imaging sequences used for training, and their diagnostic evaluation. METHODS: Following PRISMA recommendations, a systematic search of PubMed, Embase, Web of Science, CINAHL, OpenDissertations, arXiv, and IEEE Xplore was undertaken to June 2021 for full-text publications assessing left ventricular scar identification algorithms. No pre-registration was undertaken. Random-effects meta-analysis was performed to assess the Dice coefficient (DSC) overlap of learning versus predefined thresholding methods. RESULTS: Thirty-five articles were included for final review. Supervised and unsupervised learning models had DSC similar to predefined threshold models (0.616 vs 0.633, P = .14) but higher sensitivity, specificity, and accuracy. Meta-analysis of 4 studies revealed a standardized mean difference of 1.11 (95% confidence interval -0.16 to 2.38; P = .09; I² = 98%) favoring learning methods. CONCLUSION: The feasibility of applying AI to the task of scar detection in CMR has been demonstrated, but model evaluation remains heterogeneous. Progression toward clinical application requires detailed, transparent, standardized model comparison and increased model generalizability.
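
    For reference, the Dice similarity coefficient pooled above measures voxel-wise overlap between two segmentations; a minimal implementation for binary numpy masks (illustrative, not taken from any of the reviewed studies) might look like this:

        # DSC = 2|A ∩ B| / (|A| + |B|): 1.0 is perfect overlap, 0.0 is none.
        import numpy as np

        def dice(pred, ref):
            pred, ref = pred.astype(bool), ref.astype(bool)
            denom = pred.sum() + ref.sum()
            # convention: two empty masks count as perfect agreement
            return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0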

    Effect of Collateral Flow on Catheter-Based Assessment of Cardiac Microvascular Obstruction.

    Cardiac microvascular obstruction (MVO) associated with acute myocardial infarction (heart attack) is characterized by partial or complete elimination of perfusion in the myocardial microcirculation. A new catheter-based method (CoFI, Controlled Flow Infusion) has recently been developed to diagnose MVO in the catheterization laboratory during acute therapy of the heart attack. A porcine MVO model has demonstrated that CoFI can accurately identify the increased hydraulic resistance of the affected microvascular bed. A benchtop microcirculation model was developed and tuned to reproduce in vivo MVO characteristics. The tuned benchtop model was then used to systematically study the effect of different levels of collateral flow. These experiments showed that the catheter-based measurements were adversely affected, such that collateral flow may be misinterpreted as MVO. Based on further analysis of the measured data, concepts to mitigate these adverse effects were formulated which allow discrimination between collateral flow and MVO.
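
    The abstract does not spell out the measurement model; a minimal sketch of the usual hydraulic (Ohm's-law) analogy, with assumed variable names, shows how unaccounted collateral inflow can inflate the apparent resistance computed from the infused flow alone and thus mimic MVO:

        # Hedged sketch, not the CoFI algorithm itself: distal pressure is set
        # by the total flow through the bed, but resistance is computed from
        # the infused flow alone, so collateral inflow inflates the estimate.
        def apparent_resistance(r_true, q_infused, q_collateral):
            """R_apparent = P_distal / Q_infused, with
            P_distal = R_true * (Q_infused + Q_collateral)."""
            p_distal = r_true * (q_infused + q_collateral)
            return p_distal / q_infused

        # Example (arbitrary units): a bed with true resistance 1.0 and
        # collateral inflow equal to half the infused flow reads 50% too high.
        print(apparent_resistance(1.0, q_infused=10.0, q_collateral=5.0))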

    Challenges and Opportunities of End-to-End Learning in Medical Image Classification

    The paradigm of end-to-end learning has revolutionized image recognition in recent years, but clinical adoption lags behind. Image-based computer-aided diagnosis systems are still largely built on highly engineered, domain-specific pipelines composed of independent rule-based models that mirror the subtasks of image classification: localization of salient regions, feature extraction and decision making. The promise of superior decision making in end-to-end learning arises from removing domain-specific constraints of limited complexity and instead optimizing all system components simultaneously, directly on the raw data, and with respect to the ultimate task. The reasons why these advantages have not yet found their way into the clinic, i.e. the challenges in developing deep-learning-based diagnosis systems, are manifold. The fact that the generalization ability of learning algorithms depends on how well the available training data reflect the true underlying data distribution proves to be a profound problem in medical applications: annotated datasets in this domain are notoriously small, since annotation requires costly expert assessment and the pooling of smaller datasets is often hindered by data-protection regulations and patient rights. Moreover, medical datasets exhibit drastically different characteristics with respect to imaging modalities, imaging protocols or anisotropies, and the often ambiguous evidence in medical images can propagate into inconsistent or erroneous training annotations. While the shift of data distributions between the research environment and reality leads to reduced model robustness and is therefore currently regarded as the main obstacle to the clinical application of learning algorithms, this gap is often widened further by confounding factors such as hardware limitations or the granularity of the given annotations, which lead to discrepancies between the modeled task and the underlying clinical question.

    This thesis investigates the potential of end-to-end learning in clinical diagnosis systems and contributes to some of the key challenges that currently prevent broad clinical adoption. First, the last part of the classification pipeline is examined: the categorization into clinical pathologies. We demonstrate how replacing the current clinical standard of rule-based decisions with large-scale feature extraction followed by learning-based classifiers significantly improves breast cancer classification in MRI and achieves human-level performance; this approach is further demonstrated on cardiac diagnosis. Second, following the end-to-end learning paradigm, we replace the biophysical model used for image normalization in MRI, as well as the extraction of hand-crafted features, with a dedicated CNN architecture, and provide an in-depth analysis that reveals the hidden potential of learned image normalization and a complementary value of the learned features over the hand-crafted ones. While this approach operates on annotated regions and therefore relies on manual annotation, in the third part we incorporate the task of localizing these regions into the learning process, enabling true end-to-end diagnosis from the raw images. In doing so, we identify a largely neglected tension between evaluating models at clinically relevant scales on the one hand, and optimizing for efficient training under data scarcity on the other. We present a deep learning model that helps resolve this trade-off, provide extensive experiments on three medical datasets as well as a series of toy experiments that examine the behavior under limited training data in detail, and publish a comprehensive framework that includes, among other things, the first 3D implementations of common object detection models. We identify further leverage points in existing end-to-end learning systems where domain knowledge can serve as a constraint to increase model robustness in medical image analysis, ultimately helping to pave the way for application in clinical practice. To this end, we address the challenge of erroneous training annotations by replacing the classification component in end-to-end object detection with regression, which makes it possible to train models directly on the continuous scale of the underlying pathological processes and thus increases robustness against erroneous training annotations. We further address the challenge of input heterogeneities that trained models face when deployed at different clinical sites by proposing a model-based domain adaptation that recovers the original training domain from altered inputs and thus ensures robust generalization. Finally, we tackle the highly unsystematic, laborious and subjective trial-and-error process of finding robust hyperparameters for a given task by translating domain knowledge into a set of systematic rules that enable automated and robust configuration of deep learning models on a wide variety of medical datasets. In summary, the work presented here demonstrates the enormous potential of end-to-end learning algorithms compared with the clinical standard of multi-stage, highly engineered diagnosis pipelines, and presents solution approaches to some of the key challenges to broad adoption under real-world conditions, such as data scarcity, discrepancy between the task addressed by the model and the underlying clinical question, ambiguities in training annotations, or shifts of data domains between clinical sites. These contributions can be seen as part of the overarching goal of automating medical image classification, an integral component of the transformation required to shape the future of healthcare.
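
    As a minimal sketch of the regression idea above (assuming PyTorch; this is not the thesis code), a detection head can output a continuous severity score instead of discrete class logits, so that label noise shifts the training target slightly rather than flipping an entire class:

        # Hedged sketch: a 1x1-conv regression head over backbone features,
        # trained with an L1 loss on continuous targets.
        import torch
        import torch.nn as nn

        class RegressionHead(nn.Module):
            def __init__(self, in_channels):
                super().__init__()
                # one continuous output per spatial location
                self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

            def forward(self, features):
                return self.score(features)

        head = RegressionHead(in_channels=256)
        feats = torch.randn(2, 256, 32, 32)   # dummy backbone features
        targets = torch.rand(2, 1, 32, 32)    # continuous severity map (illustrative)
        loss = nn.functional.l1_loss(head(feats), targets)
        loss.backward()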

    Intraoperative Quantification of Bone Perfusion in Lower Extremity Injury Surgery

    Orthopaedic surgery is one of the most common surgical categories. In particular, lower extremity injuries sustained from trauma can be complex and life-threatening injuries that are addressed through orthopaedic trauma surgery. Timely evaluation and surgical debridement following lower extremity injury are essential, because devitalized bone and tissue lead to high surgical site infection rates. However, the current clinical judgment of what constitutes "devitalized tissue" is subjective and dependent on surgeon experience, so it is necessary to develop imaging techniques for guiding surgical debridement in order to control infection rates and improve patient outcomes. In this thesis, computational models of fluorescence-guided debridement in lower extremity injury surgery are developed by quantifying bone perfusion intraoperatively with a dynamic contrast-enhanced fluorescence imaging (DCE-FI) system. Perfusion is an important determinant of tissue viability, and quantifying perfusion is therefore essential for fluorescence-guided debridement. In Chapters 3-7, we explore the performance of DCE-FI in quantifying perfusion from benchtop to translation: we propose a modified fluorescent-microsphere quantification technique using a cryomacrotome in an animal model, which measures periosteal and endosteal bone perfusion separately and thereby validates the perfusion measurements obtained by DCE-FI; we develop a pre-clinical rodent contaminated-fracture model to correlate DCE-FI with infection risk and compare it against multi-modality scanning; in clinical studies, we investigate first-pass kinetic parameters of DCE-FI and arterial input functions for characterizing perfusion changes during lower limb amputation surgery; we conduct the first in-human use of dynamic contrast-enhanced texture analysis for orthopaedic trauma classification, suggesting that spatiotemporal features from DCE-FI can classify bone perfusion intraoperatively with high accuracy and sensitivity; and we establish a clinical machine-learning infection-risk prediction model for open fracture surgery that produces pixel-scale predictions of infection risk. In conclusion, pharmacokinetic and spatiotemporal patterns of dynamic contrast-enhanced imaging show great potential for quantifying bone perfusion and predicting bone infection. This work aims to decrease surgical site infection risk and improve the success rate of lower extremity injury surgery.
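
    The thesis' exact kinetic model is not given in the abstract; a toy sketch of common first-pass descriptors (time-to-peak and maximum ingress slope, assumed names rather than the thesis' own) from a per-pixel intensity-time curve could look like this:

        # Illustrative only: simple first-pass parameters from a DCE-FI curve.
        import numpy as np

        def first_pass_params(t, intensity):
            """Return (time_to_peak, peak_intensity, max_ingress_slope)."""
            i_peak = int(np.argmax(intensity))
            if i_peak == 0:
                return t[0], intensity[0], 0.0
            # slopes on the rising limb only, up to the peak
            slopes = np.diff(intensity[: i_peak + 1]) / np.diff(t[: i_peak + 1])
            return t[i_peak], intensity[i_peak], float(slopes.max())

        t = np.linspace(0.0, 60.0, 121)            # seconds
        curve = np.exp(-((t - 20.0) ** 2) / 50.0)  # toy bolus-shaped curve
        print(first_pass_params(t, curve))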

    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological developments and presents challenges in a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Computational fluid dynamics indicators to improve cardiovascular pathologies

    In recent years, the study of computational hemodynamics within anatomically complex vascular regions has generated great interest among clinicians. Progress in computational fluid dynamics, image processing and high-performance computing has allowed us to identify candidate vascular regions for the appearance of cardiovascular diseases and to predict how a disease may evolve. Medicine currently works under a diagnostic paradigm. In this thesis we attempt to introduce into medicine the predictive paradigm that has been used in engineering for many years. The objective of this thesis is therefore to develop predictive models based on diagnostic indicators for cardiovascular pathologies. We try to predict the evolution of abdominal aortic aneurysm, aortic coarctation and coronary artery disease in a way that is personalized for each patient. To understand how a cardiovascular pathology will evolve and when it will become a health risk, it is necessary to develop new technologies that merge medical imaging and computational science. We propose diagnostic indicators that can improve diagnosis and predict the evolution of disease more efficiently than the methods used until now. In particular, a new methodology for computing diagnostic indicators based on computational hemodynamics and medical imaging is proposed. We have worked with data from anonymized patients to create real predictive technology that will allow us to continue advancing personalized medicine and to generate more sustainable health systems. Our final aim, however, is to achieve impact at a clinical level. Several groups have tried to create predictive models for cardiovascular pathologies, but they have not yet begun to use them in clinical practice. Our objective is to go further and obtain predictive variables that can be used practically in the clinical field. It is to be hoped that in the future extremely precise databases of all of our anatomy and physiology will be available to doctors; these data could feed predictive models to improve diagnosis, therapies and personalized treatments.
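
    The thesis' own indicators are not detailed in the abstract; as an example of the kind of hemodynamic indicator such a methodology computes, the standard time-averaged wall shear stress (TAWSS) and oscillatory shear index (OSI) can be post-processed from a CFD solution as follows (standard formulas, assumed here rather than taken from the thesis):

        # TAWSS = (1/T) * integral(|wss|) dt
        # OSI   = 0.5 * (1 - |integral(wss) dt| / integral(|wss|) dt)
        import numpy as np

        def tawss_and_osi(wss, dt):
            """wss: (T, 3) wall shear stress vectors over one cardiac cycle,
            sampled at a fixed step dt at a single wall point."""
            mag_int = np.trapz(np.linalg.norm(wss, axis=1), dx=dt)  # integral of |wss|
            vec_int = np.linalg.norm(np.trapz(wss, dx=dt, axis=0))  # |integral of wss|
            period = dt * (len(wss) - 1)
            tawss = mag_int / period
            osi = 0.5 * (1.0 - vec_int / mag_int)
            return tawss, osi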

    Brain Injury

    The present two-volume book "Brain Injury" is distinctive in its presentation and includes a wealth of updated information on many aspects of the field. The book covers the pathogenesis of brain injury, concepts in cerebral blood flow and metabolism, investigative approaches and monitoring of the brain-injured, protective mechanisms and recovery, management approaches, functional and endocrine aspects of brain injuries, rehabilitation of the brain-injured, and preventive aspects of traumatic brain injuries. The collective contributions from experts in brain injury research make this book a valuable guide for readers to further develop their understanding of brain injury.

    Quantifying heart development

    This thesis presents a series of papers on quantified heart development. It contains an atlas of human embryonic heart development covering the first 8 weeks after conception. The atlas gives growth curves for the size and volume of the various cardiac compartments; such measures are still scarce in the literature, as illustrated by a review of ventricular wall development. The atlas also shows that quantification of growth can yield new insights into developmental processes such as sinus venosus incorporation. Together with a series of ventricular wall growth curves covering foetal development, it illustrates that a hypertrabeculated ventricle is the result of differential growth rather than of a failure of compaction, as has been presumed to underlie left ventricular non-compaction cardiomyopathy. Additionally, this thesis shows that trabecular myocardium is not necessarily weaker or ill-adapted to force generation compared with the compact wall, as is assumed in the aforementioned cardiomyopathy. Furthermore, quantification of atrioventricular canal growth on foetal ultrasounds lends support to the theory that aberrant atrioventricular canal development can lead to tricuspid valve agenesis. Finally, by comparing a series of bird hearts from different species, this thesis shows that there is a role for comparative anatomy, in a broader sense than just mouse and chicken, in understanding mammalian and human heart development.
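
    As a hedged illustration of the differential-growth argument (the numbers below are made up, not thesis data), fitted exponential growth rates let two compartments be compared directly: if trabecular volume grows more slowly than compact volume, the trabecular fraction falls without any compaction event.

        # Least-squares exponential fit V(t) = V0 * exp(k t); compare k values.
        import numpy as np

        def growth_rate(age_days, volume_mm3):
            k, _ = np.polyfit(age_days, np.log(volume_mm3), 1)
            return k

        age = np.array([40.0, 45.0, 50.0, 56.0])     # illustrative ages (days)
        trabecular = np.array([1.0, 1.5, 2.2, 3.4])  # mm^3, made up
        compact = np.array([0.5, 1.0, 2.1, 4.8])     # mm^3, made up
        print(growth_rate(age, trabecular), growth_rate(age, compact))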