23 research outputs found

    Filtration-histogram based texture analysis and CALIPER based pattern analysis as quantitative CT techniques in idiopathic pulmonary fibrosis: head-to-head comparison

    OBJECTIVE: To assess the prognostic performance of two quantitative CT (qCT) techniques in idiopathic pulmonary fibrosis (IPF) compared to established clinical measures of disease severity (GAP index). METHODS: Retrospective analysis of high-resolution CT scans for 59 patients (age 70.5 ± 8.8 years) with two qCT methods. Computer-aided lung informatics for pathology evaluation and ratings based analysis classified the lung parenchyma into six different patterns: normal, ground glass, reticulation, hyperlucent, honeycombing and pulmonary vessels. Filtration-histogram based texture analysis extracted texture features: mean intensity, standard deviation (SD), entropy, mean of positive pixels (MPP), skewness and kurtosis at different spatial scale filters. Univariate Kaplan-Meier survival analysis assessed the different qCT parameters' performance to predict patient outcome and refine the standard GAP staging system. Multivariate Cox regression analysis assessed the independence of the significant univariate predictors of patient outcome. RESULTS: The predominant parenchymal lung pattern was reticulation (16.6 ± 13.9%), with pulmonary vessel percentage being the most predictive of worse patient outcome (p = 0.009). Higher SD, entropy and MPP, in addition to lower skewness and kurtosis at fine texture scale (SSF2), were the most significant predictors of worse outcome (p < 0.001). Multivariate Cox regression analysis demonstrated that SD (SSF2) was the only independent predictor of survival (p < 0.001). Better patient outcome prediction was achieved after adding total vessel percentage and SD (SSF2) to the GAP staging system (p = 0.006). CONCLUSION: Filtration-histogram texture analysis can be an independent predictor of patient mortality in IPF patients. ADVANCES IN KNOWLEDGE: qCT analysis can help in risk stratifying IPF patients in addition to clinical markers.
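The first-order texture features named above (SD, entropy, MPP, skewness, kurtosis) are standard histogram statistics. A minimal sketch over a plain Python list of pixel intensities; note that the actual filtration-histogram method additionally applies a band-pass (Laplacian-of-Gaussian) filter at each spatial scale factor (e.g. SSF2) before computing them, which is omitted here:

```python
import math

def texture_stats(pixels, threshold=0.0):
    """First-order texture statistics over a list of pixel intensities."""
    n = len(pixels)
    mean = sum(pixels) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    if sd == 0:
        raise ValueError("constant image: skewness/kurtosis undefined")
    # Mean of positive pixels (MPP): average of pixels above the threshold.
    positives = [p for p in pixels if p > threshold]
    mpp = sum(positives) / len(positives) if positives else 0.0
    # Skewness and kurtosis from standardized central moments.
    skew = sum((p - mean) ** 3 for p in pixels) / (n * sd ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * sd ** 4)
    # Shannon entropy of a coarse 16-bin intensity histogram.
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / 16 or 1.0
    counts = [0] * 16
    for p in pixels:
        counts[min(int((p - lo) / width), 15)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "sd": sd, "mpp": mpp,
            "skewness": skew, "kurtosis": kurt, "entropy": entropy}
```

A symmetric intensity distribution yields zero skewness, and a flat histogram maximizes entropy, which is why fibrotic heterogeneity shifts these features.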

    Deep-Learning for Epicardial Adipose Tissue Assessment with Computed Tomography: Implications for Cardiovascular Risk Prediction

    Background: Epicardial adipose tissue (EAT) volume is a marker of visceral obesity that can be measured in coronary computed tomography angiograms (CCTA). The clinical value of integrating this measurement in routine CCTA interpretation has not been documented. Objectives: This study sought to develop a deep-learning network for automated quantification of EAT volume from CCTA, test it in patients who are technically challenging, and validate its prognostic value in routine clinical care. Methods: The deep-learning network was trained and validated to autosegment EAT volume in 3,720 CCTA scans from the ORFAN (Oxford Risk Factors and Noninvasive Imaging Study) cohort. The model was tested in patients with challenging anatomy and scan artifacts and applied to a longitudinal cohort of 253 patients post-cardiac surgery and 1,558 patients from the SCOT-HEART (Scottish Computed Tomography of the Heart) Trial, to investigate its prognostic value. Results: External validation of the deep-learning network yielded a concordance correlation coefficient of 0.970 for machine vs human. EAT volume was associated with coronary artery disease (odds ratio [OR] per SD increase in EAT volume: 1.13 [95% CI: 1.04-1.30]; P = 0.01) and atrial fibrillation (OR: 1.25 [95% CI: 1.08-1.40]; P = 0.03), after correction for risk factors (including body mass index). EAT volume predicted all-cause mortality (HR per SD: 1.28 [95% CI: 1.10-1.37]; P = 0.02), myocardial infarction (HR: 1.26 [95% CI: 1.09-1.38]; P = 0.001), and stroke (HR: 1.20 [95% CI: 1.09-1.38]; P = 0.02) independently of risk factors in SCOT-HEART (5-year follow-up). It also predicted in-hospital (HR: 2.67 [95% CI: 1.26-3.73]; P ≤ 0.01) and long-term post-cardiac surgery atrial fibrillation (7-year follow-up; HR: 2.14 [95% CI: 1.19-2.97]; P ≤ 0.01).
    Conclusions: Automated assessment of EAT volume is possible in CCTA, including in patients who are technically challenging; it forms a powerful marker of metabolically unhealthy visceral obesity, which could be used for cardiovascular risk stratification.
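Once a network has produced a voxel-wise EAT segmentation, the volume measurement itself reduces to voxel counting scaled by voxel size. A minimal sketch; the nested-list mask layout and the spacing tuple are illustrative assumptions, not the paper's pipeline:

```python
def volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: nested lists [slice][row][col] of 0/1 labels (e.g. a network's
    predicted EAT voxels); spacing_mm: (dz, dy, dx) voxel size in mm.
    """
    voxels = sum(v for sl in mask for row in sl for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxels * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3
```

Because volume is a simple sum over the mask, segmentation accuracy (here, the reported concordance of 0.970 machine vs human) is the quantity that actually determines measurement quality.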

    Synergistic application of pulmonary 18F-FDG PET/HRCT and computer-based CT analysis with conventional severity measures to refine current risk stratification in idiopathic pulmonary fibrosis (IPF).

    INTRODUCTION: To investigate the combined performance of quantitative CT (qCT) following a computer algorithm analysis (IMBIO) and 18F-FDG PET/CT to assess survival in patients with idiopathic pulmonary fibrosis (IPF). METHODS: A total of 113 IPF patients (age 70 ± 9 years) prospectively and consecutively underwent 18F-FDG PET/CT and high-resolution CT (HRCT) at our institution. During a mean follow-up of 29.6 ± 26 months, 44 (48%) patients died. As part of the qCT analysis, pattern evaluation of HRCT (using IMBIO software) included the total extent (percentage) of the following features: normal-appearing lung, hyperlucent lung, parenchymal damage (comprising ground-glass opacification, reticular pattern and honeycombing), and the pulmonary vessels. The maximum (SUVmax) and minimum (SUVmin) standardized uptake value (SUV) for 18F-FDG uptake in the lungs, and the target-to-background (SUVmax/SUVmin) ratio (TBR), were quantified using routine region-of-interest (ROI) analysis. Pulmonary function tests (PFTs) were acquired within 14 days of the PET/CT/HRCT scan. Kaplan-Meier (KM) survival analysis was used to identify associations with mortality. RESULTS: Data from 91 patients were available for comparative analysis. The average ± SD GAP [gender, age, physiology] score was 4.2 ± 1.7 (range 0-8). The average ± SD SUVmax, SUVmin, and TBR were 3.4 ± 1.4, 0.7 ± 0.2, and 5.6 ± 2.8, respectively. In all patients, qCT analysis demonstrated a predominantly reticular lung pattern (14.9 ± 12.4%). KM analysis showed that TBR (p = 0.018) and parenchymal damage assessed by qCT (p = 0.0002) were the best predictors of survival. Adding TBR and qCT to the GAP score significantly increased the ability to differentiate between high and low risk (p < 0.0001). CONCLUSION: 18F-FDG PET and qCT are independent and synergistic in predicting mortality in patients with IPF.
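The Kaplan-Meier analysis used above is the standard product-limit estimator: at each observed event time, survival is multiplied by the fraction of at-risk patients who did not die, while censored patients only shrink the risk set. A minimal pure-Python sketch, not the study's statistical software:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival curve.

    times: follow-up time per patient; events: 1 = died, 0 = censored.
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_t = 0
        # Group all patients sharing this follow-up time.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_this_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= at_this_t
    return curve
```

Dichotomizing patients by a marker such as TBR and comparing the two resulting curves (e.g. with a log-rank test) is the usual way such predictors of survival are screened.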

    Inflammatory risk and cardiovascular events in patients without obstructive coronary artery disease: the ORFAN multicentre, longitudinal cohort study

    Background: Coronary computed tomography angiography (CCTA) is the first line investigation for chest pain, and it is used to guide revascularisation. However, the widespread adoption of CCTA has revealed a large group of individuals without obstructive coronary artery disease (CAD), with unclear prognosis and management. Measurement of coronary inflammation from CCTA using the perivascular fat attenuation index (FAI) Score could enable cardiovascular risk prediction and guide the management of individuals without obstructive CAD. The Oxford Risk Factors And Non-invasive imaging (ORFAN) study aimed to evaluate the risk profile and event rates among patients undergoing CCTA as part of routine clinical care in the UK National Health Service (NHS); to test the hypothesis that coronary arterial inflammation drives cardiac mortality or major adverse cardiac events (MACE) in patients with or without CAD; and to externally validate the performance of the previously trained artificial intelligence (AI)-Risk prognostic algorithm and the related AI-Risk classification system in a UK population. Methods: This multicentre, longitudinal cohort study included 40 091 consecutive patients undergoing clinically indicated CCTA in eight UK hospitals, who were followed up for MACE (ie, myocardial infarction, new onset heart failure, or cardiac death) for a median of 2·7 years (IQR 1·4–5·3). The prognostic value of FAI Score in the presence and absence of obstructive CAD was evaluated in 3393 consecutive patients from the two hospitals with the longest follow-up (7·7 years [6·4–9·1]). An AI-enhanced cardiac risk prediction algorithm, which integrates FAI Score, coronary plaque metrics, and clinical risk factors, was then evaluated in this population. 
Findings: In the 2·7 year median follow-up period, patients without obstructive CAD (32 533 [81·1%] of 40 091) accounted for 2857 (66·3%) of the 4307 total MACE and 1118 (63·7%) of the 1754 total cardiac deaths in the whole of Cohort A. Increased FAI Score in all the three coronary arteries had an additive impact on the risk for cardiac mortality (hazard ratio [HR] 29·8 [95% CI 13·9–63·9], p<0·001) or MACE (12·6 [8·5–18·6], p<0·001) comparing three vessels with an FAI Score in the top versus bottom quartile for each artery. FAI Score in any coronary artery predicted cardiac mortality and MACE independently from cardiovascular risk factors and the presence or extent of CAD. The AI-Risk classification was positively associated with cardiac mortality (6·75 [5·17–8·82], p<0·001, for very high risk vs low or medium risk) and MACE (4·68 [3·93–5·57], p<0·001 for very high risk vs low or medium risk). Finally, the AI-Risk model was well calibrated against true events. Interpretation: The FAI Score captures inflammatory risk beyond the current clinical risk stratification and CCTA interpretation, particularly among patients without obstructive CAD. The AI-Risk integrates this information in a prognostic algorithm, which could be used as an alternative to traditional risk factor-based risk calculators.
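The top-versus-bottom-quartile comparison across the three coronary arteries can be illustrated with a per-vessel quartile split and a flag for patients in the top quartile of every vessel. The functions and data layout below are assumptions for illustration only, not the AI-Risk algorithm:

```python
def quartile_threshold(values, q=0.75):
    """Value at the q-th quantile (linear interpolation between order stats)."""
    s = sorted(values)
    pos = (len(s) - 1) * q
    lo = int(pos)
    frac = pos - lo
    return s[lo] if frac == 0 else s[lo] * (1 - frac) + s[lo + 1] * frac

def three_vessel_top_quartile(fai_by_vessel):
    """Flag patients whose score is in the top quartile for every vessel.

    fai_by_vessel: dict of vessel name -> list of per-patient scores
    (same patient order in every list).
    """
    cuts = {v: quartile_threshold(scores)
            for v, scores in fai_by_vessel.items()}
    n = len(next(iter(fai_by_vessel.values())))
    return [all(fai_by_vessel[v][i] >= cuts[v] for v in fai_by_vessel)
            for i in range(n)]
```

Patients flagged by such a rule form the high-exposure group whose hazard is then compared against the bottom-quartile group in a Cox model.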

    A novel machine learning-derived radiotranscriptomic signature of perivascular fat improves cardiac risk prediction using coronary CT angiography

    Background: Coronary inflammation induces dynamic changes in the balance between water and lipid content in perivascular adipose tissue (PVAT), as captured by perivascular Fat Attenuation Index (FAI) in standard coronary CT angiography (CCTA). However, inflammation is not the only process involved in atherogenesis, and we hypothesized that additional radiomic signatures of adverse fibrotic and microvascular PVAT remodelling may further improve cardiac risk prediction. Methods and results: We present a new artificial intelligence-powered method to predict cardiac risk by analysing the radiomic profile of coronary PVAT, developed and validated in patient cohorts acquired in three different studies. In Study 1, adipose tissue biopsies were obtained from 167 patients undergoing cardiac surgery, and the expression of genes representing inflammation, fibrosis and vascularity was linked with the radiomic features extracted from tissue CT images. Adipose tissue wavelet-transformed mean attenuation (captured by FAI) was the most sensitive radiomic feature in describing tissue inflammation (TNFA expression), while features of radiomic texture were related to adipose tissue fibrosis (COL1A1 expression) and vascularity (CD31 expression). In Study 2, we analysed 1391 coronary PVAT radiomic features in 101 patients who experienced major adverse cardiac events (MACE) within 5 years of having a CCTA and 101 matched controls, training and validating a machine learning (random forest) algorithm (fat radiomic profile, FRP) to discriminate cases from controls (C-statistic 0.77 [95% CI: 0.62–0.93] in the external validation set).
The coronary FRP signature was then tested in 1575 consecutive eligible participants in the SCOT-HEART trial, where it significantly improved MACE prediction beyond traditional risk stratification that included risk factors, coronary calcium score, coronary stenosis, and high-risk plaque features on CCTA (Δ[C-statistic] = 0.126). Conclusion: The CCTA-based radiomic profiling of coronary artery PVAT detects perivascular structural remodelling associated with coronary artery disease, beyond inflammation. A new artificial intelligence (AI)-powered imaging biomarker (FRP) leads to a striking improvement of cardiac risk prediction over and above the current state-of-the-art.
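Perivascular attenuation metrics start from the mean attenuation of adipose-range voxels around the vessel. A simplified stand-in for illustration: the HU window used here is the commonly cited adipose-tissue range, and this is not the FAI Score computation, which additionally weights and normalizes the measurement:

```python
ADIPOSE_HU = (-190, -30)  # commonly cited adipose-tissue attenuation window

def mean_fat_attenuation(hu_values, window=ADIPOSE_HU):
    """Mean CT attenuation of voxels falling in the adipose window.

    hu_values: Hounsfield units of voxels in a perivascular region of
    interest. Higher (less negative) values indicate a shift from lipid
    towards water content, the change FAI is designed to capture.
    """
    fat = [h for h in hu_values if window[0] <= h <= window[1]]
    if not fat:
        raise ValueError("no adipose voxels in the region of interest")
    return sum(fat) / len(fat)
```

The radiomic FRP features go beyond this single mean, adding texture and wavelet descriptors of the same voxel neighbourhood as inputs to the random forest.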

    Methods and algorithms for solving problems in the automatic recognition of license plates

    © 2019, Institute of Advanced Scientific Research, Inc. All rights reserved. The article shows the relevance of developing methods and algorithms for the automatic recognition of license plates in images. Such methods and algorithms are used to monitor traffic safety, to operate car parks, and to control vehicle access to protected areas. These systems carry out localization, normalization, segmentation, recognition and parsing procedures. During localization, the part of the image containing the license plate is detected. Normalization adjusts the size and orientation of the resulting image fragment to a form suitable for further processing. Segmentation isolates the individual characters of the license plate. The recognition result is a text string of identified license plate characters, and parsing determines the elements of that string. These tasks are resource-intensive and, as a rule, must be performed in real time with high accuracy; the development of effective methods and algorithms that meet the required time and accuracy targets for automatic license plate recognition in various information processing and control systems is therefore relevant. The paper discusses methods of solving these problems: detection and localization of the target area in the image with the Viola-Jones method, normalization with the Hough transform, character segmentation based on the analysis of brightness histograms, and character recognition with support vector machines.
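The brightness-histogram segmentation step mentioned above can be sketched as a vertical projection profile: summing ink per column of the binarized plate and cutting at empty columns. A minimal illustration; the data layout (nested 0/1 lists) is an assumption for this sketch:

```python
def segment_characters(binary, min_width=2):
    """Split a binarized plate image into character column ranges.

    binary: list of rows, each a list of 0/1 values (1 = ink).
    Returns (start, end) half-open column ranges containing characters;
    runs narrower than min_width are discarded as noise.
    """
    cols = len(binary[0])
    # Column-wise ink histogram (vertical projection profile).
    profile = [sum(row[c] for row in binary) for c in range(cols)]
    segments, start = [], None
    for c, ink in enumerate(profile + [0]):  # trailing 0 flushes the last run
        if ink and start is None:
            start = c
        elif not ink and start is not None:
            if c - start >= min_width:
                segments.append((start, c))
            start = None
    return segments
```

Each extracted column range would then be cropped and passed to the classifier (a support vector machine in the paper's approach).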


    Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI

    Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly) public databases, have promoted this rapid development. This narrative review of the current state-of-the-art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean research query based on MeSH terms, restricting the search to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. In addition, we focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or short-, mid- and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the more challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with C-index up to ∼0.91. In conclusion, depending on the specific task, the available computational methods perform differently, and the choice of the best one to use is non-univocal and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction.
This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
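The C-index reported for the survival-quantification task is Harrell's concordance index: the fraction of comparable patient pairs in which the patient with the earlier event also received the higher predicted risk. A minimal pure-Python sketch for right-censored data, illustrative only:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for predicted survival risk.

    A pair (i, j) is comparable when patient i had the event and an
    earlier time than patient j; it is concordant when patient i also
    has the higher predicted risk. Risk ties count as half-concordant.
    """
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to a perfect ordering, so the ∼0.91 cited above indicates a strong, though imperfect, ranking of individual survival times.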