24 research outputs found
CT-Based Risk Stratification for Intensive Care Need and Survival in COVID-19 Patients - A Simple Solution
We evaluated a simple semi-quantitative (SSQ) method for determining pulmonary involvement in computed tomography (CT) scans of COVID-19 patients. The extent of lung involvement in the first available CT was assessed with the SSQ method and subjectively. We identified risk factors for the need for invasive ventilation, for intensive care unit (ICU) admission, and for time to death after infection. Additionally, the diagnostic performance of both methods was evaluated. With the SSQ method, a 10% increase in the affected lung area was found to significantly increase the risk of needing ICU treatment, with an odds ratio (OR) of 1.68, and of invasive ventilation, with an OR of 1.35. Male sex, age, and pre-existing chronic lung disease were also associated with higher risks. A larger affected lung area was associated with a higher instantaneous risk of dying (hazard ratio (HR) of 1.11), independently of other risk factors. The SSQ measurement was slightly superior to the subjective approach, with an AUC of 73.5% for the need of ICU treatment and 72.7% for invasive ventilation. SSQ assessment of the affected lung in the first available CT scans of COVID-19 patients may support early identification of those at higher risk of needing ICU treatment, invasive ventilation, or death.
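The odds ratios above are reported per 10% increase in affected lung area. Per-step odds ratios from a logistic model compose multiplicatively, which a minimal Python sketch can illustrate (the helper name is my own; only the OR value 1.68 comes from the abstract):

```python
def scaled_odds_ratio(or_per_step, n_steps):
    """Odds ratio for n_steps increments, given the OR for one increment.

    Logistic-regression odds ratios multiply across unit increases,
    so the OR for k steps is the per-step OR raised to the k-th power.
    """
    return or_per_step ** n_steps

# An OR of 1.68 per 10% of affected lung implies, for a 30% increase:
print(scaled_odds_ratio(1.68, 3))  # 1.68**3, roughly 4.7
```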
Diagnostic Value of Initial Chest CT Findings for the Need of ICU Treatment/Intubation in Patients with COVID-19
Computed tomography (CT) plays an important role in the diagnosis of COVID-19. The aim of this study was to evaluate a simple, semi-quantitative method that can be used to identify patients in need of subsequent intensive care unit (ICU) treatment and intubation. We retrospectively analyzed the initial CT scans of 28 patients who tested positive for SARS-CoV-2 at our Level-I center. The extent of lung involvement on CT was classified both subjectively and with a simple semi-quantitative method measuring the affected area at three lung levels. Competing-risks Cox regression was used to identify factors associated with the time to ICU admission and intubation. Their potential diagnostic ability was assessed with receiver operating characteristic (ROC)/area under the ROC curve (AUC) analysis. A 10% increase in the affected lung parenchyma area increased the instantaneous risk of intubation (hazard ratio (HR) = 2.00) and the instantaneous risk of ICU admission (HR = 1.73). The semi-quantitative measurement outperformed the subjective assessment in diagnostic ability (AUC = 85.6% for ICU treatment, 71.9% for intubation). This simple measurement of the involved lung area in initial CT scans of COVID-19 patients may allow early identification of patients in need of ICU treatment/intubation and thus help make optimal use of limited ICU/ventilation resources in hospitals.
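Diagnostic ability in these studies is summarized as the area under the ROC curve. For a continuous score such as the percentage of affected lung, the AUC equals the probability that a randomly chosen patient who later needed ICU treatment scores higher than one who did not. A minimal, dependency-free Python sketch of that interpretation (the example scores are illustrative, not study data):

```python
def auc(scores_event, scores_no_event):
    """Mann-Whitney estimate of the ROC AUC: the fraction of
    (event, no-event) score pairs ranked correctly; ties count half."""
    pairs = len(scores_event) * len(scores_no_event)
    wins = 0.0
    for e in scores_event:
        for n in scores_no_event:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / pairs

# Illustrative affected-lung-area percentages (not study data):
icu = [45, 60, 30, 70]     # patients later admitted to the ICU
no_icu = [10, 25, 35, 15]  # patients not admitted
print(auc(icu, no_icu))    # 0.9375
```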
Liver Venous Deprivation (LVD) Versus Portal Vein Embolization (PVE) Alone Prior to Extended Hepatectomy: A Matched Pair Analysis
Background: To investigate whether liver venous deprivation (LVD), i.e. simultaneous portal vein embolization (PVE) and right hepatic vein embolization, offers advantages in terms of hypertrophy induction before extended hepatectomy in the non-cirrhotic liver.
Materials and Methods: Between June 2018 and August 2019, 20 patients were recruited for a prospective, nonrandomized study to investigate the efficacy of LVD. After screening of 134 patients treated with PVE alone from January 2015 to August 2019, 14 directly matched pairs regarding tumor entity (cholangiocarcinoma, CC, and colorectal carcinoma, CRC) and hypertrophy time (defined as the time from embolization to follow-up imaging) were identified. In both treatment groups, the same experienced reader (>5 years of experience) performed imaging-based measurement of the volumes of the liver segments of the future liver remnant (FLR) prior to embolization and after the standard clinical hypertrophy interval (~30 days), before surgery. The percentage growth of segments was calculated and compared.
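The percentage growth referred to above is the relative volume increase of the FLR segments between the pre-embolization and follow-up measurements. A one-line Python sketch (function name and example volumes are illustrative, not study data):

```python
def relative_hypertrophy(vol_pre_ml, vol_post_ml):
    """Relative hypertrophy in percent: volume gain divided by baseline volume."""
    return (vol_post_ml - vol_pre_ml) / vol_pre_ml * 100.0

# e.g. an FLR growing from 100 ml to 150 ml:
print(relative_hypertrophy(100.0, 150.0))  # 50.0
```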
Results: After matched follow-up periods (mean of 30.5 days), there were no statistically significant differences in the relative hypertrophy of the FLRs. Mean ± standard deviation relative hypertrophy rates for LVD/PVE were 59 ± 29.6% / 54.1 ± 27.6% (p = 0.637) for segments II + III and 48.2 ± 22.2% / 44.9 ± 28.9% (p = 0.719) for segments II-IV, respectively.
Conclusions: LVD had no significant advantages over the standard method (PVE alone) in terms of hypertrophy induction of the FLR before extended hepatectomy in this study population
Yttrium-90 radioembolization for unresectable hepatocellular carcinoma: predictive modeling strategies to anticipate tumor response and improve patient selection
Objectives: This study aims to better characterize potential responders to Y-90 radioembolization at baseline through analysis of clinical variables and contrast-enhanced (CE) MRI tumor volumetry, in order to adjust therapeutic regimens early on and improve treatment outcomes.
Methods: Fifty-eight HCC patients who underwent Y-90-radioembolization at our center between 10/2008 and 02/2017 were retrospectively included. Pre- and post-treatment target lesion volumes were measured as total tumor volume (TTV) and enhancing tumor volume (ETV). Survival analysis was performed with Cox regression models to evaluate 65% ETV reduction as surrogate endpoint for treatment efficacy. Univariable and multivariable logistic regression analyses were used to evaluate the combination of baseline clinical variables and tumor volumetry as predictors of >= 65% ETV reduction.
Results: Mean patient age was 66 (SD 8.7) years, and 12 patients were female (21%). Sixty-seven percent of patients suffered from liver cirrhosis. Median survival was 11 months. A threshold of >= 65% ETV reduction allowed for a significant (p = 0.04) separation of the survival curves, with a median survival of 11 months in non-responders and 17 months in responders. Administered activity per tumor volume predicted neither survival nor ETV reduction. A baseline ETV/TTV ratio greater than 50% was the most important predictor of arterial devascularization (odds ratio 6.3) in a statistically significant (p = 0.001) multivariable logistic regression model. The effect size was strong, with a Cohen's f of 0.89.
Conclusion: We present a novel approach to identify promising candidates for Y-90 radioembolization at pre-treatment baseline MRI using tumor volumetry and clinical baseline variables
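The >= 65% reduction in enhancing tumor volume (ETV) used as the response endpoint above is straightforward to compute from pre- and post-treatment volumetry. A minimal Python sketch (function names and example volumes are illustrative, not study data):

```python
def etv_reduction(etv_pre_ml, etv_post_ml):
    """Fractional reduction of the enhancing tumor volume (ETV)."""
    return (etv_pre_ml - etv_post_ml) / etv_pre_ml

def is_responder(etv_pre_ml, etv_post_ml, threshold=0.65):
    """Responder per the abstract's surrogate endpoint: ETV reduction >= 65%."""
    return etv_reduction(etv_pre_ml, etv_post_ml) >= threshold

print(is_responder(200.0, 50.0))   # True  (75% reduction)
print(is_responder(200.0, 100.0))  # False (50% reduction)
```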
Submillisievert chest CT in patients with COVID-19 - experiences of a German Level-I center
Purpose:
Computed tomography (CT) is used for initial diagnosis and therapy monitoring of patients with coronavirus disease 2019 (COVID-19). As patients of all ages are affected, radiation dose is a concern. While follow-up CT examinations lead to high cumulative radiation doses, the ALARA principle states that the applied dose should be as low as possible while maintaining adequate image quality. The aim of this study was to evaluate parameter settings for two commonly used CT scanners to ensure sufficient image quality/diagnostic confidence at a submillisievert dose.
Materials and methods:
We retrospectively analyzed 36 proven COVID-19 cases examined on two different scanners. Image quality was evaluated objectively as signal-to-noise ratio (SNR)/contrast-to-noise ratio (CNR) measurement and subjectively by two experienced, independent readers using 3-point Likert scales. CT dose index volume (CTDIvol) and dose-length product (DLP) were extracted from dose reports, and effective dose was calculated.
Results:
With the tested parameter settings, we achieved effective doses below 1 mSv (median 0.5 mSv, IQR: 0.2 mSv, range: 0.3-0.9 mSv) in all 36 patients. Thirty-four patients had typical COVID-19 findings. Both readers were confident regarding the typical COVID-19 CT characteristics in all cases (3 ± 0). Objective image quality parameters were: SNR (normal lung): 17.0 ± 5.9, CNR (GGO/normal lung): 7.5 ± 5.0, and CNR (consolidation/normal lung): 15.3 ± 6.1.
Conclusion:
With the tested parameters, we achieved applied doses in the submillisievert range on two different CT scanners without sacrificing diagnostic confidence regarding COVID-19 findings.
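The effective dose reported above is conventionally estimated from the dose-length product (DLP) with a region-specific conversion coefficient; for an adult chest, a commonly cited value is k ≈ 0.014 mSv/(mGy·cm). A minimal sketch of that conversion (the coefficient is the commonly cited literature value, not necessarily the one used in this study; the example DLP is illustrative):

```python
CHEST_K = 0.014  # mSv per mGy*cm; commonly cited adult chest conversion factor

def effective_dose_msv(dlp_mgy_cm, k=CHEST_K):
    """Estimate effective dose (mSv) from DLP using a region-specific k-factor."""
    return dlp_mgy_cm * k

# A DLP around 36 mGy*cm yields an effective chest dose of about 0.5 mSv:
print(effective_dose_msv(36.0))
```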
Spectral CT in patients with acute thoracoabdominal bleeding-a safe technique to improve diagnostic confidence and reduce dose?
Computed tomography (CT) protocols for the detection of bleeding sources often include unenhanced CT series to distinguish contrast agent extravasation from calcification. This study evaluates whether virtual non-contrast (VNC) images can safely replace real non-contrast (RNC) images in the search for acute thoracoabdominal bleeding and whether monoenergetic imaging can improve the detection of the bleeding source. Thirty-two patients with active bleeding in spectral CT angiography (SCT) were retrospectively analyzed. RNC and SCT series were acquired, including VNC and monoenergetic images at 40, 70, and 140 keV. CT numbers were measured in regions of interest (ROIs) in different organs and in the bleeding jet for quantitative image analysis (contrast-to-noise ratio [CNR] and signal-to-noise ratio [SNR]). Additionally, 2 radiologists rated the detectability of the bleeding source in the different CT series. The Wilcoxon rank test for related samples was used. VNC series suppressed iodine sufficiently but not completely (CT number of aorta: RNC: 33.3 ± 12.3, VNC: 44.8 ± 9.5, P = .01; bleeding jet: RNC: 43.1 ± 16.9, VNC: 56.3 ± 16.7, P = .02). VNC showed significantly higher signal-to-noise ratios than RNC for all regions investigated. Contrast-to-noise ratios in the bleeding jet were significantly higher in 40 keV images than in standard 140 keV images. The 40 keV images were also assigned the best subjective ratings for bleeding source detection. VNC can safely replace RNC in a CT protocol used to search for bleeding sources, thereby reducing radiation exposure by 30%. Low-keV series may enhance diagnostic confidence in the detection of bleeding sources.
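The SNR and CNR figures quoted in several of these abstracts follow the usual ROI-based definitions: mean attenuation divided by image noise, and the attenuation difference between two tissues divided by noise. A minimal Python sketch (the HU values and the use of a single noise SD are illustrative assumptions, not the study's exact measurement protocol):

```python
def snr(mean_roi_hu, noise_sd_hu):
    """Signal-to-noise ratio: mean ROI attenuation over image noise (SD, in HU)."""
    return mean_roi_hu / noise_sd_hu

def cnr(mean_roi_a_hu, mean_roi_b_hu, noise_sd_hu):
    """Contrast-to-noise ratio: attenuation difference between two ROIs over noise."""
    return abs(mean_roi_a_hu - mean_roi_b_hu) / noise_sd_hu

# e.g. bleeding jet (56.3 HU) vs aortic background (33.3 HU), noise SD 10 HU:
print(cnr(56.3, 33.3, 10.0))
```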
Metallic dental artifact reduction in computed tomography (Smart MAR): Improvement of image quality and diagnostic confidence in patients with suspected head and neck pathology and oral implants.
PURPOSE
We determined whether the Smart MAR metal artifact reduction tool - a three-stage, projection-based, post-processing algorithm - improves subjective and objective image quality and diagnostic confidence in patients with dental artifacts and suspected head and neck pathology compared to standard adaptive statistical iterative reconstruction (ASIR V) alone.
METHOD
The study included 100 consecutive patients with nonremovable oral implants or dental fillings and suspected oropharyngeal cancer or abscess. CT raw data of a single-source multislice CT scanner were postprocessed using ASIR V alone and with additional Smart MAR reconstruction. Image quality of baseline ASIR V and Smart MAR-based reconstruction series was compared both quantitatively (5 regions of interest, ROIs) and qualitatively (two independent raters).
RESULTS
Additional Smart MAR reconstruction significantly improved both attenuation and noise adjacent to implants and in more distant areas (all p < 0.001) compared to standard ASIR V reconstructions alone. Signal-to-noise ratio (SNR; p = 0.001) and contrast-to-noise ratio (CNR; p = 0.001) were improved significantly. Smart MAR improved visualization of tumor/abscess (detected in 36 of 100 patients, 36%) and of representative oropharyngeal tissue (p < 0.001). In 8 of 36 patients (22%), the tumor was only detected in Smart MAR series. Mean total DLP was 506.8 mGy·cm; average CTDIvol was 5.5 mGy.
CONCLUSIONS
The supplementary use of the Smart MAR post-processing tool seems to significantly improve both subjective and objective image quality as well as diagnostic confidence and lesion detection in CT of the head and neck. In 22% of cases, the tumor was detected only in Smart MAR reconstructed images