MonoPerfCap: Human Performance Capture from Monocular Video
We present the first marker-less approach for temporally coherent 3D
performance capture of a human with general clothing from monocular video. Our
approach reconstructs articulated human skeleton motion as well as medium-scale
non-rigid surface deformations in general scenes. Human performance capture is
a challenging problem due to the large range of articulation, potentially fast
motion, and considerable non-rigid deformations, even from multi-view data.
Reconstruction from monocular video alone is drastically more challenging,
since strong occlusions and the inherent depth ambiguity lead to a highly
ill-posed reconstruction problem. We tackle these challenges by a novel
approach that employs sparse 2D and 3D human pose detections from a
convolutional neural network using a batch-based pose estimation strategy.
Joint recovery of per-batch motion makes it possible to resolve the ambiguities of the
monocular reconstruction problem based on a low-dimensional trajectory
subspace. In addition, we propose refinement of the surface geometry based on
fully automatically extracted silhouettes to enable medium-scale non-rigid
alignment. We demonstrate state-of-the-art performance capture results that
enable exciting applications such as video editing and free viewpoint video,
previously infeasible from monocular video. Our qualitative and quantitative
evaluation demonstrates that our approach significantly outperforms previous
monocular methods in terms of accuracy, robustness, and the scene complexity that
can be handled.
Comment: Accepted to ACM TOG 2018, to be presented at SIGGRAPH 2018.
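To make the trajectory-subspace idea above concrete, here is a minimal, hedged sketch in Python (not the authors' code): it assumes a truncated DCT basis as the low-dimensional trajectory subspace, which the abstract does not specify, and simply projects a batch of noisy per-frame 3D joint positions onto that subspace by least squares. All function and variable names are illustrative.

    import numpy as np

    def trajectory_basis(num_frames: int, num_coeffs: int) -> np.ndarray:
        """Truncated DCT-II basis; columns are smooth temporal basis functions."""
        n = np.arange(num_frames)
        k = np.arange(num_coeffs)
        B = np.cos(np.pi * (n[:, None] + 0.5) * k[None, :] / num_frames)
        return B / np.linalg.norm(B, axis=0)  # normalise each basis function

    def project_to_subspace(trajectories: np.ndarray, num_coeffs: int) -> np.ndarray:
        """
        trajectories: (F, J, 3) per-frame 3D joint positions for one batch.
        Returns the trajectories re-expressed through the low-dimensional
        subspace (least-squares projection onto the basis).
        """
        F, J, _ = trajectories.shape
        B = trajectory_basis(F, num_coeffs)                # (F, K)
        flat = trajectories.reshape(F, J * 3)              # (F, 3J)
        coeffs, *_ = np.linalg.lstsq(B, flat, rcond=None)  # (K, 3J)
        return (B @ coeffs).reshape(F, J, 3)

    # Example: a 50-frame batch of 17 joints, constrained to 8 basis functions.
    noisy = np.random.randn(50, 17, 3)
    smoothed = project_to_subspace(noisy, num_coeffs=8)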
LIME: Live Intrinsic Material Estimation
We present the first end-to-end approach for real-time material estimation
for general object shapes with uniform material that only requires a single
color image as input. In addition to Lambertian surface properties, our
approach fully automatically computes the specular albedo, material shininess,
and a foreground segmentation. We tackle this challenging and ill-posed inverse
rendering problem using recent advances in image-to-image translation
techniques based on deep convolutional encoder-decoder architectures. The
underlying core representations of our approach are specular shading, diffuse
shading and mirror images, which allow learning an effective and accurate
separation of diffuse and specular albedo. In addition, we propose a novel,
highly efficient perceptual rendering loss that mimics real-world image
formation and obtains intermediate results even at run time. The estimation
of material parameters at real-time frame rates enables exciting mixed-reality
applications, such as seamless illumination-consistent integration of virtual
objects into real-world scenes, and virtual material cloning. We demonstrate
our approach in a live setup, compare it to the state of the art, and
demonstrate its effectiveness through quantitative and qualitative evaluation.
Comment: 17 pages, Spotlight paper at CVPR 2018.
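The rendering loss is described only at a high level above; the sketch below shows the general recombination idea in PyTorch under an assumed additive image-formation model (image = albedo x diffuse shading + specular shading), with an L1 term standing in for the perceptual comparison. Tensor names are hypothetical and this is not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def rendering_loss(diffuse_albedo, diffuse_shading, specular_shading,
                       target_image, mask):
        """
        All image tensors are (B, 3, H, W); mask is a (B, 1, H, W) foreground
        segmentation. Assumed image formation: I = albedo * diffuse + specular.
        """
        rendered = diffuse_albedo * diffuse_shading + specular_shading
        return F.l1_loss(rendered * mask, target_image * mask)

    # Dummy tensors standing in for network predictions and the input photo.
    B, H, W = 2, 128, 128
    loss = rendering_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                          torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                          torch.ones(B, 1, H, W))
    print(loss.item())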
Predicting Adverse Radiation Effects in Brain Tumors After Stereotactic Radiotherapy With Deep Learning and Handcrafted Radiomics
Introduction
There is a cumulative risk of 20-40% of developing brain metastases (BM) in solid cancers. Stereotactic radiotherapy (SRT) enables the application of high focal doses of radiation to a volume and is often used for BM treatment. However, SRT can cause adverse radiation effects (ARE), such as radiation necrosis, which sometimes cause irreversible damage to the brain. It is therefore of clinical interest to identify patients at a high risk of developing ARE. We hypothesized that models trained with radiomics features, deep learning (DL) features, and patient characteristics or their combination can predict ARE risk in patients with BM before SRT.
Methods
Gadolinium-enhanced T1-weighted MRIs and characteristics from patients treated with SRT for BM were collected for a training and testing cohort (N = 1,404) and a validation cohort (N = 237) from a separate institute. From each lesion in the training set, radiomics features were extracted and used to train an extreme gradient boosting (XGBoost) model. A DL model was trained on the same cohort to make a separate prediction and to extract the last layer of features. Different XGBoost models were built using only radiomics features, only DL features, only patient characteristics, or a combination of them. Evaluation was performed using the area under the curve (AUC) of the receiver operating characteristic curve on the external dataset. Predictions of ARE development were investigated both for individual lesions and per patient.
Results
The best-performing XGBoost model on a lesion level was trained on a combination of radiomics features and DL features (AUC of 0.71 and recall of 0.80). On a patient level, a combination of radiomics features, DL features, and patient characteristics obtained the best performance (AUC of 0.72 and recall of 0.84). The DL model achieved an AUC of 0.64 and recall of 0.85 per lesion and an AUC of 0.70 and recall of 0.60 per patient.
Conclusion
Machine learning models built on radiomics features and DL features extracted from BM, combined with patient characteristics, show potential to predict ARE at the patient and lesion levels. These models could be used in clinical decision-making, informing patients of their risk of ARE and allowing physicians to opt for different therapies.
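A hedged sketch of the kind of pipeline the Methods describe: lesion-level radiomics and DL features are concatenated, an XGBoost classifier is trained, and lesion-level and patient-level AUCs are computed. Aggregating patient risk as the maximum lesion probability, and all function and variable names, are illustrative assumptions rather than details from the study.

    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.metrics import roc_auc_score

    def train_and_evaluate(radiomics_tr, dl_tr, y_tr,
                           radiomics_val, dl_val, y_val, patient_ids_val):
        """All inputs are NumPy arrays with one row (or entry) per lesion."""
        X_tr = np.hstack([radiomics_tr, dl_tr])
        X_val = np.hstack([radiomics_val, dl_val])

        model = XGBClassifier(n_estimators=200, max_depth=3)
        model.fit(X_tr, y_tr)

        # Lesion-level performance.
        lesion_prob = model.predict_proba(X_val)[:, 1]
        lesion_auc = roc_auc_score(y_val, lesion_prob)

        # Patient-level performance: aggregate lesion risks (here: max) per patient.
        patients = np.unique(patient_ids_val)
        pat_prob = np.array([lesion_prob[patient_ids_val == p].max() for p in patients])
        pat_y = np.array([y_val[patient_ids_val == p].max() for p in patients])
        patient_auc = roc_auc_score(pat_y, pat_prob)
        return lesion_auc, patient_auc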
External validation of 18F-FDG PET-based radiomic models on identification of residual oesophageal cancer after neoadjuvant chemoradiotherapy
Objectives
Detection of residual oesophageal cancer after neoadjuvant chemoradiotherapy (nCRT) is important to guide treatment decisions regarding standard oesophagectomy or active surveillance. The aim was to validate previously developed 18F-FDG PET-based radiomic models to detect residual local tumour and to repeat model development (i.e. 'model extension') in case of poor generalisability.
Methods
This was a retrospective cohort study in patients collected from a prospective multicentre study in four Dutch institutes. Patients underwent nCRT followed by oesophagectomy between 2013 and 2019. The outcome was tumour regression grade (TRG) 1 (0% tumour) versus TRG 2-3-4 (≥1% tumour). Scans were acquired according to standardised protocols. Discrimination and calibration were assessed for the published models with optimism-corrected AUCs >0.77. For model extension, the development and external validation cohorts were combined.
Results
Baseline characteristics of the 189 patients included [median age 66 years (interquartile range 60-71), 158/189 male (84%), 40/189 TRG 1 (21%) and 149/189 (79%) TRG 2-3-4] were comparable to those of the development cohort. The model including cT stage plus the feature 'sum entropy' had the best discriminative performance in external validation (AUC 0.64, 95% confidence interval 0.55-0.73), with a calibration slope and intercept of 0.16 and 0.48, respectively. An extended bootstrapped LASSO model yielded an AUC of 0.65 for TRG 2-3-4 detection.
Conclusion
The high predictive performance of the published radiomic models could not be replicated. The extended model had moderate discriminative ability. The investigated radiomic models appeared inaccurate for detecting local residual oesophageal tumour and cannot be used as an adjunct tool for clinical decision-making in these patients.
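For reference, discrimination and calibration in an external validation of this kind are commonly computed as sketched below: AUC for discrimination, and a calibration slope and intercept from a logistic regression of the observed outcome on the logit of the predicted probability. This is a generic, simplified recipe (the intercept here comes from the joint fit rather than an offset model) and not the study's exact code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def validate(y_true, p_pred):
        """y_true: binary outcomes; p_pred: predicted probabilities (NumPy arrays)."""
        auc = roc_auc_score(y_true, p_pred)

        # Calibration: regress the outcome on the logit of the predicted
        # probability; a very large C effectively disables regularisation.
        eps = 1e-6
        p = np.clip(p_pred, eps, 1 - eps)
        logit = np.log(p / (1 - p))
        cal = LogisticRegression(C=1e9).fit(logit.reshape(-1, 1), y_true)
        slope = cal.coef_[0, 0]
        intercept = cal.intercept_[0]
        return auc, slope, intercept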
AI-Based Chest CT Analysis for Rapid COVID-19 Diagnosis and Prognosis: A Practical Tool to Flag High-Risk Patients and Lower Healthcare Costs
Early diagnosis of COVID-19 is required to provide the best treatment to our patients, to prevent the epidemic from spreading in the community, and to reduce costs associated with the aggravation of the disease. We developed a decision tree model to evaluate the impact of using an artificial intelligence-based chest computed tomography (CT) analysis software (icolung, icometrix) to analyze CT scans for the detection and prognosis of COVID-19 cases. The model compared routine practice, where patients receiving a chest CT scan were not screened for COVID-19, with a scenario where icolung was introduced to enable COVID-19 diagnosis. The primary outcome was to evaluate the impact of icolung on the transmission of COVID-19 infection, and the secondary outcome was the in-hospital length of stay. Using EUR 20,000 as a willingness-to-pay threshold, icolung is cost-effective in reducing the risk of transmission at a low prevalence of COVID-19 infection. Concerning the hospitalization cost, icolung is cost-effective at a higher value of COVID-19 prevalence and risk of hospitalization. This model provides a framework for the evaluation of AI-based tools for the early detection of COVID-19 cases. It allows decisions regarding their implementation in routine practice to be made considering both costs and effects.
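The cost-effectiveness reasoning above can be illustrated with a minimal sketch: compute the incremental cost-effectiveness ratio (ICER) between routine practice and the AI-assisted scenario and compare it against a willingness-to-pay threshold. All numbers below are placeholders, not values from the study.

    def icer(cost_new, effect_new, cost_old, effect_old):
        """Incremental cost per unit of effect gained (e.g. transmission avoided)."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    def is_cost_effective(cost_new, effect_new, cost_old, effect_old, wtp=20_000):
        """True if the ICER falls at or below the willingness-to-pay threshold."""
        return icer(cost_new, effect_new, cost_old, effect_old) <= wtp

    # Illustrative placeholder inputs: expected cost per patient and expected
    # number of onward transmissions avoided per patient in each scenario.
    routine = {"cost": 1_000.0, "effect": 0.00}
    with_ai = {"cost": 1_150.0, "effect": 0.01}
    print(is_cost_effective(with_ai["cost"], with_ai["effect"],
                            routine["cost"], routine["effect"]))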
Investigation of the added value of CT-based radiomics in predicting the development of brain metastases in patients with radically treated stage III NSCLC
Introduction
Despite radical-intent therapy for patients with stage III non-small-cell lung cancer (NSCLC), the cumulative incidence of brain metastases (BM) reaches 30%. Current risk stratification methods fail to accurately identify these patients. As radiomics features have been shown to have predictive value, this study aims to develop a model combining clinical risk factors with radiomics features for BM development in patients with radically treated stage III NSCLC.
Methods
Retrospective analysis of two prospective multicentre studies. Inclusion criteria: adequately staged [18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT), contrast-enhanced chest CT, contrast-enhanced brain magnetic resonance imaging/CT] and radically treated stage III NSCLC. Exclusion criteria: second primary within 2 years of NSCLC diagnosis and prior prophylactic cranial irradiation. The primary endpoint was BM development at any time during follow-up (FU). CT-based radiomics features (N = 530) were extracted from the primary lung tumour on 18F-FDG PET-CT images, and a list of clinical features (N = 8) was collected. Univariate feature selection based on the area under the curve (AUC) of the receiver operating characteristic was performed to identify relevant features. Generalized linear models were trained using the selected features, and multivariate predictive performance was assessed through the AUC.
Results
In total, 219 patients were eligible for analysis. Median FU was 59.4 months for the training cohort and 67.3 months for the validation cohort; 21 (15%) and 17 (22%) patients developed BM in the training and validation cohort, respectively. Two relevant clinical features (age and adenocarcinoma histology) and four relevant radiomics features were identified as predictive. The clinical model yielded the highest AUC value of 0.71 (95% CI: 0.58–0.84), better than radiomics alone or a combination of clinical parameters and radiomics (both with an AUC of 0.62; 95% CIs of 0.47–0.76 and 0.48–0.76, respectively).
Conclusion
CT-based radiomics features of primary NSCLC in the current setup could not improve on a model based on clinical predictors (age and adenocarcinoma histology) of BM development in radically treated stage III NSCLC patients.
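A hedged sketch of the analysis pipeline outlined in the Methods above: univariate AUC-based screening of candidate features followed by a logistic generalised linear model, evaluated by AUC on a validation set. The univariate AUC cut-off and all function and variable names are illustrative assumptions, not taken from the study.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def select_features(X, y, auc_threshold=0.6):
        """Keep features whose univariate AUC (or 1 - AUC) exceeds the threshold."""
        keep = []
        for j in range(X.shape[1]):
            auc = roc_auc_score(y, X[:, j])
            if max(auc, 1 - auc) >= auc_threshold:
                keep.append(j)
        return keep

    def fit_and_validate(X_train, y_train, X_val, y_val):
        """Train a logistic GLM on the screened features and report validation AUC."""
        cols = select_features(X_train, y_train)
        glm = LogisticRegression(max_iter=1000).fit(X_train[:, cols], y_train)
        return roc_auc_score(y_val, glm.predict_proba(X_val[:, cols])[:, 1])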
Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions.
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, as it requires the ability to integrate clinical features extracted from data acquired with different scanners and protocols in order to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, a comprehensive checklist that summarises common practices for data harmonisation studies is proposed to guide researchers to report their findings more effectively. Finally, flowcharts presenting possible ways of selecting methodologies and metrics are proposed, and the limitations of different methods are surveyed to inform future research.
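As an illustration of one family of harmonisation strategies such a review typically covers, the sketch below performs a simplified, non-Bayesian location/scale (ComBat-style) adjustment that maps per-centre feature means and variances to the pooled data. The full ComBat method additionally uses covariate adjustment and empirical-Bayes shrinkage; this example is not taken from the review itself.

    import numpy as np

    def locscale_harmonise(features: np.ndarray, centre: np.ndarray) -> np.ndarray:
        """
        features: (n_samples, n_features) matrix; centre: (n_samples,) site labels.
        Returns features with per-centre mean/std mapped to the pooled mean/std.
        """
        out = features.astype(float).copy()
        pooled_mean = features.mean(axis=0)
        pooled_std = features.std(axis=0) + 1e-12
        for c in np.unique(centre):
            idx = centre == c
            mean_c = features[idx].mean(axis=0)
            std_c = features[idx].std(axis=0) + 1e-12
            out[idx] = (features[idx] - mean_c) / std_c * pooled_std + pooled_mean
        return out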