Reproducibility of CT-based radiomic features against image resampling and perturbations for tumour and healthy kidney in renal cancer patients.
Computed Tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessments, often supported by semi-automatic tools. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which need to be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel sizes of 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CK). In particular, first order (FO) and second order texture features based on both 2D and 3D grey level co-occurrence matrices (GLCMs) were considered. Moreover, this study carries out a comparative analysis of three of the most commonly used interpolation methods, which need to be selected before any resampling procedure. Results showed that the Lanczos interpolation is the most effective at preserving original information in resampling, where the median slice resolution coupled with the native slice spacing allows the best reproducibility, with 94.6% and 87.7% of features in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
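The interpolation comparison above can be illustrated with a minimal 1D sketch. This is not the study's resampling pipeline; it only shows, on a toy sine signal, why a windowed-sinc (Lanczos) kernel tends to preserve the original signal better than linear interpolation. The kernel support `a=3` and the toy signal are assumptions for illustration.

```python
import numpy as np

def lanczos_interp(samples, xq, a=3):
    """Resample a uniformly sampled 1D signal at positions xq
    using a Lanczos kernel with support parameter a."""
    out = np.empty_like(xq, dtype=float)
    for k, xv in enumerate(xq):
        i0 = int(np.floor(xv)) - a + 1           # leftmost contributing sample
        idx = np.arange(i0, i0 + 2 * a)
        t = xv - idx                             # distance to each sample position
        w = np.sinc(t) * np.sinc(t / a)          # Lanczos (windowed sinc) weights
        idx = np.clip(idx, 0, len(samples) - 1)  # clamp at the borders
        out[k] = np.sum(w * samples[idx]) / np.sum(w)
    return out

# Toy signal: a sine sampled on an integer grid (8 samples per period).
x = np.arange(32, dtype=float)
f = np.sin(2 * np.pi * x / 8)

# Query away from the borders so boundary handling does not dominate.
xq = np.arange(4.0, 28.0, 0.25)
truth = np.sin(2 * np.pi * xq / 8)

err_lanczos = np.max(np.abs(lanczos_interp(f, xq) - truth))
err_linear = np.max(np.abs(np.interp(xq, x, f) - truth))
print(err_lanczos < err_linear)  # Lanczos tracks the original signal more closely
```

On smooth signals the windowed sinc approximates ideal band-limited reconstruction, which is consistent with the abstract's finding that Lanczos best preserves original information under resampling.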
Dense Motion Estimation for Smoke
Motion estimation for highly dynamic phenomena such as smoke is an open challenge for Computer Vision. Traditional dense motion estimation algorithms have difficulties with non-rigid and large motions, both of which are frequently observed in smoke motion. We propose an algorithm for dense motion estimation of smoke. Our algorithm is robust, fast, and performs better across different types of smoke than other dense motion estimation algorithms, including state-of-the-art and neural network approaches. The key to our contribution is to use skeletal flow, without explicit point matching, to provide a sparse flow, which is then upgraded to a dense flow. In this paper we describe our algorithm in detail and provide experimental evidence to support our claims. Comment: ACCV201
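The sparse-to-dense step mentioned above can be sketched generically. The following is not the paper's skeletal-flow method; it only illustrates one simple way to upgrade a sparse flow (motion vectors at a few scattered pixels) to a dense field, here via inverse-distance weighting. The point positions, vectors, and weighting exponent are assumptions for illustration.

```python
import numpy as np

def densify_flow(points, flows, h, w, p=2.0, eps=1e-8):
    """Upgrade a sparse flow (vectors at scattered pixel positions) to a
    dense (h, w, 2) flow field by inverse-distance weighting.
    points: (K, 2) pixel coordinates in (row, col) order; flows: (K, 2)."""
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)   # (h*w, 2)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)       # (h*w, K)
    wgt = 1.0 / (d2 + eps) ** (p / 2)
    wgt /= wgt.sum(axis=1, keepdims=True)    # convex weights per pixel
    return (wgt @ flows).reshape(h, w, 2)

pts = np.array([[2.0, 2.0], [2.0, 13.0], [13.0, 7.0]])
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5]])
dense = densify_flow(pts, vecs, 16, 16)

# The dense field reproduces the sparse flow at the known points ...
print(np.allclose(dense[2, 2], vecs[0], atol=1e-3))
# ... and every interpolated vector is a convex combination of the inputs.
print(dense[..., 0].min() >= vecs[:, 0].min() - 1e-9)
```

Real densification schemes (including the one the abstract alludes to) typically add edge-aware or physically motivated regularisation rather than pure distance weighting.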
Radiomics and Machine Learning in the Prediction of Cardiovascular Disease
Carotid atherosclerosis is a major risk factor for ischaemic stroke, which is a leading cause of death worldwide. Among stroke survivors, 1 in 4 will have another stroke within five years. Carotid CT angiography (CTA) is commonly performed following an ischaemic stroke or transient ischaemic attack to help guide patient management in the secondary prevention of stroke, for example carotid endarterectomy surgery plus medical therapy versus medical therapy alone. The degree of carotid stenosis is the mainstay of this decision, yet it uses only one aspect of the anatomical information that can be obtained from a carotid CTA scan. Radiomics, sometimes called ‘texture analysis’, is the extraction of quantitative data from medical images that may not be apparent to the naked eye, and has already demonstrated clinical utility in oncology for applications ranging from lesion characterisation to tumour grading and prognostication. Machine learning refers to the process of learning from experience (in this case, data) rather than following pre-programmed rules. This thesis presents the findings of a proof-of-principle study to assess the value of radiomics in identifying the ‘vulnerable plaque’ and the ‘vulnerable patient’ within the context of cerebrovascular events. To evaluate the potential of radiomic features as imaging biomarkers, their reproducibility and robustness to morphological perturbations were assessed, as well as their biological associations with both PET and immunohistochemistry data. The ability of radiomic features to classify different carotid artery types, namely culprit, non-culprit and asymptomatic carotid arteries, was assessed using several machine learning classifiers. This was subsequently compared with a deep learning approach, which has greater capacity for data mining than feature-based machine learning approaches. Overall, radiomics could extract further useful information from carotid CTA scans. Culprit versus non-culprit carotid arteries in symptomatic patients, and asymptomatic carotid arteries from asymptomatic patients, had different radiomic profiles that could be leveraged using machine learning for better classification performance than carotid calcification or carotid PET imaging alone. Reliable and robust CT-based carotid radiomic features were identified that were associated with the degree of inflammation underlying the carotid artery. If validated in future prospective studies, this has the potential to improve personalised patient care in stroke management and advance clinical decision-making. Cambridge School of Clinical Medicine, the Medical Research Council's Doctoral Training Partnership and the Frank Edward Elmore Fun
GANimation: one-shot anatomically consistent facial animation
The final publication is available at link.springer.com. Recent advances in generative adversarial networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN (Choi et al. in CVPR, 2018), which conditions GANs’ generation process with images of a specific domain, namely a set of images of people sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content and granularity of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on action unit (AU) annotations, which describe in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a weakly supervised strategy to train the model that only requires images annotated with their activated AUs, and exploit a novel self-learned attention mechanism that makes our network robust to changing backgrounds, lighting conditions and occlusions. Extensive evaluation shows that our approach goes beyond competing conditional generators, both in the capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements and in the capacity to deal with images in the wild. The code of this work is publicly available at https://github.com/albertpumarola/GANimation. Peer Reviewed. Postprint (author's final draft).
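The continuous AU conditioning described above can be sketched at the input level. This is only the data-preparation idea, a common way to feed a continuous condition vector to a convolutional generator by tiling it into constant feature maps; the function name, shapes, and numpy framing are illustrative assumptions, not GANimation's actual implementation.

```python
import numpy as np

def condition_on_aus(image, aus):
    """Concatenate a continuous action-unit vector to an image as constant
    per-AU feature maps, forming the conditioned generator input."""
    c, h, w = image.shape
    au_maps = np.broadcast_to(aus[:, None, None], (aus.shape[0], h, w))
    return np.concatenate([image, au_maps], axis=0)

rgb = np.zeros((3, 128, 128))
aus = np.array([0.0, 0.3, 0.9, 0.1])   # continuous activation magnitudes
x = condition_on_aus(rgb, aus)
print(x.shape)   # (7, 128, 128): 3 colour channels + 4 AU channels
```

Because the condition is a real-valued vector rather than a discrete domain label, activations can be set to any intermediate magnitude and several AUs can be combined, which is what enables a continuum of expressions.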
Assessing radiomic feature robustness to interpolation in 18F-FDG PET imaging
Radiomic studies link quantitative imaging features to patient outcomes in an effort to personalise treatment in oncology. To be clinically useful, a radiomic feature must be robust to image processing steps, which has made robustness testing a necessity for many technical aspects of feature extraction. We assessed the stability of radiomic features to interpolation processing and categorised features based on stable, systematic, or unstable responses. Here, 18F-fluorodeoxyglucose (18F-FDG) PET images for 441 oesophageal cancer patients (split: testing = 353, validation = 88) were resampled to 6 isotropic voxel sizes (1.5 mm, 1.8 mm, 2.0 mm, 2.2 mm, 2.5 mm, 2.7 mm) and 141 features were extracted from each volume of interest (VOI). Features were categorised into four groups with two statistical tests: feature reliability was analysed using an intraclass correlation coefficient (ICC) and patient ranking consistency was assessed using a Spearman’s rank correlation coefficient (ρ). We categorised 93 features as robust and 6 as having limited robustness (stable responses), 34 as potentially correctable (systematic responses), and 8 as not robust (unstable responses). We developed a correction technique for features with potential systematic variation that used surface fits to link voxel size and percentage change in feature value. Twenty-nine potentially correctable features were re-categorised as robust for the validation dataset after applying corrections defined by surface fits generated on the testing dataset. Furthermore, we found that the choice of interpolation algorithm alone (spline versus trilinear) resulted in large variation in values for a number of features, but the response categorisations remained constant. This study quantifies the diverse responses to isotropic voxel size interpolation of radiomic features commonly used in 18F-FDG PET clinical modelling.
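The two-test categorisation above can be sketched in a few lines. The Spearman calculation is standard (Pearson correlation of ranks, assuming no ties), but the decision thresholds and the ICC value below are illustrative assumptions, not the thresholds used in the study.

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation (tie-free case): Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def categorise(icc, rho, icc_thr=0.9, rho_thr=0.9):
    """Toy decision rule combining reliability (ICC) and ranking consistency (rho).
    Thresholds are illustrative, not those of the study."""
    if icc >= icc_thr and rho >= rho_thr:
        return "robust"
    if rho >= rho_thr:   # values drift systematically but patient ranking survives
        return "potentially correctable"
    return "not robust"

# A feature whose values shift with voxel size yet preserve patient ranking:
f_15mm = np.array([1.0, 2.0, 3.5, 5.0, 8.0])   # feature values at 1.5 mm voxels
f_27mm = 1.3 * f_15mm + 0.4                     # systematic drift at 2.7 mm
rho = spearman_rho(f_15mm, f_27mm)
print(rho)                                      # ranking fully preserved
print(categorise(icc=0.6, rho=rho))
```

A feature in the "potentially correctable" class is exactly the kind the abstract's surface-fit correction targets: its absolute values change with voxel size, but the change is systematic enough to model and undo.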
Analysis of 3D Face Reconstruction
This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face image. Face reconstruction from a single 2D face image is an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, light parameters, shape parameters and texture parameters. The proposed approach has many potential applications in law enforcement, surveillance, medicine, computer games and the entertainment industries. This problem is addressed using an analysis-by-synthesis framework by reconstructing a 3D face model from identity photographs. Identity photographs are a widely used medium for face identification and can be found on identity cards and passports.
The novel contribution of this thesis is a new technique for creating 3D face models from a single 2D face image. The proposed method uses improved dense 3D correspondence obtained using rigid and non-rigid registration techniques, whereas existing reconstruction methods use the optical flow method for establishing 3D correspondence. The resulting 3D face database is used to create a statistical shape model.
The existing reconstruction algorithms recover shape by optimizing over all the parameters simultaneously. The proposed algorithm simplifies the reconstruction problem by using a stepwise approach, thus reducing the dimension of the parameter space and simplifying the optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face image using anatomical landmarks. The texture is then warped onto the 3D model using the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over the shape parameters while matching a texture-mapped model to the target image.
There are a number of advantages to this approach. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately recover the illumination parameters. Third, there is no need to recover the texture parameters by using a texture synthesis approach. Fourth, quantitative analysis is used to improve the quality of reconstruction by improving the cost function. Previous methods use qualitative measures, such as visual analysis and face recognition rates, for evaluating reconstruction accuracy. The improvement in the performance of the cost function results from an improvement in the feature space comprising the landmark and intensity features. Previously, the feature space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate assumptions about its behaviour.
The proposed approach simplifies the reconstruction problem by using only identity images, rather than placing effort on overcoming the pose, illumination and expression (PIE) variations. This makes sense, as frontal face images under standard illumination conditions are widely available and could be utilized for accurate reconstruction. The reconstructed 3D models with texture can then be used to overcome the PIE variations.