928 research outputs found
Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15 dB on images and +0.39 dB on videos) and is an order of magnitude faster than previous CNN-based methods.
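The channel-to-space rearrangement behind a sub-pixel convolution layer (often called pixel shuffle) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the function name and shapes are assumptions, and the learned part (the convolutions that produce the input tensor) is omitted.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) LR feature tensor into a (C, H*r, W*r) HR output.

    Each group of r*r channels supplies the r x r sub-pixel grid of one output
    channel, so the upscaling step itself is a pure memory reshuffle; all learned
    work happens in the convolutions that produced x.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# four channels of a 1x1 feature map become one 2x2 output
lr = np.arange(4.0).reshape(4, 1, 1)
hr = pixel_shuffle(lr, 2)               # [[0., 1.], [2., 3.]]
```

Because the rearrangement is deterministic, replacing bicubic upsampling with this layer only costs the (cheap, LR-space) convolutions that feed it.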
Stratified decision forests for accurate anatomical landmark localization in cardiac images
Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning based approaches, such as classifier and/or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient specific models, we propose a novel stratification based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.
Co-Inventor of Jet Engine to Donate Draper Prize Medal to UD
News release announces that Hans von Ohain will present the Charles Stark Draper medal to the University of Dayton
Solving a Slick Problem; Morally Preferable; Objectors by Conscience
News release announces a UD biologist's solutions for cleaning up the massive oil spill in the Persian Gulf, a senior research scientist's comments on the use of smart weapons in the Persian Gulf, and counseling concerning the selective service law and legal options that will be offered to members of the UD community.
Automated analysis of atrial late gadolinium enhancement imaging that correlates with endocardial voltage and clinical outcomes: A 2-center study
This work was supported by the British Heart Foundation PG/10/37/28347, RG/10/11/28457, NIHR Biomedical Research Centre funding, and the ElectroCardioMaths Programme of the Imperial BHF Centre of Research Excellence
A novel grading biomarker for the prediction of conversion from mild cognitive impairment to Alzheimer's disease
OBJECTIVE: Identifying mild cognitive impairment (MCI) subjects who will progress to Alzheimer's disease is not only crucial in clinical practice, but also has a significant potential to enrich clinical trials. The purpose of this study is to develop an effective biomarker for an accurate prediction of MCI-to-AD conversion from magnetic resonance (MR) images. METHODS: We propose a novel grading biomarker for the prediction of MCI-to-AD conversion. First, we comprehensively study the effects of several important factors on the performance in the prediction task, including registration accuracy, age correction, feature selection and the selection of training data. Based on the studies of these factors, a grading biomarker is then calculated for each MCI subject using sparse representation techniques. Finally, the grading biomarker is combined with age and cognitive measures to provide a more accurate prediction of MCI-to-AD conversion. RESULTS: Using the ADNI dataset, the proposed global grading biomarker achieved an area under the receiver operating characteristic curve (AUC) in the range of 79%-81% for the prediction of MCI-to-AD conversion within 3 years in 10-fold cross-validation. The classification AUC further increases to 84%-92% when age and cognitive measures are combined with the proposed grading biomarker. CONCLUSION: The obtained accuracy of the proposed biomarker benefits from the contributions of different factors: a trade-off registration level for aligning images to the template space; the removal of the normal aging effect; the selection of discriminative voxels; the calculation of the grading biomarker using AD and normal control groups; the integration of the sparse representation technique; and the combination of cognitive measures. SIGNIFICANCE: The evaluation on the ADNI dataset shows the efficacy of the proposed biomarker and demonstrates a significant contribution in accurate prediction of MCI-to-AD conversion.
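The core of a sparse-representation grading biomarker can be sketched as follows: reconstruct a subject's feature vector as a sparse combination of AD and normal-control (NC) exemplars, then compare how much reconstruction weight falls on each group. This is a hedged sketch of the general idea, not the paper's implementation; the function names, the ISTA solver, and the score normalisation are all assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grading_score(x, D_ad, D_nc, lam=0.1, n_iters=200):
    """Grade one subject's feature vector x against AD and NC dictionaries.

    D_ad, D_nc: (d, n) matrices whose columns are feature vectors of AD / NC
    subjects. Solves min_w 0.5*||D w - x||^2 + lam*||w||_1 with ISTA, then
    compares the weight assigned to each group: a score near +1 means the
    subject is reconstructed mostly from AD exemplars, near -1 mostly from NC.
    """
    D = np.hstack([D_ad, D_nc])
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ w - x)
        w = soft_threshold(w - grad / L, lam / L)
    a = np.abs(w[:D_ad.shape[1]]).sum()
    b = np.abs(w[D_ad.shape[1]:]).sum()
    return (a - b) / (a + b + 1e-12)
```

In practice such a score would be computed per region or per voxel patch and aggregated before being combined with age and cognitive measures, as the abstract describes.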
Geodesic Information Flows: Spatially-Variant Graphs and Their Application to Segmentation and Fusion
Clinical annotations, such as voxel-wise binary or probabilistic tissue segmentations, structural parcellations, pathological regions-of-interest and anatomical landmarks are key to many clinical studies. However, due to the time consuming nature of manually generating these annotations, they tend to be scarce and limited to small subsets of data. This work explores a novel framework to propagate voxel-wise annotations between morphologically dissimilar images by diffusing and mapping the available examples through intermediate steps. A spatially-variant graph structure connecting morphologically similar subjects is introduced over a database of images, enabling the gradual diffusion of information to all the subjects, even in the presence of large-scale morphological variability. We illustrate the utility of the proposed framework on two example applications: brain parcellation using categorical labels and tissue segmentation using probabilistic features. The application of the proposed method to categorical label fusion showed highly statistically significant improvements when compared to state-of-the-art methodologies. Significant improvements were also observed when applying the proposed framework to probabilistic tissue segmentation of both synthetic and real data, mainly in the presence of large morphological variability.
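A minimal sketch of the underlying diffusion idea: propagate label probabilities over a nearest-neighbour similarity graph, clamping the annotated subjects at every step so information flows from them to morphologically similar unannotated subjects. This is not the authors' geodesic framework; the Gaussian weighting, the per-subject feature vectors and all names here are illustrative assumptions.

```python
import numpy as np

def diffuse_labels(features, labels, known, n_neighbors=2, n_iters=50):
    """Propagate probabilistic labels over a similarity graph.

    features: (n, d) per-subject descriptors measuring morphological similarity
    labels:   (n, c) label probabilities (rows for unknown subjects may start uniform)
    known:    boolean mask marking subjects that carry manual annotations
    """
    n = len(features)
    # pairwise squared distances -> Gaussian similarity weights
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (d2.mean() + 1e-8))
    np.fill_diagonal(w, 0.0)
    # keep only each subject's n_neighbors most similar neighbours
    for i in range(n):
        weak = np.argsort(w[i])[:-n_neighbors]
        w[i, weak] = 0.0
    w /= w.sum(1, keepdims=True)
    out = labels.copy()
    for _ in range(n_iters):
        out = w @ out                    # diffuse one step along the graph
        out[known] = labels[known]       # clamp annotated subjects
    return out
```

Restricting each row to a few strong neighbours is what lets information reach dissimilar subjects only through chains of intermediate, mutually similar ones, which is the intuition the abstract describes.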
Deep learning cardiac motion analysis for human survival prediction
Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < .0001) for our model, C=0.73 (95% CI: 0.68-0.78), than for the human benchmark of C=0.59 (95% CI: 0.53-0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
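The Cox partial likelihood loss mentioned above handles right-censoring by comparing, at each observed event, the patient who died against everyone still at risk. A minimal NumPy sketch of that loss (not the 4Dsurvival training code; names, averaging and tie handling are assumptions, and ties are ignored here):

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood for right-censored survival data.

    risk:  predicted log-risk score per patient (the network output)
    time:  follow-up time per patient
    event: 1 if the event (death) was observed, 0 if the patient was censored

    Each observed event contributes the difference between that patient's
    risk score and the log-sum-exp of scores over the at-risk set, i.e.
    everyone whose follow-up time is at least the event time.
    """
    risk, time, event = map(np.asarray, (risk, time, event))
    loss, n_events = 0.0, 0
    for i in range(len(risk)):
        if event[i]:
            at_risk = risk[time >= time[i]]
            loss -= risk[i] - np.log(np.exp(at_risk).sum())
            n_events += 1
    return loss / max(n_events, 1)
```

Censored patients never contribute their own term but still appear in the at-risk sets of earlier events, which is how the loss uses their partial follow-up information.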
Autoadaptive motion modelling for MR-based respiratory motion estimation
This repository contains four T1-weighted 2D MR slice datasets from multiple slice positions covering the entire thorax during free breathing and breath holds. The data was used to evaluate our novel autoadaptive respiratory motion model which we proposed in [1]. In particular, the datasets contain the following:
Acquisition of all sagittal slice positions covering the thorax and one coronal slice position acquired during a breath hold.
Results of registration between adjacent sagittal slice positions [control point displacements (cpp) and displacement fields (dfs)]
40 dynamic acquisitions, acquired during free breathing, of each slice position that is also present in the breath-hold data.
Results of registration of the dynamic acquisitions to the respective breath-hold slices (cpp and dfs).
The data is divided into 4 zip files, each containing the data of one volunteer. The folder structure for each is as follows:
|-- bhs (breath hold data)
| |-- images (images)
| | |-- cor
| | `-- sag
| `-- mfs_slpos2slpos (registration results)
| `-- sag
`-- dyn (dynamic free-breathing data)
|-- images (images)
| |-- cor
| `-- sag
`-- mfs_tpos2tpos (registration results)
|-- cor
`-- sag
Please see our publication [1] for details on the acquisition sequence and registration used.
--
[1]: CF Baumgartner, C Kolbitsch, JR McClelland, D Rueckert, AP King, Autoadaptive motion modelling for MR-based respiratory motion estimation, Medical Image Analysis (2016), http://dx.doi.org/10.1016/j.media.2016.06.00