
    Automatic Segmentation and Disease Classification Using Cardiac Cine MR Images

    Segmentation of the heart in cardiac cine MR is clinically used to quantify cardiac function. We propose a fully automatic method for segmentation and disease classification using cardiac cine MR images. A convolutional neural network (CNN) was designed to simultaneously segment the left ventricle (LV), right ventricle (RV) and myocardium in end-diastole (ED) and end-systole (ES) images. Features derived from the obtained segmentations were used in a Random Forest classifier to label patients as suffering from dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure following myocardial infarction, right ventricular abnormality, or no cardiac disease. The method was developed and evaluated using a balanced dataset containing images of 100 patients, which was provided in the MICCAI 2017 automated cardiac diagnosis challenge (ACDC). The segmentation and classification pipeline was evaluated in a four-fold stratified cross-validation. Average Dice scores between reference and automatically obtained segmentations were 0.94, 0.88 and 0.87 for the LV, RV and myocardium, respectively. The classifier assigned 91% of patients to the correct disease category. Segmentation and disease classification took 5 s per patient. The results of our study suggest that image-based diagnosis using cine MR cardiac scans can be performed automatically with high accuracy. Comment: Accepted in STACOM Automated Cardiac Diagnosis Challenge 2017
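The Dice scores quoted above compare an automatic segmentation with a reference delineation. As a minimal sketch (not the authors' code; the masks here are toy 2-D examples), the metric can be computed from two binary masks represented as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as sets of voxel indices."""
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b))

# Toy 2-D masks for illustration only.
ref  = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(dice(ref, pred), 2))  # 0.75
```

A Dice score of 1 means perfect overlap, so the reported 0.94 for the LV indicates close agreement with the manual reference.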

    Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier

    Coronary artery centerline extraction in cardiac CT angiography (CCTA) images is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We propose an algorithm that extracts coronary artery centerlines in CCTA using a convolutional neural network (CNN). A 3D dilated CNN is trained to predict the most likely direction and radius of an artery at any given point in a CCTA image based on a local image patch. Starting from a single seed point placed manually or automatically anywhere in a coronary artery, a tracker follows the vessel centerline in two directions using the predictions of the CNN. Tracking is terminated when no direction can be identified with high certainty. The CNN was trained using 32 manually annotated centerlines in a training set consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08 challenge showed that extracted centerlines had an average overlap of 93.7% with 96 manually annotated reference centerlines. Extracted centerline points were highly accurate, with an average distance of 0.21 mm to reference centerline points. In a second test set consisting of 50 CCTA scans, 5,448 markers in the coronary arteries were used as seed points to extract single centerlines. This showed strong correspondence between extracted centerlines and manually placed markers. In a third test set containing 36 CCTA scans, fully automatic seeding and centerline extraction led to extraction of on average 92% of clinically relevant coronary artery segments. The proposed method is able to accurately and efficiently determine the direction and radius of coronary arteries. The method can be trained with limited training data, and once trained allows fast automatic or interactive extraction of coronary artery trees from CCTA images. Comment: Accepted in Medical Image Analysis
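The tracking loop described above (step along the predicted direction from a seed, in both directions, until certainty drops) can be sketched as follows. Here `toy_cnn` is a hypothetical stand-in for the trained orientation classifier, not the paper's network:

```python
def track(seed, predict, step=0.5, min_conf=0.5, max_steps=1000):
    """Follow a vessel centerline in both directions from a seed point.

    `predict(p)` returns (direction, confidence), where direction is a
    unit 3-vector; tracking in each direction stops when the confidence
    falls below `min_conf`.
    """
    centerline = [seed]
    for sign in (+1, -1):
        p = seed
        for _ in range(max_steps):
            direction, conf = predict(p)
            if conf < min_conf:
                break  # no direction identified with high certainty
            p = tuple(pi + sign * step * di for pi, di in zip(p, direction))
            if sign == +1:
                centerline.append(p)
            else:
                centerline.insert(0, p)
    return centerline

def toy_cnn(p):
    # Hypothetical stand-in for the CNN: a straight artery along the
    # x-axis, predicted with high certainty only while |x| < 5 mm.
    return (1.0, 0.0, 0.0), (1.0 if abs(p[0]) < 5 else 0.0)

line = track((0.0, 0.0, 0.0), toy_cnn)
print(len(line))  # 21 centerline points
```

In the real method the predictor also returns a radius estimate, which this sketch omits for brevity.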

    Programmable two-photon quantum interference in 10^3 channels in opaque scattering media

    We investigate two-photon quantum interference in an opaque scattering medium that intrinsically supports 10^6 transmission channels. By adaptive spatial phase-modulation of the incident wavefronts, the photons are directed at targeted speckle spots or output channels. From 10^3 experimentally available coupled channels, we select two channels and enhance their transmission, to realize the equivalent of a fully programmable 2×2 beam splitter. By sending pairs of single photons from a parametric down-conversion source through the opaque scattering medium, we observe two-photon quantum interference. The programmed beam splitter need not fulfill energy conservation over the two selected output channels and hence could be non-unitary. Consequently, we have the freedom to tune the quantum interference from bunching (Hong-Ou-Mandel-like) to antibunching. Our results establish opaque scattering media as a platform for high-dimensional quantum interference that is notably relevant for boson sampling and physical-key-based authentication.
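The tunability claimed above follows from the two-photon coincidence amplitude, which for one photon in each input port of a 2×2 splitter with transmission matrix t is the permanent t11·t22 + t12·t21. A minimal numerical sketch (illustrative matrices, not measured data):

```python
import math

def coincidence(t):
    # Two-photon coincidence probability (unnormalised) for one photon in
    # each input port: |permanent of the 2x2 transmission matrix|^2.
    amp = t[0][0] * t[1][1] + t[0][1] * t[1][0]
    return abs(amp) ** 2

s = 1 / math.sqrt(2)
unitary = [[s, s], [s, -s]]   # balanced, energy-conserving splitter
lossy   = [[s, s], [s,  s]]   # non-unitary, as realisable in the medium

print(round(coincidence(unitary), 6))  # 0.0 -> bunching (Hong-Ou-Mandel dip)
print(round(coincidence(lossy), 6))    # 1.0 -> enhanced coincidences
```

Because the programmed splitter need not be unitary, the permanent, and with it the coincidence rate, can be tuned continuously between these extremes.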

    Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis

    In patients with coronary artery stenoses of intermediate severity, the functional significance needs to be determined. Fractional flow reserve (FFR) measurement, performed during invasive coronary angiography (ICA), is most often used in clinical practice. To reduce the number of ICA procedures, we present a method for automatic identification of patients with functionally significant coronary artery stenoses, employing deep learning analysis of the left ventricle (LV) myocardium in rest coronary CT angiography (CCTA). The study includes consecutively acquired CCTA scans of 166 patients with FFR measurements. To identify patients with a functionally significant coronary artery stenosis, analysis is performed in several stages. First, the LV myocardium is segmented using a multiscale convolutional neural network (CNN). To characterize the segmented LV myocardium, it is subsequently encoded using an unsupervised convolutional autoencoder (CAE). Thereafter, patients are classified according to the presence of functionally significant stenosis using an SVM classifier based on the extracted and clustered encodings. Quantitative evaluation of LV myocardium segmentation in 20 images resulted in an average Dice coefficient of 0.91 and an average mean absolute distance between the segmented and reference LV boundaries of 0.7 mm. Classification of patients was evaluated in the remaining 126 CCTA scans in 50 10-fold cross-validation experiments and resulted in an area under the receiver operating characteristic curve of 0.74 ± 0.02. At sensitivity levels 0.60, 0.70 and 0.80, the corresponding specificity was 0.77, 0.71 and 0.59, respectively.
The results demonstrate that automatic analysis of the LV myocardium in a single CCTA scan acquired at rest, without assessment of the anatomy of the coronary arteries, can be used to identify patients with functionally significant coronary artery stenosis. Comment: This paper was submitted in April 2017 and accepted in November 2017 for publication in Medical Image Analysis. Please cite as: Zreik et al., Medical Image Analysis, 2018, vol. 44, pp. 72-8
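The reported area under the ROC curve equals the probability that a randomly chosen positive patient receives a higher classifier score than a randomly chosen negative one (the Mann-Whitney statistic). A small self-contained sketch with made-up scores:

```python
def auc(pos_scores, neg_scores):
    # Mann-Whitney form of the ROC AUC: fraction of (positive, negative)
    # score pairs ranked correctly, with ties counted as half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores, not the study's data.
print(round(auc([0.9, 0.8, 0.45], [0.7, 0.4, 0.3]), 3))  # 0.889
```

An AUC of 0.5 corresponds to chance-level ranking, so the study's 0.74 ± 0.02 reflects a clearly informative, though imperfect, classifier.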

    Depth-Supervised NeRF for Multi-View RGB-D Operating Room Images

    Neural Radiance Fields (NeRF) is a powerful novel technology for the reconstruction of 3D scenes from a set of images captured by static cameras. Renders of these reconstructions could play a role in virtual presence in the operating room (OR), e.g. for training purposes. In contrast to existing systems for virtual presence, NeRF can provide real instead of simulated surgeries. This work shows how NeRF can be used for view synthesis in the OR. A depth-supervised NeRF (DS-NeRF) is trained with three or five synchronised cameras that capture the surgical field in knee replacement surgery videos from the 4D-OR dataset. The algorithm is trained and evaluated for images in five distinct phases before and during the surgery. With qualitative analysis, we inspect views synthesised by a virtual camera that moves in 180 degrees around the surgical field. Additionally, we quantitatively inspect view synthesis from an unseen camera position in terms of PSNR, SSIM and LPIPS for the colour channels and in terms of MAE and error percentage for the estimated depth. DS-NeRF generates geometrically consistent views, also from interpolated camera positions. Views are generated from an unseen camera pose with an average PSNR of 17.8 and a depth estimation error of 2.10%. However, due to artefacts and missing fine details, the synthesised views do not look photo-realistic. Our results show the potential of NeRF for view synthesis in the OR. Recent developments, such as NeRF for video synthesis and training speedups, require further exploration to reveal its full potential. Comment: 12 pages, 4 figures, submitted to the 14th International Conference on Information Processing in Computer-Assisted Intervention
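PSNR, the colour-channel metric quoted above, is defined from the mean squared error between the rendered and reference images. A minimal sketch with toy pixel values (not data from the 4D-OR experiments):

```python
import math

def psnr(ref, rec, peak=255.0):
    # Peak signal-to-noise ratio in dB between two images given as flat
    # lists of pixel intensities; higher means a more faithful render.
    mse = sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)
    return 10.0 * math.log10(peak ** 2 / mse)

print(round(psnr([10.0, 200.0, 30.0, 120.0], [12.0, 190.0, 28.0, 125.0]), 1))  # 32.9
```

For context, natural-image renders are often considered good above roughly 30 dB, which puts the reported average of 17.8 dB well short of photo-realism, consistent with the qualitative findings.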

    Polymers grafted to porous membranes

    We study a single flexible chain molecule grafted to a membrane which has pores of size slightly larger than the monomer size. On both sides of the membrane there is the same solvent. When this solvent is good, i.e. when the polymer is described by a self-avoiding walk, it can fairly easily penetrate the membrane, so that the average number of membrane crossings tends, for chain length N → ∞, to a positive constant. The average numbers of monomers on either side of the membrane diverge in this limit, although their ratio becomes infinite. For a poor solvent, in contrast, the entire polymer is located, for large N, on one side of the membrane. For good and for theta solvents (ideal polymers) we find scaling laws, whose exponents can in the latter case be easily understood from the behaviour of random walks. Comment: 4 pages, 6 figures
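For the theta-solvent (ideal-polymer) case, where the chain behaves as a random walk, the crossing statistics can be explored numerically. The following is a 1-D toy sketch, not the paper's model; `pore_prob`, the probability that a monomer passes a pore, is an invented illustrative parameter:

```python
import random

def membrane_crossings(n_steps, pore_prob=0.5, seed=0):
    # 1-D random-walk chain grafted next to a membrane that sits between
    # lattice sites 0 and 1. A step across the membrane succeeds only
    # with probability pore_prob; otherwise the monomer stays put.
    rng = random.Random(seed)
    x, crossings = 0, 0
    for _ in range(n_steps):
        step = rng.choice((-1, 1))
        crosses = (x, x + step) in ((0, 1), (1, 0))
        if crosses and rng.random() >= pore_prob:
            continue  # blocked by the membrane
        if crosses:
            crossings += 1
        x += step
    return crossings

print(membrane_crossings(10_000))                 # crossings for an open membrane
print(membrane_crossings(10_000, pore_prob=0.0))  # 0: impermeable membrane
```

Varying the chain length `n_steps` in such a simulation is one way to probe the scaling laws the abstract refers to.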