
    3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes

    While deep convolutional neural networks (CNN) have been successfully applied to 2D image analysis, applying them to 3D anisotropic volumes remains challenging, especially when the within-slice resolution is much higher than the between-slice resolution and the number of available 3D volumes is relatively small. On one hand, directly training a CNN with 3D convolution kernels suffers from the lack of data and tends to generalize poorly, while limited GPU memory constrains the model size and representational power. On the other hand, applying a 2D CNN with generalizable features to individual slices ignores between-slice information, and coupling a 2D network with an LSTM to handle the between-slice information is suboptimal because LSTMs are difficult to train. To overcome these challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the strong generalization capability for within-slice information while naturally exploiting between-slice information for more effective modelling. The focal loss is further utilized for more effective end-to-end learning. We evaluate the proposed 3D AH-Net on two medical image analysis tasks, namely lesion detection in Digital Breast Tomosynthesis volumes and liver and liver-tumor segmentation in Computed Tomography volumes, and obtain state-of-the-art results.
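
    The focal loss mentioned above follows the standard formulation of Lin et al.: a modulating factor (1 - p_t)^γ down-weights well-classified examples so training concentrates on hard ones. A minimal NumPy sketch (the α and γ defaults are the commonly used values, not necessarily those of the paper):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probabilities for the positive class, shape (N,)
    y: binary ground-truth labels, shape (N,)
    alpha, gamma: the usual defaults from the focal-loss paper (assumed here).
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    # p_t is the probability the model assigns to the true class
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# an easy example (p = 0.9) is strongly down-weighted relative to a hard one
losses = focal_loss(np.array([0.9, 0.1]), np.array([1, 1]))
```

    Setting γ = 0 and α = 1 recovers plain cross-entropy for positives, which makes the modulating effect easy to verify.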

    Automatic Collimation Detection in Digital Radiographs with the Directed Hough Transform and Learning-Based Edge Detection

    Abstract. Collimation is widely used in X-ray examinations to reduce the overall radiation exposure to the patient and to improve the contrast resolution in the region of interest (ROI), which has been exposed directly to X-rays. It is desirable to detect the ROI and exclude the unexposed area so as to optimize the image display. Although we focus only on X-ray images generated with a rectangular collimator, the task remains challenging because of the large variability of collimated images. In this study, we detect the ROI as an optimal quadrilateral, namely the intersection of an optimal group of four half-planes, each defined as the positive side of a directed straight line. We develop an extended Hough transform for directed straight lines on a model-aware gray-level edge map, which is estimated with random forests [1] on features of pairs of superpixels. Experiments show that our algorithm extracts the ROI quickly and accurately, despite variations in size, shape and orientation and incompleteness of boundaries.
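
    The directed Hough transform above can be illustrated in miniature: extending the accumulator's angular range to [0, 2π) lets each bin represent a directed line, i.e. one of the two half-plane orientations, and each edge point votes only for directions close to its own gradient. The sketch below is a simplified stand-in for the paper's method; the bin counts, voting window and synthetic edge set are all assumptions:

```python
import numpy as np

def directed_hough(edges, n_theta=72, window=np.pi / 18, diag=200):
    """Toy directed Hough transform for lines rho = x*cos(theta) + y*sin(theta).

    edges: list of (x, y, g) where g is the local gradient angle in [0, 2*pi).
    Unlike the classic transform, theta spans the full circle, so each bin
    encodes a *directed* normal (and hence a positive half-plane side).
    Bin counts and the voting window are simplifying assumptions, not the
    paper's parameters.
    """
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for x, y, g in edges:
        for i, t in enumerate(thetas):
            # angular distance on the circle between normal t and gradient g
            d = abs((t - g + np.pi) % (2 * np.pi) - np.pi)
            if d > window:
                continue
            rho = int(round(x * np.cos(t) + y * np.sin(t)))
            acc[i, rho + diag] += 1
    return thetas, acc

# vertical edge at x = 30 whose gradient points along +x (angle 0)
edges = [(30, y, 0.0) for y in range(50)]
thetas, acc = directed_hough(edges)
i, r = np.unravel_index(np.argmax(acc), acc.shape)  # peak: theta = 0, rho = 30
```

    The reversed edge (gradient along -x) would peak at θ = π instead, which is what distinguishes the two half-planes bounded by the same geometric line.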

    Development of a synthetic phantom for the selection of optimal scanning parameters in CAD-CT colonography

    The aim of this paper is to present the development of a synthetic phantom that can be used for the selection of optimal scanning parameters in computed tomography (CT) colonography. We evaluate the influence of the main scanning parameters, including slice thickness, reconstruction interval, field of view, table speed and radiation dose, on the overall performance of a computer-aided detection (CAD)–CTC system. Among these parameters, radiation dose received special attention, as the major problem associated with CTC is the patient's exposure to significant levels of ionising radiation. To examine the influence of the scanning parameters we performed 51 CT scans, with the spread of scanning parameters divided into seven different protocols. A large number of experimental tests were performed and the results analysed. The results show that automatic polyp detection is feasible even when the CAD–CTC system is applied to low-dose CT data acquired with the following protocol: 13 mAs per rotation, collimation of 1.5 mm × 16 mm, slice thickness of 3.0 mm, reconstruction interval of 1.5 mm and table speed of 30 mm per rotation. The CT phantom data acquired using this protocol were analysed by an automated CAD–CTC system, and the experimental results indicate that our system identified all clinically significant polyps (i.e. larger than 5 mm).
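
    As a quick sanity check on the quoted low-dose protocol, the helical pitch implied by these numbers can be computed with the standard definition (table feed per rotation divided by total beam collimation). Reading the stated collimation as 16 sections of 1.5 mm is our assumption, not a statement from the paper:

```python
def helical_pitch(table_feed_mm, n_sections, section_collimation_mm):
    """Pitch = table feed per rotation / total beam collimation (standard IEC definition)."""
    return table_feed_mm / (n_sections * section_collimation_mm)

# low-dose protocol from the text: 30 mm per rotation, 16 x 1.5 mm collimation
pitch = helical_pitch(30.0, 16, 1.5)  # 30 / 24 = 1.25
```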

    The use of 3D surface fitting for robust polyp detection and classification in CT colonography

    In this paper we describe the development of a computationally efficient computer-aided detection (CAD) algorithm, based on the evaluation of surface morphology, for the detection of colonic polyps in computed tomography (CT) colonography. Initial polyp candidate voxels were detected using surface-normal intersection values. These candidate voxels were clustered using the normal direction, a convexity test, region growing and a Gaussian distribution. The local colonic surface was then classified as polyp or fold using a feature-normalized nearest-neighbor classifier. The main merit of this paper is the methodology applied to select robust features, derived from the colon surface, that have high discriminative power for polyp/fold classification. The devised polyp detection scheme entails a low computational overhead (typically 2.20 minutes per dataset) and shows 100% sensitivity for phantom polyps larger than 5 mm. It also shows 100% sensitivity for real polyps larger than 10 mm and 91.67% sensitivity for polyps between 5 and 10 mm, with an average of 4.5 false positives per dataset. The experimental data indicate that the proposed CAD polyp detection scheme outperforms techniques that identify polyps using features sampling the colon surface curvature, especially when applied to low-dose datasets.
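
    A feature-normalized nearest-neighbor classifier can be sketched as follows. The z-score normalization and the toy 2-D feature vectors are illustrative assumptions, standing in for the paper's surface-derived features:

```python
import numpy as np

def fit_normalizer(X):
    """Per-feature z-score statistics, so no feature dominates the distance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return mu, sigma

def nn_classify(X_train, y_train, x, mu, sigma):
    """1-NN in the normalized feature space (here: toy polyp vs. fold labels)."""
    Z = (X_train - mu) / sigma
    z = (x - mu) / sigma
    d = np.linalg.norm(Z - z, axis=1)
    return y_train[np.argmin(d)]

# toy training set: two well-separated clusters (0 = fold, 1 = polyp, assumed)
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
y_train = np.array([0, 0, 1, 1])
mu, sigma = fit_normalizer(X_train)
label = nn_classify(X_train, y_train, np.array([9.0, 9.0]), mu, sigma)
```

    Normalizing before measuring distance matters because surface features (e.g. curvature statistics vs. voxel counts) live on very different scales.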

    Out-of-plane artifact reduction in tomosynthesis based on regression modeling and outlier detection.

    We propose a method for out-of-plane artifact reduction in digital breast tomosynthesis (DBT) reconstruction. Because of the limited angular range of acquisition in DBT, the reconstructed slices have reduced resolution in the z-direction and are affected by artifacts. The out-of-plane blur caused by dense tissue and large masses complicates the reconstruction of thick-slice volumes. The streak-like out-of-plane artifacts caused by calcifications and metal clips distort the shape of calcifications, which many radiologists regard as an important malignancy predictor, and small clinical features such as micro-calcifications can be obscured by bright artifacts. The proposed technique involves reconstructing a set of super-resolution slices and predicting the artifact-free voxel intensity from the corresponding set of projection pixels using a statistical model learned from training data. Our experiments show that the resulting reconstructed images are de-blurred, streak-like artifacts are reduced, the visibility of clinical features, contrast and sharpness are improved, and thick-slice reconstruction is possible without loss of contrast and sharpness.
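
    The idea of predicting an artifact-free voxel value from its set of projection pixels, while rejecting the few views corrupted by a bright calcification or metal clip, can be illustrated with a simple robust estimator. The paper learns a statistical regression model from training data; the median-absolute-deviation rule below is only a hand-crafted stand-in:

```python
import numpy as np

def robust_voxel_value(ray_samples, k=3.0):
    """Estimate a voxel from its backprojected samples, rejecting streak outliers.

    ray_samples: intensities this voxel receives from each projection angle.
    A bright calcification visible in only a few views shows up as high
    outliers; samples beyond k robust standard deviations (via the median
    absolute deviation, scaled by 1.4826 for Gaussian consistency) are
    dropped and the inliers averaged.
    """
    v = np.asarray(ray_samples, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    if mad == 0:
        return med  # all samples (essentially) identical
    inliers = v[np.abs(v - med) <= k * 1.4826 * mad]
    return inliers.mean()

# one view hit by a calcification streak (200) barely affects the estimate
value = robust_voxel_value([10, 11, 9, 10, 200, 10, 11])
```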

    Neural net based image matching

    The paper describes a neural-network-based method for matching spatially distorted image sets. Matching partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, in different seasons and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
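
    The final step above, recovering the image-to-image coordinate transformation from matched reference points, can be sketched as a least-squares similarity fit (Umeyama's method). The correspondence step, which the paper's custom networks perform, is assumed already solved here:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity fit dst ~ s * R @ src + t (Umeyama's method).

    src, dst: (N, 2) arrays of corresponding reference points.
    Returns scale s, 2x2 rotation matrix R and translation vector t.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s = src.mean(axis=0)
    mu_d = dst.mean(axis=0)
    A = src - mu_s
    B = dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)   # cross-covariance between point sets
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    D = np.array([1.0, d])
    R = U @ np.diag(D) @ Vt
    s = (S * D).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# toy check: known rotation (30 degrees), scale 2 and translation (3, -1)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
dst = 2.0 * src @ R_true.T + np.array([3.0, -1.0])
s, R, t = fit_similarity(src, dst)
```

    With noisy correspondences the same fit gives the best similarity transform in the least-squares sense, which is why it pairs naturally with a "most successful variant" selection over candidate curve matches.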