
    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, comparisons among many neural network applications are highlighted to provide a global view of computational intelligence with neural networks in medical imaging.
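
    To make point (i) concrete, the following is a minimal sketch, not taken from the survey, of how a small fixed-structure convolutional network could be applied to a hypothetical single-channel computer-aided-diagnosis classification task. PyTorch, the layer sizes, and the input resolution are illustrative assumptions.

    ```python
    # Minimal sketch (not from the survey): a small fixed-structure CNN applied to a
    # hypothetical single-channel medical image classification task, using PyTorch.
    import torch
    import torch.nn as nn

    class SimpleDiagnosisCNN(nn.Module):
        """Toy computer-aided-diagnosis classifier: image in, class scores out."""
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Example: a batch of four 128x128 single-channel scans.
    scores = SimpleDiagnosisCNN()(torch.randn(4, 1, 128, 128))
    print(scores.shape)  # torch.Size([4, 2])
    ```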

    Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)

    Positron emission tomography (PET) image synthesis plays an important role in boosting the training data available to computer-aided diagnosis systems. However, existing image synthesis methods have problems synthesizing low-resolution PET images. To address these limitations, we propose a multi-channel generative adversarial network (M-GAN) based PET image synthesis method. Unlike existing methods that rely on low-level features, the proposed M-GAN can represent high-level semantic features through adversarial learning. In addition, M-GAN takes the annotation (label) as input to synthesize high-uptake regions, e.g., tumors, and the computed tomography (CT) image to constrain appearance consistency, and outputs the synthetic PET image directly. Our results on 50 lung cancer PET-CT studies indicate that our method produced images much closer to the real PET images than the existing methods. Comment: 9 pages, 2 figures
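
    The following is a minimal sketch, based on assumptions rather than the authors' released code, of the kind of multi-channel conditional GAN the abstract describes: a generator consumes the annotation (label) map and a CT slice stacked as two input channels and emits a synthetic PET slice, while a discriminator judges (label, CT, PET) triples. PyTorch, the layer widths, and the toy shapes are illustrative choices.

    ```python
    # Minimal sketch (assumed architecture, not the authors' code): a multi-channel
    # conditional GAN where the generator maps (label map, CT) to a synthetic PET slice.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                             nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            # Two input channels: annotation (label) map and CT slice.
            self.net = nn.Sequential(conv_block(2, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 1, 1), nn.Tanh())
        def forward(self, label_map, ct):
            return self.net(torch.cat([label_map, ct], dim=1))

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            # Judges (label, CT, PET) triples: real vs. synthetic PET.
            self.net = nn.Sequential(conv_block(3, 32), nn.AdaptiveAvgPool2d(1),
                                     nn.Flatten(), nn.Linear(32, 1))
        def forward(self, label_map, ct, pet):
            return self.net(torch.cat([label_map, ct, pet], dim=1))

    # One forward pass on random stand-in data (shapes only, no training loop).
    label_map, ct, real_pet = (torch.randn(2, 1, 64, 64) for _ in range(3))
    g, d = Generator(), Discriminator()
    fake_pet = g(label_map, ct)
    d_real, d_fake = d(label_map, ct, real_pet), d(label_map, ct, fake_pet)
    ```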

    Pure phase-encoded MRI and classification of solids

    Here, the authors combine a pure phase-encoded magnetic resonance imaging (MRI) method with a new tissue-classification technique to make geometric models of a human tooth. They demonstrate the feasibility of three-dimensional imaging of solids using a conventional 11.7-T NMR spectrometer. In solid-state imaging, confounding line-broadening effects are typically eliminated using coherent averaging methods. Instead, the authors circumvent them by detecting the proton signal at a fixed phase-encode time following the radio-frequency excitation. By a judicious choice of the phase-encode time in the MRI protocol, the authors differentiate enamel and dentine sufficiently to successfully apply a new classification algorithm. This tissue-classification algorithm identifies the distribution of different material types, such as enamel and dentine, in volumetric data. In this algorithm, the authors treat a voxel as a volume, not as a single point, and assume that each voxel may contain more than one material. They use the distribution of MR image intensities within each voxel-sized volume to estimate the relative proportion of each material using a probabilistic approach. This combined approach, involving MRI and data classification, is directly applicable to bone imaging and hard-tissue contrast-based modeling of biological solids.
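
    A minimal sketch of the partial-volume idea described above, assuming (rather than reproducing) a simple probabilistic model: each material's pure intensity is treated as a Gaussian, and the fraction of each material in a voxel-sized volume is estimated by averaging per-sample posterior membership probabilities over the intensities observed inside that volume. The Gaussian parameters and the synthetic samples are hypothetical.

    ```python
    # Minimal sketch (assumed model, not the authors' exact algorithm): estimate the
    # fractions of two materials (e.g., enamel vs. dentine) in one voxel-sized volume
    # from the distribution of intensities sampled within it.
    import numpy as np
    from scipy.stats import norm

    def material_fractions(samples, mu, sigma, prior=(0.5, 0.5)):
        """samples: intensities within one voxel; mu/sigma: per-material Gaussians."""
        like = np.stack([p * norm.pdf(samples, m, s)
                         for p, m, s in zip(prior, mu, sigma)])   # (2, n) weighted likelihoods
        post = like / like.sum(axis=0, keepdims=True)             # per-sample posteriors
        return post.mean(axis=1)                                  # average -> estimated fractions

    # Hypothetical voxel: 70% "enamel-like" and 30% "dentine-like" intensities.
    rng = np.random.default_rng(0)
    samples = np.concatenate([rng.normal(0.9, 0.05, 70), rng.normal(0.4, 0.05, 30)])
    print(material_fractions(samples, mu=(0.9, 0.4), sigma=(0.05, 0.05)))
    ```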

    Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization

    Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel-sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented into a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15% using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that the efficiency gain was caused by a reduction in the panning needed to perform a systematic search of the images. The prototype system was well received by the pathologists, who did not detect any risks that would hinder use in clinical routine.
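
    As an illustrative stand-in, not the Scale Stain implementation, the sketch below shows one way a scale-space could be filtered by color preference: band-pass detail at each Gaussian scale is re-weighted by how closely the local color matches a preferred stain color before the image is recombined. The weighting function, scales, and gain are assumptions.

    ```python
    # Minimal sketch (illustrative stand-in): enhance an RGB slide image by weighting
    # each scale of a Gaussian scale-space by similarity to a preferred stain color.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def color_weighted_scalespace(img, preferred_rgb, sigmas=(1, 2, 4, 8), gain=1.5):
        """img: float RGB array in [0, 1], shape (H, W, 3)."""
        out = gaussian_filter(img, sigma=(sigmas[-1], sigmas[-1], 0))  # coarsest base layer
        prev = img
        for s in sigmas:
            blurred = gaussian_filter(img, sigma=(s, s, 0))
            detail = prev - blurred                      # band-pass detail at this scale
            # Weight the detail by how close the local color is to the preferred color.
            dist = np.linalg.norm(blurred - np.asarray(preferred_rgb), axis=-1, keepdims=True)
            weight = np.exp(-(dist ** 2) / 0.1)
            out = out + gain * weight * detail
            prev = blurred
        return np.clip(out, 0.0, 1.0)

    # Hypothetical use: boost hematoxylin-like (bluish-purple) structures.
    slide = np.random.rand(256, 256, 3)
    enhanced = color_weighted_scalespace(slide, preferred_rgb=(0.45, 0.30, 0.60))
    ```

    With unit weights and gain, the loop reconstructs the original image exactly, so the color-dependent weighting acts purely as a selective enhancement of matching detail.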