
    Breast Cancer: Modelling and Detection

    This paper reviews a number of the mathematical models used in cancer modelling and then chooses a specific cancer, breast carcinoma, to illustrate how modelling can aid detection. We then discuss mathematical models that underpin mammographic image analysis, which complement models of tumour growth and facilitate diagnosis and treatment of cancer. Mammographic images are notoriously difficult to interpret, and we give an overview of the primary image enhancement technologies that have been introduced, before focusing on a more detailed description of some of our own recent work on the use of physics-based modelling in mammography. This theoretical approach to image analysis yields a wealth of information that could be incorporated into the mathematical models. We conclude by describing how current mathematical models might be enhanced by use of this information, and how these models in turn will help to meet some of the major challenges in cancer detection.
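    The abstract refers to mathematical models of tumour growth without naming one. As a minimal, purely illustrative sketch, the Gompertz law is a classic growth model used in this literature; it is an assumption here, not necessarily the model the review adopts:

    ```python
    # Illustrative sketch only: the Gompertz law is one classic tumour-growth
    # model from this literature; the function name and parameters are
    # hypothetical, not taken from the paper under review.
    import math

    def gompertz_volume(v0, carrying_capacity, growth_rate, t):
        """Tumour volume at time t under Gompertz growth.

        V(t) = K * (V0 / K) ** exp(-alpha * t), where K is the limiting
        (carrying-capacity) volume and alpha the growth-rate constant.
        """
        return carrying_capacity * (v0 / carrying_capacity) ** math.exp(-growth_rate * t)

    # Growth starts at v0 and saturates at the carrying capacity K.
    v_start = gompertz_volume(1.0, 100.0, 0.5, 0.0)   # equals v0
    v_late = gompertz_volume(1.0, 100.0, 0.5, 50.0)   # approaches K
    ```

    The sigmoidal shape (fast early growth that decelerates towards a limiting volume) is what makes such laws useful for relating detected tumour size to growth history.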

    DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

    We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training and, to circumvent the need for ground-truth images, propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as the loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
    Comment: ICCV 201
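    To make the contrast concrete, the hand-crafted baseline the abstract argues against can be sketched as per-pixel "well-exposedness" weighting (a Gaussian around mid-grey), which DeepFuse replaces with a learned CNN fusion. All names and parameters below are illustrative, not from the paper:

    ```python
    # Hedged sketch of a hand-crafted MEF baseline (well-exposedness weighting),
    # NOT DeepFuse's learned CNN fusion. Pixels are greyscale values in [0, 1];
    # the sigma value and function names are assumptions for illustration.
    import math

    def well_exposedness(p, sigma=0.2):
        """Weight a pixel by its closeness to mid-grey 0.5."""
        return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

    def fuse_pair(under, over):
        """Fuse an under-/over-exposed pair with normalized per-pixel weights."""
        fused = []
        for u, o in zip(under, over):
            wu, wo = well_exposedness(u), well_exposedness(o)
            fused.append((wu * u + wo * o) / (wu + wo))
        return fused

    # A nearly black pixel in the underexposed image receives little weight,
    # so the better-exposed counterpart dominates the fused value.
    result = fuse_pair([0.05, 0.50], [0.60, 0.95])
    ```

    For extreme pairs such fixed weighting rules break down (e.g. both inputs far from mid-grey), which is precisely the failure mode the unsupervised, metric-driven training in DeepFuse is designed to avoid.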