    An Attention-based Multi-Scale Feature Learning Network for Multimodal Medical Image Fusion

    Medical images play an important role in clinical applications. Multimodal medical images can provide physicians with rich information about patients for diagnosis. Image fusion techniques synthesize complementary information from multimodal images into a single image, sparing radiologists from switching back and forth between different images and saving substantial time in the diagnostic process. In this paper, we introduce a novel Dilated Residual Attention Network for the medical image fusion task. Our network is capable of extracting multi-scale deep semantic features. Furthermore, we propose a novel fixed fusion strategy, termed the Softmax-based weighted strategy, based on Softmax weights and the matrix nuclear norm. Extensive experiments show that our proposed network and fusion strategy exceed the state-of-the-art performance of reference image fusion methods on four commonly used fusion metrics. Comment: 8 pages, 8 figures, 3 tables
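    The abstract's fusion strategy combines Softmax weights with the matrix nuclear norm. A minimal sketch of one plausible reading, assuming 2D feature maps and that each map's nuclear norm (sum of singular values) is Softmax-normalized into a fusion weight; the function name and details are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def softmax_nuclear_fusion(features_a, features_b):
        """Fuse two 2D feature maps with Softmax weights derived from
        their matrix nuclear norms (hypothetical sketch, not the
        paper's exact formulation)."""
        norm_a = np.linalg.norm(features_a, ord="nuc")  # sum of singular values
        norm_b = np.linalg.norm(features_b, ord="nuc")
        # Softmax over the two norms yields weights that sum to 1;
        # the map with more "energy" contributes more to the fusion.
        exps = np.exp(np.array([norm_a, norm_b]) - max(norm_a, norm_b))
        w_a, w_b = exps / exps.sum()
        return w_a * features_a + w_b * features_b
    ```

    With identical inputs the weights are 0.5 each, so the fused output equals either input.
    
    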

    Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models

    Deep Ensembles are a simple, reliable, and effective method of improving both the predictive performance and uncertainty estimates of deep learning approaches. However, they are widely criticised as computationally expensive, due to the need to deploy multiple independent models. Recent work has challenged this view, showing that for predictive accuracy, ensembles can be more computationally efficient (at inference) than scaling single models within an architecture family. This is achieved by cascading ensemble members via an early-exit approach. In this work, we investigate extending these efficiency gains to tasks related to uncertainty estimation. As many such tasks, e.g. selective classification, reduce to binary classification, our key novel insight is to pass only samples within a window close to the binary decision boundary to later cascade stages. Experiments on ImageNet-scale data across a number of network architectures and uncertainty tasks show that the proposed window-based early-exit approach achieves a superior uncertainty-computation trade-off compared to scaling single models. For example, a cascaded EfficientNet-B2 ensemble achieves similar coverage at 5% risk as a single EfficientNet-B4 with <30% of the MACs. We also find that cascades/ensembles give more reliable improvements on OOD data than scaling models up. Code for this work is available at: https://github.com/Guoxoug/window-early-exit
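    The key idea above can be sketched in a few lines: exit early on samples whose score is far from the binary decision threshold, and pay for the later cascade stage only inside the window. This is a minimal illustration under assumed names and a fixed window width, not the paper's implementation (see the linked repository for that):

    ```python
    def window_early_exit(early_scores, late_scores, threshold=0.5, window=0.1):
        """Window-based early-exit cascade for a binary decision (sketch).

        Samples whose early-stage score lies outside [threshold - window,
        threshold + window] are decided immediately by the cheap model;
        only the uncertain samples inside the window are deferred to the
        later (expensive) stage. Returns the decisions and the number of
        deferred samples."""
        decisions = []
        deferred = 0
        for early, late in zip(early_scores, late_scores):
            if abs(early - threshold) > window:
                decisions.append(early >= threshold)  # confident: exit early
            else:
                deferred += 1
                decisions.append(late >= threshold)   # uncertain: use late stage
        return decisions, deferred
    ```

    In a real cascade the late scores would only be computed for the deferred samples, which is where the compute savings come from.
    
    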

    Task-driven learned hyperspectral data reduction using end-to-end supervised deep learning

    An important challenge in hyperspectral imaging tasks is coping with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the features relevant to a particular imaging task, but applying them directly to the spectral input data is limited by computational cost. We propose a novel supervised deep learning approach that combines data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained so that the image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods and can be used in a wide range of problem settings. Integrating knowledge about the task allows for greater compression and higher accuracies compared with standard data reduction methods.
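    The reduction component described above can be thought of as a learned linear map over the spectral axis, mapping C spectral bins down to K channels before the task network sees the data. A minimal sketch of that role, with an illustrative function name and weights standing in for the parameters the DRCNN would learn end-to-end:

    ```python
    import numpy as np

    def spectral_reduction(cube, weights):
        """Reduce a hyperspectral cube of shape (H, W, C) to (H, W, K)
        via a linear map over the spectral axis. In the end-to-end
        DRCNN setting these weights would be trained jointly with the
        task network; here they are a stand-in (illustrative sketch)."""
        # (H, W, C) @ (C, K) -> (H, W, K): each output channel is a
        # learned combination of the input spectral bins.
        return cube @ weights
    ```

    Because the map is trained jointly with the downstream task, sparsely occurring but task-critical spectral features can survive the compression, unlike with task-agnostic reductions such as plain PCA.
    
    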

    Pushing the temporal resolution in absorption and Zernike phase contrast nanotomography: Enabling fast in situ experiments

    Hard X-ray nanotomography enables 3D investigations of a wide range of samples at high resolution (<100 nm) with both synchrotron-based and laboratory-based setups. Synchrotron-based setups, however, offer much higher flux, enabling a time resolution that cannot be achieved at laboratory sources. Here, the nanotomography setup at the imaging beamline P05 at PETRA III is presented, which offers high time resolution not only in absorption but, for the first time, also in Zernike phase contrast. Two test samples are used to evaluate the image quality in both contrast modalities based on quantitative analysis of the contrast-to-noise ratio (CNR) and spatial resolution. High-quality scans can be recorded in 15 min, and fast scans down to 3 min are possible without significant loss of image quality. At scan times well below 3 min, the CNR values decrease significantly and classical image-filtering techniques reach their limits. A machine-learning approach shows promising results, enabling acquisition of a full tomography in only 6 s. Overall, the transmission X-ray microscopy instrument offers high temporal resolution in absorption and Zernike phase contrast, enabling in situ experiments at the beamline.
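    The image-quality evaluation above relies on the contrast-to-noise ratio. One common definition of CNR divides the absolute difference of the mean signal and mean background by the background standard deviation; the paper's exact formula may differ, so this is a hedged sketch:

    ```python
    import numpy as np

    def contrast_to_noise_ratio(signal_region, background_region):
        """CNR = |mean(signal) - mean(background)| / std(background).
        One common definition; the exact formula used at P05 may differ."""
        contrast = abs(signal_region.mean() - background_region.mean())
        noise = background_region.std()  # population std (ddof=0)
        return contrast / noise
    ```

    Shorter scan times collect fewer photons, raising the background noise term and thus lowering the CNR, which is why quality degrades sharply well below the 3 min scan time quoted above.
    
    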