Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks or at creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
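As the abstract notes, deep learning stacks layers that transform the data non-linearly so that later layers encode increasingly abstract, hierarchical representations. A minimal sketch of that idea follows; all layer sizes and the data are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Each layer applies an affine map followed by a non-linearity, so the
# stack composes increasingly abstract representations of the input.
def deep_representation(x, weights, biases):
    h = x
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)  # non-linear transform of the previous layer
    return h

# Hypothetical sizes: 12 raw clinical features -> 8 -> 8 -> 4 learned features.
dims = [12, 8, 8, 4]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]
biases = [np.zeros(b) for b in dims[1:]]

x = rng.standard_normal((5, 12))  # 5 patients, 12 features each
z = deep_representation(x, weights, biases)
print(z.shape)  # (5, 4)
```

In a real application the weights would be learned from labeled or unlabeled data rather than drawn at random; the point here is only the layered non-linear transformation.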
Gated Multi-Resolution Transfer Network for Burst Restoration and Enhancement
Burst image processing has become increasingly popular in recent years.
However, it is a challenging task since individual burst images undergo
multiple degradations and often have mutual misalignments resulting in ghosting
and zipper artifacts. Existing burst restoration methods usually do not
consider the mutual correlation and non-local contextual information among
burst frames, which tends to limit these approaches in challenging cases.
Another key challenge lies in the robust up-sampling of burst frames. The
existing up-sampling methods cannot effectively utilize the advantages of
single-stage and progressive up-sampling strategies with conventional and/or
recent up-samplers at the same time. To address these challenges, we propose a
novel Gated Multi-Resolution Transfer Network (GMTNet) to reconstruct a
spatially precise high-quality image from a burst of low-quality raw images.
GMTNet consists of three modules optimized for burst processing tasks:
Multi-scale Burst Feature Alignment (MBFA) for feature denoising and alignment,
Transposed-Attention Feature Merging (TAFM) for multi-frame feature
aggregation, and Resolution Transfer Feature Up-sampler (RTFU) to up-scale
merged features and construct a high-quality output image. Detailed
experimental analysis on five datasets validates our approach and sets a
state-of-the-art for burst super-resolution, burst denoising, and low-light
burst enhancement.
Comment: Accepted at CVPR 202
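The align → merge → up-sample data flow that GMTNet's three modules implement can be sketched with toy stand-ins. The functions below are illustrative placeholders only, not the learned MBFA/TAFM/RTFU modules:

```python
import numpy as np

# Toy stand-ins for GMTNet's three stages, illustrating only the data flow
# align -> merge -> up-sample on a burst of shape (frames, H, W).

def align_features(burst):
    # Stand-in for MBFA: subtract each frame's mean so frames share a
    # common intensity level (the real module aligns spatially).
    return burst - burst.mean(axis=(1, 2), keepdims=True)

def merge_features(burst):
    # Stand-in for TAFM: weight frames by similarity to the mean frame,
    # a crude analogue of attention-based multi-frame aggregation.
    ref = burst.mean(axis=0)
    scores = np.array([-(f - ref).var() for f in burst])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return np.tensordot(w, burst, axes=1)  # weighted sum -> (H, W)

def upsample(img, scale=2):
    # Stand-in for RTFU: nearest-neighbour up-sampling.
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

burst = np.random.default_rng(1).random((8, 16, 16))  # 8 low-res frames
out = upsample(merge_features(align_features(burst)), scale=4)
print(out.shape)  # (64, 64)
```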
Burstormer: Burst Image Restoration and Enhancement Transformer
On a shutter press, modern handheld cameras capture multiple images in rapid
succession and merge them to generate a single image. However, individual
frames in a burst are misaligned due to inevitable motions and contain multiple
degradations. The challenge is to properly align the successive image shots and
merge their complementary information to achieve high-quality outputs. Towards
this direction, we propose Burstormer: a novel transformer-based architecture
for burst image restoration and enhancement. In comparison to existing works,
our approach exploits multi-scale local and non-local features to achieve
improved alignment and feature fusion. Our key idea is to enable inter-frame
communication in the burst neighborhoods for information aggregation and
progressive fusion while modeling the burst-wide context. However, the input
burst frames need to be properly aligned before fusing their information.
Therefore, we propose an enhanced deformable alignment module for aligning
burst features with respect to the reference frame. Unlike existing methods,
the proposed alignment module not only aligns burst features but also exchanges
feature information and maintains focused communication with the reference
frame through the proposed reference-based feature enrichment mechanism, which
facilitates handling complex motions. After multi-level alignment and
enrichment, we re-emphasize inter-frame communication within the burst using a
cyclic burst sampling module. Finally, the inter-frame information is
aggregated using the proposed burst feature fusion module followed by
progressive upsampling. Our Burstormer outperforms state-of-the-art methods on
burst super-resolution, burst denoising and burst low-light enhancement. Our
codes and pretrained models are available at https://github.com/akshaydudhane16/Burstormer
Comment: Accepted at CVPR 202
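The alignment step described above can be illustrated with a much simpler stand-in: aligning each burst frame to the reference frame by searching over small integer shifts. The actual module instead predicts learned, per-pixel deformable offsets; everything below is a hypothetical simplification:

```python
import numpy as np

# Simplified stand-in for deformable alignment: find the integer shift
# that best matches each burst frame to the reference frame.
def align_to_reference(frame, ref, max_shift=2):
    best, best_err = frame, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = np.roll(frame, (dy, dx), axis=(0, 1))
            err = np.mean((cand - ref) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

rng = np.random.default_rng(2)
ref = rng.random((12, 12))
shifted = np.roll(ref, (1, -2), axis=(0, 1))  # frame displaced by (1, -2)
aligned = align_to_reference(shifted, ref)
print(np.allclose(aligned, ref))  # True
```

Real burst motion is sub-pixel and spatially varying, which is why learned deformable offsets (plus the feature exchange described in the abstract) are needed rather than a single global shift.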
Guided Deep Decoder: Unsupervised Image Pair Fusion
The fusion of input and guidance images that have a tradeoff in their
information (e.g., hyperspectral and RGB image fusion or pansharpening) can be
interpreted as one general problem. However, previous studies applied a
task-specific handcrafted prior and did not address the problems with a unified
approach. To address this limitation, in this study, we propose a guided deep
decoder network as a general prior. The proposed network is composed of an
encoder-decoder network that exploits multi-scale features of a guidance image
and a deep decoder network that generates an output image. The two networks are
connected by feature refinement units to embed the multi-scale features of the
guidance image into the deep decoder network. The proposed network allows the
network parameters to be optimized in an unsupervised way without training
data. Our results show that the proposed network can achieve state-of-the-art
performance in various image fusion problems.
Comment: ECCV 202
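The unsupervised setting described above, where network parameters are optimized against the single input pair rather than a training set, can be sketched as follows. The linear "decoder" is a deliberate simplification of the paper's CNN, and all sizes are hypothetical:

```python
import numpy as np

# Toy illustration of unsupervised, training-data-free fitting: a linear
# "decoder" W maps a fixed latent code z to the one observed image y, and
# W is optimized by gradient descent on the reconstruction error alone.
rng = np.random.default_rng(3)
z = rng.standard_normal(6)   # fixed latent code
y = rng.standard_normal(4)   # the single observed image (flattened)
W = np.zeros((4, 6))         # decoder parameters, trained from scratch

lr = 0.05
for _ in range(1000):
    resid = W @ z - y              # error on this single observation
    W -= lr * np.outer(resid, z)   # gradient of 0.5 * ||W @ z - y||**2

print(np.allclose(W @ z, y, atol=1e-3))  # True
```

In the guided deep decoder, the analogous loop optimizes a CNN whose feature refinement units inject multi-scale features of the guidance image, so the structure of the guidance acts as the prior instead of a handcrafted one.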