A major challenge for high dynamic range (HDR) image reconstruction from
multi-exposed low dynamic range (LDR) images, especially for dynamic scenes,
is extracting and merging the relevant contextual features so as to suppress
the ghosting and blurring artifacts caused by moving objects. To address this,
we propose a novel HDR reconstruction network with deep and rich feature
extraction layers, including residual attention blocks that apply channel and
spatial attention sequentially. To compress the rich features into the HDR
domain, we adopt an architecture based on residual feature distillation blocks
(RFDBs). In contrast to earlier deep-learning methods for HDR, these
contributions shift the focus from merging/compression to feature extraction,
and we demonstrate their added value with ablation experiments.
We present qualitative and quantitative comparisons on a public benchmark
dataset, showing that our proposed method outperforms the state of the art.
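
As an illustrative sketch only, the code below shows one plausible way to realize a residual attention block with sequential channel and spatial attention (a CBAM-style design). The module names, channel counts, and reduction ratio are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch, assuming a CBAM-style design: channel attention followed by
# spatial attention inside a residual block. Names, channel counts, and the
# reduction ratio are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # global context per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))         # re-weight channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Aggregate across channels, then predict a per-pixel attention map.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        attn = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


class ResidualAttentionBlock(nn.Module):
    """Conv features refined by channel then spatial attention, plus a skip."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.ca(self.body(x)))     # sequential CA -> SA
        return x + out                           # residual connection
```

Blocks of this kind could, for example, be stacked in the feature extraction stage before the RFDB-based compression to the HDR domain.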