DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs
We present a novel deep learning architecture for fusing static
multi-exposure images. Current multi-exposure fusion (MEF) approaches use
hand-crafted features to fuse the input sequence. However, these weak
hand-crafted representations are not robust to varying input conditions, and
they perform poorly for extreme exposure image pairs. It is therefore highly
desirable to have a method that is robust to varying input conditions and
capable of handling extreme exposures without artifacts. Deep representations
are known to be robust to input conditions and have shown phenomenal
performance in supervised settings. However, the stumbling block in applying
deep learning to MEF has been the lack of sufficient training data and of an
oracle to provide ground truth for supervision. To address these issues, we
have gathered a large dataset of multi-exposure image stacks for training,
and to circumvent the need for ground-truth images we propose an unsupervised
deep learning framework for MEF that uses a no-reference quality metric as
the loss function. The proposed approach trains a novel CNN architecture to
learn the fusion operation without a reference ground-truth image. The model
fuses a set of common low-level features extracted from each image to
generate artifact-free, perceptually pleasing results. We perform extensive
quantitative and qualitative evaluation and show that the proposed technique
outperforms existing state-of-the-art approaches on a variety of natural
images.
Comment: ICCV 201
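The no-reference loss sketched in this abstract scores the fused output against a structure derived from the inputs themselves, so no ground truth is needed. The following is a simplified NumPy sketch of such a structural score for one patch location, in the spirit of MEF-SSIM; the function names, constants, and the single-patch formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mef_ssim_score(patches, fused, C=1e-4):
    """Simplified MEF-SSIM-style score for one patch location.

    patches: (k, n) array of k co-located input patches (flattened).
    fused:   (n,)   fused patch. Returns a score of at most 1.
    """
    # Remove the mean (DC) component of each patch.
    x = patches - patches.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(x, axis=1) + 1e-12
    # Desired contrast: the highest contrast among the inputs.
    c_hat = norms.max()
    # Desired structure: norm-weighted average of the unit structures.
    w = norms / norms.sum()
    s = (w[:, None] * (x / norms[:, None])).sum(axis=0)
    s_hat = s / (np.linalg.norm(s) + 1e-12)
    # The "ideal" patch the metric rewards the network for producing.
    x_hat = c_hat * s_hat
    y = fused - fused.mean()
    # SSIM-style structural comparison (luminance term omitted).
    num = 2 * (x_hat * y).sum() + C
    den = (x_hat ** 2).sum() + (y ** 2).sum() + C
    return num / den

def no_reference_loss(patches, fused):
    """Training loss: 1 minus the mean quality score (here, one patch)."""
    return 1.0 - mef_ssim_score(patches, fused)
```

Because the loss depends only on the inputs and the network output, it can be minimized by ordinary backpropagation without any reference image.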
High dynamic range imaging for archaeological recording
This paper notes the adoption of digital photography as a primary recording means within archaeology and reviews some of the issues and problems this presents. Particular attention is given to the problems of recording high-contrast scenes, and High Dynamic Range (HDR) imaging using multiple exposures is suggested as a means of producing an archive of high-contrast scenes that can later be tone-mapped to provide a variety of visualisations. Exposure fusion is also considered, although it is noted that this has some disadvantages. Three case studies are then presented: (1) a very high-contrast photograph taken from within a rock-cut tomb at Cala Morell, Menorca; (2) an archaeological test-pitting exercise requiring rapid acquisition of photographic records in challenging circumstances; and (3) legacy material consisting of three differently exposed colour positive (slide) photographs of the same scene. In each case, HDR methods are shown to significantly aid the generation of a high-quality illustrative record photograph. It is concluded that HDR imaging could serve an effective role in archaeological photographic recording, although problems remain in archiving and distributing HDR radiance map data.
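The exposure fusion considered above blends the multiple exposures directly, without ever building an HDR radiance map. A minimal single-scale sketch in the style of Mertens-style fusion is shown below; the weighting terms are simplified assumptions, and the Laplacian-pyramid blending of the full method is omitted.

```python
import numpy as np

def exposure_fuse(stack, sigma=0.2):
    """Single-scale exposure fusion of grayscale images in [0, 1].

    stack: (k, h, w) array of k registered exposures. Weights favour
    well-exposed, high-contrast pixels; full-method pyramid blending
    is omitted for brevity.
    """
    # Well-exposedness: closeness of each pixel to mid-grey (0.5).
    well_exposed = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    # Contrast: magnitude of a simple Laplacian response.
    lap = np.abs(
        np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
        - 4 * stack
    )
    # Combine the measures and normalise across the stack.
    weights = well_exposed * (lap + 1e-3)
    weights /= weights.sum(axis=0, keepdims=True)
    # Per-pixel weighted blend of the exposures.
    return (weights * stack).sum(axis=0)
```

The disadvantage noted in the paper follows directly from this formulation: the output is a display-ready low-dynamic-range blend, so no scene-referred radiance data survives for later re-processing.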
Real-time Model-based Image Color Correction for Underwater Robots
Recently, a new underwater image formation model showed that the
coefficients governing the direct and backscattered transmission signals
depend on the type of water, camera specifications, water depth, and imaging
range. This paper proposes an underwater color correction method that
integrates this new model on an underwater robot, using information from a
pressure (depth) sensor for water depth and a visual odometry system for
estimating scene distance. Experiments were performed with and without a
color chart over coral reefs and a shipwreck in the Caribbean. We demonstrate
the performance of our proposed method by comparing it with other
statistics-, physics-, and learning-based color correction methods.
Applications for our proposed method include improved 3D reconstruction and
more robust underwater robot navigation.
Comment: Accepted at the 2019 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS)
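A minimal sketch of the kind of range-dependent image formation model such physics-based correction methods invert is given below: the observed colour is an attenuated direct signal plus range-dependent backscatter. The per-channel coefficients here are made-up constants for illustration; in the model the paper builds on, they vary with water type, camera, depth, and range.

```python
import numpy as np

# Hypothetical per-channel coefficients (R, G, B); illustrative only.
BETA_D = np.array([0.35, 0.12, 0.07])  # direct-signal attenuation
BETA_B = np.array([0.40, 0.15, 0.09])  # backscatter coefficient
B_INF = np.array([0.05, 0.25, 0.40])   # veiling light at infinite range

def degrade(J, z):
    """Forward model: I = J * exp(-beta_D * z) + B_inf * (1 - exp(-beta_B * z)).

    J: true scene radiance, shape (..., 3); z: imaging range in metres.
    """
    return J * np.exp(-BETA_D * z) + B_INF * (1 - np.exp(-BETA_B * z))

def correct(I, z):
    """Invert the model: subtract backscatter, then undo attenuation.

    On the robot, z comes from visual odometry and the coefficients
    are set from the pressure-sensor depth and water type.
    """
    backscatter = B_INF * (1 - np.exp(-BETA_B * z))
    return (I - backscatter) * np.exp(BETA_D * z)
```

With known coefficients and range, the correction exactly undoes the degradation; in practice the difficulty lies in estimating those quantities online, which is what the depth sensor and odometry provide.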
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
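As one example of the passive optical techniques such reviews cover, stereo laparoscopes recover tissue depth by triangulating matched left/right correspondences. The core relation is z = f * b / d for focal length f (pixels), baseline b, and disparity d; the parameter values below are illustrative, not taken from any specific device.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth (mm) from stereo disparity via z = f * b / d.

    disparity_px: horizontal offset of a matched point between views;
    focal_px and baseline_mm come from stereo calibration. Values in
    the example below are illustrative only.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

The hard part in laparoscopy is not this formula but finding reliable correspondences on smooth, specular, deforming tissue, which is why the reviewed methods also include structured light and active ranging.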