10 research outputs found

    Assessment of multi-exposure HDR image deghosting methods

    © 2017 Elsevier Ltd. To avoid motion artefacts when merging multiple exposures into a high dynamic range (HDR) image, a number of HDR deghosting algorithms have been proposed. However, these algorithms do not work equally well on all types of scenes, and some may even introduce additional artefacts. As the number of proposed deghosting methods is increasing rapidly, there is an immediate need to evaluate and compare them. Even though subjective evaluation provides a reliable means of testing, it is often cumbersome and has to be repeated for each newly proposed method or even a slight modification of an existing one. There is therefore a need for objective quality metrics that provide an automatic means of evaluating HDR deghosting algorithms. In this work, we explore several computational approaches to the quantitative evaluation of multi-exposure HDR deghosting algorithms and demonstrate their results on five state-of-the-art algorithms. To perform a comprehensive evaluation, a new dataset consisting of 36 scenes has been created, where each scene presents a different challenge for a deghosting algorithm. The quality of the HDR images produced by each deghosting method is measured in a subjective experiment and then evaluated using objective metrics. As this paper is an extension of our conference paper, we add one more objective quality metric, UDQM, to the evaluation. Furthermore, the analysis of the objective and subjective experiments is performed and explained more extensively in this work. By testing the correlation between objective metrics and subjective scores, the results show that, of the tested metrics, HDR-VDP-2 is the most reliable for evaluating HDR deghosting algorithms. The results also show that for most of the tested scenes, Sen et al.'s deghosting method outperforms the other evaluated methods. These observations can serve as a vital guide in the development of new HDR deghosting algorithms that are robust to a variety of scenes and produce high-quality results.
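    The core of the evaluation described above is checking how well an objective metric's per-scene scores agree with subjective scores. The following minimal sketch illustrates that step; it is not the authors' exact protocol, and the scores, metric names used as dictionary keys, and scene counts are hypothetical placeholders.

```python
# Illustrative sketch: correlating per-scene objective metric scores with
# subjective scores to judge how reliable each metric is.
# All numbers below are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-scene scores for one deghosting method.
subjective = np.array([4.1, 3.2, 2.8, 4.5, 3.9])            # e.g. mean opinion scores
objective = {
    "HDR-VDP-2": np.array([62.3, 55.1, 50.7, 66.0, 60.2]),  # quality predictions
    "UDQM":      np.array([0.71, 0.60, 0.58, 0.74, 0.69]),
}

for name, scores in objective.items():
    r, _ = pearsonr(scores, subjective)      # linear agreement
    rho, _ = spearmanr(scores, subjective)   # rank-order agreement
    print(f"{name}: Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```

    A metric whose scores rank the scenes in the same order as the subjective experiment (high Spearman correlation) is the more trustworthy automatic stand-in for subjective testing.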

    Variational image fusion

    The main goal of this work is the fusion of multiple images into a single composite that offers more information than the individual input images. We approach these fusion tasks within a variational framework. First, we present iterative schemes that are well suited for such variational problems and related tasks. They lead to efficient algorithms that are simple to implement and parallelise well. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present the following two-step approach: First, we introduce the complete rank transform to design an optic flow approach that is robust against severe illumination changes. Second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields. Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second-order regulariser that adapts to the image structure in an anisotropic way.
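    The brightness transfer function mentioned above relates intensity values between differently exposed frames so that remaining misalignments can be detected and removed. The sketch below shows one simple way such a mapping could be estimated via histogram specification; this is an assumption-laden illustration, not the thesis's actual formulation, and the image names in the usage comment are hypothetical.

```python
# Minimal sketch (assumed approach, not the thesis's formulation): estimate a
# brightness transfer function (BTF) between two differently exposed 8-bit
# grayscale images by matching their cumulative histograms, then map one image
# into the other's brightness domain.
import numpy as np

def estimate_btf(src, dst, bins=256):
    """Return a lookup table mapping src intensities to dst intensities
    by matching their cumulative histograms (monotone, as a BTF should be)."""
    src_hist, _ = np.histogram(src.ravel(), bins=bins, range=(0, bins))
    dst_hist, _ = np.histogram(dst.ravel(), bins=bins, range=(0, bins))
    src_cdf = np.cumsum(src_hist) / src.size
    dst_cdf = np.cumsum(dst_hist) / dst.size
    # For each source level, find the destination level with the closest CDF value.
    return np.searchsorted(dst_cdf, src_cdf).clip(0, bins - 1).astype(np.uint8)

# Usage with two hypothetical exposures of the same scene:
# lut = estimate_btf(short_exposure, long_exposure)
# matched = lut[short_exposure]  # short exposure in the long exposure's brightness domain
```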

    Nonlinear acoustics of water-saturated marine sediments
