24 research outputs found

    Variational image fusion

    The main goal of this work is the fusion of multiple images into a single composite that offers more information than the individual input images. We approach these fusion tasks within a variational framework. First, we present iterative schemes that are well-suited for such variational problems and related tasks. They lead to efficient algorithms that are simple to implement and easy to parallelise. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present the following two-step approach: First, we introduce the complete rank transform to design an optic flow approach that is robust against severe illumination changes. Second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields. Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second-order regulariser that adapts to the image structure in an anisotropic way.
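    The complete rank transform mentioned above replaces each pixel's neighbourhood by the ranks of all intensities within that neighbourhood, which makes the descriptor invariant under monotonically increasing illumination changes. A minimal sketch in Python (a generic reimplementation from the published definition, not the author's code):

```python
import numpy as np

def complete_rank_transform(img, radius=1):
    """Complete rank transform: for each pixel, replace its (2r+1)^2
    neighbourhood by the rank of every neighbour's intensity within
    that patch. Returns an array of shape (H, W, (2r+1)^2)."""
    h, w = img.shape
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros((h, w, k * k), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k].ravel()
            # rank of each entry = number of strictly smaller entries
            out[y, x] = (patch[None, :] < patch[:, None]).sum(axis=1)
    return out
```

    Because only the order of intensities matters, the transform is unchanged by any strictly increasing brightness mapping, which is exactly what makes a data term built on it robust under severe illumination changes.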

    Deterministic free surface multiple removal of marine seismic data


    A practical guide and software for analysing pairwise comparison experiments

    The most popular strategies for capturing subjective judgments from humans involve the construction of a unidimensional relative measurement scale representing order preferences or judgments about a set of objects or conditions. This information is generally captured by means of direct scoring, either in the form of a Likert or cardinal scale, or by comparative judgments in pairs or sets. The use of pairwise comparisons is becoming increasingly popular because of the simplicity of the experimental procedure. However, this strategy requires non-trivial data analysis to aggregate the comparison ranks into a quality scale and analyse the results in order to take full advantage of the collected data. This paper explains the process of translating pairwise comparison data into a measurement scale, discusses the benefits and limitations of such scaling methods, and introduces publicly available software in Matlab. We improve on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing, and introducing a prior that reduces estimation error when the number of observers is low. Most of our examples focus on image quality assessment. Code is available at https://github.com/mantiuk/pwcm
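    As an illustration of how pairwise comparison counts are aggregated into a unidimensional scale, the sketch below fits a Bradley-Terry model with the classical fixed-point (minorise-maximise) update. The paper's toolbox uses Thurstone Case V scaling and adds priors, outlier analysis, and confidence intervals; this is only a minimal, closely related stand-in, not the authors' code:

```python
import numpy as np

def bradley_terry_scale(C, iters=200):
    """Fit Bradley-Terry worths from a count matrix C, where C[i, j]
    is the number of times condition i was chosen over condition j.
    Assumes every condition is compared (directly or indirectly)
    with every other one."""
    n = C.shape[0]
    w = C.sum(axis=1)                # total wins of each condition
    N = C + C.T                      # total comparisons per pair
    p = np.ones(n)
    for _ in range(iters):
        # MM update: p_i <- w_i / sum_j N_ij / (p_i + p_j)
        denom = np.where(N > 0, N / (p[:, None] + p[None, :]), 0.0).sum(axis=1)
        p = w / denom
        p /= p.sum()                 # fix the scale's arbitrary gauge
    return np.log(p)                 # quality scale in log-worth units
```

    The returned scale is only defined up to an additive constant, which is why any such method needs a convention (here, normalising the worths) before scores can be compared across experiments.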

    Application of Machine Learning within Visual Content Production

    We are living in an era where digital content is being produced at a dazzling pace. The heterogeneity of contents and contexts is so varied that numerous applications have been created to respond to people and market demands. The visual content production pipeline is the generalisation of the process that allows a content editor to create and evaluate their product, such as a video, an image, or a 3D model. Such data is then displayed on one or more devices such as TVs, PC monitors, virtual reality head-mounted displays, tablets, mobiles, or even smartwatches. Content creation can be as simple as clicking a button to film a video and share it on a social network, or as complex as managing a dense user interface full of parameters with keyboard and mouse to generate a realistic 3D model for a VR game. In this second example, such sophistication results in a steep learning curve for beginner-level users. In contrast, expert users regularly need to refine their skills via expensive lessons, time-consuming tutorials, or experience. Thus, user interaction plays an essential role in the diffusion of content creation software, primarily when it is targeted at untrained people. In particular, the fast spread of virtual reality devices into the consumer market has created new opportunities for designing reliable and intuitive interfaces. Such new interactions need to take a step beyond the point-and-click interaction typical of the 2D desktop environment. The interactions need to be smart, intuitive, and reliable enough to interpret 3D gestures; therefore, more accurate algorithms are needed to recognise patterns. 
In recent years, machine learning and in particular deep learning have achieved outstanding results in many branches of computer science, such as computer graphics and human-computer interfaces, outperforming algorithms that were considered state of the art; however, there have been only fleeting efforts to translate this into virtual reality. In this thesis, we seek to apply deep learning models to two different areas of the content production pipeline, embracing the following subjects of interest: advanced methods for user interaction and visual quality assessment. First, we focus on 3D sketching to retrieve models from an extensive database of complex geometries and textures while the user is immersed in a virtual environment. We explore both 2D and 3D strokes as tools for model retrieval in VR, and implement a novel system for improving accuracy in searching for a 3D model. We contribute an efficient method to describe models through 3D sketches via iterative descriptor generation, focusing both on accuracy and user experience. To evaluate it, we design a user study to compare different interactions for sketch generation. Second, we explore the combination of sketch input and vocal description to correct and fine-tune the search for 3D models in a database containing fine-grained variation. We analyse sketch and speech queries, identifying a way to incorporate both of them into our system's interaction loop. Third, in the context of the visual content production pipeline, we present a detailed study of visual metrics. We propose a novel method for detecting rendering-based artefacts in images; it exploits deep learning algorithms analogous to those used when extracting features from sketches.
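    A core operation in such a sketch-based retrieval system is nearest-neighbour search over learned descriptors. The sketch below assumes descriptors have already been extracted by some network and simply ranks database models by cosine similarity; all names and shapes are illustrative, not taken from the thesis:

```python
import numpy as np

def retrieve(query_desc, db_descs, k=5):
    """Rank database models against a query sketch descriptor by
    cosine similarity and return the indices of the top-k matches.
    query_desc: (d,) descriptor; db_descs: (n, d) descriptor matrix."""
    q = query_desc / np.linalg.norm(query_desc)
    D = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = D @ q                       # cosine similarity per model
    return np.argsort(-sims)[:k]       # best matches first
```

    Normalising both sides makes the ranking depend only on descriptor direction, so a partial or lightly drawn sketch can still match a fully detailed model.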

    Reciprocity-based imaging using multiply scattered waves

    In exploration seismology, seismic waves are emitted into the structurally complex Earth. Its response, consisting of a mixture of arrivals including primary reflections, conversions, multiples, and transmissions, is used to infer the internal structure and properties. Waves that interact multiple times with the inhomogeneities in the medium probe areas of the subsurface that are sometimes inaccessible to singly scattered waves. However, these contributions are notoriously difficult to use for imaging because multiple scattering turns out to be a highly nonlinear process. Conventional imaging algorithms assume that singly scattered energy dominates the data, and hence require that energy scattering more than once be attenuated. The principal focus of this thesis is to incorporate the effect of complex nonlinear scattering in the construction of subsurface elastic images. Reciprocity theory is used to establish an exact relation between the full recorded data and the local (zero-offset, zero-time) scattering response in the subsurface, which constitutes our image. Fully nonlinear, elastic imaging conditions are shown to lead to better illumination, higher resolution, and improved amplitudes in pure-mode imaging. Strikingly, it is also observed that when multiple scattering is correctly handled, no converted-wave energy is mapped to any image point. I explain this result by noting that conversions require finite time and space to manifest. The construction of wavefield propagators (Green's functions) that are used to extrapolate recorded data from the surface to points in the Earth's interior is a crucial component of any imaging technique. Classical approaches are based on strong assumptions about the propagation direction and polarisation of the recorded data; preliminary steps of wavefield decomposition (directional and modal) are required to extract upward propagating waves at the recording surface and separate different wave modes. 
These algorithms also generally fail to explain the trajectories of multiply scattered and converted waves, representing a major problem when constructing nonlinear images as we do not know where such energy interacted with the scatterers to be imaged. A secondary aim of this thesis is to improve on the practice of wavefield extrapolation or redatuming by taking advantage of the different nature of multi-component data compared with single-mode acoustic data. Two-way representation theorems are used to define novel formulations in elastic media which allow both up- and downward propagating fields to be back-propagated correctly without ambiguity in the direction, and such that no cross-talk between wave modes is generated. As an application of directional extrapolation, the acoustic counterpart of the new approach is tested on an ocean-bottom cable field dataset acquired over the Volve field, North Sea. Interestingly, the process of redatuming sources to locations beneath a complex overburden by means of multi-dimensional deconvolution also requires preliminary wavefield separation to be successful: I propose to use the two-way convolution-type representation as a way to combine full pressure and particle velocity recordings. Accurate redatumed wavefields can then be obtained directly from multi-component data without separation. Another major challenge in seismic imaging is to construct detailed velocity models through which recorded data will be extrapolated. Nowadays the information contained in the extension of subsurface images along either the time or space axis is commonly exploited by velocity model building techniques acting in the image domain. Recent research has shown that when both extensions are taken into account, it is possible to estimate the data that would have been recorded if a small, local seismic survey was conducted around any image point in the subsurface. 
I elaborate on the use of nonlinear elastic imaging conditions to construct such so-called extended image gathers: missing events, incorrect amplitudes, and spurious energy generated from the use of only primary arrivals are shown to be mitigated when multiple scattering is included in the migration process. Finally, having access to virtual recordings in the subsurface is also very useful for target-oriented imaging applications. In the context of one-way representation, I apply the novel methodology of Marchenko redatuming to the Volve field dataset as a way to unravel propagation effects in the overburden structure. Constructed wavefields are then used to synthesize local, subsurface reflection responses that are only sensitive to local heterogeneities, and detailed images of target areas of the subsurface are ultimately produced. Overall, the findings of this thesis demonstrate that, while incorporating multiply scattered waves as well as multi-component data in imaging may not be a trivial task, such information is vital for achieving high-resolution and true-amplitude seismic imaging.
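    For context, the conventional single-scattering imaging condition that the thesis generalises is a zero-lag cross-correlation, over time and shots, of the forward-extrapolated source wavefield with the back-propagated receiver wavefield at every image point. A minimal sketch, with array shapes assumed purely for illustration:

```python
import numpy as np

def crosscorrelation_image(source_wf, receiver_wf):
    """Conventional cross-correlation imaging condition: the image at
    each subsurface point is the zero-lag correlation of the source
    and receiver wavefields, summed over time samples and shots.
    Both arrays have shape (n_shots, n_t, nz, nx)."""
    return np.einsum('stzx,stzx->zx', source_wf, receiver_wf)
```

    This linear imaging condition maps multiply scattered and converted energy to wrong locations, which is precisely the shortcoming the fully nonlinear elastic imaging conditions in the thesis address.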

    Methods for high-precision subsurface imaging using spatially dense seismic data

    Current state-of-the-art depth migration techniques are regularly applied in marine seismic exploration, where they deliver accurate and reliable pictures of Earth's interior. The question is how these algorithms perform in different environments, not related to oil and gas exploration. For example, how can those techniques be utilised in the elusive environment of hard rocks? The main challenge there is to image highly complex, subvertical, piecewise geology, often of low reflectivity, in a noisy environment.

    Computational and numerical aspects of full waveform seismic inversion

    Full-waveform inversion (FWI) is a nonlinear optimisation procedure, seeking to match synthetically generated seismograms with those observed in field data by iteratively updating a model of the subsurface seismic parameters, typically compressional wave (P-wave) velocity. Advances in high-performance computing have made FWI of three-dimensional models feasible, but the low sensitivity of the objective function to deeper, low-wavenumber components of velocity makes these difficult to recover using FWI relative to more traditional, less automated techniques. While the use of inadequate physics during the synthetic modelling stage is a contributing factor, I propose that this weakness is substantially one of ill-conditioning, and that efforts to remedy it should focus on the development of both more efficient seismic modelling techniques and more sophisticated preconditioners for the optimisation iterations. I demonstrate that the problem of poor low-wavenumber velocity recovery can be reproduced in an analogous one-dimensional inversion problem, and that in this case it can be remedied by making full use of the available curvature information, in the form of the Hessian matrix. In two or three dimensions, this curvature information is prohibitively expensive to obtain and store as part of an inversion procedure. I obtain the complete Hessian matrices for a realistically sized, two-dimensional, towed-streamer inversion problem at several stages during the inversion and link properties of these matrices to the behaviour of the inversion. Based on these observations, I propose a method for approximating the action of the Hessian and suggest it as a path forward for more sophisticated preconditioning of the inversion process.
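    The role of curvature information can be illustrated on a toy problem: FWI is a nonlinear least-squares fit of synthetic to observed data, and a Gauss-Newton step solves J^T J dm = -J^T r, using exactly the Hessian approximation discussed above. The forward model below (straight-ray vertical travel times through stacked layers) is a deliberately simplistic, hypothetical stand-in for wave-equation modelling, not the author's method:

```python
import numpy as np

def traveltimes(v, h=100.0):
    """Toy forward model: cumulative vertical travel times through
    stacked layers of thickness h with velocities v."""
    return np.cumsum(h / v)

def gauss_newton_fwi(v0, d_obs, h=100.0, iters=20):
    """Invert layer velocities from observed travel times with
    Gauss-Newton steps, i.e. using the J^T J Hessian approximation
    to precondition each model update."""
    v = v0.astype(float).copy()
    n = len(v)
    for _ in range(iters):
        r = traveltimes(v, h) - d_obs              # data residual
        # Jacobian of the forward model: dt_i/dv_j = -h/v_j^2 for j <= i
        J = np.tril(np.ones((n, n))) * (-h / v**2)[None, :]
        dv = np.linalg.solve(J.T @ J, -J.T @ r)    # Gauss-Newton step
        v += dv
    return v

# Recover layer velocities from noiseless synthetic travel times
v_true = np.array([1500.0, 2000.0, 3500.0])
v_est = gauss_newton_fwi(np.array([2000.0, 2000.0, 2000.0]),
                         traveltimes(v_true))
```

    A plain gradient step (-J^T r alone) scales updates by data sensitivity rather than by model resolution, which is the 1D analogue of the poor deep, low-wavenumber recovery described above; dividing through by the curvature restores a well-scaled update.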