    Automatic Estimation of Modulation Transfer Functions

    The modulation transfer function (MTF) is widely used to characterise the performance of optical systems. Measuring it is costly, so it is rarely available for a given lens specimen; instead, MTFs based on simulations or, at best, MTFs measured on other specimens of the same lens model are used. Fortunately, images recorded through an optical system contain ample information about its MTF, although this information is confounded with the statistics of the images themselves. This work presents a method to estimate the MTF of camera lens systems directly from photographs, without the need for expensive equipment. We use a custom grid display to accurately measure the point response of lenses and thereby acquire ground-truth training data. We then use the same lenses to record natural images and employ a supervised, data-driven approach: a convolutional neural network estimates the MTF on small image patches, and this information is aggregated into MTF charts over the entire field of view. The method generalises to unseen lenses and can be applied to single photographs, with performance improving when multiple photographs are available.
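
    A minimal sketch of the relationship underlying the ground-truth measurement (not the paper's CNN pipeline): the one-dimensional MTF is the normalised magnitude of the Fourier transform of a measured point/line spread function. Function and parameter names below are illustrative.

        import numpy as np

        def mtf_from_psf(psf, pixel_pitch_mm=1.0):
            # Normalise the measured point/line spread function so the
            # zero-frequency response equals one.
            psf = np.asarray(psf, dtype=float)
            psf = psf / psf.sum()
            otf = np.fft.rfft(psf)                # optical transfer function
            mtf = np.abs(otf) / np.abs(otf[0])    # MTF(0) = 1 by convention
            freqs = np.fft.rfftfreq(psf.size, d=pixel_pitch_mm)  # cycles/mm
            return freqs, mtf

    The paper's contribution is to predict such curves from natural image patches with a CNN rather than from a directly measured response.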

    Denoising and enhancement of digital images: variational methods, integrodifferential equations, and wavelets

    The topics of this thesis are methods for the denoising, enhancement, and simplification of digital image data. Special emphasis lies on the relations and structural similarities between several classes of methods that are motivated by different contexts. In particular, the methods treated in this thesis fall into three classes. For variational approaches and partial differential equations, the notion of the derivative is the tool of choice for modelling the regularity of the data and of the desired result; a general framework is proposed for approaches that involve all partial derivatives of a prescribed order and that, experimentally, can lead to piecewise polynomial approximations of the given data. The second class of methods uses wavelets to represent the data, which makes it possible to understand the filtering as a very simple pointwise application of a nonlinear function; viewing these wavelets as derivatives of smoothing kernels is the basis for relating these methods to the integrodifferential equations investigated here. In the third class, values of the image in a neighbourhood are averaged, where the weights of this averaging can be adapted according to different criteria; by refining the pixel grid and passing to scaling limits, connections to partial differential equations become visible here too, and they are described in the framework explained before. Numerical aspects of the simplification of images are presented with respect to the NDS energy function, a unifying approach that allows many of the aforementioned methods to be modelled. The behaviour of the filtering methods is documented with numerical examples.
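
    A minimal sketch of the second class of methods (illustrative, not the thesis's actual scheme): one level of a Haar wavelet transform, where denoising reduces to a pointwise nonlinear shrinkage of the detail coefficients. The threshold is a free parameter.

        import numpy as np

        def soft_shrink(c, t):
            # Pointwise nonlinear function applied to wavelet coefficients.
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def haar_denoise_1d(signal, threshold):
            # One-level Haar soft-shrinkage denoising (even-length signal).
            s = np.asarray(signal, dtype=float)
            approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)  # smoothing part
            detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)  # derivative-like part
            detail = soft_shrink(detail, threshold)      # the filtering step
            out = np.empty_like(s)
            out[0::2] = (approx + detail) / np.sqrt(2.0)
            out[1::2] = (approx - detail) / np.sqrt(2.0)
            return out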

    Variational methods and their applications to computer vision

    Many computer vision applications, such as image segmentation, can be formulated in a ''variational'' way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult, as it generally involves non-convex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, these are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate appropriate regularizations into the mathematical model, which requires complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted, because they lead to the loss of most low-contrast details. In contrast, the proposed method not only preserves curvilinear structures better, but also reconnects parts that may have been disconnected by noise. Moreover, it is easily extensible to graphs and has been successfully applied to different types of data, such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete), and satellite imagery (e.g. streets, rivers). In particular, we show results and performance figures for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
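
    A minimal sketch of the variational principle at work (a generic smoothed total-variation energy, not the thesis's curvilinear-structure regularizer): gradient descent on E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2). All parameter values are illustrative.

        import numpy as np

        def minimize_tv_energy(f, lam=0.1, eps=1e-3, tau=0.1, steps=200):
            # Explicit gradient descent on a smoothed total-variation energy.
            u = np.asarray(f, dtype=float).copy()
            for _ in range(steps):
                ux = np.gradient(u, axis=1)
                uy = np.gradient(u, axis=0)
                mag = np.sqrt(ux**2 + uy**2 + eps**2)
                # div(grad u / |grad u|) is the gradient of the regularizer.
                div = (np.gradient(ux / mag, axis=1) +
                       np.gradient(uy / mag, axis=0))
                u -= tau * ((u - f) - lam * div)
            return u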

    Understanding and advancing PDE-based image compression

    This thesis is dedicated to image compression with partial differential equations (PDEs). PDE-based codecs store only a small subset of the image points and propagate their information into the unknown image areas during the decompression step. For certain classes of images, PDE-based compression can already outperform the current quasi-standard, JPEG2000. However, the reasons for this success are not yet fully understood, and PDE-based compression is still in a proof-of-concept stage. With a probabilistic justification for anisotropic diffusion, we contribute to a deeper insight into design principles for PDE-based codecs. Moreover, by analysing the interaction between efficient storage methods and image reconstruction with diffusion, we can rank PDEs according to their practical value in compression. Based on these observations, we advance PDE-based compression towards practical viability. First, we present a new hybrid codec that combines PDE- and patch-based interpolation to deal with highly textured images. Furthermore, a new video player demonstrates the real-time capabilities of PDE-based image interpolation, and a new region-of-interest coding algorithm represents important image areas with high accuracy. Finally, we propose a new framework for diffusion-based image colourisation that we use to build an efficient codec for colour images. Experiments on real-world image databases show that our new method is qualitatively competitive with current state-of-the-art codecs.
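
    A minimal sketch of the decompression step (homogeneous diffusion for simplicity; the thesis itself argues for anisotropic diffusion): the stored pixels are kept fixed while the heat equation fills the unknown areas. Names and parameters are illustrative, and the boundary handling is periodic only to keep the code short.

        import numpy as np

        def diffusion_inpaint(stored, mask, tau=0.2, steps=500):
            # stored: image values, valid where mask is True (kept pixels)
            # mask:   boolean array marking the pixels the codec stored
            stored = np.asarray(stored, dtype=float)
            u = np.where(mask, stored, stored[mask].mean())
            for _ in range(steps):
                lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
                u = u + tau * lap          # explicit heat-equation step
                u[mask] = stored[mask]     # re-impose the stored data
            return u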

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to them. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, in experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
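
    A small illustration of why the heavy-tailed Student's t-distribution is the more robust observation model (generic scipy usage, not the paper's HMM): an outlier receives a far lower log-likelihood under a Gaussian, so it dominates Gaussian parameter fitting much more strongly.

        import numpy as np
        from scipy.stats import norm, t as student_t

        x = np.array([0.1, -0.3, 0.2, 8.0])   # the last value is an outlier
        print(norm.logpdf(x, loc=0.0, scale=1.0))
        print(student_t.logpdf(x, df=3.0, loc=0.0, scale=1.0))
        # The Gaussian assigns the outlier a log-density of about -32.9,
        # the t-distribution (3 d.o.f.) about -7.2, so the outlier carries
        # far less weight in estimation under the t model.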

    Single-pixel, single-photon three-dimensional imaging

    The 3D recovery of a scene is a crucial task with many real-life applications, such as self-driving vehicles, X-ray tomography, and virtual reality. The recent development of time-resolving detectors sensitive to single photons has allowed the recovery of 3D information at high frame rates with unprecedented capabilities. Combined with a timing system, single-photon-sensitive detectors allow 3D image recovery by measuring the Time-of-Flight (ToF) of the photons scattered back by the scene, with millimetre depth resolution. Current ToF 3D imaging techniques rely on scanning detection systems or multi-pixel sensors. Here, we discuss an approach that simplifies the hardware of current ToF 3D imaging techniques by using a single-pixel, single-photon-sensitive detector together with computational imaging algorithms. The 3D imaging approaches discussed in this thesis do not require mechanical moving parts, as standard Lidar systems do. The single-pixel detector reduces the sensor complexity to a single unit and offers several advantages in terms of size, flexibility, wavelength range, and cost. The experimental results demonstrate 3D image recovery of hidden scenes with sub-second acquisition times, also allowing real-time 3D recovery of non-line-of-sight scenes. We also introduce the concept of intelligent Lidar, a 3D imaging paradigm based solely on the temporal trace of the returning photons and a data-driven 3D retrieval algorithm.
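
    A minimal sketch of the Time-of-Flight principle the thesis builds on (illustrative names; the actual system adds computational imaging on top): a timing histogram of returned single photons is converted into depth via d = c*t/2 for the round trip.

        import numpy as np

        C = 299_792_458.0  # speed of light in m/s

        def depth_from_histogram(counts, bin_width_s):
            # Locate the return peak in the photon-timing histogram and
            # convert its round-trip time into a one-way distance.
            t_return = np.argmax(counts) * bin_width_s
            return C * t_return / 2.0

        # Example: a peak in bin 200 of a 10 ps histogram is a ~2 ns round
        # trip, i.e. a depth of roughly 0.3 m.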

    Methods for 3D Geometry Processing in the Cultural Heritage Domain

    This thesis presents methods for 3D geometry processing in the context of cultural heritage applications. After a short overview of the relevant basics of 3D geometry processing, the thesis investigates the digital acquisition of 3D models. A particular challenge in this context is posed, on the one hand, by difficult surface or material properties of the objects to be captured, such as reflectivity and transparency; on the other hand, the fully automatic reconstruction even of models with surface properties suitable for laser range scanners is not yet completely solved. This thesis presents two approaches to tackle these challenges. The first exploits a thorough capture of the object's appearance together with a coarse reconstruction to obtain a concise and realistic object representation even for objects with problematic surface properties. The second concentrates on digitisation via laser range scanners and exploits the 2D colour images that are typically recorded alongside the range images for a fully automatic registration technique. After reconstruction, the captured models are often still incomplete: they exhibit holes and/or insufficiently sampled regions. In addition, holes are often deliberately introduced into a registered model to remove undesired or defective surface parts. To produce a visually appealing model, for instance for visualisation purposes or for prototype and replica production, these holes have to be detected and filled. Although completion is a well-established research field in 2D image processing, and many approaches exist for image completion, surface completion in 3D is a fairly new field of research. This thesis presents a hierarchical completion approach that extends successful exemplar-based 2D image processing techniques to 3D and fills detailed surface patches into missing surface regions; to identify and construct suitable patches, it exploits self-similarity and coherence properties of the surface context of the hole. Beyond reconstruction and repair, the thesis also investigates methods for modifying captured models via interactive modelling. In this context, modelling is regarded as a creative process, for instance for animation purposes. It is also demonstrated how this creative process can be used to introduce human expertise into the otherwise automatic completion process. In this way, reconstructions become feasible even for objects where the data source itself, the object, is incomplete due to corrosion, demolition, or decay.
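
    A minimal illustration of the hole-detection step that precedes any filling (standard mesh processing, not the thesis's completion algorithm): in a triangle mesh, an edge used by exactly one triangle lies on a boundary, so chains of such edges delineate the holes.

        from collections import Counter

        def boundary_edges(triangles):
            # Count how many triangles share each undirected edge; edges
            # used only once bound a hole (or the outer surface border).
            counts = Counter()
            for a, b, c in triangles:
                for e in ((a, b), (b, c), (c, a)):
                    counts[tuple(sorted(e))] += 1
            return [e for e, n in counts.items() if n == 1]

        # Example: a square split into two triangles has four boundary
        # edges; only the shared diagonal is interior.
        print(boundary_edges([(0, 1, 2), (0, 2, 3)]))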