    On Creating Reference Data for Performance Analysis in Image Processing

    This thesis investigates methods for creating reference datasets for image processing, especially for the dense correspondence problem. Three types of reference data can be identified: real datasets with dense ground truth, real datasets with sparse or missing ground truth, and synthetic datasets. For the creation of real datasets with ground truth, an existing method based on depth map fusion was evaluated. The described method is especially suited for creating large amounts of reference data with known accuracy. The creation of reference datasets with missing ground truth was examined using the example of multiple datasets for the automotive industry. The data was used successfully for verification and evaluation by multiple image processing projects. Finally, it was investigated how methods from computer graphics can be used to create synthetic reference datasets. In particular, the creation of photorealistic image sequences using global illumination was examined for the task of evaluating algorithms. The results show that while such sequences can be used for evaluation, their creation is hindered by practicality problems. As an application example, a new simulation method for Time-of-Flight depth cameras was developed that can simulate all relevant error sources of these systems.
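
    The abstract does not describe the fusion method in detail; as a rough, hedged illustration of the general idea behind depth map fusion (names and parameters below are hypothetical, not taken from the thesis), several registered depth maps of a static scene can be averaged per pixel, with the per-pixel spread serving as an accuracy estimate:

    # Rough illustration only: fuse several registered depth maps of a static
    # scene into one reference depth map plus a per-pixel accuracy estimate.
    import numpy as np

    def fuse_depth_maps(depth_maps, invalid=0.0):
        """depth_maps: array of shape (n, h, w), already registered to one view."""
        stack = np.asarray(depth_maps, dtype=np.float64)
        valid = stack != invalid                      # mask out missing measurements
        counts = valid.sum(axis=0)
        sums = np.where(valid, stack, 0.0).sum(axis=0)
        with np.errstate(invalid="ignore", divide="ignore"):
            fused = sums / counts                     # per-pixel mean depth
            var = (np.where(valid, (stack - fused) ** 2, 0.0).sum(axis=0)
                   / np.maximum(counts - 1, 1))
        fused[counts == 0] = np.nan                   # pixels with no measurement
        return fused, np.sqrt(var)                    # reference depth and its spread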

    A simulation framework for the design and evaluation of computational cameras

    In the emerging field of computational imaging, rapid prototyping of new camera concepts becomes increasingly difficult since the signal processing is intertwined with the physical design of a camera. As novel computational cameras capture information other than the traditional two-dimensional information, ground truth data, which can be used to thoroughly benchmark a new system design, is also hard to acquire. We propose to bridge this gap by using simulation. In this article, we present a raytracing framework tailored for the design and evaluation of computational imaging systems. We show that, depending on the application, the image formation on a sensor and phenomena like image noise have to be simulated accurately in order to achieve meaningful results while other aspects, such as photorealistic scene modeling, can be omitted. Therefore, we focus on accurately simulating the mandatory components of computational cameras, namely apertures, lenses, spectral filters and sensors. Besides the simulation of the imaging process, the framework is capable of generating various ground truth data, which can be used to evaluate and optimize the performance of a particular imaging system. Due to its modularity, it is easy to further extend the framework to the needs of other fields of application. We make the source code of our simulation framework publicly available and encourage other researchers to use it to design and evaluate their own camera designs.
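
    As a hedged sketch of the kind of per-pixel sensor effects the article argues must be simulated accurately (this is a textbook affine sensor model, not the framework's actual code; all parameter names are illustrative assumptions), an exposure can be modeled with photon shot noise, read noise, saturation and quantization:

    # Sketch of a standard sensor model: shot noise, read noise, saturation, ADC.
    import numpy as np

    def expose(photons_per_pixel, qe=0.6, read_noise_e=2.0, full_well_e=20000,
               gain_e_per_dn=4.0, bit_depth=12, seed=0):
        """photons_per_pixel: expected photon count per pixel for one exposure."""
        rng = np.random.default_rng(seed)
        electrons = rng.poisson(qe * photons_per_pixel)            # photon shot noise
        electrons = electrons + rng.normal(0.0, read_noise_e, electrons.shape)
        electrons = np.clip(electrons, 0, full_well_e)             # full-well saturation
        dn = np.round(electrons / gain_e_per_dn)                   # A/D conversion
        return np.clip(dn, 0, 2 ** bit_depth - 1).astype(np.uint16)

    Running the same ideal irradiance image through such a model with different parameter sets is one way a simulated camera design can be compared against ground truth before any hardware exists.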

    Entwurf von Computational-Imaging-Systemen am Beispiel der monokularen Tiefenschätzung (Design of Computational Imaging Systems Using the Example of Monocular Depth Estimation)

    Computational imaging systems combine optical and digital signal processing to extract information from the light of a scene. In this work, raytracing is used as a simulation tool to describe, evaluate, and optimize computational imaging systems as a whole. Using monocular depth estimation as an example, the simulation is compared with a real prototype of a camera with a programmable aperture, and the presented methods are evaluated.
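
    For context only (not taken from the thesis), the depth cue exploited by aperture-based monocular depth estimation follows from the thin-lens model: the diameter of the defocus blur on the sensor depends on scene depth, and a programmable (coded) aperture additionally shapes the blur pattern so that this dependence is easier to invert. A minimal sketch of the relation, with illustrative numbers:

    # Thin-lens defocus: blur circle diameter as a function of scene depth.
    def blur_diameter(depth_m, focus_m, focal_length_m, aperture_m):
        """Blur circle diameter on the sensor for a point at distance depth_m."""
        return (aperture_m * focal_length_m * abs(depth_m - focus_m)
                / (depth_m * (focus_m - focal_length_m)))

    # Example: f = 50 mm, f/2 aperture (25 mm), focused at 2 m, object at 3 m.
    print(blur_diameter(3.0, 2.0, 0.05, 0.025))   # ~0.21 mm blur on the sensor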

    Performance Metrics and Test Data Generation for Depth Estimation Algorithms

    This thesis investigates performance metrics and test datasets used for the evaluation of depth estimation algorithms. Stereo and light field algorithms take structured camera images as input to reconstruct a depth map of the depicted scene. Such depth estimation algorithms are employed in a multitude of practical applications such as industrial inspection and the movie industry. Recently, they have also been used for safety-relevant applications such as driver assistance and computer-assisted surgery. Despite this increasing practical relevance, depth estimation algorithms are still evaluated with simple error measures and on small academic datasets. To develop and select suitable and safe algorithms, it is essential to gain a thorough understanding of their respective strengths and weaknesses. In this thesis, I demonstrate that computing average pixel errors of depth estimation algorithms is not sufficient for a thorough and reliable performance analysis. The analysis must also take into account the specific requirements of the given applications as well as the characteristics of the available test data. I propose metrics to explicitly quantify depth estimation results at continuous surfaces, depth discontinuities, and fine structures. These geometric entities are particularly relevant for many applications and challenging for algorithms. In contrast to prevalent metrics, the proposed metrics take into account that pixels are neither spatially independent within an image nor uniformly challenging nor equally relevant. Apart from performance metrics, test datasets play an important role for evaluation. Their availability is typically limited in quantity, quality, and diversity. I show how test data deficiencies can be overcome by using specific metrics, additional annotations, and stratified test data. Using systematic test cases, a user study, and a comprehensive case study, I demonstrate that the proposed metrics, test datasets, and visualizations allow for a meaningful quantitative analysis of the strengths and weaknesses of different algorithms. In contrast to existing evaluation methodologies, application-specific priorities can be taken into account to identify the most suitable algorithms.
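
    As a hedged sketch of the general idea (not the thesis' actual metric definitions; region names and thresholds below are assumptions), a depth or disparity estimate can be scored both globally and restricted to annotated regions such as depth discontinuities and fine structures, rather than reporting a single average pixel error:

    # Sketch: region-stratified bad-pixel rates instead of one global average error.
    import numpy as np

    def bad_pixel_rate(estimate, ground_truth, mask, threshold=1.0):
        """Fraction of pixels in `mask` whose absolute error exceeds `threshold`."""
        err = np.abs(estimate - ground_truth)
        return float((err[mask] > threshold).mean())

    def stratified_report(estimate, ground_truth, region_masks, threshold=1.0):
        """region_masks: dict of boolean masks, e.g. 'discontinuities',
        'fine_structures', 'smooth_surfaces' (hypothetical names)."""
        report = {"all": bad_pixel_rate(estimate, ground_truth,
                                        np.ones_like(ground_truth, dtype=bool), threshold)}
        for name, mask in region_masks.items():
            report[name] = bad_pixel_rate(estimate, ground_truth, mask, threshold)
        return report

    Two algorithms with the same overall score can then still be ranked differently once the regions that matter for a given application are weighted appropriately.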