Automatic HDRI generation of dynamic environments
Figure caption (truncated): … of the human shapes (inset). (b) HDRI generation using the presented method; HCM is removed using a variance image (VI). (c) HDRI of a dynamic scene (larger image) with LCM (leaves). (d) HDRI after LCM removal using an uncertainty measure (UI). (e, top) VI segmentation. (e, bottom) UI segmentation.
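The caption above refers to removing moving content from an HDR merge with a variance image (VI). A minimal sketch of that idea, assuming a registered exposure stack normalised to [0, 1] and an illustrative threshold (neither is specified by the paper):

```python
import numpy as np

def variance_mask(stack, threshold=0.01):
    """Flag likely-moving pixels in a bracketed exposure stack.

    `stack` is an (N, H, W) array of aligned exposures normalised to
    [0, 1]. Pixels whose temporal variance exceeds `threshold` are
    treated as motion (people, foliage) and can be excluded from the
    HDR merge. The normalisation and threshold are assumptions for
    illustration, not the paper's calibrated procedure.
    """
    vi = np.var(stack, axis=0)   # variance image, shape (H, W)
    return vi > threshold        # boolean motion mask

# Toy example: three "exposures" in which one pixel changes.
stack = np.zeros((3, 2, 2))
stack[1, 0, 0] = 1.0             # moving object appears in frame 1
mask = variance_mask(stack)      # mask[0, 0] is True, the rest False
```

In practice the mask would be morphologically cleaned and used to down-weight or replace the flagged pixels during merging.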
Development of a Novel Object Detection System Based on Synthetic Data Generated from Unreal Game Engine
This paper presents a novel approach to training a real-world object detection system on synthetic data using state-of-the-art technologies. Training an object detection system can be challenging and time-consuming, as machine learning requires substantial volumes of training data with associated metadata. Synthetic data can solve this by providing unlimited training data through automatic generation. However, the main challenge is creating a balanced dataset that closes the reality gap and generalizes well when deployed in the real world. A state-of-the-art game engine, Unreal Engine 4, was used to generate a photorealistic dataset for deep learning model training. In addition, a comprehensive domain-randomized environment was implemented to create a robust dataset that generalizes well, and the randomized environment was reinforced by adding high-dynamic-range image scenes. Finally, a modern neural network was used to train the object detection system, providing a robust framework for an adaptive, self-learning model. The final models were deployed both in simulation and in the real world to evaluate the training. The results of this study show that it is possible to train a real-world object detection system on synthetic data, although the models leave considerable room for improvement in the stability and confidence of the inference results. In addition, the paper provides valuable insight into how the number of assets and the amount of training data influence the resulting model.
Methodology for generating synthetic labeled datasets for visual container inspection
Containerized freight transport is one of the most important transportation systems today, and it is undergoing an automation process driven by the success of deep learning. However, it suffers from a lack of annotated data with which to incorporate state-of-the-art neural network models into its systems. In this paper we present an innovative methodology for generating a realistic, varied, balanced, and labelled dataset for visual container inspection tasks in a dock environment. In addition, we validate this methodology on multiple visual tasks recurrently found in the state of the art, and we show that the generated synthetic labelled dataset can be used to train a deep neural network applicable to a real-world scenario. Furthermore, using this methodology we provide the first open synthetic labelled dataset, called SeaFront, available at: https://datasets.vicomtech.org/di21-seafront/readme.txt
06221 Abstracts Collection -- Computational Aesthetics in Graphics, Visualization and Imaging
From 28.05.06 to 02.06.06, the Dagstuhl Seminar 06221 "Computational Aesthetics in Graphics, Visualization and Imaging" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.
Improving SLI Performance in Optically Challenging Environments
The construction of 3D models of real-world scenes using non-contact methods is an important problem in computer vision. Some of the more successful methods belong to a class of techniques called structured light illumination (SLI). While SLI methods are generally very successful, there are cases where their performance is poor. Examples include scenes with a high dynamic range in albedo or scenes with strong interreflections. These scenes are referred to as optically challenging environments.
The work in this dissertation is aimed at improving SLI performance in optically challenging environments. A new method of high dynamic range imaging (HDRI) based on pixel-by-pixel Kalman filtering is developed. Using objective metrics, it is shown to achieve as much as a 9.4 dB improvement in signal-to-noise ratio and as much as a 29% improvement in radiometric accuracy over a classic method. Quality checks are developed to detect and quantify multipath interference and other quality defects using phase measuring profilometry (PMP). Techniques are established to improve SLI performance in the presence of strong interreflections. Approaches in compressed sensing are applied to SLI, and interreflections in a scene are modeled using SLI. Several different applications of this research are also discussed.
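The pixel-by-pixel Kalman filtering mentioned above can be sketched as a scalar filter run independently at every pixel over an exposure stack. The sketch below assumes a static scene and a constant measurement variance, which are simplifications; the dissertation's actual noise model is camera-specific.

```python
import numpy as np

def kalman_hdr(frames, exposures, meas_var=1e-2):
    """Fuse an exposure stack into a radiance map with per-pixel
    scalar Kalman filters.

    Each frame is divided by its exposure time to give a noisy
    radiance measurement; the filter fuses the measurements
    sequentially. `meas_var` and the static-scene assumption are
    illustrative, not the dissertation's calibrated noise model.
    """
    x = frames[0] / exposures[0]      # initial radiance estimate
    p = np.full_like(x, 1.0)          # initial estimate variance
    for frame, t in zip(frames[1:], exposures[1:]):
        z = frame / t                 # radiance measurement
        k = p / (p + meas_var)        # Kalman gain
        x = x + k * (z - x)           # state update
        p = (1.0 - k) * p             # variance update
    return x

# Two consistent exposures of a uniform patch: 0.5 at t=0.5 s and
# 1.0 at t=1.0 s both imply a radiance of 1.0.
frames = [np.full((2, 2), 0.5), np.full((2, 2), 1.0)]
radiance = kalman_hdr(frames, exposures=[0.5, 1.0])
```

Because the updates are elementwise, the same code scales to full-resolution stacks without modification.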
Contemplation of tone mapping operators in high dynamic range imaging
The technique of tone mapping has found widespread popularity in the modern era owing to its applications in the digital world. A considerable number of tone mapping techniques have been developed so far, and one method may be better than another in certain cases, depending on the requirements of the user. In this paper, some of the techniques for tone mapping/tone reproduction of high dynamic range images are examined, and a classification of tone mapping operators is given. However, it has been found that these techniques fall short of providing high-quality visualization of high dynamic range images. This paper highlights the drawbacks of the existing traditional methods so that tone-mapping techniques can be improved.
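As a concrete example of the operators surveyed above, the classic Reinhard global operator compresses HDR luminance as L_d = L / (1 + L) after scaling the scene so its log-average luminance maps to a chosen "key" value. A minimal sketch (the 0.18 middle-grey key and the toy input are illustrative):

```python
import numpy as np

def reinhard_global(luminance, key=0.18, eps=1e-6):
    """Reinhard global tone-mapping operator.

    `luminance` is an HDR luminance image. The scene is first scaled
    so that its log-average luminance maps to `key` (the classic 0.18
    middle grey), then compressed into the displayable range [0, 1).
    """
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key / log_avg * luminance   # map log-average to key
    return scaled / (1.0 + scaled)       # compress highlights

# Four decades of dynamic range squeezed into [0, 1).
hdr = np.array([[0.01, 0.1],
                [1.0, 100.0]])
ldr = reinhard_global(hdr)
```

The operator is monotonic, so relative brightness ordering is preserved while extreme highlights are rolled off smoothly; local variants of the operator additionally adapt the denominator per neighbourhood.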
Photorealistic physically based render engines: a comparative study
Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797
A web-based approach to image-based lighting using high dynamic range images and QuickTime object virtual reality
This thesis presents a web-based approach to lighting three-dimensional
geometry in a virtual scene. The use of High Dynamic Range (HDR) images for the
lighting model makes it possible to convey a greater sense of photorealism than can be
provided with a conventional computer generated three-point lighting setup. The use of
QuickTime ™ Object Virtual Reality to display the three-dimensional geometry offers a
sophisticated user experience and a convenient method for viewing virtual objects over
the web. With this work, I generate original High Dynamic Range images for the
purpose of image-based lighting and use the QuickTime ™ Object Virtual Reality
framework to creatively alter the paradigm of object VR for use in object lighting. The
result is two scenarios: one that allows for the virtual manipulation of an object within a
lit scene, and another with the virtual manipulation of light around a static object. Future
work might include the animation of High Dynamic Range image-based lighting, with
emphasis on such features as depth of field and glare generation.