Extending Minkowski norm illuminant estimation
The ability to obtain colour images invariant to changes of illumination is called colour
constancy. An algorithm for colour constancy takes sensor responses - digital images
- as input, estimates the ambient light, and returns a corrected image in which the
influence of the illuminant on the colours has been removed. In this thesis we investigate
the illuminant estimation step of colour constancy and aim to extend the state of the art
in this field.
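Once the illuminant has been estimated, the correction step described above is typically a per-channel (von Kries style) diagonal scaling. A minimal numpy sketch, where the function name and test values are illustrative rather than taken from the thesis:

```python
import numpy as np

def correct_image(image, illuminant):
    """Divide each channel by the estimated illuminant (von Kries-style
    diagonal correction), then rescale for display so the estimated
    light maps to neutral grey."""
    illuminant = np.asarray(illuminant, dtype=float)
    corrected = image / illuminant          # per-channel division
    return corrected / corrected.max()      # normalise for display

# A white surface seen under a reddish illuminant: dividing out the
# estimate restores a neutral (achromatic) rendition.
reddish = np.array([0.8, 0.5, 0.3])
grey_patch = np.ones((4, 4, 3)) * reddish
out = correct_image(grey_patch, reddish)
```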
We first revisit the Minkowski family norm framework for illuminant estimation
because, of all the simple statistical approaches, it is the most general formulation and,
crucially, delivers the best results. This thesis makes four technical contributions. First,
we reformulate the Minkowski approach to provide better estimation when a constraint
on illumination is employed. Second, we show how the method can be implemented to
run faster than previous algorithms by orders of magnitude. Third, we show how a
simple edge-based variant delivers improved estimation compared with the state of the
art across many datasets. In contradistinction to the prior state of the art, our
definition of edges is fixed (a simple combination of first and second derivatives), i.e.
we do not tune our algorithm to particular image datasets. This performance is further
improved by incorporating a gamut constraint on surface colour, our fourth contribution.
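The Minkowski family norm estimator underlying these contributions can be sketched as follows; this is a minimal shades-of-grey style illustration in numpy, with the exponent p and the synthetic test image chosen for illustration rather than taken from the thesis:

```python
import numpy as np

def minkowski_illuminant(image, p=6):
    """Minkowski p-norm estimate of the illuminant: the per-channel
    p-norm of the image, normalised to unit length.  p = 1 recovers
    Grey-World; large p approaches Max-RGB."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    e = np.mean(flat ** p, axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)

# Random surfaces under a bluish light: the estimate should point
# roughly along the true illuminant direction.
rng = np.random.default_rng(0)
true_light = np.array([0.4, 0.5, 0.9])
surfaces = rng.uniform(0.1, 1.0, size=(64, 64, 3))
observed = surfaces * true_light
estimate = minkowski_illuminant(observed, p=6)
```

The edge-based variant mentioned above applies the same norm to image derivatives rather than raw pixel values.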
The thesis finishes by considering our approach in the context of a recent OSA
competition run to benchmark computational algorithms operating on physiologically
relevant cone-based input data. Here we find that Constrained Minkowski Norms operating
on spectrally sharpened cone sensors (linear combinations of the cones that behave
more like camera sensors) support competition-leading illuminant estimation.
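Spectral sharpening, as mentioned above, applies a fixed linear transform to the cone responses so that they behave more like narrowband camera sensors. A small sketch, where the matrix values are hypothetical placeholders (real sharpening matrices are derived by optimisation over measured data):

```python
import numpy as np

# Hypothetical 3x3 sharpening matrix M; an actual matrix would be
# fitted so the transformed sensors support diagonal colour correction.
M = np.array([[ 1.6, -0.7,  0.1],
              [-0.5,  1.7, -0.2],
              [ 0.0, -0.1,  1.1]])

def sharpen(cone_responses):
    """Map LMS cone responses (rows) to spectrally sharpened sensors
    via a fixed linear transform; illuminant estimation then operates
    in the sharpened space."""
    return cone_responses @ M.T

lms = np.array([[0.3, 0.4, 0.2]])
sharp = sharpen(lms)
```

Because M is invertible, nothing is lost: estimates made in the sharpened space can be mapped back to cone space.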
DeepToF: Off-the-shelf real-time correction of multipath interference in time-of-flight imaging
Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct errors in depth caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth from synthetic scenes, generated by using a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground-truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured, incorrect depth as input.
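The key observation above, that MPI can be modelled as a convolution in image space, can be illustrated with a toy example. The real model is spatially varying and learned by the network; the uniform kernel and synthetic depth map here are purely illustrative:

```python
import numpy as np

def mpi_corrupt(depth, kernel):
    """Toy model of multipath interference: the captured depth is a
    (here spatially-uniform) convolution of the true depth, so light
    arriving indirectly from neighbouring points biases each pixel."""
    k = kernel.shape[0] // 2
    padded = np.pad(depth, k, mode="edge")
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
            out[i, j] = np.sum(window * kernel)
    return out

true_depth = np.zeros((8, 8))
true_depth[:, 4:] = 2.0                 # a sharp depth edge
kernel = np.ones((3, 3)) / 9.0          # illustrative mixing kernel
captured = mpi_corrupt(true_depth, kernel)
```

Flat regions far from the edge are unaffected, while depth near the discontinuity is pulled toward its neighbours, mimicking the MPI bias the network is trained to undo.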
Photonics simulation and modelling of skin for the design of a spectrocutometer
Shape from Gradients. A psychophysical and computational study of the role complex illumination gradients, such as shading and mutual illumination, play in three-dimensional shape perception.
The human visual system gathers information about three-dimensional object shape from a wide range of sources. How effectively we can use these sources, and how they are combined to form a consistent and accurate percept of the 3D world is the focus of much research. In complex scenes inter-reflections of light between surfaces (mutual illumination) can occur, creating chromatic illumination gradients. These gradients provide a source of information about 3D object shape, but little research has been conducted into the capabilities of the visual system to use such information.
The experiments described here were conducted with the aim of understanding the influence of chromatic gradients from mutual illumination on 3D shape perception. Psychophysical experiments are described that were designed to investigate: whether the human visual system takes account of mutual illumination when estimating 3D object shape, and how this might occur; how colour shading cues are integrated with other shape cues; and the relative influence on 3D shape perception of achromatic (luminance) shading and chromatic shading from mutual illumination. In addition, one chapter explores a selection of mathematical models of cue integration and their applicability in this case.
The results of the experiments suggest that the human visual system is able to quickly assess and take account of coloured mutual illumination when estimating 3D object shape, and can use chromatic gradients as an independent and effective cue. Finally, mathematical modelling reveals that the chromatic gradient cue is likely integrated with other shape cues in a way that is close to statistically optimal.
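The "close to statistically optimal" integration referred to above is commonly modelled as the maximum-likelihood rule for independent Gaussian cues, which weights each cue by its inverse variance. A minimal sketch with made-up cue values:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum-likelihood combination of independent Gaussian cues:
    weight each estimate by its reliability (inverse variance).  The
    combined variance is smaller than that of any single cue."""
    variances = np.asarray(variances, dtype=float)
    estimates = np.asarray(estimates, dtype=float)
    w = (1.0 / variances) / np.sum(1.0 / variances)
    combined = float(np.sum(w * estimates))
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Hypothetical slant estimates (arbitrary units): a noisier luminance
# shading cue and a more reliable chromatic gradient cue.
slant, var = combine_cues([30.0, 40.0], [4.0, 1.0])
```

The combined estimate is pulled toward the more reliable cue, matching the qualitative pattern reported in such studies.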
A Study of Illuminant Estimation and Ground Truth Colors for Color Constancy
Ph.D. (Doctor of Philosophy)
Efficient Methods for Computational Light Transport
In this thesis we present contributions to different challenges of computational light transport.
Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances in this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for rendering participating media. In real-time rendering, we target the energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In the transient state, we first formalize light transport simulation in this domain, and present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.
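One standard ingredient of variance reduction in participating media is sampling propagation distances proportionally to transmittance; this is the generic textbook technique for homogeneous media, not necessarily the specific method proposed in the thesis:

```python
import numpy as np

def sample_free_path(sigma_t, rng):
    """Sample a propagation distance t with pdf sigma_t * exp(-sigma_t * t),
    i.e. proportional to the medium's transmittance -- the usual
    importance-sampling step for a homogeneous participating medium."""
    return -np.log(1.0 - rng.random()) / sigma_t

rng = np.random.default_rng(1)
sigma_t = 2.0  # illustrative extinction coefficient
samples = np.array([sample_free_path(sigma_t, rng) for _ in range(20000)])
# The mean free path of this exponential distribution is 1 / sigma_t.
```

Because samples concentrate where the transmittance is large, estimators built on them have far lower variance than uniform distance sampling.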