527 research outputs found

    Algorithms for light applications: from theoretical simulations to prototyping

    [eng] Although the first LED dates to the middle of the 20th century, it was not until the last decade that the market was flooded with LED solutions offering high efficiency and durability compared to previous technologies. In addition, luminaires combining LED types of different hues or colors have already appeared. These luminaires offer new possibilities to reach colorimetric or non-visual capabilities not seen to date. Because of the enormous number of LEDs on the market, with very different spectral characteristics, the spectrometer has become a popular measuring device for determining LED properties. Obtaining colorimetric information from a luminaire is a necessary step to commercialize it, so the spectrometer is a tool commonly used by many LED manufacturers. This doctoral thesis advances the state of the art and knowledge of LED technology at the level of combined spectral emission, and applies innovative spectral reconstruction techniques to a commercial multichannel colorimetric sensor. On the one hand, new spectral simulation algorithms have been developed that produce a very high number of results, making it possible to obtain optimized values of colorimetric and non-visual parameters in multichannel light sources. The MareNostrum supercomputer has been used, and new relationships between colorimetric and non-visual parameters in commercial white LED datasets have been found through data analysis. On the other hand, the functional improvement of a multichannel colorimetric sensor has been explored by providing it with a neural network for spectral reconstruction. A large amount of data has been generated, which has allowed simulations and statistical studies of the error committed in the spectral reconstruction process using different techniques. This improvement has increased the spectral resolution measured by the sensor, allowing better accuracy in the calculation of colorimetric parameters.
    Prototypes of the light sources and the colorimetric sensor have been developed in order to experimentally demonstrate the theoretical framework. All the prototypes have been characterized, and the errors with respect to the theoretical models have been evaluated. The results have been validated through the application of different industry standards or through comparison with calibrated commercial devices.
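    As a rough illustration of the spectral simulation idea described above, the sketch below mixes the spectra of a multichannel source and derives chromaticity coordinates. All numbers are toy values: the channel spectra and the stand-ins for the CIE colour-matching functions are simple Gaussians, not measured data.

    ```python
    import numpy as np

    wl = np.arange(380, 781, 5.0)  # wavelength grid in nm

    def gaussian(peak, width):
        return np.exp(-0.5 * ((wl - peak) / width) ** 2)

    # Toy spectral power distributions for a 4-channel (RGBW-like) source
    channels = np.stack([
        gaussian(630, 15),   # red
        gaussian(530, 20),   # green
        gaussian(465, 12),   # blue
        gaussian(570, 60),   # broad phosphor white
    ])

    # Toy stand-ins for the CIE 1931 colour-matching functions
    xbar = gaussian(600, 40) + 0.35 * gaussian(445, 20)
    ybar = gaussian(555, 45)
    zbar = 1.8 * gaussian(450, 25)

    def mix_spd(weights):
        """Combined spectrum for the given per-channel drive weights."""
        return weights @ channels

    def xyz(spd):
        """Tristimulus values by numerical integration over the grid."""
        dw = wl[1] - wl[0]
        return np.array([np.sum(spd * cmf) * dw for cmf in (xbar, ybar, zbar)])

    # Pick a channel weighting and compute chromaticity (x, y)
    w = np.array([0.8, 1.0, 0.5, 0.3])
    X, Y, Z = xyz(mix_spd(w))
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    ```

    Sweeping `w` over a grid of channel weights is the kind of loop that, at the scale described in the thesis, motivates running on a supercomputer.
    
    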

    Multimodal human hand motion sensing and analysis - a review


    OutCast: Outdoor Single-image Relighting with Cast Shadows

    We propose a relighting method for outdoor images. Our method mainly focuses on predicting cast shadows in arbitrary novel lighting directions from a single image, while also accounting for shading and global effects such as the sunlight color and clouds. Previous solutions to this problem rely on reconstructing occluder geometry, e.g. using multi-view stereo, which requires many images of the scene. Instead, in this work we use a noisy off-the-shelf single-image depth map estimate as a source of geometry. While this can be a good guide for some lighting effects, the resulting depth map quality is insufficient for directly ray-tracing the shadows. To address this, we propose a learned image-space ray-marching layer that converts the approximate depth map into a deep 3D representation that is fused into occlusion queries using a learned traversal. Our proposed method achieves, for the first time, state-of-the-art relighting results with only a single image as input. For supplementary material, visit our project page at: https://dgriffiths.uk/outcast
    Comment: Eurographics 2022 - Accepted
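    The occlusion query that the learned ray-marching layer replaces can be illustrated with its classical counterpart: marching a ray through an image-space height map and testing for intersection. This is a minimal sketch assuming a simple hard depth comparison; the paper's learned traversal and deep 3D representation are not reproduced here.

    ```python
    import numpy as np

    def shadowed(height, px, py, light_dir, step=1.0, bias=1e-3):
        """March from pixel (px, py) toward the light over a height map.
        height: 2D array where a larger value means closer to the light.
        light_dir: (dx, dy, dz), with dz the climb of the ray per step."""
        h, w = height.shape
        x, y = float(px), float(py)
        z = height[py, px] + bias  # start just above the surface
        dx, dy, dz = light_dir
        while 0 <= x < w and 0 <= y < h:
            x += dx * step
            y += dy * step
            z += dz * step
            xi, yi = int(round(x)), int(round(y))
            if not (0 <= xi < w and 0 <= yi < h):
                break
            if height[yi, xi] > z:  # terrain rises above the ray: occluded
                return True
        return False

    # A single box on a flat ground plane, lit from the right and above
    height = np.zeros((32, 32))
    height[10:20, 10:20] = 5.0
    ```

    With a noisy single-image depth estimate, this hard `>` comparison produces unreliable shadows, which is exactly the failure mode the learned layer is designed to overcome.
    
    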

    A Differential Approach for Gaze Estimation

    Non-invasive gaze estimation methods usually regress gaze directions directly from a single face or eye image. However, due to important variabilities in eye shapes and inner eye structures among individuals, universal models achieve limited accuracy, and their outputs usually exhibit high variance as well as subject-dependent biases. Therefore, accuracy is usually increased through calibration, allowing gaze predictions for a subject to be mapped to his or her actual gaze. In this paper, we introduce a novel image-differential method for gaze estimation. We propose to directly train a differential convolutional neural network to predict the gaze difference between two eye input images of the same subject. Then, given a set of subject-specific calibration images, we can use the inferred differences to predict the gaze direction of a novel eye sample. The assumption is that by allowing the comparison between two eye images, nuisance factors (alignment, eyelid closing, illumination perturbations) which usually plague single-image prediction methods can be greatly reduced, allowing better prediction altogether. Experiments on three public datasets validate our approach, which consistently outperforms state-of-the-art methods, even when using only one calibration sample or when the latter methods are followed by subject-specific gaze adaptation.
    Comment: Extension of our paper "A differential approach for gaze estimation with calibration" (BMVC 2018). Submitted to PAMI on Aug. 7th, 2018; accepted by PAMI short on Dec. 2019, in IEEE Transactions on Pattern Analysis and Machine Intelligence
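    The differential inference step lends itself to a short sketch. Assuming a trained network `predict_diff(a, b)` that returns the gaze difference between two eye images (the name and shapes below are hypothetical), the gaze of a novel sample can be estimated from each calibration pair and averaged:

    ```python
    import numpy as np

    def estimate_gaze(novel_img, calib_imgs, calib_gazes, predict_diff):
        """Differential gaze inference over a calibration set.
        calib_gazes: (N, 2) known yaw/pitch angles for the calibration images.
        predict_diff(a, b): model returning gaze(a) - gaze(b)."""
        preds = [g + predict_diff(novel_img, img)
                 for img, g in zip(calib_imgs, calib_gazes)]
        return np.mean(preds, axis=0)

    # Toy stand-in for the trained CNN: "images" are themselves gaze
    # vectors, so the true difference is just a subtraction.
    toy_predict_diff = lambda a, b: a - b
    calib_imgs = [np.array([0.0, 0.0]), np.array([0.3, 0.1])]
    calib_gazes = np.array([[0.0, 0.0], [0.3, 0.1]])
    novel = np.array([0.1, -0.2])
    gaze = estimate_gaze(novel, calib_imgs, calib_gazes, toy_predict_diff)
    ```

    Averaging over the calibration set is what lets even a single calibration sample anchor the subject-specific bias, which matches the one-sample result reported in the abstract.
    
    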

    An Actor-Centric Approach to Facial Animation Control by Neural Networks for Non-Player Characters in Video Games

    Get PDF
    Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of computer science, psychology, and the performing arts have provided frameworks on which to build a workflow toward creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow begins with developing and annotating a fictional scenario with actors, proceeds to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion data annotation of the corpus and neural network, and finally to determining resemblant behavior of its autonomous animation control of a 3D character facial mesh. The resulting workflow includes a method for developing a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.
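    As a hypothetical illustration of the final stage of such a workflow, the sketch below maps an emotion activation vector, of the kind a trained controller might output, to blendshape weights driving a 3D facial mesh. The emotion labels, mixing matrix, and blendshape names are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    EMOTIONS = ["joy", "anger", "sadness", "surprise"]
    BLENDSHAPES = ["mouth_smile", "brow_lower", "brow_raise", "jaw_open"]

    # Rows: blendshapes, columns: emotions (toy coupling values)
    W = np.array([
        [1.0, 0.0, 0.0, 0.1],   # mouth_smile driven mostly by joy
        [0.0, 1.0, 0.4, 0.0],   # brow_lower by anger and sadness
        [0.0, 0.0, 0.2, 1.0],   # brow_raise mostly by surprise
        [0.2, 0.3, 0.0, 0.8],   # jaw_open by surprise and anger
    ])

    def blendshape_weights(emotion):
        """Linear mix clamped to the usual [0, 1] blendshape range."""
        return np.clip(W @ np.asarray(emotion, dtype=float), 0.0, 1.0)

    # A frame in which the controller outputs mostly joy
    w = blendshape_weights([0.9, 0.0, 0.1, 0.0])
    ```

    In a real pipeline the linear map would be replaced by the trained network, and the weights streamed per frame to the game engine's facial rig.
    
    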

    Deep Visual Unsupervised Domain Adaptation for Classification Tasks: A Survey
