80 research outputs found

    Probeless Illumination Estimation for Outdoor Augmented Reality

    Estimating Outdoor Illumination Conditions Based on Detection of Dynamic Shadows

    Outdoor Illumination Estimation in Image Sequences for Augmented Reality

    A Scalable GPU-Based Approach to Shading and Shadowing for Photo-Realistic Real-Time Augmented Reality

    Learning geometric and lighting priors from natural images

    Understanding images is crucial for a plethora of tasks, from digital compositing to image relighting to the 3D reconstruction of objects. These tasks allow visual artists to realize masterpieces or help operators make decisions safely based on visual stimuli. For many of these tasks, the physical and geometric models that the scientific community has developed give rise to ill-posed problems with several solutions, of which generally only one is reasonable. To resolve these indeterminations, reasoning about the visual and semantic context of a scene is usually left to an artist or an expert who uses their experience to carry out the work, because it is generally necessary to reason about the scene globally in order to obtain plausible and appealing results. Would it be possible to model this experience from visual data and partly or fully automate these tasks? That is the topic of this thesis: modeling priors with deep machine learning to solve typically ill-posed problems. More specifically, we cover three research axes: 1) surface reconstruction using photometric cues, 2) outdoor illumination estimation from a single image, and 3) camera calibration estimation from a single image with generic content. These three topics are addressed from a data-driven perspective. Each axis includes in-depth performance analyses and, despite the reputation of deep machine learning algorithms for opacity, we offer studies of the visual cues captured by our methods.

    Photorealistic physically based render engines: a comparative study

    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797

    Real-Time Inverse Lighting for Augmented Reality Using a Dodecahedral Marker

    Lighting is a major factor in the perceived realism of virtual objects, so lighting virtual objects so that they appear to be illuminated by real-world light sources, a process known as inverse lighting, is a crucial component of creating realistic augmented reality images. This work presents a new, real-time inverse lighting method that samples the light reflected off a regular, twelve-sided (dodecahedral) 3D object to estimate the direction of a scene's primary light source. Using the light sample results, each visible face of the dodecahedron is determined to be either in light or in shadow. One or more light vectors are then calculated for each face: for an illuminated face, the surface normal of the face is used as a light direction vector; for a shadowed face, the face's surface normal is reflected across the normal vector of every adjacent illuminated face. If a shadowed face is not adjacent to any illuminated face, its normal vector is reversed instead. These light vectors are then averaged to produce a vector pointing toward the primary light source in the environment. The method is designed with special consideration for ease of use, requiring no configuration stages.
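    The per-face rule described above can be sketched in a few lines. The following is a minimal NumPy sketch under the stated assumptions, not the paper's implementation; the face normals, the lit/shadow classification, and the face-adjacency map (here named face_normals, lit, and adjacency) are assumed to be produced by the marker-sampling step.

        import numpy as np

        def reflect_across(v, axis):
            # Reflect vector v about the given axis (reflection through the axis line).
            axis = axis / np.linalg.norm(axis)
            return 2.0 * np.dot(v, axis) * axis - v

        def estimate_light_direction(face_normals, lit, adjacency):
            # face_normals: (N, 3) array of unit normals of the visible faces
            # lit:          length-N booleans, True if the face was sampled as illuminated
            # adjacency:    dict mapping a face index to indices of its adjacent visible faces
            light_vectors = []
            for i, n in enumerate(face_normals):
                if lit[i]:
                    # Illuminated face: its normal is taken as a light direction vector.
                    light_vectors.append(n)
                else:
                    lit_neighbours = [j for j in adjacency.get(i, []) if lit[j]]
                    if lit_neighbours:
                        # Shadowed face: reflect its normal across each adjacent lit face's normal.
                        for j in lit_neighbours:
                            light_vectors.append(reflect_across(n, face_normals[j]))
                    else:
                        # Shadowed face with no lit neighbours: reverse the normal.
                        light_vectors.append(-n)
            # Average all candidate vectors to get the direction toward the primary light.
            mean = np.mean(light_vectors, axis=0)
            return mean / np.linalg.norm(mean)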

    Comparative study of the performance of real-time inverse lighting with matte, semi-gloss and gloss surfaces

    Augmented Reality (AR) is the interactive process of introducing virtual objects or characters into real-world scenes. An effective way to increase realism in AR is to mimic real-world lighting conditions on the virtual objects; the process of gathering and analyzing real-world lighting information is called inverse lighting. The surface textures of real-world objects may have different levels of glossiness, and the goal of this research is to compare the effects that different glossiness levels have on the outcomes of the inverse-lighting calculations. Several models of a regular dodecahedron were created using the Blender modeling software and used to calculate and compare inverse lighting across different levels of surface glossiness. Physical dodecahedrons were also created and used to check whether the Blender models accurately represent reality.

    Colour coded

    This 300-word publication, to be published by the Society of Dyers and Colourists (SDC), is a collection of the best papers from a four-year European project that considered colour from the perspectives of both the arts and the sciences. The notion of art and science, and the crossovers between the two, resulted in the application for and funding of cross-disciplinary research to host a series of training events between 2006 and 2010: Marie Curie Conferences & Training Courses (SCF), Call Identifier FP6-Mobility-4, EUR 532,363.80, CREATE (Colour Research for European Advanced Technology Employment). The research crossovers between the fields of art, science and technology were also a subject initiated through Bristol's Festival of Ideas events in May 2009; the author coordinated and chaired an event during which the C. P. Snow lecture 'The Two Cultures' (1959) was re-presented by actor Simon Cook, followed by a lecture by Raymond Tallis on the notion of the polymath. The CREATE project has a worldwide impact for researchers, academics and scientists. Between January and October 2009 the site received 221,414 visits, with the welcome page as the most popular route into the site. The main groups of visitors originate in the UK (including Northern Ireland), Italy, France, Finland, Norway, Hungary, the USA and Spain. A basic percentage breakdown of the traffic over ten months indicates: USA 15%; UK 16%; Italy 13%; France 12%; Hungary 10%; Finland 9%; Spain 6%; Norway 5%. The remaining approximately 14% of visitors are from other countries, including Belgium, the Netherlands and Germany (approximately 3%). A discussion group has been initiated by the author as part of the CREATE project to facilitate an ongoing dialogue between artists and scientists: http://createcolour.ning.com/group/artandscience and www.create.uwe.ac.uk. Related papers to this research: a report on the CREATE Italian event, Colour in Cultural Heritage; C. Parraman, A. Rizzi, 'Developing the CREATE network in Europe', in Colour in Art, Design and Nature, Edinburgh, 24 October 2008; C. Parraman, 'Mixing and describing colour', CREATE (Training event 1), France, 2008.
    • 
