
    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019

    Get PDF
    International audience

    Face recognition in the wild.

    Get PDF
    Research in face recognition deals with problems related to Age, Pose, Illumination and Expression (A-PIE), and seeks approaches that are invariant to these factors. Video images add a temporal aspect to the image acquisition process. A further degree of complexity, beyond A-PIE recognition, arises when the available information about a person may be distorted, partially occluded, or disguised, and when the imaging conditions are entirely unorthodox. A-PIE recognition under these circumstances becomes truly “wild”, and Face Recognition in the Wild has therefore emerged as a field of research in recent years. Its main purpose is to challenge constrained approaches to automatic face recognition by emulating some of the virtues of the Human Visual System (HVS), which is very tolerant to age, occlusion and distortions in the imaging process. The HVS also integrates information about individuals and context to recognize people within an activity or behavior. Machine vision still has a very long way to go to emulate the HVS, and face recognition in the wild is one step along that road. In this thesis, Face Recognition in the Wild is defined as unconstrained face recognition under A-PIE+; the (+) connotes any alterations to the design scenario of the face recognition system. The thesis evaluates the Biometric Optical Surveillance System (BOSS), developed at the CVIP Lab, using low-resolution imaging sensors. Specifically, it tests BOSS using cell-phone cameras and examines the potential of facial biometrics on smart portable devices such as iPhones, iPads, and tablets. For quantitative evaluation, the thesis focused on a specific testing scenario of the BOSS software using iPhone 4 cell phones and a laptop. Testing was carried out indoors, at the CVIP Lab, with 21 subjects at distances of 5, 10 and 15 feet, with three poses, two expressions and two illumination levels.
    The three steps of the BOSS system (detection, representation and matching) were tested in this imaging scenario. False positives in face detection increased with distance and with pose angles above ±15°. The overall identification rate (face detection at confidence levels above 80%) also degraded with distance, pose, and expression. Indoor lighting added further challenges by inducing shadows that affected image quality and overall system performance. While the limited number of subjects and the somewhat constrained imaging environment do not fully constitute a “wild” imaging scenario, they provided deep insight into the issues of automatic face recognition. The recognition-rate curves demonstrate the limits of low-resolution cameras for face recognition at a distance (FRAD), yet they also provide a plausible case for A-PIE face recognition on portable devices.
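    The detection–representation–matching structure of the evaluated pipeline can be sketched end to end. The snippet below is a toy stand-in (the actual BOSS detector and feature extractor are not public): it uses a flattened, L2-normalised pixel patch as the representation and cosine-similarity nearest-neighbour matching, with the same 0.80 confidence threshold used in the evaluation. All names, image sizes and the noise model are illustrative assumptions.

```python
import numpy as np

def represent(face_img):
    """Toy representation step: flatten and L2-normalise the pixel patch.
    (Stand-in for the real BOSS feature extractor.)"""
    v = face_img.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def match(probe, gallery, threshold=0.80):
    """Matching step: cosine-similarity nearest neighbour with a
    confidence threshold, mirroring the >80% level in the evaluation."""
    scores = {sid: float(probe @ rep) for sid, rep in gallery.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best, scores[best]
    return None, scores[best]           # rejected: below confidence level

rng = np.random.default_rng(0)
# 21 enrolled subjects, as in the indoor test; 16x16 patches model low resolution.
enrolled = {f"subject_{i}": rng.random((16, 16)) for i in range(21)}
gallery = {sid: represent(img) for sid, img in enrolled.items()}

# A probe is an enrolled face plus mild sensor noise.
probe = represent(enrolled["subject_7"] + rng.normal(0, 0.05, (16, 16)))
sid, score = match(probe, gallery)
```

In this toy setup the genuine match scores close to 1.0 while impostor scores stay lower, so the threshold separates them; in the real system, distance, pose and lighting push genuine scores down toward the threshold, which is exactly the degradation the evaluation measures.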

    Matrix-based Parameterizations of Skeletal Animated Appearance

    Full text link
    While realistic rendering gains popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational complexity. Real-time applications, such as video games and virtual reality, rely mostly on approximation and precomputation techniques to achieve realistic results. The objective of this thesis is to investigate different animated parameterizations in order to devise a technique that can approximate realistic rendering results in real time. Our investigation focuses on rendering visual effects applied to skinned, skeleton-based characters. Combined parameterizations of motion and appearance data are used to extract parameters for a real-time approximation; establishing a linear dependency between motion and appearance is the basis of our method. We focus on ambient occlusion, a simulation of the shadowing caused by nearby objects that block ambient light, which is assumed uniform. Ambient occlusion is a view-independent technique that is now essential for real-time realism. We consider different parameterization techniques that treat the mesh space depending on skeletal animation information and/or mesh geometry, and we are able to approximate ground-truth ambient occlusion with low error. Our technique can also be extended to other visual effects, such as human skin rendering (subsurface scattering), view-dependent color changes, muscle deformation, fur, or clothing.
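    The core idea, a linear dependency between skeletal motion parameters and per-vertex appearance, can be illustrated with an ordinary least-squares precomputation. The data below is synthetic and all dimensions are hypothetical, but the two-phase structure matches the description above: an offline fit of a matrix mapping pose to ambient occlusion, then a single matrix product per frame at runtime.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_pose, n_verts = 200, 12, 50   # hypothetical sizes

# Offline data: pose parameters per animation frame (e.g. flattened joint
# rotations) and per-vertex ambient occlusion computed by a ground-truth
# renderer. Here the "ground truth" is synthetic and affine in the pose.
P = rng.normal(size=(n_frames, n_pose))
W_true = rng.normal(size=(n_pose, n_verts))
AO = P @ W_true * 0.05 + 0.5              # occlusion values near mid-grey

# Precomputation: fit an affine model AO ~ [P 1] W by least squares.
A = np.hstack([P, np.ones((n_frames, 1))])
W, *_ = np.linalg.lstsq(A, AO, rcond=None)

# Runtime: a new pose maps to per-vertex AO with one matrix product.
p_new = rng.normal(size=(1, n_pose))
ao_pred = np.hstack([p_new, np.ones((1, 1))]) @ W
err = np.abs(ao_pred - (p_new @ W_true * 0.05 + 0.5)).max()
```

Because the synthetic occlusion really is affine in the pose, the fit recovers it almost exactly; on real animated characters the relationship is only approximately linear, which is where the low (but nonzero) error reported above comes from.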

    Measuring perceived gloss of rough surfaces

    Get PDF
    This thesis is concerned with the visual perception of glossy rough surfaces, specifically those characterised by 1/f^β noise. Computer graphics were used to model these natural-looking surfaces, which were generated and animated to provide realistic stimuli for observers. Different methods were employed to investigate the effects of varying surface roughness and reflection-model parameters on perceived gloss. We first investigated how the perceived gloss of a matte Lambertian surface varies with RMS roughness. We then estimated the perceived gloss of moderate-RMS-height surfaces rendered using a gloss reflection model, and found that adjusting the parameters of the gloss reflection model on the moderate-RMS-height surfaces produces levels of gloss similar to those of the high-RMS-height Lambertian surfaces. More realistic stimuli were modelled using improvements in the reflection model, rendering technique, illumination and viewing conditions. In contrast with previous research, a non-monotonic relationship was found between perceived gloss and mesoscale roughness when microscale parameters were held constant. Finally, the joint effect of variations in mesoscale roughness (surface geometry) and microscale roughness (reflection model) on perceived gloss was investigated and tested against conjoint measurement models. It was concluded that the perceived gloss of rough surfaces is significantly affected by surface roughness at both mesoscale and microscale, and can be described by a full conjoint measurement model.
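    Height maps with a 1/f^β amplitude spectrum of the kind used as stimuli can be synthesised by spectral filtering of white noise. The sketch below is a generic construction, not the thesis's exact generator; the β value and grid size are arbitrary choices. It returns a zero-mean surface normalised to unit RMS height, which would then be rescaled to the desired RMS roughness.

```python
import numpy as np

def noise_surface(n=128, beta=1.8, seed=0):
    """Height map whose amplitude spectrum falls off as 1/f**beta.
    Larger beta -> smoother, more 'cloud-like' surfaces."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)                # radial spatial frequency
    f[0, 0] = 1.0                       # avoid divide-by-zero at DC
    white = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    spectrum = white / f**beta          # impose the 1/f^beta falloff
    spectrum[0, 0] = 0.0                # zero mean (no DC offset)
    h = np.fft.ifft2(spectrum).real
    return h / h.std()                  # unit RMS height; rescale to taste

h = noise_surface()
rms = h.std()
```

Multiplying `h` by a target RMS height then gives the mesoscale-roughness parameter varied in the experiments, while the reflection model controls microscale roughness independently.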

    Image-based surface reflectance remapping for consistent and tool-independent material appearance

    Get PDF
    Physically-based rendering in computer graphics requires knowledge of material properties in addition to 3D shapes, textures and colors in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling workflow, leading to a heavy tool dependency of material appearance. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline, usually developed through collaboration among several departments that exchange data. Unfortunately, exchanged data often differs from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour and computational cost. This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy for tackling material appearance consistency. Automatic solutions to this problem are proposed, accounting for the constraints of real-world scenarios in which the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks that work under these constraints are proposed. The first, validated by means of perceptual studies, is aimed at remapping BRDF parameters and is useful when the parameters used for the reference rendering are available. The second provides consistent material appearance across different renderers even when the parameters used for the reference are unknown. It allows the selection of an arbitrary reference rendering tool and manipulates the output of other renderers so that it is consistent with the reference.
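    The black-box, image-based remapping idea can be illustrated in miniature: given only rendered outputs from two shading models (no access to their implementations), search the target model's parameter space for the output closest to the reference. The two one-parameter "renderers" and the 1-D grid search below are deliberately simplistic stand-ins; the thesis operates on full renderings and richer parameter spaces.

```python
import numpy as np

# Two hypothetical black-box shading models, exposing only their outputs,
# mimicking the "no access to the shaders" constraint.
def renderer_a(roughness, angles):
    # Gaussian-like specular lobe over viewing angles
    return np.exp(-(angles / max(roughness, 1e-6)) ** 2)

def renderer_b(glossiness, angles):
    # Phong-like cosine-power lobe: a different parameterisation
    return np.cos(np.clip(angles, 0.0, np.pi / 2)) ** max(glossiness, 1e-6)

angles = np.linspace(0.0, 1.2, 64)       # sampled directions (radians)
reference = renderer_a(0.35, angles)     # the reference "rendering" to match

# Image-based remapping: pick renderer_b's parameter whose output image
# is closest (in MSE) to the reference, using rendered samples only.
candidates = np.linspace(0.5, 200.0, 2000)
errors = [np.mean((renderer_b(g, angles) - reference) ** 2) for g in candidates]
best_gloss = float(candidates[int(np.argmin(errors))])
residual = float(min(errors))
```

The residual is small but nonzero because the two models span different function families; this is exactly the tool-dependency gap the remapping frameworks aim to minimise, with perceptual studies judging whether the residual difference is visible.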
