Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019
Face recognition in the wild.
Research in face recognition deals with problems related to Age, Pose, Illumination and Expression (A-PIE), and seeks approaches that are invariant to these factors. Video images add a temporal aspect to the image acquisition process. Another degree of complexity, above and beyond A-PIE recognition, occurs when multiple pieces of information are known about people, which may be distorted, partially occluded, or disguised, and when the imaging conditions are totally unorthodox. A-PIE recognition in these circumstances becomes really "wild", and Face Recognition in the Wild has therefore emerged as a field of research in the past few years. Its main purpose is to challenge constrained approaches to automatic face recognition by emulating some of the virtues of the Human Visual System (HVS), which is very tolerant to age, occlusion and distortions in the imaging process. The HVS also integrates information about individuals with contextual cues to recognize people within an activity or behavior. Machine vision still has a very long road to travel before it emulates the HVS, and computer-based face recognition in the wild is a step along that road. In this thesis, Face Recognition in the Wild is defined as unconstrained face recognition under A-PIE+; the (+) connotes any alterations to the design scenario of the face recognition system. This thesis evaluates the Biometric Optical Surveillance System (BOSS), developed at the CVIP Lab, using low-resolution imaging sensors. Specifically, the thesis tests BOSS using cell phone cameras, and examines the potential of facial biometrics on smart portable devices such as iPhones, iPads, and tablets. For quantitative evaluation, the thesis focused on a specific testing scenario of the BOSS software using iPhone 4 cell phones and a laptop. Testing was carried out indoors, at the CVIP Lab, using 21 subjects at distances of 5, 10 and 15 feet, with three poses, two expressions and two illumination levels.
The three steps (detection, representation and matching) of the BOSS system were tested in this imaging scenario. False positives in face detection increased with distance and with pose angles above ±15°. The overall identification rate (face detection at confidence levels above 80%) also degraded with distance, pose, and expression. The indoor lighting added further challenges by inducing shadows that affected image quality and the overall performance of the system. While this limited number of subjects and somewhat constrained imaging environment does not fully support a "wild" imaging scenario, it did provide deep insight into the issues with automatic face recognition. The recognition rate curves demonstrate the limits of low-resolution cameras for face recognition at a distance (FRAD), yet they also provide a plausible defense for possible A-PIE face recognition on portable devices.
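The matching stage of a pipeline like the one described above (detection, representation, matching with a confidence threshold) can be sketched as a nearest-neighbor search over face embeddings. This is only an illustrative sketch, not the BOSS implementation: the `identify` function, the gallery layout and the cosine-similarity choice are assumptions, and the 80% cutoff is borrowed from the abstract purely as an example threshold.

```python
import numpy as np

def identify(probe, gallery, threshold=0.80):
    """Match a probe face embedding against a gallery of enrolled
    embeddings; return (subject_id, similarity), with subject_id None
    when the best similarity falls below the confidence threshold.
    Hypothetical helper: embeddings would come from the detection and
    representation stages of an actual system."""
    best_id, best_sim = None, -1.0
    for subject_id, ref in gallery.items():
        # cosine similarity between the two feature vectors
        sim = float(np.dot(probe, ref) /
                    (np.linalg.norm(probe) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = subject_id, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

# Toy 3-D embeddings for two enrolled subjects
gallery = {"s01": np.array([1.0, 0.0, 0.0]),
           "s02": np.array([0.0, 1.0, 0.0])}
print(identify(np.array([0.9, 0.1, 0.0]), gallery))
```

With real embeddings the same thresholding produces the behavior measured in the thesis: as distance and pose degrade the probe embedding, the top similarity drops below the cutoff and the identification rate falls.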
Matrix-based Parameterizations of Skeletal Animated Appearance
While realistic rendering gains more popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational
complexity. Real-time applications, such as video games and virtual reality, rely mostly
on approximation and precomputation techniques to achieve realistic results. The objective
of this thesis is to investigate different animated parameterizations in order to devise
a technique that can approximate realistic rendering results in real time.
Our investigation focuses on rendering visual effects applied to skinned, skeleton-based
characters. Combined parameterizations of motion and appearance data are used
to extract parameters that can be used in a real-time approximation. Trying to establish
a linear dependency between motion and appearance is the basis of our method.
We focus on ambient occlusion, a simulation of shadowing caused by objects that
block ambient light. Ambient occlusion is a view-independent technique important for
realism. We consider different parameterization techniques that treat the mesh space
depending on skeletal animation information and/or mesh geometry.
We are able to approximate ground-truth ambient occlusion with low error. Our
technique can also be extended to different visual effects, such as rendering human skin
(subsurface scattering), changes in color due to the view orientation, deformation of
muscles, fur, or cloth.
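The core idea of a linear dependency between motion and appearance can be sketched as follows: fit, offline, one affine map from skeletal pose features to per-vertex ambient occlusion, so that runtime evaluation reduces to a single matrix product. This is a minimal sketch under assumed data, not the thesis's actual parameterization; the synthetic training set, feature layout and dimensions are all illustrative.

```python
import numpy as np

# Hypothetical offline training data: for each of F animation frames,
# a pose feature vector (e.g. flattened joint rotations) and the
# ground-truth per-vertex ambient occlusion from an offline renderer.
rng = np.random.default_rng(0)
F, P, V = 200, 24, 500                 # frames, pose features, vertices
poses = rng.uniform(-1.0, 1.0, (F, P))
true_W = rng.normal(0.0, 0.1, (P, V))
ao_gt = np.clip(0.7 + poses @ true_W, 0.0, 1.0)   # synthetic "ground truth"

# Offline: least-squares fit of one linear map from pose space to AO.
# A bias column makes the model affine (a rest-pose AO term).
X = np.hstack([poses, np.ones((F, 1))])
W, *_ = np.linalg.lstsq(X, ao_gt, rcond=None)

# Runtime: approximating AO for a new pose is one matrix product,
# which is cheap enough for a real-time shader pipeline.
new_pose = rng.uniform(-1.0, 1.0, P)
ao_approx = np.concatenate([new_pose, [1.0]]) @ W

print(ao_approx.shape)   # (500,) — one AO value per vertex
```

The same structure extends to the other effects mentioned above: any per-vertex appearance signal that varies smoothly with pose can replace the AO targets.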
Measuring perceived gloss of rough surfaces
This thesis is concerned with the visual perception of glossy rough surfaces, specifically those characterised by 1/f^β noise.
Computer graphics were used to model these natural looking surfaces, which were
generated and animated to provide realistic stimuli for observers. Different methods
were employed to investigate the effects of varying surface roughness and reflection
model parameters on perceived gloss.
We first investigated how the perceived gloss of a matte Lambertian surface varies
with RMS roughness. Then we estimated the perceived gloss of moderate-RMS-height
surfaces rendered using a gloss reflection model. We found that adjusting the parameters
of the gloss reflection model on the moderate-RMS-height surfaces produces
gloss levels similar to those of the high-RMS-height Lambertian surfaces.
More realistic stimuli were modelled using improvements in the reflection model,
rendering technique, illumination and viewing conditions. In contrast with previous
research, a non-monotonic relationship was found between perceived gloss and
mesoscale roughness when microscale parameters were held constant. Finally, the
joint effect of variations in mesoscale roughness (surface geometry) and microscale
roughness (reflection model) on perceived gloss was investigated and tested against
conjoint measurement models. It was concluded that perceived gloss of rough surfaces
is significantly affected by surface roughness at both the mesoscale and the microscale,
and can be described by a full conjoint measurement model.
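A 1/f^β height map of the kind used for such stimuli can be synthesised spectrally: shape white noise so its power spectrum falls off as 1/f^β, then rescale to a target RMS roughness. This is a generic sketch of that construction, not the thesis's stimulus-generation code; the function name and parameter values are illustrative.

```python
import numpy as np

def fbm_surface(n=128, beta=1.8, rms=1.0, seed=0):
    """Generate an n x n height map with an approximately 1/f^beta
    power spectrum, rescaled to a chosen RMS roughness (hypothetical
    helper; beta and n are illustrative defaults)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    f[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)     # power ~ 1/f^beta
    amplitude[0, 0] = 0.0              # zero out DC: zero-mean surface
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    spectrum = amplitude * np.exp(1j * phase)
    height = np.fft.ifft2(spectrum).real
    return height * (rms / height.std())   # impose target RMS roughness

surface = fbm_surface(rms=0.5)
print(round(surface.std(), 3))   # 0.5 by construction
```

Varying `beta` changes the balance of mesoscale versus microscale structure, which is exactly the axis along which the roughness manipulations above operate.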
Image-based surface reflectance remapping for consistent and tool-independent material appearance
Physically-based rendering in Computer Graphics requires knowledge of material properties, in addition to 3D shapes, textures and colors, in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling workflow, leading to a heavy tool dependency of material appearance. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline, usually developed through a collaboration among several departments that exchange data. Unfortunately, exchanged data often differs from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour and computational cost.
This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy to tackle material appearance consistency. Automatic solutions to this problem are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks are proposed, working under these constraints.
The first one, validated by means of perceptual studies, is aimed at the remapping of BRDF parameters and is useful when the parameters used for the reference rendering are available. The second one provides consistent material appearance across different renderers, even when the parameters used for the reference are unknown. It allows the selection of an arbitrary reference rendering tool, and manipulates the output of other renderers in order to be consistent with the reference.
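The black-box setting described above, where only a reference rendering and the target renderer are available with no shader access, can be sketched as a search over the target renderer's parameter space for the value that best reproduces the reference image. The two toy "renderers" below are invented for illustration (two incompatible glossiness scales on the same specular lobe); they stand in for the real rendering tools, and the search strategy is a deliberately simple stand-in for the frameworks proposed in the thesis.

```python
import numpy as np

# Two hypothetical renderers that interpret a "glossiness" parameter
# differently: the same specular lobe, but different exponent mappings.
def render_a(gloss, cos_h):            # reference tool
    return cos_h ** (10.0 * gloss)

def render_b(gloss, cos_h):            # target tool, incompatible scale
    return cos_h ** (4.0 * gloss + 2.0)

cos_h = np.linspace(0.01, 1.0, 256)    # stand-in for rendered pixels
reference = render_a(0.8, cos_h)       # "image" produced with renderer A

# Black-box remapping: scan renderer B's parameter space and keep the
# value whose output best matches the reference, pixel-wise, without
# ever looking inside either shader.
candidates = np.linspace(0.0, 3.0, 3001)
errors = [np.mean((render_b(g, cos_h) - reference) ** 2) for g in candidates]
remapped = candidates[int(np.argmin(errors))]

print(round(remapped, 3))   # ~1.5, since 4g + 2 = 10 * 0.8
```

A real remapping would compare full renderings under matched lighting and optimise several BRDF parameters jointly, but the principle is the same: image-space error drives the parameter transfer, so no shader internals are needed.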
- …