Face recognition under varying pose: The role of texture and shape
Although remarkably robust, face recognition is not perfectly invariant to pose and viewpoint changes. It has long been known that both the profile and the full-face view yield worse recognition performance than views within that range. However, few data exist that investigate this phenomenon in detail. This work aims to provide such data using a high angular resolution and a large range of poses. Since there are inconsistencies in the literature concerning these issues, we emphasize the different roles of the learning view and the testing view in the recognition experiment, and the role of information contained in the texture and in the shape of a face. Our stimuli were generated from laser-scanned head models and contained either the natural texture or only Lambertian shading and no texture. The results of our same/different face recognition experiments are: 1. Only the learning view, but not the testing view, affects recognition performance. 2. For the textured faces, the optimal learning view is closer to the full-face view than for the shaded faces. 3. For the shaded faces, we find significantly better recognition performance for the symmetric view. The results can be interpreted in terms of different strategies to recover invariants from texture and from shading.
Optimizing the colour and fabric of targets for the control of the tsetse fly Glossina fuscipes fuscipes
Background:
Most cases of human African trypanosomiasis (HAT) start with a bite from one of the subspecies of Glossina fuscipes. Tsetse use a range of olfactory and visual stimuli to locate their hosts, and this response can be exploited to lure tsetse to insecticide-treated targets, thereby reducing transmission. To provide a rational basis for cost-effective target designs, we undertook studies to identify the optimal target colour.
Methodology/Principal Findings:
On the Chamaunga islands of Lake Victoria, Kenya, studies were made of the numbers of G. fuscipes fuscipes attracted to targets consisting of a panel (25 cm square) of various coloured fabrics flanked by a panel (also 25 cm square) of fine black netting. Both panels were covered with an electrocuting grid to catch tsetse as they contacted the target. The reflectances of the 37 different-coloured cloth panels utilised in the study were measured spectrophotometrically. Catch was positively correlated with percentage reflectance at the blue (460 nm) wavelength and negatively correlated with reflectance at UV (360 nm) and green (520 nm) wavelengths. The best target was subjectively blue, with percentage reflectances of 3%, 29%, and 20% at 360 nm, 460 nm and 520 nm respectively. The worst target was also, subjectively, blue, but with high reflectances at UV (35% reflectance at 360 nm) wavelengths as well as blue (36% reflectance at 460 nm); the best low UV-reflecting blue caught 3× more tsetse than the high UV-reflecting blue.
Conclusions/Significance:
Insecticide-treated targets to control G. f. fuscipes should be blue with low reflectance in both the UV and green bands of the spectrum. Targets that are subjectively blue will perform poorly if they also reflect UV strongly. The selection of fabrics for targets should be guided by spectral analysis of the cloth across both the spectrum visible to humans and the UV region.
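The core of the analysis described above is a correlation between trap catch and reflectance in three spectral bands across the 37 targets. The sketch below illustrates that procedure only; the original reflectance and catch data are not reproduced here, so all numbers are synthetic stand-ins, with catches generated to follow the reported pattern (attraction to blue, aversion to UV and green).

```python
# Illustrative sketch of the catch-vs-reflectance correlation analysis.
# All data below are synthetic; only the procedure mirrors the study.
import numpy as np

rng = np.random.default_rng(1)
n_targets = 37  # number of coloured cloth panels in the study

# Synthetic percentage reflectances at 360 nm (UV), 460 nm (blue), 520 nm (green).
refl = {
    "uv_360": rng.uniform(0, 40, n_targets),
    "blue_460": rng.uniform(0, 40, n_targets),
    "green_520": rng.uniform(0, 40, n_targets),
}

# Generate catches that follow the reported pattern: positive effect of
# blue reflectance, negative effects of UV and green (Poisson count noise).
expected = 200 + 3 * refl["blue_460"] - 2 * refl["uv_360"] - 2 * refl["green_520"]
catch = rng.poisson(expected)

# Pearson correlation of catch with reflectance in each band.
corrs = {band: np.corrcoef(r, catch)[0, 1] for band, r in refl.items()}
for band, r in corrs.items():
    print(f"{band}: r = {r:+.2f}")
```

With real field data, the catches would of course be observed rather than simulated, and the sign and magnitude of each correlation would be an empirical result rather than built in.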
Classifying faces by sex is more accurate with 3D shape information than with texture
Purpose: We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods: 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A "leave-one-out" technique was applied to measure the generalizability of the perceptron's sex predictions. Results: While very good sex generalization performance was obtained for both representations, even with very low dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For 3D data, 96.9% correct generalization was achieved with 17 projection coefficients. Conclusions: These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.
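The pipeline described above (PCA projection, perceptron classification, leave-one-out generalization) can be sketched as follows. The laser-scan data are not available here, so the example uses random synthetic "faces"; the array shapes, the injected male/female difference, and the resulting accuracy are illustrative assumptions, not the study's values.

```python
# Sketch of a PCA + perceptron sex-classification pipeline with
# leave-one-out evaluation, using synthetic stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Stand-in for 130 flattened head representations (3D shape or texture).
n_faces, n_features = 130, 500
X = rng.normal(size=(n_faces, n_features))
y = np.repeat([0, 1], n_faces // 2)   # 65 "male", 65 "female" labels
X[y == 1] += 0.5                      # inject a separable class difference

def loo_accuracy(X, y, n_components):
    """Leave-one-out accuracy of a perceptron trained on the first
    n_components PCA projection coefficients."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        # Fit PCA on the training faces only, then project both sets.
        pca = PCA(n_components=n_components).fit(X[train_idx])
        clf = Perceptron(max_iter=1000, random_state=0)
        clf.fit(pca.transform(X[train_idx]), y[train_idx])
        correct += int(clf.predict(pca.transform(X[test_idx]))[0] == y[test_idx][0])
    return correct / len(y)

acc = loo_accuracy(X, y, n_components=17)
print(f"Leave-one-out accuracy with 17 coefficients: {acc:.3f}")
```

Note that the PCA is refit inside each leave-one-out fold so the held-out face never influences the subspace, which matches the generalization logic described in the abstract.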
Panel: Bodily Expressed Emotion Understanding Research: A Multidisciplinary Perspective
Developing computational methods for bodily expressed emotion understanding can benefit from the knowledge and approaches of multiple fields, including computer vision, robotics, psychology/psychiatry, graphics, data mining, machine learning, and movement analysis. The panel, consisting of active researchers in some closely related fields, attempts to open a discussion on the future of this new and exciting research area. This paper documents the opinions expressed by the individual panelists.
Visual ecology of aphids – a critical review on the role of colours in host finding
We review the rich literature on behavioural responses of aphids (Hemiptera: Aphididae) to stimuli of different colours. Only for one species are there adequate physiological data on spectral sensitivity to explain its behaviour crisply in mechanistic terms.
Because of the great interest in aphid responses to coloured targets from an evolutionary, ecological and applied perspective, there is a substantial need to expand these studies to more species of aphids, and to quantify spectral properties of stimuli rigorously. We show that aphid responses to colours, at least for some species, are likely based on a specific colour opponency mechanism, with positive input from the green domain of the spectrum and negative input from the blue and/or UV region.
We further demonstrate that the usual yellow preference of aphids encountered in field experiments is not a true colour preference but involves additional brightness effects. We discuss the implications for agriculture and sensory ecology, with special respect to the recent debate on autumn leaf colouration. We illustrate that recent evolutionary theories concerning aphid–tree interactions imply far-reaching assumptions on aphid responses to colours that are not likely to hold. Finally, we also discuss the implications for developing and optimising strategies of aphid control and monitoring.
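The colour-opponency mechanism proposed above (excitatory input from the green region of the spectrum, inhibitory input from blue and/or UV) can be expressed as a simple signed sum. The weights below are hypothetical illustrations, not fitted physiological values.

```python
# Minimal sketch of a green-positive, blue/UV-negative opponency unit.
# Weights are illustrative assumptions, not measured values.

def opponent_response(green, blue, uv, w_green=1.0, w_blue=0.5, w_uv=0.5):
    """Signed opponent signal: positive values would drive attraction."""
    return w_green * green - w_blue * blue - w_uv * uv

# A green-dominated stimulus (e.g. foliage-like) yields a positive signal,
# while a blue/UV-rich stimulus yields a negative one.
print(opponent_response(green=0.8, blue=0.1, uv=0.1))
print(opponent_response(green=0.1, blue=0.6, uv=0.5))
```

Such a mechanism predicts that apparent "yellow preference" can arise without a dedicated yellow channel, since yellow stimuli excite the green receptor strongly while contributing little blue or UV input.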
Appraising the intention of other people: Ecological validity and procedures for investigating effects of lighting for pedestrians
One of the aims of outdoor lighting in public spaces such as pathways and subsidiary roads is to help pedestrians evaluate the intentions of other people. This paper discusses how a pedestrian's appraisal of another person's intentions in artificially lit outdoor environments can be studied. We review the visual cues that might be used, and the experimental design with which effects of changes in lighting could be investigated to best resemble the pedestrian experience in artificially lit urban environments. Proposals are made to establish appropriate operationalisation of the identified visual cues, choice of methods, and measurements representing critical situations. It is concluded that the intentions of other people should be evaluated using facial emotion recognition; eye tracking data suggest a tendency to make these observations at an interpersonal distance of 15 m and for a duration of 500 ms. Photographs are considered suitable for evaluating the effect of changes in light level and spectral power distribution. To support investigation of changes in spatial distribution, further investigation is needed with 3D targets. Further data are also required to examine the influence of glare.
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex
Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system's optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions.
Funding: National Science Foundation (U.S.) Science and Technology Center (Award CCF-1231216); National Science Foundation (U.S.) (Grants NSF-0640097 and NSF-0827427); United States Air Force Office of Scientific Research (Grant FA8650-05-C-7262); Eugene McDermott Foundation.
Do I Have My Attention? Speed of Processing Advantages for the Self-Face Are Not Driven by Automatic Attention Capture
We respond more quickly to our own face than to other faces, but there is debate over whether this is connected to attention-grabbing properties of the self-face. In two experiments, we investigate whether the self-face selectively captures attention, and the attentional conditions under which this might occur. In both experiments, we examined whether different types of face (self, friend, stranger) provide differential levels of distraction when processing self, friend and stranger names. In Experiment 1, an image of a distractor face appeared centrally – inside the focus of attention – behind a target name, with the faces either upright or inverted. In Experiment 2, distractor faces appeared peripherally – outside the focus of attention – in the left or right visual field, or bilaterally. In both experiments, self-name recognition was faster than other name recognition, suggesting a self-referential processing advantage. The presence of the self-face did not cause more distraction in the naming task compared to other types of face, either when presented inside (Experiment 1) or outside (Experiment 2) the focus of attention. Distractor faces had different effects across the two experiments: when presented inside the focus of attention (Experiment 1), self and friend images facilitated self and friend naming, respectively. This was not true for stranger stimuli, suggesting that faces must be robustly represented to facilitate name recognition. When presented outside the focus of attention (Experiment 2), no facilitation occurred. Instead, we report an interesting distraction effect caused by friend faces when processing strangers' names. We interpret this as a "social importance" effect, whereby we may be tuned to pick out and pay attention to familiar friend faces in a crowd. We conclude that any speed of processing advantages observed in the self-face processing literature are not driven by automatic attention capture.
Structural encoding and recognition of biological motion: evidence from event-related potentials and source analysis
