
    Body image distortion in photography

    This thesis investigates the theory that photography is, in terms of body image perception, an intrinsically distorting and often fattening medium. In the professional practice of photography, film and television, there is a widely held belief that the camera "adds 10lbs" to the portrayed weight of actors and presenters. The primary questions addressed here are the true extent of this fattening effect, the perceptual mechanisms to which it can be ascribed, and whether it can be counteracted in common practice. Current theories of the perception of photographic images rarely, if ever, discuss the medium's accuracy in recording the original scene. Many users assume that a photograph conveys essentially the same information they would have perceived had they been present at the scene. Further, it is generally accepted that photographs are an accurate, veridical and scientific method of record, and that their content should be trusted unless there is evidence of technical failure, editing or deliberate tampering. This thesis investigates whether this level of trust is appropriate, specifically by examining the reliability of photography in reproducing the face and form of human subjects. Body Image Distortion (B.I.D.) is a term normally used to describe the primary diagnostic symptom of the eating disorder anorexia nervosa. However, it is demonstrated here that people viewing 2D photographic portraits often make very significant overestimations of size relative to otherwise identical stereoscopic images. The conclusion is that the loss of stereoscopic information in conventional 2D photography causes distortions of perceived body image, typically seen as a distinct flattening and fattening effect. A second fattening effect was also identified in the use of telephoto lenses.
It is demonstrated, using psychophysical experiments and geometric analysis, that these 2D images cannot convey the same spatial or volumetric information that normal human orthostereoscopic perception provides. The evidence gathered suggests that, for scenes to be reproduced as accurately as possible, the Human Visual System requires images that are orthostereoscopic: captured using two cameras that mimic as closely as possible the natural vergences, angle of view, depth of field, magnification, brightness, contrast and colour. The experiments reported use three different size-estimation methodologies: stereoscopic versus monocular comparisons of human and virtual targets, bodyweight estimations in portraits taken at differing camera-to-subject distances, and synoptic versus direct viewing comparisons. These three techniques were used because photographic images are typically made without disparity and accommodation/vergence information, but with magnifications greater than those found in direct viewing of a target. By separately analysing the effects of disparity, magnification and accommodation/vergence, the reported experiments show how changes in each condition can affect size estimation in photographs. The data suggest that photographs made without orthostereoscopic information lead to predictably distorted perception, and that conventional 2D imaging will almost always cause a significant flattening and fattening effect. In addition, it is argued that the conveyed jaw size, in relation to neck width, is an important factor in bodyweight perception and leads to sexually dimorphic perception: disproportionately larger estimations of bodyweight are made for female faces than for male faces under the same photographic conditions.
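The distance-dependent flattening described above falls out of simple pinhole geometry, in which magnification is inversely proportional to depth. A minimal sketch (the 0.10m face relief and the camera distances are illustrative assumptions, not measurements from the thesis):

```python
# Pinhole-projection sketch of the camera-distance ("telephoto") effect.
# The face relief and distances below are illustrative assumptions.

FACE_RELIEF = 0.10  # assumed nose-tip-to-ear depth of a face, in metres

def relative_magnification(camera_distance, relief=FACE_RELIEF):
    """How much larger the nearest facial plane (nose) is rendered than
    the farthest plane (ears), relative to their true sizes: under
    pinhole projection each plane is magnified in proportion to 1/depth."""
    return (camera_distance + relief) / camera_distance

for d in (0.5, 1.0, 3.0):
    pct = 100 * (relative_magnification(d) - 1)
    print(f"at {d} m the nose plane is rendered {pct:.1f}% larger than the ears")
```

At a portrait distance of around half a metre the nose plane is magnified roughly 20% relative to the ears, exaggerating facial convexity; at typical telephoto working distances the difference collapses to a few percent, so the rendered face is geometrically flatter.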

    Human factors in the perception of stereoscopic images

    Research into stereoscopic displays is largely divided into how stereo 3D content looks, a field concerned with distortion, and how such content feels to the viewer, that is, comfort. Seldom, however, are these measures presented simultaneously. Both comfortable displays with unacceptable 3D and uncomfortable displays with great 3D are undesirable, and these two scenarios can render conclusions based on either measure alone moot and impractical. Furthermore, there is a consensus that more disparity correlates directly with greater viewer discomfort. The experiments in this dissertation challenge this notion and argue for a more nuanced account based on acquisition factors such as interaxial distance (IA) and post-processing in the form of horizontal image translation (HIT). This research seeks to measure tolerance limits for viewing comfort and perceptual distortions across different camera separations. In the experiments, HIT and IA were altered together. Following Banks et al. (2009), the stimuli were simple stereoscopic hinges, and the perceived angle was measured as a function of camera separation. Predictions from a ray-tracing model were compared with the perceived 3D shape obtained psychophysically. Participants were asked to judge the angles of 250 hinges at different camera separations (IA and HIT remained linked across a 20 to 100mm range, while the angles ranged between 50° and 130°). Comfort data were obtained using a five-point Likert scale on each trial. Stimuli were presented in orthoscopic conditions with screen and observer field of view (FOV) matched at 45°. The 3D hinge and experimental parameters were run across three distinct series of experiments.
The first series replicated a typical laboratory scenario in which screen position was unchanged (Experiment I); the second presented scenarios representative of real-world applications for a single viewer (Experiments II, III, and IV); and the third presented real-world applications for multiple viewers (Experiment V). While the laboratory scenario revealed that viewer comfort was greatest when a virtual hinge was placed on the screen plane, the single-viewer experiments revealed that into-the-screen stereo stimuli were judged flatter, while out-of-screen content was perceived more veridically. The multi-viewer scenario revealed a marked decline in comfort for off-axis viewing, but no commensurate effect on distortion; importantly, hinge angles were judged as being the same for off-axis viewing at angles of up to 45°. More specifically, the main results are as follows. 1) Increased viewing distance enhances viewer comfort for stereoscopic perception. 2) The amount of disparity present was not correlated with comfort, nor was comfort correlated with angular distortion. 3) Distortion is affected by hinge placement relative to the screen; there is a significant effect on comfort only when the camera separation is 60mm. 4) There is a perceptual bias related to the depth orientation of stimuli: into-the-screen stimuli were judged as flatter than out-of-screen stimuli. 5) Neither perceived distortion nor perceived comfort is affected by oblique viewing. In conclusion, the laboratory experiment highlights the limitations of extrapolating a controlled empirical stimulus into a less controlled “real world” environment. The typical usage scenarios consistently reveal no correlation between the amount of screen disparity (parallax) in the stimulus and the comfort rating.
The final usage scenario reveals a perceptual constancy in off-axis viewing conditions for angles of up to 45°, which, as reported, is not predicted by a typical ray-tracing model. Stereoscopic presentation with non-orthoscopic HIT may give comfortable 3D; however, there is good reason to believe that this 3D is not perceived veridically. Comfortable 3D is often incorrectly converged because of the differences between the distances specified by disparity and by monocular cues. This conflict between monocular and stereo cues in the presentation of S3D content leads to a loss of veridicality, i.e. a perception of flatness. Therefore, correct HIT is recommended as the starting point for creating realistic and comfortable 3D, and the data show this factor to be far more important than limiting screen disparity (i.e. parallax). Based on these findings, this study proposes a predictive model of stereoscopic space for 3D content generators who require flexibility in acquisition parameters. This is important because there are no existing data for viewing conditions in which the acquisition parameters are changed.
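A ray-tracing model of the kind the perceptual data are compared against can be sketched by intersecting the two eyes' lines of sight through the on-screen image points. The eye separation and viewing distance below are generic assumptions, not the dissertation's parameters:

```python
def perceived_distance(parallax, eye_sep=0.065, view_dist=2.0):
    """Distance (m) from viewer to the fused point, by similar triangles.
    Positive parallax = uncrossed (point behind the screen); negative
    parallax = crossed (point in front of it)."""
    if parallax >= eye_sep:
        raise ValueError("parallax >= eye separation: lines of sight diverge")
    return eye_sep * view_dist / (eye_sep - parallax)

def apply_hit(parallax, shift):
    """Horizontal image translation adds a uniform offset to every
    on-screen parallax, sliding the whole reconstructed scene in depth."""
    return parallax + shift

print(perceived_distance(0.0))     # zero parallax sits on the screen plane: 2.0 m
print(perceived_distance(-0.065))  # crossed by one eye separation: 1.0 m
```

The model predicts depth from screen geometry alone; the experiments above identify where perception departs from such predictions, notably under oblique viewing, where the model forecasts distortions that viewers do not report.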

    Arguments Against a Configural Processing Account of Familiar Face Recognition

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition.

    Camera-to-subject distance affects face configuration and perceived identity

    Face identification is reliable for viewers who are familiar with the face, and unreliable for viewers who are not. One account of this contrast is that people become good at recognising a face by learning its configuration: the specific pattern of feature-to-feature measurements. In practice, these measurements differ across photos of the same face, because objects appear more flat or more convex depending on their distance from the camera. Here we connect this optical understanding to face configuration and identification accuracy. Changing camera-to-subject distance (0.32m versus 2.70m) impaired perceptual matching of unfamiliar faces, even though the images were presented at the same size. Familiar face matching was accurate across conditions. Reinstating valid distance cues mitigated the performance cost, suggesting that perceptual constancy compensates for distance-related changes in optical face shape. Acknowledging these distance effects could reduce identification errors in applied settings such as passport control.
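The optical point above can be made concrete with pinhole projection: the measured ratio between a near feature and a farther feature on the same face changes with camera distance, so the "configuration" recorded in an image is not invariant. The feature widths and the 0.10m depth gap below are illustrative assumptions, though the two camera distances are those used in the study:

```python
def image_ratio(width_near, width_far, cam_dist, depth_gap=0.10):
    """Ratio of the projected widths of a near feature (e.g. the nose)
    and a farther feature (e.g. the ear-to-ear span) under pinhole
    projection, where each feature projects in proportion to 1/depth."""
    return (width_near / cam_dist) / (width_far / (cam_dist + depth_gap))

# The same hypothetical face photographed at the study's two distances
# yields different feature-to-feature measurements in the image:
close = image_ratio(0.04, 0.14, 0.32)  # ~0.375
far = image_ratio(0.04, 0.14, 2.70)    # ~0.296
print(close, far, close / far)         # the measured ratio shifts by ~27%
```

A configuration-learning account must therefore explain how familiar-face matching stays accurate even though no single feature-to-feature pattern is stable across photographs.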

    Why has research in face recognition progressed so slowly? The importance of variability

    Despite many years of research, there has been surprisingly little progress in our understanding of how faces are identified. Here I argue that there are two contributory factors: (a) our methods have obscured a critical aspect of the problem, within-person variability; and (b) research has tended to conflate familiar and unfamiliar face processing. Examples of procedures for studying variability are given, and a case is made for studying real faces, of the type people recognize every day. I argue that face recognition (specifically identification) may only be understood by adopting new techniques that acknowledge statistical patterns in the visual environment. As a consequence, some of our current methods will need to be abandoned.

    Spatial and human factors affecting image quality and viewer experience of stereoscopic 3D in television and cinema

    PhD Thesis. The horizontal offset in the two eyes’ locations in the skull means that they receive slightly different images of the world. The visual cortex uses these disparities to calculate where in depth different objects are, both absolutely (physical distance from the viewer, perceived very imprecisely) and relatively (whether one object is in front of another, perceived with great precision). For well over a century, stereoscopic 3D (S3D) technology has existed which can generate an artificial sense of depth by displaying images with slight disparities to the two retinas. S3D technology is now considerably cheaper to access in the home, but remains a niche market, partly reflecting problems with viewer experience and enjoyment of S3D. This thesis considers some of the factors that could affect viewer experience of S3D content. While S3D technology can give a vivid depth percept, it can also lead to distortions in perceived size and shape, particularly if content is viewed at the wrong distance or angle. Almost all S3D content is designed for a viewing angle perpendicular to the screen and a recommended viewing distance, but little is known about the viewing distance typically used for S3D, or about the effect of viewing angle. Accordingly, Chapter 2 of this thesis reports a survey of members of the British public. Chapters 3 and 4 report two experiments, one designed to assess the effect of oblique viewing, and another to consider the interaction between S3D and perceived size. S3D content is expensive to generate, hence producers sometimes “fake” 3D by shifting 2D content behind the screen plane. Chapter 5 investigates viewer experience with this fake 3D and finds that it is not a viable substitute for genuine S3D, while also examining whether viewers fixate on different image features when video content is viewed in S3D as compared to 2D. This work was part-funded by BSkyB and EPSRC as a CASE PhD studentship supporting PH.

    The Forensic Identification of CCTV Images of Unfamiliar Faces

    Government and private crime prevention initiatives in recent years have resulted in the increasingly widespread establishment of Closed Circuit Television (CCTV) systems. This thesis discusses the history, development, social impact and efficacy of video surveillance, with particular emphasis on the admissibility in court of CCTV evidence for identification purposes. Indeed, a verdict may depend on the judgement by members of a jury that the defendant is depicted in video footage. A series of 8 experiments, mainly employing a single-item identity-verification simultaneous matching design, was conducted to evaluate human ability in this context, using both photographs and actors present in person as targets. Across all experiments, some trials were target-absent, in which a physically matched distracter replaced the target. Specific features were varied, such as video quality, the age of participants, the use of disguise, and the period of time between image acquisition and the identification session. Across all experiments, performance was found to be error-prone, even when the quality of the images was high and targets were depicted in close-up. Further experiments examined jury decision making when presented with CCTV evidence, and also whether extensive examination of images would aid identification performance. In addition, evidence may be presented in court by facial structure experts in order to verify the identity of an offender caught on CCTV. Some of these methods are discussed, and a software package was designed to aid the identification of facial landmarks in photographs and to provide a database of the physical and angular distances between them. In a series of analyses, the system was found on the majority of measures to be more reliable than humans at facial discrimination. All results are discussed in a forensic context, and the implications for current legal practices are considered.

    Face Recognition in Challenging Situations

    A great deal of previous research has demonstrated that face recognition is unreliable for unfamiliar faces and reliable for familiar faces. However, such findings typically come from tasks that used ‘cooperative’ images, where there was no deliberate attempt to alter apparent identity. In applied settings, images are often far more challenging. For example, multiple images of the same identity may appear to be different identities, due either to incidental changes in appearance (such as age- or style-related change, or differences in image capture) or to deliberate changes (evading one's own identity through disguise). At the same time, images of different identities may look like the same person, due either to incidental changes (natural similarities in appearance) or to deliberate changes (attempts to impersonate someone else, as in identity fraud). Thus, past studies may have underestimated the applied problem. In this thesis I examine face recognition performance for these challenging image scenarios and test whether the familiarity advantage extends to these situations. I found that face recognition was indeed even poorer for challenging images than previously found using cooperative images. Familiar viewers were still better than unfamiliar viewers, yet familiarity did not bring performance to ceiling for challenging images as it had done in past cooperative tasks. I investigated several ways of improving performance, including image manipulations, exploiting perceptual constancy, crowd analysis of identity judgements, and viewing by super-recognisers. This thesis provides insights into what familiar viewers learn as they become familiar with a face. It also has important practical implications, both for improving performance in challenging situations and for understanding deliberate disguise.

    Within-person variability in facial appearance

    Within-person variability has largely been neglected in face processing research, which has typically focused instead on between-person variability. In experimental settings, between-person variability often becomes between-image variability, with research using one image to represent a face. This implies that a single image is an adequate representation of a face; however, one image cannot illustrate the variability that can occur in facial appearance. This thesis argues that overlooking within-person variability is a fundamental flaw in face processing research, as within-person variability is surprisingly large. The experiments in this thesis illustrate the effect of within-person variability in different face processing contexts: face identification, face perception and image memory. Experiments 1–7 demonstrate the difficulty of identifying familiar and unfamiliar faces across within-person variability using an image-sorting task. Experiment 8 illustrates the within-person variability that occurs in personality perception, and Experiments 9 and 10 illustrate the within-person variability that occurs in the perception of facial attractiveness. Lastly, Experiments 11 and 12 introduce within-person variability to memory recognition and demonstrate the difficulty of remembering multiple images of the same face. From the results of Experiments 1–12 it is concluded that within-person variability is highly influential in all the discussed areas of face processing and therefore needs to be taken seriously in face processing research.