
    Part decomposition of 3D surfaces

    This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts that agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a theory of human vision known as the minima rule, which states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo measures of surface curvature; thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process consisting of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and the Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art.
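
    As a rough illustration of the curvature-driven boundary selection described in this abstract, the sketch below flags mesh vertices with negative discrete curvature as candidate part-boundary locations. It substitutes a simple angle-deficit estimate of Gaussian curvature for the dissertation's Normal Vector Voting estimator, and the function names are illustrative rather than taken from the dissertation.

```python
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """Discrete Gaussian curvature per vertex via angle deficit: 2*pi minus
    the sum of incident triangle angles (closed mesh assumed). A crude
    stand-in for the Normal Vector Voting estimator in the dissertation."""
    deficit = np.full(len(vertices), 2.0 * np.pi)
    for tri in faces:
        pts = vertices[list(tri)]
        for i in range(3):
            a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
            u, v = b - a, c - a
            cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            deficit[tri[i]] -= np.arccos(np.clip(cos_t, -1.0, 1.0))
    return deficit

def candidate_boundary_vertices(vertices, faces):
    """Vertices with negative curvature, i.e. candidate locations for
    part-boundary contours in the spirit of the minima rule."""
    return np.where(angle_deficit_curvature(vertices, faces) < 0.0)[0]

# Toy usage on a regular octahedron (convex, so no candidates are expected).
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
tris = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
        (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
print(candidate_boundary_vertices(verts, tris))  # -> empty array
```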

    Comparing featural and holistic composite systems with the aid of guided memory techniques

    Includes bibliographical references (leaves 138-147). This study compares the effectiveness of two computerised composite construction systems: a holistic, recognition-based system named ID and a featural system that is used internationally, namely FACES. The comparison aimed to test whether ID produces better-quality composites than FACES, and whether these composites could be improved with the aid of context reinstatement techniques, in particular guided memory. Participants (n=64) attended a staged event where they witnessed a female 'numerologist' for 20 minutes. Five weeks later they were asked to return to create a composite of the woman using either FACES or ID. Reconstructions were made in view, from memory after a South African Police interview, or from memory after a guided memory interview. In addition, experts for each system constructed composites of each perpetrator. Studies have reported enhanced identification when multiple composites are combined to create a morph. Hence, the guided memory composites for each perpetrator were morphed to create three ID and three FACES morphs. The complete set of 76 composites was then evaluated by 503 independent judges using matching and rating tasks. The study hypothesised that ID would perform better, but results suggest that the two systems performed equivalently. Results also suggest that the guided memory interview did not have the desired effect of significantly improving participants' memories of the perpetrator, and that, contrary to expectations, the morphed composites performed extremely poorly and were rated the worst and identified the least. Related findings and ideas for future research are discussed.

    The Reflection and Reification of Racialized Language in Popular Media

    This work highlights specific lexical items that have become racialized in specific contextual applications and tests how these words are cognitively processed. It presents the results of a visual world (Huettig et al., 2011) eye-tracking study designed to determine the perception and application of racialized (Coates, 2011) adjectives. To select the racialized adjectives objectively, I developed a corpus of popular media sources designed specifically to suit my research question. I collected publications from digital media sources such as Sports Illustrated, USA Today, and Fortune by scraping articles featuring specific search terms from their websites. This experiment seeks to aid in the demarcation of socially salient groups whose application of racialized adjectives to racialized images is near-instantaneous, or at least less questioned. As we view growing social movements that revolve around the significant marks unconscious assumptions leave on American society, revealing how and where these lexical assignments arise and thrive allows us to interrogate the forces that build and reify such biases. Future research should attempt to address the harmful semiotics these lexical choices sustain.
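
    The corpus construction described above is keyword-filtered web scraping. The sketch below shows one minimal way such collection could be done, assuming hypothetical article URLs and placeholder search terms; the actual sources, terms, and tooling used by the author are not specified here.

```python
import requests
from bs4 import BeautifulSoup

SEARCH_TERMS = {"explosive", "gritty"}   # placeholder adjectives, not the study's actual terms
ARTICLE_URLS = [
    "https://example.com/article-1",      # hypothetical URLs, not the real media sources
    "https://example.com/article-2",
]

def scrape_matching_articles(urls, terms):
    """Fetch each page, extract paragraph text, and keep articles that
    contain at least one of the search terms."""
    corpus = []
    for url in urls:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        text = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
        if any(term in text.lower() for term in terms):
            corpus.append({"url": url, "text": text})
    return corpus

corpus = scrape_matching_articles(ARTICLE_URLS, SEARCH_TERMS)
print(len(corpus), "articles matched")
```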

    Automatic facial recognition based on facial feature analysis


    Portraits, Likenesses, Composites? Facial Difference in Forensic Art

    The police composite sketch is arguably the most fundamental example of forensic art, and one which enjoys considerable cultural prominence. Intended to produce a positive identification of a specific individual, composites are a form of visual intelligence rather than hard evidence. Based on verbal descriptions drawn from memory deriving from highly contingent and possibly traumatic events, composites are by definition unique and precarious forensic objects, representing an epistemological paradox in their definition as simultaneous ‘artistic impression’ and ‘pictorial statement’. And despite decades of operational use, only in recent years has the field of cognitive psychology begun to fully understand and address the conditions that affect recognition rates both positively and negatively. How might composites contribute to our understanding of representational concepts such as ‘likeness’ and ‘accuracy’? And what role does visual medium – drawn, photographic or computerized depiction – play in the legibility of these images? Situated within the broader context of forensic art practices, this paper proceeds from an understanding that the face is simultaneously crafted as an analogy of the self and a forensic technology. In other words, the face is a space where concepts of identification and identity, sameness and difference (often uncomfortably) converge. With reference to selected examples from laboratory research, field application and artistic practice, I consider how composites, through their particular techniques and form, contribute to subject-making, and how they embody the fugitive, in literal and figurative terms.

    About face, computergraphic synthesis and manipulation of facial imagery

    Thesis (M.S.V.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1982. MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH. VIDEODISC IN ARCHIVES AND ROTCH VISUAL COLLECTIONS. Includes bibliographical references (leaves 87-90). A technique of pictorially synthesizing facial imagery using optical videodiscs under computer control is described. Search, selection and averaging processes are performed on a catalogue of whole faces and facial features to yield a composite, expressive, recognizable face. An immediate application of this technique is the reconstruction of a particular face from memory for police identification; thus the project is called IDENTIDISC. Part I-PACEMAKER describes the production and implementation of the IDENTIDISC system to produce composite faces. Part II-EXPRESSIONMAKER describes animation techniques to add expression and motion to composite faces. Expression sequences are manipulated to make 'anyface' make any face. Historical precedents of making facial composites, theories of facial recognition, classification and expression are also discussed. This thesis is accompanied by two copies of PACEMAKER-III, an optical videodisc produced at the Architecture Machine Group in 1982. The disc can be played on an optical videodisc player. The length is approximately 15,000 frames. Frame numbers are indicated in the text by [ ]. By Peggy Weil. M.S.V.S.
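
    The search and averaging described above were performed on optical videodisc frames; purely as a toy digital analogue of the averaging step, the sketch below averages a stack of pre-aligned grayscale face images with NumPy. The image sizes and data are stand-ins, not material from the thesis.

```python
import numpy as np

def average_faces(face_images):
    """Pixel-wise mean of pre-aligned, same-sized grayscale face images,
    returning a single composite image."""
    stack = np.stack([img.astype(float) for img in face_images])
    return stack.mean(axis=0)

# Toy usage with random stand-ins for catalogue faces (64x64 grayscale).
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, size=(64, 64)) for _ in range(5)]
composite = average_faces(faces)
print(composite.shape)  # (64, 64)
```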

    Being observed detrimentally affects face perception

    In Experiment 1, simulated social pressure was manipulated through two factors: whether participants believed they were interacting with others via a webcam, and whether they believed they were being recorded. Participants who believed they were being recorded were significantly less accurate at recognising faces than those who did not believe they were being recorded. In Experiment 2, we found that the recognition of own-ethnicity faces was negatively affected by observation but the recognition of other-ethnicity faces was not, and then only when participants were observed during learning. Experiment 3 demonstrated that observation affected the recognition of upright faces more than that of objects and inverted faces. Experiment 4 showed that observation does not affect the amount of holistic processing engaged in, but does affect how people view faces. These results indicate that expert face recognition is susceptible to increased error if participants are being observed whilst encoding faces.

    Interactive-time vision--face recognition as a visual behavior

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 1991. Includes bibliographical references (leaves 107-115). By Matthew Alan Turk. Ph.D.

    Block-level discrete cosine transform coefficients for autonomic face recognition

    This dissertation presents a novel method of autonomic face recognition based on the recently proposed biologically plausible network of networks (NoN) model of information processing. The NoN model is based on locally parallel and globally coordinated transformations. In the NoN architecture, the neurons or computational units form distributed networks, which themselves link to form larger networks. In the general case, an n-level hierarchy of nested distributed networks is constructed. This models the structures of the cerebral cortex described by Mountcastle and the architecture for information processing proposed by Sutton. In the implementation proposed in this dissertation, the image is processed by a nested family of locally operating networks together with a hierarchically superior network that classifies the information from each of the local networks. This approach yields a contrast sensitivity function (CSF) that peaks in the middle of the spectrum, as in the human visual system. The input images are divided into blocks to define the local regions of processing. The two-dimensional Discrete Cosine Transform (DCT), a spatial-frequency transform, is used to transform the data into the frequency domain. Statistical operators that compute various functions of spatial frequency within each block are then used to produce a block-level DCT coefficient. The image is thus transformed into a variable-length feature vector that is trained with respect to the data set. Classification is performed by a backpropagation neural network. The proposed method yields excellent results on a benchmark database, with a maximum recognition accuracy of 98.5% and an average of 97.4%. An advanced version of the method, in which the local processing is done on offset blocks, has also been developed. This validates the NoN approach, and further research using local processing as well as more advanced global operators is likely to yield even better results.
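
    A rough sketch of the block-level DCT feature extraction described above follows; the block size and the summary statistic (mean absolute AC energy) are illustrative assumptions rather than the dissertation's exact operators, and the classifier stage is only noted in a comment.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(image, block_size=8):
    """Tile a grayscale image into non-overlapping blocks, take each block's
    2D DCT, and reduce it to one coefficient per block (here, mean absolute
    AC energy) to form the image's feature vector."""
    h = image.shape[0] - image.shape[0] % block_size
    w = image.shape[1] - image.shape[1] % block_size
    features = []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            block = image[r:r + block_size, c:c + block_size].astype(float)
            coeffs = dctn(block, type=2, norm="ortho")
            coeffs[0, 0] = 0.0  # discard the DC term
            features.append(np.abs(coeffs).mean())
    return np.asarray(features)

# Toy usage on a random 32x32 "face"; a real pipeline would feed this vector
# to a backpropagation network (e.g. a multilayer perceptron classifier).
img = np.random.default_rng(1).random((32, 32))
print(block_dct_features(img).shape)  # (16,)
```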