    University for the Creative Arts staff research 2011

    This publication brings together a selection of the University’s current research. The contributions foreground areas of research strength, including still and moving image research and applied arts and crafts, as well as emerging fields of investigation such as design and architecture. It also maps thematic concerns across disciplinary areas, focusing on models and processes of creative practice, value formations and processes of identification through art and artefacts, and cross-cultural connectivity. Dr. Seymour Roworth-Stoke

    Probing clustering features around Cl 0024+17

    I present a spatial analysis of the galaxy distribution around the cluster Cl 0024+17. The basic aim is to find the scales at which galaxies show a significant deviation from an inhomogeneous Poisson statistical process. Using generalizations of the Ripley, Besag, and pair correlation functions for non-stationary point patterns, I estimate these transition scales for a set of 1,000 Monte Carlo realizations of the Cl 0024+17 field, corrected for completeness up to the outskirts. The results indicate the presence of at least two physical scales in this field, at 31.4 and 112.9 arcseconds. The second is statistically consistent with the dark matter ring radius (about 75 arcseconds) previously identified by Jee et al. (2007). However, morphology and anisotropy tests indicate that a clump about 120 arcseconds NW of the cluster center could be responsible for the second transition scale. These results do not indicate the existence of a galaxy counterpart of the dark matter ring, but the methodology developed to study the galaxy field as a spatial point pattern provides a good statistical evaluation of the physical scales around the cluster. I briefly discuss the usefulness of this approach for probing features in galaxy distributions and N-body dark matter simulation data. Comment: Accepted for publication in New Astronomy; 14 pages, 8 figures.
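    The transition-scale analysis compares an inhomogeneous Ripley-type statistic for the observed galaxy positions against a Monte Carlo envelope built from simulated inhomogeneous Poisson patterns. The sketch below illustrates that comparison in Python; it assumes a rectangular window, a fitted intensity function, and a known intensity upper bound, and it omits the edge corrections a full estimator would need (the function names are illustrative, not from the paper).

```python
import numpy as np

def inhomogeneous_k(points, intensity, radii, area):
    """Inhomogeneous Ripley K estimate for a 2-D point pattern.

    points    : (n, 2) array of positions
    intensity : (n,) fitted local intensity lambda(x_i) at each point
    radii     : scales r at which K(r) is evaluated
    area      : area of the observation window (edge effects ignored here)
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # exclude self-pairs
    w = 1.0 / np.outer(intensity, intensity)         # 1 / (lambda_i * lambda_j)
    return np.array([w[d <= r].sum() for r in radii]) / area

def poisson_envelope(intensity_fn, lam_max, window, radii, n_sims=1000, seed=0):
    """Pointwise 95% envelope of K(r) under an inhomogeneous Poisson process.

    intensity_fn : vectorized callable lambda(x, y) giving the fitted intensity
    lam_max      : upper bound of the intensity over the window (for thinning)
    window       : (xmin, xmax, ymin, ymax) rectangular field
    """
    rng = np.random.default_rng(seed)
    xmin, xmax, ymin, ymax = window
    area = (xmax - xmin) * (ymax - ymin)
    sims = []
    for _ in range(n_sims):
        # simulate by thinning a homogeneous Poisson process of rate lam_max
        n = rng.poisson(lam_max * area)
        pts = np.column_stack([rng.uniform(xmin, xmax, n),
                               rng.uniform(ymin, ymax, n)])
        lam = intensity_fn(pts[:, 0], pts[:, 1])
        keep = rng.uniform(size=n) < lam / lam_max
        sims.append(inhomogeneous_k(pts[keep], lam[keep], radii, area))
    return np.percentile(np.array(sims), [2.5, 97.5], axis=0)
```

    Observed K(r) values falling outside this simulated envelope mark candidate transition scales where the galaxy pattern departs from the inhomogeneous Poisson model.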

    Constructing Melchior Lorichs's 'Panorama of Constantinople'

    In Constructing Melchior Lorichs's Panorama of Constantinople, Nigel Westbrook, Kenneth Rainsbury Dark, and Rene Van Meeuwen propose that Melchior Lorichs's 1559 Panorama of Constantinople was created using a viewing grid. The panorama is thus a reliable graphic source for the lost or since-altered Ottoman and Byzantine buildings of the city. It appears to lie outside the conventional symbolic mode of topographical depiction common for its period and constitutes a rare "scientific" record of a perspicacious observer's encounter with a vast subject. The drawing combines elements of allegory with extensive empirical observation. Several unknown structures shown on the drawing have been located in relation to the present-day topography of Istanbul, as a test case for further research.

    Facial Expression Recognition from World Wild Web

    Recognizing facial expressions in the wild remains a challenging task in computer vision. The World Wide Web is a rich source of facial images, most of which are captured in uncontrolled conditions; in fact, the Internet is a Word Wild Web of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1,250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to six basic expressions plus neutral. Deep neural networks and noise modeling were used in three different training scenarios to determine how accurately facial expressions can be recognized when models are trained on noisy images collected from the web using query terms (e.g., happy face, laughing man). The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.
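    The abstract mentions noise modeling but not its exact form; one standard way to train through noisy web labels is to place a learned label-transition (confusion) matrix on top of the classifier, so the backbone can still be read out as a clean-label predictor at test time. A minimal PyTorch-style sketch of that idea with an illustrative ResNet-18 backbone (the class and parameter choices below are assumptions, not the authors' code):

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 7  # six basic expressions plus neutral

class NoisyLabelModel(nn.Module):
    """CNN classifier with a learned label-transition layer for noisy web labels.

    The backbone predicts the "clean" expression distribution; the transition
    matrix T[i, j] ~ P(observed label j | true class i) maps it to the label
    distribution actually produced by the web queries and annotators.
    """

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)
        # start the transition matrix close to the identity (mostly-correct labels)
        self.transition = nn.Parameter(5.0 * torch.eye(num_classes))

    def forward(self, x):
        clean = torch.softmax(self.backbone(x), dim=1)   # P(true class | image)
        T = torch.softmax(self.transition, dim=1)        # rows sum to 1
        noisy = clean @ T                                # P(observed label | image)
        return clean, noisy

def training_loss(model, images, web_labels):
    """Fit against the noisy web labels collected from the search queries."""
    clean, noisy = model(images)
    return nn.functional.nll_loss(torch.log(noisy + 1e-8), web_labels)
```

    At evaluation time the `clean` head is used directly, so the transition layer only absorbs annotation and query noise during training.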

    Line drawings for face portraits from photos using global and local structure based GANs

    Despite significant effort and notable success in neural style transfer, it remains challenging for highly abstract styles, in particular line drawings. In this paper, we propose APDrawingGAN++, a generative adversarial network (GAN) for transforming face photos into artistic portrait drawings (APDrawings), which addresses substantial challenges including a highly abstract style, different drawing techniques for different facial features, and high perceptual sensitivity to artifacts. To address these, we propose a composite GAN architecture consisting of local networks (to learn effective representations for specific facial features) and a global network (to capture the overall content). We provide a theoretical explanation for the necessity of this composite structure by proving that any GAN with a single generator cannot generate artistic styles like APDrawings. We further introduce a classification-and-synthesis approach for lips and hair, where artists use different drawing styles, so that a suitable style is applied to a given input. To capture the highly abstract art form inherent in APDrawings, we address two challenging operations, (1) coping with small line misalignments while penalizing large discrepancies and (2) generating more continuous lines, by introducing two novel loss terms: a distance transform loss with nonlinear mapping and a line continuity loss, both of which improve line quality. We also develop dedicated data augmentation and pre-training to further improve results. Extensive experiments, including a user study, show that our method outperforms state-of-the-art methods, both qualitatively and quantitatively.
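    The abstract describes the distance transform loss only at a high level. The sketch below illustrates the general idea in Python with NumPy/SciPy on binary line masks: small stroke offsets incur little cost, while large discrepancies saturate at a bounded penalty. The thresholding, tanh mapping, and parameter values are illustrative assumptions; the loss used in the paper is differentiable and trained jointly with the GAN.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_line_loss(generated, target, threshold=0.5, sigma=5.0):
    """Distance-transform loss that tolerates small line misalignments.

    generated, target : 2-D grayscale drawings in [0, 1], where dark pixels
                        (values below `threshold`) are line strokes.
    Each stroke pixel of one drawing is charged by its distance to the nearest
    stroke of the other drawing, squashed through tanh so the penalty grows
    with misalignment but saturates for large discrepancies.
    """
    gen_lines = generated < threshold
    tgt_lines = target < threshold
    # distance from every pixel to the nearest stroke of the other drawing
    dist_to_tgt = distance_transform_edt(~tgt_lines)
    dist_to_gen = distance_transform_edt(~gen_lines)
    loss = 0.0
    if gen_lines.any():
        loss += np.tanh(dist_to_tgt[gen_lines] / sigma).mean()
    if tgt_lines.any():
        loss += np.tanh(dist_to_gen[tgt_lines] / sigma).mean()
    return loss
```

    In practice a term like this would be combined with the adversarial objective and the line continuity loss described in the abstract.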

    A painterly approach to human skin

    Technical report. Rendering convincing human figures is one of the unsolved goals of computer graphics. Previous work has concentrated on modeling the physics of human skin. We have taken a different approach: we explore techniques used by artists, specifically artists who paint air-brushed portraits. Our goal is to give the impression of skin without extraneous physical details such as pores, veins, and blemishes. In this paper, we provide rendering algorithms that are easy to incorporate into existing shaders, making the rendering of skin for medical illustration, computer animation, and other applications fast and simple. We accomplish this with algorithms for real-time drawing and shading of silhouette curves, and we build upon current non-photorealistic lighting methods that use complementary colors to convey 3D shape information. Users select areas from a scanned artwork and manipulate these areas to create shading models. The flexibility of this method of generating a shading model allows users to portray individuals with different skin tones or to capture the look and feel of a work of art.
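    The complementary-color lighting the authors build on is in the spirit of cool-to-warm tone shading, where surfaces facing away from the light shift toward a cool hue rather than toward black. A small illustrative Python sketch (not the paper's shader; the colors, blend weight, and silhouette threshold are placeholder choices):

```python
import numpy as np

def _unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def painterly_shade(normal, light_dir, base_color,
                    cool=(0.25, 0.30, 0.55), warm=(0.95, 0.85, 0.60),
                    blend=0.4):
    """Cool-to-warm shading with the chosen skin tone mixed into both extremes.

    normal, light_dir : 3-vectors (light_dir points toward the light source)
    base_color        : RGB skin tone, e.g. sampled from a scanned painting
    Surfaces facing the light drift toward the warm hue, surfaces facing away
    drift toward the complementary cool hue, conveying shape without the hard
    black shadows of a physically based model.
    """
    t = 0.5 * (np.dot(_unit(normal), _unit(light_dir)) + 1.0)  # [-1, 1] -> [0, 1]
    cool_tone = (1 - blend) * np.asarray(cool) + blend * np.asarray(base_color)
    warm_tone = (1 - blend) * np.asarray(warm) + blend * np.asarray(base_color)
    return (1 - t) * cool_tone + t * warm_tone

def is_silhouette(normal, view_dir, eps=0.2):
    """Flag surface points whose normal is nearly perpendicular to the view."""
    return abs(np.dot(_unit(normal), _unit(view_dir))) < eps
```

    A per-pixel renderer would evaluate painterly_shade over interpolated normals and overdraw points flagged by is_silhouette with the line color.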