2,123 research outputs found

    Photo2Relief: Let Human in the Photograph Stand Out

    Full text link
    In this paper, we propose a technique for making humans in photographs protrude like reliefs. Unlike previous methods, which mostly focus on the face and head, our method aims to generate artworks that depict the whole-body activity of the character. One challenge is that there is no ground truth for supervised deep learning. We introduce a sigmoid variant function to manipulate gradients tactfully and train our neural networks with a loss function defined in the gradient domain. The second challenge is that real photographs are often taken under different lighting conditions. We use an image-based rendering technique to address this challenge and acquire rendered images and depth data under different lighting conditions. To make a clear division of labor among network modules, a two-scale architecture is proposed to create high-quality reliefs from a single photograph. Extensive experimental results on a variety of scenes show that our method is a highly effective solution for generating digital 2.5D artwork from photographs. Comment: 10 pages, 11 figures
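
    The abstract does not spell out the sigmoid variant or the exact loss, so the PyTorch sketch below is only a minimal illustration of a gradient-domain loss under stated assumptions: `sigmoid_compress`, its `alpha` parameter, and the L1 comparison of remapped gradients are hypothetical, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Finite-difference gradients of a (B, 1, H, W) relief/depth map."""
    gx = x[:, :, :, 1:] - x[:, :, :, :-1]
    gy = x[:, :, 1:, :] - x[:, :, :-1, :]
    return gx, gy

def sigmoid_compress(g, alpha=5.0):
    # Hypothetical sigmoid-style remapping: roughly linear near zero,
    # saturating for large gradients, so depth discontinuities do not
    # dominate the loss.
    return 2.0 / (1.0 + torch.exp(-alpha * g)) - 1.0

def gradient_domain_loss(pred, target, alpha=5.0):
    """L1 distance between sigmoid-remapped gradients of prediction and target."""
    pgx, pgy = image_gradients(pred)
    tgx, tgy = image_gradients(target)
    return (F.l1_loss(sigmoid_compress(pgx, alpha), sigmoid_compress(tgx, alpha))
            + F.l1_loss(sigmoid_compress(pgy, alpha), sigmoid_compress(tgy, alpha)))

# Toy usage with random maps standing in for predicted and rendered reliefs.
pred = torch.rand(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)
print(gradient_domain_loss(pred, target).item())
```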

    Shading with Painterly Filtered Layers: A Process to Obtain Painterly Portraits

    Get PDF
    In this thesis, I study how color data from different styles of paintings can be extracted from photography, with the end result maintaining the artistic integrity of the art style while having the look and feel of skin. My inspiration for this work came from the impasto-style portraitures of painters such as Rembrandt and Greg Cartmell. I analyzed and studied the important visual characteristics of both Rembrandt’s and Cartmell’s styles of painting. These include how the artist develops shadow and shading, creates the illusion of subsurface scattering, and applies color to the canvas, which I used as references to help develop the final renders in computer graphics. I also examined how color information can be extracted from portrait photography in order to gather accurate dark, medium, and light skin shades. Based on this analysis, I have developed a process for creating portrait paintings from 3D facial models. My process consists of four stages: (1) modeling a 3D portrait of the subject, (2) collecting data by photographing the subject, (3) developing a Barycentric shader using the photographs, and (4) compositing with filtered layers. My contributions have been in stages (3) and (4), as follows: development of an impasto-style Barycentric shader that extracts color information from the gathered photographic images and can produce realistic-looking skin rendering, and development of a compositing technique that filters layers of images corresponding to different effects such as diffuse, specular, and ambient. To demonstrate proof of concept, I have created a few animations of the impasto-style portrait painting for a single subject. For these animations, I have also sculpted a high-polygon-count 3D model of the torso and head of my subject. Using my shading and compositing techniques, I have created rigid-body animations that demonstrate the power of these techniques to obtain impasto-style portraiture during animation under different lighting conditions.
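
    The thesis text quoted here does not reproduce the shader math, so the sketch below only illustrates the Barycentric blending idea: three reference skin tones (dark, medium, light) mixed with weights that sum to one. The sample colours, the source of the weights, and the name `barycentric_shade` are assumptions for illustration.

```python
import numpy as np

# Hypothetical dark, medium, and light skin tones (RGB in [0, 1]); the thesis
# extracts the real values from portrait photographs of the subject.
C_DARK   = np.array([0.30, 0.18, 0.14])
C_MEDIUM = np.array([0.72, 0.48, 0.38])
C_LIGHT  = np.array([0.94, 0.80, 0.72])

def barycentric_shade(w_dark, w_medium, w_light):
    """Blend the three reference tones with barycentric weights.

    The weights are assumed to come from the renderer (e.g. a diffuse
    lighting term split into three bands) and are renormalised to sum to 1.
    """
    w = np.array([w_dark, w_medium, w_light], dtype=float)
    w = w / w.sum()
    return w[0] * C_DARK + w[1] * C_MEDIUM + w[2] * C_LIGHT

# Example: a point lit at moderate intensity leans toward the medium tone.
print(barycentric_shade(0.1, 0.6, 0.3))
```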

    Hairstyle modelling based on a single image.

    Get PDF
    Hair is an important feature in forming a character's appearance in both the film and video game industries. Hair grooming and combing for virtual characters has traditionally been an exclusive task for professional designers because of its requirements for both technical manipulation and artistic inspiration. However, this manual process is time-consuming and limits the flexibility of customised hairstyle modelling. In addition, virtual hairstyles are hard to manipulate because of the intrinsic shape of hair. The fast development of related industrial applications demands an intuitive tool that lets non-professional users efficiently create realistic hairstyles. Recently, image-based hair modelling has been investigated for generating realistic hairstyles. This thesis demonstrates a framework, Struct2Hair, that robustly captures a hairstyle from a single portrait input. Specifically, the 2D hair strands are first traced from the input with the help of image-processing enhancement. Then the coarse-level 2D hair sketch of the hairstyle is extracted from the generated 2D hair strands by clustering. To solve the inherently ill-posed single-view reconstruction problem, a critical hair shape database has been built by analysing an existing hairstyle model database. A critical hair shape is a group of hair strands with similar shape appearance and nearby spatial location. Once this prior shape knowledge is prepared, the hair shape descriptor (HSD) is introduced to encode the structure of the target hairstyle. The HSD is constructed by retrieving and matching corresponding critical hair shape centres in the database. The full-head hairstyle is reconstructed by uniformly diffusing the hair strands on the scalp surface under the guidance of the extracted HSD. The produced results are evaluated and compared with state-of-the-art image-based hair modelling methods. The findings of this thesis lead to some promising applications, such as blending hairstyles to produce novel hair models, editing hairstyles (adding fringe hair, curling, and cutting/extending the hairstyle), and a case study of bas-relief hair modelling on pre-processed hair images.
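
    As a rough illustration of the clustering step that groups traced 2D strands into coarse shapes, the sketch below clusters simple strand features (midpoint plus average direction) with k-means. The feature choice, the cluster count, and the use of scikit-learn are assumptions; the thesis's actual critical-shape extraction and HSD construction are considerably richer.

```python
import numpy as np
from sklearn.cluster import KMeans

def strand_feature(strand):
    """Summarise a 2D strand (an (N, 2) polyline) by its midpoint and
    normalised end-to-end direction; a stand-in for richer descriptors."""
    strand = np.asarray(strand, dtype=float)
    midpoint = strand.mean(axis=0)
    direction = strand[-1] - strand[0]
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    return np.concatenate([midpoint, direction])

def cluster_strands(strands, n_clusters=8):
    """Group traced 2D strands into coarse clusters of similar shape/position."""
    feats = np.stack([strand_feature(s) for s in strands])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

# Toy usage: 100 random polylines standing in for traced hair strands.
rng = np.random.default_rng(0)
strands = [rng.uniform(0.0, 0.1, size=(12, 2)).cumsum(axis=0) for _ in range(100)]
print(np.bincount(cluster_strands(strands)))
```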

    The development of a seventh grade unit in sculpture

    Full text link
    Thesis (M.A.)--Boston University, 1947. This item was digitized by the Internet Archive

    Josiah Wedgwood, manufacturing and craft

    Get PDF

    The visual rhetoric of Charles Callahan Perkins: the early Italian Renaissance and a New Fine Arts paradigm for Boston

    Full text link
    The art historian Charles Callahan Perkins (1823–1886) taught Boston elites to embrace early Italian Renaissance art, and, in so doing, transformed the cultural landscape of his city. Mostly Unitarian in their religious beliefs, the local elites had previously spurned Italian paintings and sculpture of the fourteenth and fifteenth centuries for their Roman Catholicism. However, when the new Museum of Fine Arts opened on July 4, 1876, the institution displayed close to one hundred art objects of the period, mostly copies. Perkins, who had recently returned from twenty-five years in Europe as an acclaimed scholar and illustrator of early Italian Renaissance sculpture and an expert in fine arts museums, was responsible for this result. Perkins focused on art whose “visual rhetoric” reflected the early Italian Renaissance humanist belief in clarity of line and subject as the most pleasing and edifying in art. These Renaissance principles emerged in his view from classical rhetoric, that is, strategies for persuasive spoken and written communication, which had long been the core curriculum of Harvard University, where Boston elites studied. Perkins also capitalized on the city’s taste for classical sculpture by privileging quattrocento sculpture, which, while more devotional in subject than had traditionally been displayed, did feature a naturalism that evoked ancient art. Chapter one presents four biographical case studies of individuals who were important players in shaping the fertile cultural ground upon which Perkins built a generation later. Chapter two forges the link between classical rhetoric and the fine arts in ante-bellum Boston. Chapter three examines the broad-based revival of early Italian Renaissance art that Perkins encountered in mid-century Europe. Chapter four assesses his own professional oeuvre within that context. The concluding chapter demonstrates how Perkins revamped ideas of what constituted fine art and how it could be viewed by positioning early Renaissance art at the new Museum as a powerful visually rhetorical tool, thus achieving a far more wide-reaching cultural change than previous scholarship has suggested.

    Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks

    Get PDF
    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain
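
    To make the architectural point concrete, the sketch below is a minimal DCNN in PyTorch that alternates linear filtering (convolution) with non-linear rectification and pooling, the combination the paper credits with producing representations increasingly tolerant to nuisance variation. The layer sizes and input shape are illustrative assumptions, not drawn from the paper.

```python
import torch
import torch.nn as nn

# Each conv/ReLU/pool stage re-represents its input in a format more tolerant
# to translation and small deformations; stacking stages yields the kind of
# hierarchical "transformational abstraction" the paper describes.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # linear: local feature detectors
    nn.ReLU(),                                    # non-linear: rectification
    nn.MaxPool2d(2),                              # non-linear: spatial tolerance
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # linear: compose features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # abstract category scores
)

x = torch.randn(1, 1, 28, 28)   # a dummy 28x28 grayscale "exemplar"
print(model(x).shape)           # torch.Size([1, 10])
```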