
    Printed texture guided color feature fusion for impressionism style rendering of oil paintings.

    As a major branch of Non-Photorealistic Rendering (NPR), image stylization uses computer algorithms to render a photo as an artistic painting. Recent work has shown that extracting style information, such as stroke texture and color, from the target style image is the key to image stylization. Based on these stroke texture and color characteristics, a new stroke rendering method is proposed. By fully considering the tonal characteristics and representative colors of the original oil painting, it fits the tone of the original oil painting into the stylized image while preserving the artist's creative effect. Experiments validate the efficacy of the proposed model in comparison to three state-of-the-art methods. The method is best suited to the works of pointillist painters with a relatively uniform style, especially natural scenes; otherwise, the results can be less satisfactory.
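A simple way to illustrate the idea of fitting a stylized image to the tone of an original painting is global color-statistics matching (Reinhard-style transfer). The sketch below is an assumption-laden stand-in for the paper's color feature fusion, not its actual algorithm: it shifts each channel's mean and standard deviation to match a target painting.

```python
import numpy as np

def match_color_statistics(content, target):
    """Shift each channel of `content` so its mean and standard
    deviation match those of `target` (a classic global color
    transfer step). Both inputs are float images of shape (H, W, 3)
    with values in [0, 1]. Illustrative only; the paper's color
    feature fusion is more elaborate."""
    out = np.empty_like(content, dtype=np.float64)
    for c in range(3):
        src = content[..., c].astype(np.float64)
        tgt = target[..., c].astype(np.float64)
        scale = tgt.std() / (src.std() + 1e-8)
        out[..., c] = (src - src.mean()) * scale + tgt.mean()
    return np.clip(out, 0.0, 1.0)
```

In practice this kind of global matching preserves the painting's overall tonality while leaving stroke texture to be handled by a separate rendering stage.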

    Transforming Information Into Knowledge


    An Empirical Comparison of Different Machine

    Sketching has long been used by humans to visualize and narrate the aesthetics of the world, and with the advent of touch devices and augmented technologies it has attracted growing attention in recent years. Recognizing free-hand sketches is an extremely challenging task because of their abstract qualities and lack of visual cues. Most previous work identifies objects in real photographic images using neural networks, rather than in the more abstract sketch depictions of the same objects. This research compares the performance of different machine learning algorithms and their learned internal representations in classifying sketch images. It studies several well-known machine learning models, along with legacy and newer datasets, classifying new sketches with classifiers such as support vector machines and deep neural networks. The approach achieves remarkable results but still falls short of the accuracy desired for sketch image classification.
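As a toy stand-in for the classifier comparison described above (the abstract names SVMs and deep networks; this is neither), a minimal nearest-centroid baseline over flattened sketch features can be sketched as follows. All names here are hypothetical.

```python
import numpy as np

class NearestCentroidSketchClassifier:
    """Minimal baseline: each class is represented by the mean of
    its flattened training sketches, and a query sketch is assigned
    to the class of the nearest centroid (Euclidean distance)."""

    def fit(self, X, y):
        # X: (n_samples, n_features) flattened sketches; y: labels
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from every sample to every class centroid
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Such a baseline is useful mainly as a floor against which SVM and deep-network accuracies can be compared.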

    A Computational Approach to Hand Pose Recognition in Early Modern Paintings

    Full text link
    Hands represent an important aspect of pictorial narration but have rarely been addressed as an object of study in art history and digital humanities. Although hand gestures play a significant role in conveying emotions, narratives, and cultural symbolism in visual art, a comprehensive terminology for classifying depicted hand poses is still lacking. In this article, we present the process of creating a new annotated dataset of pictorial hand poses. The dataset is based on a collection of European early modern paintings, from which hands are extracted using human pose estimation (HPE) methods. The hand images are then manually annotated according to art historical categorization schemes. From this categorization, we introduce a new classification task and perform a series of experiments using different types of features, including our newly introduced 2D hand keypoint features as well as existing neural network-based features. This classification task poses a new and complex challenge due to the subtle, context-dependent differences between depicted hands. The presented computational approach to hand pose recognition in paintings is an initial attempt to tackle this challenge, which could advance the use of HPE methods on paintings and foster new research on the understanding of hand gestures in art.
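One common way to turn 2D hand keypoints into classification features is to normalize them for translation and scale; the sketch below assumes the 21-joint hand layout used by popular HPE toolkits, with the wrist at index 0. This is a generic normalization, not necessarily the paper's exact feature construction.

```python
import numpy as np

def normalize_hand_keypoints(kpts):
    """Convert 21 (x, y) hand keypoints into a translation- and
    scale-invariant feature vector: subtract the wrist joint
    (assumed at index 0), then divide by the maximum joint distance.
    Returns a flat vector of length 42."""
    kpts = np.asarray(kpts, dtype=np.float64)
    centered = kpts - kpts[0]                    # wrist at origin
    scale = np.linalg.norm(centered, axis=1).max()
    if scale == 0:
        return centered.ravel()                  # degenerate pose
    return (centered / scale).ravel()
```

Features normalized this way let a classifier compare hand shapes across paintings regardless of where and how large the hand appears in the canvas.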

    DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization

    Despite the impressive results of arbitrary image-guided style transfer methods, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to a textual description of the target style provided by the user. Unlike previous image-to-image transfer approaches, text-guided stylization gives users a more precise and intuitive way to express the desired style. However, the large discrepancy between the cross-modal inputs and outputs makes text-driven image stylization challenging in a typical feed-forward CNN pipeline. In this paper, we present DiffStyler, built on diffusion models, in which cross-modal style information can easily be integrated as guidance at each step of the diffusion process. In particular, we use a dual diffusion processing architecture to control the balance between the content and style of the diffused results. Furthermore, we propose a learnable noise based on the content image, on which the reverse denoising process is conditioned, enabling the stylization results to better preserve the structural information of the content image. We validate DiffStyler against baseline methods through extensive qualitative and quantitative experiments.
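The dual-diffusion balancing idea can be illustrated with a simplified, DDIM-style reverse step in which the noise estimate is a convex blend of a content-branch and a style-branch prediction. This is a hedged sketch of the general concept under common diffusion conventions, not DiffStyler's actual sampler; all names and the blending rule are assumptions.

```python
import numpy as np

def reverse_step(x_t, eps_content, eps_style, w, alpha_t, alpha_prev):
    """One simplified deterministic reverse-diffusion step.

    The noise estimate is a weighted blend of a content-conditioned
    and a style-conditioned prediction; w in [0, 1] trades content
    preservation (w -> 0) against stylization strength (w -> 1).
    alpha_t / alpha_prev follow the usual cumulative-alpha notation.
    """
    eps = (1.0 - w) * eps_content + w * eps_style
    # predict the clean image x0 from the blended noise estimate
    x0 = (x_t - np.sqrt(1.0 - alpha_t) * eps) / np.sqrt(alpha_t)
    # deterministic (DDIM-like) update toward the previous timestep
    return np.sqrt(alpha_prev) * x0 + np.sqrt(1.0 - alpha_prev) * eps
```

With w = 0 and a perfect content-branch noise estimate, the step reconstructs the content trajectory exactly; increasing w pulls the sample toward the text-guided style branch.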