FastCLIPstyler: Optimisation-free Text-based Image Style Transfer Using Style Representations
In recent years, language-driven artistic style transfer has emerged as a new
type of style transfer technique, eliminating the need for a reference style
image by using natural language descriptions of the style. The first model to
achieve this, called CLIPstyler, has demonstrated impressive stylisation
results. However, its lengthy optimisation procedure at runtime for each query
limits its suitability for many practical applications. In this work, we
present FastCLIPstyler, a generalised text-based image style transfer model
capable of stylising images in a single forward pass for arbitrary text inputs.
Furthermore, we introduce EdgeCLIPstyler, a lightweight model designed for
compatibility with resource-constrained devices. Through quantitative and
qualitative comparisons with state-of-the-art approaches, we demonstrate that
our models achieve superior stylisation quality based on measurable metrics
while offering significantly improved runtime efficiency, particularly on edge
devices.
Comment: Accepted at the 2024 IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV 2024).
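Below is a minimal sketch of the single-forward-pass idea described above: a feed-forward
stylisation network conditioned on a CLIP text embedding, so no per-query optimisation is
needed at inference. The FiLM-style modulation and the TextConditionedStyler layout are
illustrative assumptions, not the released FastCLIPstyler architecture.

```python
# Sketch: feed-forward stylisation conditioned on a CLIP text embedding.
# Layer sizes and the FiLM-style modulation are assumptions for illustration.
import torch
import torch.nn as nn
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git


class TextConditionedStyler(nn.Module):
    """Image-to-image network modulated by a style text embedding."""

    def __init__(self, text_dim: int = 512, channels: int = 64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Predict per-channel scale and shift from the text embedding.
        self.film = nn.Linear(text_dim, 2 * channels)
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, image: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        feats = self.encode(image)
        scale, shift = self.film(text_emb).chunk(2, dim=-1)
        feats = feats * scale[..., None, None] + shift[..., None, None]
        return torch.sigmoid(self.decode(feats))


device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    text_emb = clip_model.encode_text(
        clip.tokenize(["a watercolour painting"]).to(device)
    ).float()

styler = TextConditionedStyler().to(device)
# Stylisation is a single forward pass for an arbitrary text prompt.
stylised = styler(torch.rand(1, 3, 256, 256, device=device), text_emb)
```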
PAI-Diffusion: Constructing and Serving a Family of Open Chinese Diffusion Models for Text-to-image Synthesis on the Cloud
Text-to-image synthesis for the Chinese language poses unique challenges due
to its large vocabulary size and intricate character relationships. While
existing diffusion models have shown promise in generating images from textual
descriptions, they often neglect domain-specific contexts and lack robustness
in handling the Chinese language. This paper introduces PAI-Diffusion, a
comprehensive framework that addresses these limitations. PAI-Diffusion
incorporates both general and domain-specific Chinese diffusion models,
enabling the generation of contextually relevant images. It explores the
potential of using LoRA and ControlNet for fine-grained image style transfer
and image editing, empowering users with enhanced control over image
generation. Moreover, PAI-Diffusion seamlessly integrates with Alibaba Cloud's
Machine Learning Platform for AI, providing accessible and scalable solutions.
All the Chinese diffusion model checkpoints, LoRAs, and ControlNets, including
domain-specific ones, are publicly available. A user-friendly Chinese WebUI and
the diffusers-api elastic inference toolkit, also open-sourced, further
facilitate the easy deployment of PAI-Diffusion models in various environments,
making it a valuable resource for Chinese text-to-image synthesis.
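As a rough usage sketch, a PAI-Diffusion checkpoint in the Stable Diffusion format could be
loaded directly with Hugging Face diffusers; the repository and LoRA names below are
placeholders, not necessarily the published identifiers.

```python
# Sketch: serving a Chinese diffusion checkpoint with diffusers.
# Model and LoRA identifiers are placeholders; substitute the released names.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "alibaba-pai/pai-diffusion-artist-large-zh",  # placeholder checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

# Optional: attach a domain-specific LoRA for fine-grained style control.
# pipe.load_lora_weights("alibaba-pai/example-style-lora")  # placeholder

image = pipe("一只戴着草帽的橘猫，水彩风格", num_inference_steps=30).images[0]
image.save("cat_watercolor.png")
```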
Automatic Image Captioning with Style
This thesis connects two core topics in machine learning, vision
and language. The problem of choice is image caption generation:
automatically constructing natural language descriptions of image
content. Previous research into image caption generation has
focused on generating purely descriptive captions; I focus on
generating visually relevant captions with a distinct linguistic
style. Captions with style have the potential to ease
communication and add a new layer of personalisation.
First, I consider naming variations in image captions, and
propose a method for predicting context-dependent names that
takes into account visual and linguistic information. This method
makes use of a large-scale image caption dataset, which I also
use to explore and report naming conventions for hundreds of
animal classes. Next I propose the SentiCap
model, which relies on recent advances in artificial neural
networks to generate visually relevant image captions with
positive or negative sentiment. To balance descriptiveness and
sentiment, the SentiCap model dynamically switches between two
recurrent neural networks, one tuned for descriptive words and
one for sentiment words. As the first published model for
generating captions with sentiment, SentiCap has influenced a
number of subsequent works. I then investigate the sub-task of
modelling styled sentences without images. The specific task
chosen is sentence simplification: rewriting news article
sentences to make them easier to understand.
For this task I design a neural sequence-to-sequence model that can work
with limited training data, using novel adaptations for word copying and
sharing word embeddings. Finally, I present SemStyle, a system for
generating visually relevant image captions in the style of an arbitrary
text corpus. A shared term space allows a neural network for vision and
content planning to communicate with a network for styled language
generation. SemStyle achieves competitive results in human and automatic
evaluations of descriptiveness and style.
As a whole, this thesis presents two complete systems for styled
caption generation that are first of their kind and demonstrate,
for the first time, that automatic style transfer for image
captions is achievable. Contributions also include novel ideas
for object naming and sentence simplification. This thesis opens
up inquiries into highly personalised image captions; large scale
visually grounded concept naming; and more generally, styled text
generation with content control.
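The word-level switching behind SentiCap can be made concrete with a small sketch: two
recurrent decoders, one tuned for descriptive words and one for sentiment words, with a
learned gate choosing between them at each step. Layer sizes and the gating design below
are assumptions for illustration, not the thesis implementation.

```python
# Sketch: word-level switching between a descriptive and a sentiment decoder.
import torch
import torch.nn as nn


class SwitchingDecoder(nn.Module):
    def __init__(self, vocab: int = 10000, dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.desc_rnn = nn.GRUCell(dim, dim)   # tuned for descriptive words
        self.sent_rnn = nn.GRUCell(dim, dim)   # tuned for sentiment words
        self.gate = nn.Linear(2 * dim, 1)      # probability of the sentiment stream
        self.out = nn.Linear(dim, vocab)

    def step(self, word, h_desc, h_sent):
        x = self.embed(word)
        h_desc = self.desc_rnn(x, h_desc)
        h_sent = self.sent_rnn(x, h_sent)
        g = torch.sigmoid(self.gate(torch.cat([h_desc, h_sent], dim=-1)))
        mixed = (1 - g) * h_desc + g * h_sent  # soft switch between the streams
        return self.out(mixed), h_desc, h_sent


decoder = SwitchingDecoder()
h_d = h_s = torch.zeros(1, 512)
word = torch.tensor([3])                       # id of the previously emitted word
logits, h_d, h_s = decoder.step(word, h_d, h_s)
next_word = logits.argmax(dim=-1)
```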
Generative Adversarial Text to Image Synthesis
Automatic synthesis of realistic images from text would be interesting and
useful, but current AI systems are still far from this goal. However, in recent
years generic and powerful recurrent neural network architectures have been
developed to learn discriminative text feature representations. Meanwhile, deep
convolutional generative adversarial networks (GANs) have begun to generate
highly compelling images of specific categories, such as faces, album covers,
and room interiors. In this work, we develop a novel deep architecture and GAN
formulation to effectively bridge these advances in text and image modeling,
translating visual concepts from characters to pixels. We demonstrate the
capability of our model to generate plausible images of birds and flowers from
detailed text descriptions.
Comment: ICML 2016.
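The basic conditioning pattern in this line of work is to feed a text embedding to both
the generator and the discriminator. The sketch below shows that pattern only; the
dimensions and layers are assumptions, not the paper's architecture.

```python
# Sketch: text-conditional GAN. Concatenate a text embedding with the noise
# vector in the generator and with image features in the discriminator.
import torch
import torch.nn as nn


class Generator(nn.Module):
    def __init__(self, z_dim=100, text_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + text_dim, 4 * 4 * 256), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, text_emb):
        return self.net(torch.cat([z, text_emb], dim=1))


class Discriminator(nn.Module):
    def __init__(self, text_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.head = nn.Linear(128 * 4 * 4 + text_dim, 1)

    def forward(self, image, text_emb):
        return self.head(torch.cat([self.conv(image), text_emb], dim=1))


g, d = Generator(), Discriminator()
z, t = torch.randn(2, 100), torch.randn(2, 128)
fake = g(z, t)       # (2, 3, 16, 16) images conditioned on the text embedding
score = d(fake, t)   # real/fake score for (image, text) pairs
```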
Semantic Image Synthesis via Adversarial Learning
In this paper, we propose a way of synthesizing realistic images directly
with natural language description, which has many useful applications, e.g.
intelligent image manipulation. We attempt to accomplish such synthesis: given
a source image and a target text description, our model synthesizes images to
meet two requirements: 1) being realistic while matching the target text
description; 2) maintaining other image features that are irrelevant to the
text description. The model should be able to disentangle the semantic
information from the two modalities (image and text), and generate new images
from the combined semantics. To achieve this, we propose an end-to-end neural
architecture that leverages adversarial learning to automatically learn
implicit loss functions, which are optimized to fulfill the aforementioned two
requirements. We have evaluated our model by conducting experiments on
Caltech-200 bird dataset and Oxford-102 flower dataset, and have demonstrated
that our model is capable of synthesizing realistic images that match the given
descriptions, while still maintaining other features of the original images.
Comment: Accepted to ICCV 2017.
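One common way to make a discriminator enforce "realistic and matching the text" is to
score real-image/matching-text pairs as positives and both fake images and mismatched
text as negatives. The loss below is that generic pattern, stated as an assumption rather
than the paper's exact formulation; `disc` is any (image, text) scorer such as the
Discriminator in the previous sketch.

```python
# Sketch of a matching-aware discriminator loss (assumed, not the paper's exact losses).
import torch
import torch.nn.functional as F


def discriminator_loss(disc, real_img, fake_img, text_emb, wrong_text_emb):
    ones = torch.ones(real_img.size(0), 1)
    zeros = torch.zeros(real_img.size(0), 1)
    loss_real = F.binary_cross_entropy_with_logits(disc(real_img, text_emb), ones)
    loss_fake = F.binary_cross_entropy_with_logits(disc(fake_img, text_emb), zeros)
    loss_mismatch = F.binary_cross_entropy_with_logits(disc(real_img, wrong_text_emb), zeros)
    return loss_real + 0.5 * (loss_fake + loss_mismatch)


# Toy usage with a stand-in discriminator that returns random logits.
toy_disc = lambda img, txt: torch.randn(img.size(0), 1)
d_loss = discriminator_loss(
    toy_disc,
    torch.rand(2, 3, 16, 16), torch.rand(2, 3, 16, 16),
    torch.randn(2, 128), torch.randn(2, 128),
)
```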
Semi-supervised FusedGAN for Conditional Image Generation
We present FusedGAN, a deep network for conditional image synthesis with
controllable sampling of diverse images. Fidelity, diversity and controllable
sampling are the main quality measures of a good image generation model. Most
existing models are insufficient in all three aspects. The FusedGAN can perform
controllable sampling of diverse images with very high fidelity. We argue that
controllability can be achieved by disentangling the generation process into
various stages. In contrast to stacked GANs, where multiple stages of GANs are
trained separately with full supervision of labeled intermediate images, the
FusedGAN has a single stage pipeline with a built-in stacking of GANs. Unlike
existing methods, which require full supervision with paired conditions and
images, the FusedGAN can effectively leverage more abundant images without
corresponding conditions in training, to produce more diverse samples with high
fidelity. We achieve this by fusing two generators: one for unconditional image
generation, and the other for conditional image generation, where the two
partly share a common latent space thereby disentangling the generation. We
demonstrate the efficacy of the FusedGAN in fine-grained image generation tasks
such as text-to-image and attribute-to-face generation.
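The fusing idea can be illustrated with a small sketch: an unconditional stage maps noise
to a shared intermediate code, and a conditional head consumes that code together with
the condition, so unlabeled images can still train the shared stage. The layer sizes and
exact wiring below are assumptions, not the paper's network.

```python
# Sketch: two generators fused through a shared latent stage.
import torch
import torch.nn as nn


class FusedGenerators(nn.Module):
    def __init__(self, z_dim=100, cond_dim=128, hidden=256):
        super().__init__()
        # Shared unconditional stage: trainable from images without conditions.
        self.uncond = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU())
        # Two heads: one decodes the shared code alone, one fuses it with a condition.
        self.uncond_head = nn.Linear(hidden, 32 * 32 * 3)
        self.cond_head = nn.Sequential(
            nn.Linear(hidden + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 32 * 32 * 3),
        )

    def forward(self, z, cond=None):
        shared = self.uncond(z)                   # common latent stage
        if cond is None:                          # unconditional sample
            return torch.tanh(self.uncond_head(shared)).view(-1, 3, 32, 32)
        fused = torch.cat([shared, cond], dim=1)  # condition injected here
        return torch.tanh(self.cond_head(fused)).view(-1, 3, 32, 32)


gen = FusedGenerators()
img_uncond = gen(torch.randn(4, 100))                     # trained from unlabeled images
img_cond = gen(torch.randn(4, 100), torch.randn(4, 128))  # conditional sample, same stage
```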
Adversarial nets with perceptual losses for text-to-image synthesis
Recent approaches in generative adversarial networks (GANs) can automatically
synthesize realistic images from descriptive text. Despite the overall fair
quality, the generated images often exhibit visible flaws, such as a lack of
structural definition in the object of interest. In this paper, we aim to extend the state of
the art for GAN-based text-to-image synthesis by improving perceptual quality
of generated images. Differentiated from previous work, our synthetic image
generator optimizes on perceptual loss functions that measure pixel, feature
activation, and texture differences against a natural image. We present
visually more compelling synthetic images of birds and flowers generated from
text descriptions in comparison to some of the most prominent existing work.
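The abstract names three perceptual terms: pixel, feature-activation, and texture
differences against a natural image. A minimal way to compute such terms with a fixed
VGG16 is sketched below; the layer index and loss weights are assumptions, not the
paper's settings.

```python
# Sketch: pixel + feature-activation + texture (Gram) losses against a natural image.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

vgg_features = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)


def activations(x, layer=16):
    # Features from an intermediate VGG16 layer (index 16 is an arbitrary choice).
    return vgg_features[: layer + 1](x)


def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


def perceptual_loss(generated, natural, w_pix=1.0, w_feat=1.0, w_tex=1.0):
    f_gen, f_nat = activations(generated), activations(natural)
    pixel = F.l1_loss(generated, natural)            # pixel difference
    feature = F.mse_loss(f_gen, f_nat)               # feature-activation difference
    texture = F.mse_loss(gram(f_gen), gram(f_nat))   # texture (Gram) difference
    return w_pix * pixel + w_feat * feature + w_tex * texture


loss = perceptual_loss(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```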