GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation
Text-to-image (T2I) models based on diffusion processes have achieved
remarkable success in controllable image generation using user-provided
captions. However, the tight coupling between the text encoder and image
decoder in current T2I models makes it challenging to replace or upgrade
either component. Such changes often require massive fine-tuning or even
training from scratch at prohibitive expense. To address this problem, we
propose GlueGen, which
applies a newly proposed GlueNet model to align features from single-modal or
multi-modal encoders with the latent space of an existing T2I model. The
approach introduces a new training objective that leverages parallel corpora to
align the representation spaces of different encoders. Empirical results show
that GlueNet can be trained efficiently and enables various capabilities beyond
previous state-of-the-art models: 1) multilingual language models such as
XLM-Roberta can be aligned with existing T2I models, allowing for the
generation of high-quality images from captions beyond English; 2) GlueNet can
align multi-modal encoders such as AudioCLIP with the Stable Diffusion model,
enabling sound-to-image generation; 3) it can also upgrade the current text
encoder of the latent diffusion model for challenging case generation. By
aligning various feature representations, GlueNet allows for flexible and
efficient integration of new functionality into existing T2I models and
sheds light on X-to-image (X2I) generation.
Comment: ICCV 202
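The core idea of aligning a new encoder's representation space with that of an existing T2I model, using a parallel corpus encoded by both, can be illustrated with a minimal sketch. Note this is an assumption-laden toy: the paper's GlueNet is a trained neural translator with its own objective, whereas here the translator is a simple linear map fit by least squares on synthetic features.

```python
import numpy as np

# Toy sketch of encoder alignment: learn a translator that maps features
# from a new encoder (e.g. XLM-Roberta) into the space expected by a frozen
# T2I model's text encoder, using the same captions encoded by both sides.
# Dimensions and data are synthetic, for illustration only.
rng = np.random.default_rng(0)

d_src, d_tgt, n = 32, 16, 500               # toy dimensions / corpus size
W_true = rng.normal(size=(d_src, d_tgt))    # unknown "ground-truth" relation

# "Parallel corpus": paired feature vectors from source and target encoders.
src_feats = rng.normal(size=(n, d_src))
tgt_feats = src_feats @ W_true + 0.01 * rng.normal(size=(n, d_tgt))

# Fit the translator by minimizing ||src @ W - tgt||^2 (least squares).
W, *_ = np.linalg.lstsq(src_feats, tgt_feats, rcond=None)

aligned = src_feats @ W
err = np.mean((aligned - tgt_feats) ** 2)
print(f"mean alignment error: {err:.4f}")
```

Once such a translator is fit, the frozen image decoder can consume the aligned features in place of its original text encoder's output, which is what makes encoder swapping cheap relative to retraining.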
Pathway to Future Symbiotic Creativity
This report presents a comprehensive view of our vision on the development
path of the human-machine symbiotic art creation. We propose a classification
of the creative system with a hierarchy of 5 classes, showing the pathway of
creativity evolving from a mimic-human artist (Turing Artists) to a Machine
artist in its own right. We begin with an overview of the limitations of
Turing Artists, then focus on the top two levels of the hierarchy, Machine
Artists, emphasizing machine-human communication in art creation. In art
creation, it is necessary for machines to understand humans' mental states,
including desires, appreciation, and emotions; humans also need to understand
machines' creative capabilities and limitations. The rapid development of
immersive environments, and their further evolution into the new concept of
the metaverse, enables symbiotic art creation through unprecedented
flexibility of bi-directional communication
between artists and art manifestation environments. By examining the latest
sensor and XR technologies, we illustrate a novel way to collect art data,
forming the basis of a new form of human-machine bidirectional communication
and understanding in art creation. Based on such communication
and understanding mechanisms, we propose a novel framework for building future
Machine artists, which comes with the philosophy that a human-compatible AI
system should be based on the "human-in-the-loop" principle rather than the
traditional "end-to-end" dogma. By proposing a new form of inverse
reinforcement learning model, we outline the platform design of machine
artists, demonstrate its functions and showcase some examples of technologies
we have developed. We also provide a systematic exposition of the ecosystem
for AI-based symbiotic art forms and communities, with an economic model built
on NFT technology. Ethical issues in the development of machine artists are
also discussed.
A Review on Human-Computer Interaction and Intelligent Robots
In the field of artificial intelligence, human–computer interaction (HCI) technology and its related intelligent robot technologies are essential and active areas of research. From the perspectives of software algorithms and hardware systems, these technologies aim to study and build a natural HCI environment. The purpose of this research is to provide an overview of HCI and intelligent robots. This research highlights the existing technologies of listening, speaking, reading, writing, and other senses, which are widely used in human interaction. Based on these same technologies, this research introduces some intelligent robot systems and platforms. This paper also forecasts some vital challenges in researching HCI and intelligent robots. The authors hope that this work will help researchers in the field to acquire the necessary information and technologies to further conduct more advanced research.
DEsignBench: Exploring and Benchmarking DALL-E 3 for Imagining Visual Design
We introduce DEsignBench, a text-to-image (T2I) generation benchmark tailored
for visual design scenarios. Recent T2I models like DALL-E 3 and others, have
demonstrated remarkable capabilities in generating photorealistic images that
align closely with textual inputs. While the allure of creating visually
captivating images is undeniable, our emphasis extends beyond mere aesthetic
pleasure. We aim to investigate the potential of using these powerful models in
authentic design contexts. In pursuit of this goal, we develop DEsignBench,
which incorporates test samples designed to assess T2I models on both "design
technical capability" and "design application scenario." Each of these two
dimensions is supported by a diverse set of specific design categories. We
explore DALL-E 3 together with other leading T2I models on DEsignBench,
resulting in a comprehensive visual gallery for side-by-side comparisons. For
DEsignBench benchmarking, we perform human evaluations on generated images in
DEsignBench gallery, against the criteria of image-text alignment, visual
aesthetic, and design creativity. Our evaluation also considers other
specialized design capabilities, including text rendering, layout composition,
color harmony, 3D design, and medium style. In addition to human evaluations,
we introduce the first automatic image generation evaluator powered by GPT-4V.
This evaluator provides ratings that align well with human judgments, while
being easily replicable and cost-efficient. A high-resolution version is
available at
https://github.com/design-bench/design-bench.github.io/raw/main/designbench.pdf?download=
Comment: Project page at https://design-bench.github.io
Text-Guided Neural Image Inpainting
The image inpainting task requires filling a corrupted image with content
coherent with its context. This research field has achieved promising progress
through neural image inpainting methods. Nevertheless, inferring the missing
content from only the context pixels remains a critical challenge. The goal of
this paper is to fill in the semantic information of corrupted images
according to a provided descriptive text. Unlike existing text-guided image
generation works, inpainting models must compare the semantic content of the
given text with the remaining part of the image, then identify the semantic
content to fill in for the missing part. To
fulfill such a task, we propose a novel inpainting model named Text-Guided Dual
Attention Inpainting Network (TDANet). Firstly, a dual multimodal attention
mechanism is designed to extract the explicit semantic information about the
corrupted regions, which is done by comparing the descriptive text and
complementary image areas through reciprocal attention. Secondly, an image-text
matching loss is applied to maximize the semantic similarity of the generated
image and the text. Experiments are conducted on two open datasets. Results
show that the proposed TDANet model reaches new state-of-the-art on both
quantitative and qualitative measures. Result analysis suggests that the
generated images are consistent with the guidance text, enabling the generation
of various results by providing different descriptions. Codes are available at
https://github.com/idealwhite/TDANet
Comment: ACM MM'2020 (Oral). 9 pages, 4 tables, 7 figures
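The image-text matching loss described above, which maximizes the semantic similarity of the generated image and the guidance text, can be sketched in simplified form. This is a generic cosine-similarity stand-in, not the exact formulation from the TDANet paper, and the embeddings here are placeholders for the outputs of real image and text encoders.

```python
import numpy as np

def image_text_matching_loss(img_emb, txt_emb, eps=1e-8):
    """Simplified matching loss: 1 minus cosine similarity between an image
    embedding and a text embedding. Minimizing it pushes the generated
    image's semantics toward the guidance text. (Illustrative stand-in; the
    paper's exact loss may differ in detail.)"""
    img = img_emb / (np.linalg.norm(img_emb) + eps)
    txt = txt_emb / (np.linalg.norm(txt_emb) + eps)
    return 1.0 - float(img @ txt)

# A semantically matched pair (same direction) incurs near-zero loss...
v = np.array([1.0, 2.0, 3.0])
print(image_text_matching_loss(v, 2 * v))       # ~0.0
# ...while an orthogonal (unrelated) pair incurs maximal mismatch.
print(image_text_matching_loss(np.array([1.0, 0.0]),
                               np.array([0.0, 1.0])))  # ~1.0
```

In training, such a term would be added to the usual reconstruction and adversarial losses, so the inpainted region is penalized both for pixel-level and semantic-level deviation.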
Exploring the Intersection of Complex Aesthetics and Generative AI for Promoting Cultural Creativity in Rural China after the Post-Pandemic Era
This paper explores using generative AI and aesthetics to promote cultural
creativity in rural China amidst COVID-19's impact. Through literature reviews,
case studies, surveys, and text analysis, it examines art and technology
applications in rural contexts and identifies key challenges. The study finds
artworks often fail to resonate locally, while reliance on external artists
limits sustainability. Hence, nurturing grassroots "artist villagers" through
AI is proposed. Our approach involves training machine learning on subjective
aesthetics to generate culturally relevant content. Interactive AI media can
also boost tourism while preserving heritage. This pioneering research puts
forth original perspectives on the intersection of AI and aesthetics to
invigorate rural culture. It advocates holistic integration of technology and
emphasizes AI's potential as a creative enabler versus replacement. Ultimately,
it lays the groundwork for further exploration of leveraging AI innovations to
empower rural communities. This timely study contributes to the growing
interest in applying emerging technologies to critical issues facing rural
China.
Comment: Accepted by the 1st International Conference on AI-generated Content
(AIGC 2023)
GPT-4V(ision) as A Social Media Analysis Engine
Recent research has offered insights into the extraordinary capabilities of
Large Multimodal Models (LMMs) in various general vision and language tasks.
There is growing interest in how LMMs perform in more specialized domains.
Social media content, inherently multimodal, blends text, images, videos, and
sometimes audio. Understanding social multimedia content remains a challenging
problem for contemporary machine learning frameworks. In this paper, we explore
GPT-4V(ision)'s capabilities for social multimedia analysis. We select five
representative tasks, including sentiment analysis, hate speech detection, fake
news identification, demographic inference, and political ideology detection,
to evaluate GPT-4V. Our investigation begins with a preliminary quantitative
analysis for each task using existing benchmark datasets, followed by a careful
review of the results and a selection of qualitative samples that illustrate
GPT-4V's potential in understanding multimodal social media content. GPT-4V
demonstrates remarkable efficacy in these tasks, showcasing strengths such as
joint understanding of image-text pairs, contextual and cultural awareness, and
extensive commonsense knowledge. Despite the overall impressive capacity of
GPT-4V in the social media domain, there remain notable challenges. GPT-4V
struggles with tasks involving multilingual social multimedia comprehension and
has difficulties in generalizing to the latest trends in social media.
Additionally, it exhibits a tendency to generate erroneous information in the
context of evolving celebrity and politician knowledge, reflecting the known
hallucination problem. The insights gleaned from our findings underscore a
promising future for LMMs in enhancing our comprehension of social media
content and its users through the analysis of multimodal information.
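The preliminary quantitative analysis described above amounts to scoring a model's predictions against benchmark labels per task. A minimal sketch of one such metric, macro-averaged F1, follows; the label names and predictions are illustrative toy data, not drawn from the paper's datasets.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute per-label F1 and average, so minority
    labels (common in tasks like hate speech detection) weigh equally."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for label in labels:
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        fp = sum(p == label and t != label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy gold labels vs. hypothetical model outputs for one task.
gold = ["hate", "none", "none", "hate", "none"]
pred = ["hate", "none", "hate", "hate", "none"]
print(f"macro-F1: {macro_f1(gold, pred):.3f}")
```

Repeating such a computation per task (sentiment, hate speech, fake news, and so on) yields the kind of per-task quantitative comparison the study reports before its qualitative review.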