Manipulating Attributes of Natural Scenes via Hallucination
In this study, we explore a two-stage framework that enables users to
directly manipulate high-level attributes of a natural scene. The key to our
approach is a deep generative network which can hallucinate images of a scene
as if they were taken in a different season (e.g. winter), under different
weather conditions (e.g. a cloudy day), or at a different time of day (e.g.
sunset). Once the
scene is hallucinated with the given attributes, the corresponding look is then
transferred to the input image while keeping the semantic details intact,
giving a photo-realistic manipulation result. Because the proposed framework
hallucinates what the scene will look like, it does not require a reference
style image, as is commonly needed in most appearance or style transfer
approaches. Moreover, it can manipulate a given scene according to a diverse
set of transient attributes within a single model, eliminating the need to
train a separate network for each translation task.
Our comprehensive set of qualitative and quantitative results demonstrates
the effectiveness of our approach against competing methods.

Comment: Accepted for publication in ACM Transactions on Graphics
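
The abstract describes a two-stage pipeline: first hallucinate the scene under
the target transient attributes with a generative network, then transfer that
look back onto the input image. Below is a minimal PyTorch sketch of how such
a pipeline could be wired together; the AttributeGenerator architecture, the
blend_transfer stand-in, the 40-attribute vector, and all names are
illustrative assumptions, not the authors' released code.

    import torch

    class AttributeGenerator(torch.nn.Module):
        """Stage 1 (sketch): hallucinate the scene under target attributes."""

        def __init__(self, num_attributes: int = 40, latent_dim: int = 128):
            super().__init__()
            # Downsample the input image to a feature map.
            self.encode = torch.nn.Sequential(
                torch.nn.Conv2d(3, 64, 4, stride=2, padding=1),
                torch.nn.ReLU(),
                torch.nn.Conv2d(64, latent_dim, 4, stride=2, padding=1),
                torch.nn.ReLU())
            # Decode features concatenated with the broadcast attribute vector.
            self.decode = torch.nn.Sequential(
                torch.nn.ConvTranspose2d(latent_dim + num_attributes, 64, 4,
                                         stride=2, padding=1),
                torch.nn.ReLU(),
                torch.nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
                torch.nn.Tanh())

        def forward(self, image, attributes):
            feats = self.encode(image)
            # Tile the per-image attribute vector over the spatial grid.
            attr_map = attributes[:, :, None, None].expand(
                -1, -1, *feats.shape[2:])
            return self.decode(torch.cat([feats, attr_map], dim=1))

    def blend_transfer(content, style, alpha=0.5):
        # Naive stand-in for stage 2; the paper's actual transfer keeps
        # semantic details intact, which a plain pixel blend does not.
        return (1 - alpha) * content + alpha * style

    def manipulate(image, attributes, generator, transfer):
        hallucinated = generator(image, attributes)  # stage 1: hallucinate
        return transfer(content=image, style=hallucinated)  # stage 2: transfer

    # Example: push a hypothetical "sunset" attribute on a random image.
    image = torch.randn(1, 3, 64, 64)
    attributes = torch.zeros(1, 40)
    attributes[0, 3] = 1.0  # treating index 3 as "sunset" is illustrative
    result = manipulate(image, attributes, AttributeGenerator(), blend_transfer)

The single generator conditioned on an attribute vector is what lets one model
cover many translation tasks; only the second-stage transfer, stubbed out here,
is responsible for keeping the input's semantics intact.
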
Taming AI Bots: Controllability of Neural States in Large Language Models
We tackle the question of whether an agent can, by suitable choice of
prompts, steer an AI bot to any state. To that end, we first introduce a
formal definition of ``meaning'' that is amenable to analysis. Then, we
characterize ``meaningful data'' on which large language models (LLMs) are
ostensibly trained, and ``well-trained LLMs'' through conditions that are
largely met by today's LLMs. While a well-trained LLM constructs an embedding
space of meanings that is Euclidean, meanings themselves do not form a vector
(linear) subspace, but rather a quotient space within it. We then
characterize the subset of meanings that the state of an LLM can reach for
some input prompt, and show that a well-trained bot can reach any meaning,
albeit with
small probability. We then introduce a stronger notion of controllability as
{\em almost certain reachability}, and show that, when restricted to the space
of meanings, an AI bot is controllable. We do so after introducing a functional
characterization of attentive AI bots, and finally derive necessary and
sufficient conditions for controllability. The fact that AI bots are
controllable means that an adversary could steer them towards any state.
However, the sampling process can be designed to counteract adverse actions and
avoid reaching undesirable regions of state space before their boundary is
crossed.

Comment: TLDR: AI Bots are stochastic dynamical systems whose mental state can
be controlled by both the user and the designer. The space of meanings,
defined as equivalence classes of sentences, is learned during fine-tuning
with human supervision, and safeguarding can be designed into the bot by
establishing controls both at its input and output.
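
To make the abstract's central notions concrete, here is one way the
definitions could be formalized; the notation below ($\Sigma^*$, $\mu$, the
reachability conditions) is our own illustrative choice and may differ from
the paper's.

    % Illustrative formalization, in our own notation rather than the
    % paper's: meanings as equivalence classes of sentences, reachability,
    % and controllability as almost certain reachability.
    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}

    Let $\Sigma^*$ be the set of sentences (token sequences), and write
    $\sigma \sim \sigma'$ when two sentences express the same meaning. The
    space of meanings is the quotient
    \[
      \mathcal{M} = \Sigma^*/{\sim}, \qquad
      [\sigma] = \{\sigma' \in \Sigma^* : \sigma' \sim \sigma\}.
    \]
    Model the bot as a stochastic dynamical system with state $x_t$ and
    prompt $u_t$, and let $\mu$ map each state to the meaning it expresses.
    A meaning $m \in \mathcal{M}$ is reachable from $x$ if some prompt
    gives it positive probability,
    \[
      \exists\, u : \ \Pr\bigl(\mu(x_{t+1}) = m \mid x_t = x,\ u_t = u\bigr) > 0,
    \]
    the weak sense in which any meaning can be reached, albeit with small
    probability. Controllability as almost certain reachability instead
    asks for a prompt sequence that hits the target meaning with
    probability one:
    \[
      \forall\, m \in \mathcal{M} \ \exists\, (u_0, u_1, \dots) : \
      \Pr\bigl(\exists\, t : \mu(x_t) = m\bigr) = 1.
    \]

    \end{document}

The gap between the two conditions is the abstract's point: positive-probability
reachability holds for every meaning, but designing the sampling process is
what upgrades (or deliberately blocks) reachability in the almost-sure sense.
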