10 research outputs found
Self-conditioned Embedding Diffusion for Text Generation
Can continuous diffusion models bring the same performance breakthrough on
natural language they did for image generation? To circumvent the discrete
nature of text data, we can simply project tokens in a continuous space of
embeddings, as is standard in language modeling. We propose Self-conditioned
Embedding Diffusion, a continuous diffusion mechanism that operates on token
embeddings and makes it possible to learn flexible and scalable diffusion models for both
conditional and unconditional text generation. Through qualitative and
quantitative evaluation, we show that our text diffusion models generate
samples comparable to those produced by standard autoregressive language
models, while being more efficient in theory on accelerator hardware at
inference time. Our work paves the way for scaling up diffusion models for
text, similarly to autoregressive models, and for improving performance with
recent refinements to continuous diffusion.
Comment: 15 pages
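The core mechanism described above (projecting tokens into a continuous embedding space and running diffusion there) can be sketched as follows. The toy embedding table, cosine noise schedule, and nearest-neighbour rounding rule are illustrative assumptions for this sketch, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a vocabulary of 100 tokens embedded in 16 dimensions.
vocab_size, dim = 100, 16
embedding_table = rng.normal(size=(vocab_size, dim))

def noise_embeddings(token_ids, t, num_steps=1000):
    """Project discrete tokens into continuous embeddings, then apply a
    variance-preserving forward diffusion step at time t (cosine schedule assumed)."""
    x0 = embedding_table[token_ids]                        # (seq_len, dim) continuous latents
    alpha_bar = np.cos(0.5 * np.pi * t / num_steps) ** 2   # noise level at time t
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

def round_to_tokens(x):
    """Map denoised embeddings back to the nearest token in the embedding table."""
    dists = ((x[:, None, :] - embedding_table[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

tokens = np.array([3, 17, 42])
xt, _ = noise_embeddings(tokens, t=10)
```

A denoising network would be trained to predict `x0` (or `eps`) from `xt`; generation then runs the reverse process from pure noise and rounds the final embeddings back to tokens.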
Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination
An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.
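The cascade of wavelet transforms and modulus non-linearities can be sketched in one dimension as below. The dyadic step-filter bank is a crude stand-in for the wavelets a real scattering network uses, and averaging each modulus output yields the translation-invariant coefficients.

```python
import numpy as np

def wavelet_bank(size, num_scales=3):
    """Toy dyadic filter bank (an assumed stand-in for proper wavelet filters)."""
    filters = []
    for j in range(num_scales):
        width = 2 ** (j + 1)
        f = np.zeros(size)
        f[:width // 2] = 1.0            # positive lobe
        f[width // 2:width] = -1.0      # negative lobe
        filters.append(f / width)
    return filters

def scattering_1d(signal, depth=2):
    """Cascade of wavelet convolutions and modulus non-linearities, followed
    by averaging: a minimal sketch of a scattering transform."""
    filters = wavelet_bank(size=len(signal))
    coeffs = [signal.mean()]            # zeroth-order coefficient
    layer = [signal]
    for _ in range(depth):
        next_layer = []
        for x in layer:
            for f in filters:
                u = np.abs(np.convolve(x, f, mode="same"))  # |x * psi|, the modulus step
                coeffs.append(u.mean())                     # averaging -> translation invariance
                next_layer.append(u)
        layer = next_layer
    return np.array(coeffs)
```

With 3 filters and depth 2 this produces 1 + 3 + 9 = 13 coefficients; the paper's additional invariants to scaling, shearing and deformation would then be computed by linear operators on such coefficients.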
Large-Scale Retrieval for Reinforcement Learning
Effective decision making involves flexibly relating past experiences and
relevant contextual information to a novel situation. In deep reinforcement
learning, the dominant paradigm is for an agent to amortise information that
helps decision-making into its network weights via gradient descent on training
losses. Here, we pursue an alternative approach in which agents can utilise
large-scale context-sensitive database lookups to support their parametric
computations. This allows agents to directly learn in an end-to-end manner to
utilise relevant information to inform their outputs. In addition, new
information can be attended to by the agent, without retraining, by simply
augmenting the retrieval dataset. We study this approach in Go, a challenging
game for which the vast combinatorial state space privileges generalisation
over direct matching to past experiences. We leverage fast, approximate nearest
neighbor techniques in order to retrieve relevant data from a set of tens of
millions of expert demonstration states. Attending to this information provides
a significant boost to prediction accuracy and game-play performance over
simply using these demonstrations as training trajectories, a compelling
demonstration of the value of large-scale retrieval in reinforcement
learning agents.
Comment: Preprint, 16 pages
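The lookup-then-attend pattern described above can be sketched with a toy database. The embedding dimensions here are arbitrary, and the brute-force nearest-neighbour search is only a stand-in for the fast approximate techniques the paper relies on at the scale of tens of millions of expert states.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the expert-demonstration database:
# keys are state embeddings, values are associated expert information.
db_keys = rng.normal(size=(1000, 32))
db_values = rng.normal(size=(1000, 8))

def retrieve(query, k=4):
    """Return the k nearest database entries to the query embedding
    (exact search; a real system would use approximate nearest neighbours)."""
    dists = ((db_keys - query) ** 2).sum(axis=1)
    idx = np.argpartition(dists, k)[:k]
    return db_keys[idx], db_values[idx]

def attend(query, keys, values):
    """Softmax attention over the retrieved neighbours, weighting retrieved
    expert information by its similarity to the agent's current situation."""
    scores = keys @ query / np.sqrt(keys.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ values                   # context vector fed to the policy

query = rng.normal(size=32)
keys, values = retrieve(query)
context = attend(query, keys, values)
```

Because the retrieval step only reads the database, new information can be incorporated without retraining by simply adding entries to `db_keys` and `db_values`.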