Notes on the Shannon Entropy of the Neural Response
In these notes we focus on the concept of Shannon entropy in an attempt to provide a systematic way of assessing the discrimination properties of the neural response, and of quantifying the role played by the number of layers and the number of templates.
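For reference (an addition, not part of the abstract): the discrimination measure in question is the standard Shannon entropy. For a neural response modeled as a discrete random variable X taking values x with probability p(x),

    H(X) = -\sum_{x} p(x) \log_2 p(x),

and the notes examine how this quantity behaves as the number of layers and the number of templates in the architecture is varied.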
Generative Models: What do they know? Do they know things? Let's find out!
Generative models have been shown to be capable of synthesizing highly detailed and realistic images. It is natural to suspect that they implicitly learn to model some image intrinsics such as surface normals, depth, or shadows. In this paper, we present compelling evidence that generative models indeed internally produce high-quality scene intrinsic maps. We introduce Intrinsic LoRA (I-LoRA), a universal, plug-and-play approach that transforms any generative model into a scene intrinsic predictor, capable of extracting intrinsic scene maps directly from the original generator network without needing additional decoders or fully fine-tuning the original network. Our method employs a Low-Rank Adaptation (LoRA) of key feature maps, with newly learned parameters that make up less than 0.6% of the total parameters in the generative model. Optimized with a small set of labeled images, our model-agnostic approach adapts to various generative architectures, including Diffusion models, GANs, and Autoregressive models. We show that the scene intrinsic maps produced by our method compare well with, and in some cases surpass, those generated by leading supervised techniques.
Comment: https://intrinsic-lora.github.io