"This is my unicorn, Fluffy": Personalizing frozen vision-language representations
Large Vision & Language models pretrained on web-scale data provide
representations that are invaluable for numerous V&L problems. However, it is
unclear how they can be used for reasoning about user-specific visual concepts
in unstructured language. This problem arises in multiple domains, from
personalized image retrieval to personalized interaction with smart devices. We
introduce a new learning setup called Personalized Vision & Language (PerVL)
with two new benchmark datasets for retrieving and segmenting user-specific
"personalized" concepts "in the wild". In PerVL, one should learn personalized
concepts (1) independently of the downstream task (2) allowing a pretrained
model to reason about them with free language, and (3) does not require
personalized negative examples. We propose an architecture for solving PerVL
that operates by extending the input vocabulary of a pretrained model with new
word embeddings for the new personalized concepts. The model can then reason
about them by simply using them in a sentence. We demonstrate that our approach
learns personalized visual concepts from a few examples and can effectively
apply them in image retrieval and semantic segmentation using rich textual
queries.
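The vocabulary-extension idea can be sketched compactly: a frozen CLIP-style model gets one new row in its token-embedding table per personalized concept, and only that row is optimized against the user's few example images. Below is a minimal PyTorch sketch built on the open-source CLIP package; the prompt template, the token position of the concept slot, the random initialization, the stand-in image tensor, and the cosine-similarity objective are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: learn one new word embedding for a personal concept while
# keeping the pretrained CLIP model frozen. Everything marked "assumption"
# is illustrative, not the paper's exact training procedure.
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
for p in model.parameters():
    p.requires_grad_(False)  # the pretrained model stays frozen

# One learnable vector is the "word" for the new concept
# (assumption: small random initialization).
embed_dim = model.token_embedding.embedding_dim
concept_vec = torch.nn.Parameter(0.01 * torch.randn(embed_dim, device=device))

def encode_text_with_concept(tokens, slot):
    """Run CLIP's text encoder, substituting the learned concept vector at
    token position `slot` (mirrors clip.model.CLIP.encode_text)."""
    x = model.token_embedding(tokens).type(model.dtype)
    x[0, slot] = concept_vec.type(model.dtype)   # inject the new "word"
    x = x + model.positional_embedding.type(model.dtype)
    x = model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
    x = model.ln_final(x).type(model.dtype)
    eot = tokens.argmax(dim=-1)                  # features at the EOT token
    return x[torch.arange(x.shape[0]), eot] @ model.text_projection

# Assumption: "x" occupies token position 4 here (SOT, a, photo, of, x, EOT).
tokens = clip.tokenize(["a photo of x"]).to(device)
slot = 4

# Fit the single vector to a handful of user images; `concept_images` is a
# hypothetical batch of preprocessed examples (random stand-in data here).
concept_images = torch.randn(5, 3, 224, 224, device=device)
with torch.no_grad():
    img_feats = model.encode_image(concept_images)
    img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)

opt = torch.optim.Adam([concept_vec], lr=1e-3)
for step in range(200):
    txt = encode_text_with_concept(tokens, slot)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    loss = 1 - (txt @ img_feats.T).mean()        # pull text toward the images
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference the same slot token can appear in any sentence, e.g.
# "x is wearing a hat", and the frozen model treats it as an ordinary word.
```

The design point this illustrates is that nothing downstream changes: retrieval or segmentation pipelines keep querying the frozen model with free-form sentences, and the personalized concept rides along as one extra vocabulary entry.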
Learning to reason about and to act on physical cascading events
Reasoning and interacting with dynamic environments is a fundamental problem
in AI, but it becomes extremely challenging when actions can trigger cascades
of cross-dependent events. We introduce a new supervised learning setup called
Cascade, where an agent is shown a video of a physically simulated dynamic
scene, and is asked to intervene and trigger a cascade of events, such that the
system reaches a "counterfactual" goal. For instance, the agent may be asked to
"Make the blue ball hit the red one, by pushing the green ball". The agent
intervention is drawn from a continuous space, and cascades of events makes the
dynamics highly non-linear.
We combine semantic tree search with an event-driven forward model and devise
an algorithm that learns to search in semantic trees in continuous spaces. We
demonstrate that our approach learns to effectively follow instructions to
intervene in previously unseen complex scenes. It can also reason about
alternative outcomes, when provided with an observed cascade of events.
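To make the search concrete, here is a heavily simplified sketch of the idea: treat each partial chain of semantic events as a node in a tree, and locally refine the continuous intervention parameters of the deepest node reached so far until a rollout of the forward model reproduces the full goal chain. The `simulate` interface, the two-parameter (angle, force) intervention, and the Gaussian perturbation are assumptions for illustration; the paper's learned forward model and search policy are not reproduced here.

```python
# Simplified sketch: search a tree of semantic event chains while refining a
# continuous intervention. `simulate`, the (angle, force) parameterization,
# and the greedy deepest-node expansion are illustrative assumptions.
import math
import random
from dataclasses import dataclass

@dataclass
class Node:
    prefix: tuple   # goal events reproduced so far, e.g. ("green->blue",)
    params: tuple   # intervention (push angle in radians, force) that did it

def matched_prefix_len(events, goal):
    """Length of the longest goal prefix that the rollout reproduced."""
    n = 0
    for e, g in zip(events, goal):
        if e != g:
            break
        n += 1
    return n

def perturb(params, scale=0.1):
    """Local move in the continuous intervention space."""
    return tuple(p + random.gauss(0.0, scale) for p in params)

def search(simulate, goal, n_iters=1000):
    """simulate(params) -> tuple of semantic events triggered by the
    intervention; goal is the target chain, e.g. ("green->blue", "blue->red")
    for the 'push the green ball' instruction above."""
    root = Node(prefix=(), params=(random.uniform(0, 2 * math.pi), 0.5))
    tree = [root]
    best = root
    for _ in range(n_iters):
        # Expand from the deepest node found so far (a fixed greedy heuristic
        # standing in for the paper's learned search).
        node = max(tree, key=lambda n: len(n.prefix))
        params = perturb(node.params)
        events = simulate(params)
        k = matched_prefix_len(events, goal)
        child = Node(prefix=goal[:k], params=params)
        tree.append(child)
        if k > len(best.prefix):
            best = child
        if k == len(goal):
            return params  # the full counterfactual goal chain was triggered
    return best.params    # best intervention found within the budget
```

The property this mirrors is that partial credit, a matched prefix of the goal event chain, guides the continuous search, so the highly non-linear dynamics are explored through the semantic structure of the cascade rather than through raw trajectory space.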