Deep Cross-Modal Audio-Visual Generation
Cross-modal audio-visual perception has been a long-standing topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite prior work in computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem by leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluation demonstrate that our model can generate one modality (audio or visual) from the other to a good extent. Our experiments on various design choices, along with the datasets, will facilitate future research in this new problem space.
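As a rough illustration of the conditional adversarial setup this abstract describes, the sketch below conditions both generator and discriminator on an audio-derived embedding. The network sizes, the flat image representation, and the training loop are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of conditional GAN training for sound-to-image
# generation; dimensions, networks, and the training loop are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

NOISE_DIM, COND_DIM, IMG_DIM = 100, 128, 64 * 64  # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Noise is concatenated with the audio-derived condition vector.
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Scores image realism jointly with the condition it should match.
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + COND_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, img, cond):
        return self.net(torch.cat([img, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_imgs, audio_cond):
    n = real_imgs.size(0)
    fake = G(torch.randn(n, NOISE_DIM), audio_cond)
    # Discriminator: real pairs -> 1, generated pairs -> 0.
    loss_d = bce(D(real_imgs, audio_cond), torch.ones(n, 1)) \
           + bce(D(fake.detach(), audio_cond), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: make the discriminator accept generated pairs.
    loss_g = bce(D(fake, audio_cond), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.rand(8, IMG_DIM), torch.randn(8, COND_DIM))
```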
Unsupervised grounding of textual descriptions of object features and actions in video
We propose a novel method for learning visual concepts and their correspondence to the words of a natural language. The concepts and correspondences are jointly inferred from video clips depicting simple actions involving multiple objects, together with corresponding natural language commands that would elicit these actions. Individual objects are first detected, together with quantitative measurements of their colour, shape, location and motion. Visual concepts emerge from the co-occurrence of regions within a measurement space and words of the language. The method is evaluated on a set of videos generated automatically using computer graphics from a database of initial and goal configurations of objects. Each video is annotated with multiple commands in natural language obtained from human annotators using crowdsourcing.
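A toy sketch of the co-occurrence idea: if detected visual measurements are discretised into concept clusters, words that reliably co-occur with a cluster can be scored, for instance by pointwise mutual information. The tiny command/concept pairs and the PMI scoring below are illustrative assumptions, not the paper's exact inference procedure.

```python
# Toy illustration of grounding by co-occurrence: words and discretised
# visual measurements ("concepts") are scored with pointwise mutual
# information. Data and scoring are invented for illustration.
from collections import Counter
import math

# Each pair: words of a command + visual concepts detected in the clip
# (e.g. cluster indices in the colour/shape measurement space).
pairs = [
    (["pick", "up", "the", "red", "cube"], ["COLOUR_3", "SHAPE_1"]),
    (["move", "the", "red", "ball"], ["COLOUR_3", "SHAPE_2"]),
    (["pick", "up", "the", "green", "cube"], ["COLOUR_5", "SHAPE_1"]),
]

word_n, concept_n, joint_n = Counter(), Counter(), Counter()
for words, concepts in pairs:
    for w in set(words):
        word_n[w] += 1
        for c in set(concepts):
            joint_n[w, c] += 1
    for c in set(concepts):
        concept_n[c] += 1
total = len(pairs)

def pmi(word, concept):
    """Pointwise mutual information between a word and a visual concept."""
    if joint_n[word, concept] == 0:
        return float("-inf")
    return math.log(joint_n[word, concept] * total
                    / (word_n[word] * concept_n[concept]))

print(pmi("red", "COLOUR_3"))   # high: "red" tracks the red colour cluster
print(pmi("cube", "COLOUR_3"))  # lower: "cube" is not colour-specific
```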
A Review on Deep Learning Techniques for Video Prediction
The ability to predict, anticipate and reason about future outcomes is a key component of intelligent decision-making systems. In light of the success of deep learning in computer vision, deep-learning-based video prediction emerged as a promising research direction. Defined as a self-supervised learning task, video prediction represents a suitable framework for representation learning, as it has demonstrated potential capabilities for extracting meaningful representations of the underlying patterns in natural videos. Motivated by the increasing interest in this task, we provide a review of deep learning methods for prediction in video sequences. We first define the video prediction fundamentals, as well as mandatory background concepts and the most used datasets. Next, we carefully analyze existing video prediction models organized according to a proposed taxonomy, highlighting their contributions and their significance in the field. The summary of the datasets and methods is accompanied by experimental results that facilitate the assessment of the state of the art on a quantitative basis. The paper is summarized by drawing some general conclusions, identifying open research challenges and pointing out future research directions. This work has been funded by the Spanish Government PID2019-104818RB-I00 grant for the MoDeaAS project, supported with Feder funds. This work has also been supported by two Spanish national grants for PhD studies, FPU17/00166 and ACIF/2018/197 respectively.
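A minimal sketch of the self-supervised formulation this review surveys: a network predicts the next frame from a stack of context frames, with the future frame itself serving as the label. The architecture and tensor shapes here are assumptions for illustration.

```python
# Minimal sketch of self-supervised next-frame prediction: the future
# frame is the training target, so no labels are needed. Architecture
# and shapes are assumptions for illustration.
import torch
import torch.nn as nn

CONTEXT = 4  # past frames stacked along the channel axis

predictor = nn.Sequential(
    nn.Conv2d(CONTEXT, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predicted next frame
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def train_step(clip):
    """clip: (batch, time, H, W) grayscale video."""
    context = clip[:, :CONTEXT]            # frames 0..CONTEXT-1 as input
    target = clip[:, CONTEXT:CONTEXT + 1]  # the frame that follows them
    loss = nn.functional.mse_loss(predictor(context), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(train_step(torch.rand(8, CONTEXT + 1, 64, 64)))
```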
Competitive multi-agent search
While evolutionary computation is well suited for automatic discovery in engineering, it can also be used to gain insight into how humans and organizations could perform more effectively. Using a real-world problem of innovation search in organizations as the motivating example, this dissertation formalizes human creative problem solving as competitive multi-agent search. It differs from existing single-agent and team-search problems in that the agents interact through knowledge of other agents' searches and through the dynamic changes in the search landscape caused by these searches. The main hypothesis is that evolutionary computation can be used to discover effective strategies for competitive multi-agent search. This hypothesis is verified in experiments using an abstract domain based on the NK model, i.e. partially correlated and tunably rugged fitness landscapes, and a concrete domain in the form of a social innovation game. In both domains, different specialized strategies are evolved for each different competitive environment, as well as strategies that generalize across environments. Strategies evolved in the abstract domain are more effective and more complex than hand-designed strategies and one based on traditional tree search. Using a novel spherical visualization of the fitness landscapes of the abstract domain, insight is gained into how successful strategies work, e.g. by tracking positive changes in the landscape. In the concrete game domain, human players were modeled using backpropagation and used as opponents to create environments for evolution. Evolved strategies scored significantly higher than the human models by using a different proportion of actions, providing insights into how performance could be improved in social innovation domains. The work thus provides a possible framework for studying various human creative activities as competitive multi-agent search in the future.
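The abstract domain is based on the standard NK model, which can be sketched directly: N binary loci, where each locus's fitness contribution depends on itself and K random neighbours, so larger K yields a more rugged landscape. The greedy hill-climber below is a generic baseline searcher, not one of the dissertation's evolved strategies.

```python
# Sketch of the standard NK fitness landscape; the hill-climber is a
# generic baseline, not one of the dissertation's evolved strategies.
import random

N, K = 10, 2
random.seed(0)

# Each locus interacts with K randomly chosen other loci.
neighbours = [random.sample([j for j in range(N) if j != i], K)
              for i in range(N)]
tables = [{} for _ in range(N)]  # contribution per local bit pattern

def contribution(i, genome):
    key = tuple(genome[j] for j in [i] + neighbours[i])
    if key not in tables[i]:
        tables[i][key] = random.random()  # sampled once, then fixed
    return tables[i][key]

def fitness(genome):
    return sum(contribution(i, genome) for i in range(N)) / N

# One sweep of single-bit-flip hill climbing on the landscape.
genome = [random.randint(0, 1) for _ in range(N)]
for bit in range(N):
    flipped = genome[:]
    flipped[bit] ^= 1
    if fitness(flipped) > fitness(genome):
        genome = flipped
print(genome, fitness(genome))
```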
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored and retrieved from a pre-attentional store during this task.
Towards music perception by redundancy reduction and unsupervised learning in probabilistic models
The study of music perception lies at the intersection of several disciplines: perceptual psychology and cognitive science, musicology, psychoacoustics, and acoustical signal processing, amongst others. Developments in perceptual theory over the last fifty years have emphasised an approach based on Shannon's information theory and its basis in probabilistic systems, and in particular the idea that perceptual systems in animals develop through a process of unsupervised learning in response to natural sensory stimulation, whereby the emerging computational structures are well adapted to the statistical structure of natural scenes. In turn, these ideas are being applied to problems in music perception.
This thesis is an investigation of the principle of redundancy reduction through unsupervised learning, as applied to representations of sound and music.
In the first part, previous work is reviewed, drawing on literature from some of the fields mentioned above, and an argument is presented in support of the idea that perception in general, and music perception in particular, can indeed be accommodated within a framework of unsupervised learning in probabilistic models.
In the second part, two related methods are applied to two different low-level representations. Firstly, linear redundancy reduction (Independent Component Analysis) is applied to acoustic waveforms of speech and music. Secondly, the related method of sparse coding is applied to a spectral representation of polyphonic music, which proves sufficient both to recognise that individual notes are the important structural elements and to recover a rough transcription of the music.
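A sketch of the linear redundancy-reduction step, using scikit-learn's FastICA on short overlapping waveform windows treated as data vectors; the random-walk signal below is a stand-in for the speech and music recordings used in the thesis.

```python
# Sketch of linear redundancy reduction on audio with FastICA: short
# overlapping waveform windows are the data vectors, and a basis of
# statistically independent components is learned from them.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
wave = np.cumsum(rng.standard_normal(40000))  # placeholder waveform

win = 64
segments = np.stack([wave[i:i + win]
                     for i in range(0, wave.size - win, win // 2)])
segments -= segments.mean(axis=0)  # centre each coordinate

ica = FastICA(n_components=32, random_state=0, max_iter=500)
codes = ica.fit_transform(segments)  # component activations per window
basis = ica.mixing_                  # learned waveform basis functions
```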
Finally, the concepts of distance and similarity are considered, drawing in ideas about noise, phase invariance, and topological maps. Some ecologically and information-theoretically motivated distance measures are suggested and put into practice in a novel method, using multidimensional scaling (MDS), for visualising geometrically the dependency structure in a distributed representation.
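A sketch of that visualisation idea: compute pairwise dependencies between units of a distributed representation, convert them to distances, and embed with MDS. The correlation-based distance here is a simple stand-in for the thesis's ecologically and information-theoretically motivated measures.

```python
# Sketch of MDS visualisation of dependency structure; the correlation
# distance is a stand-in for the thesis's information-theoretic measures.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
codes = rng.standard_normal((32, 200))   # 32 units, 200 activations each
dist = 1.0 - np.abs(np.corrcoef(codes))  # strongly dependent units -> close

embed = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embed.fit_transform(dist)       # (32, 2) points for plotting
```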
Nonparametric enrichment in computational and biological representations of distributions
This thesis proposes nonparametric techniques to enhance unsupervised learning methods in computational or biological contexts. Representations of intractable distributions and their relevant statistics are enhanced by nonparametric components trained to handle challenging estimation problems. The first part introduces a generic algorithm for learning generative latent variable models. In contrast to traditional variational learning, no representation of the intractable posterior distributions is computed, making it agnostic to the model structure and the support of latent variables. Kernel ridge regression is used to consistently estimate the gradient for learning. In many unsupervised tasks, this approach outperforms advanced alternatives based on the expectation-maximisation algorithm and variational approximate inference. In the second part, I train a model of data known as the kernel exponential family density. The kernel, used to describe smooth functions, is augmented by a parametric component trained using an efficient meta-learning procedure; meta-learning prevents overfitting as would occur using conventional routines. After training, the contours of the kernel become adaptive to the local geometry of the underlying density. Compared to maximum-likelihood learning, our method better captures the shape of the density, which is the desired quantity in many downstream applications. The final part sees how nonparametric ideas contribute to understanding uncertainty computation in the brain. First, I show that neural networks can learn to represent uncertainty using the distributed distributional code (DDC), a representation similar to the nonparametric kernel mean embedding. I then derive several DDC-based message-passing algorithms, including computations of filtering and real-time smoothing. The latter is a common neural computation embodied in many postdictive phenomena of perception in multiple modalities. The main idea behind these algorithms is least-squares regression, where the training data are simulated from an internal model. The internal model can be concurrently updated to follow the statistics in sensory stimuli, enabling adaptive inference.
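For context on the estimator named in the first part, here is a generic kernel ridge regression fit with scikit-learn; the thesis's actual construction, which regresses gradients for learning latent variable models, is more involved and is not reproduced here.

```python
# Generic kernel ridge regression on a noisy 1-D function, shown only
# to ground the estimator mentioned above; not the thesis's method.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(200)  # noisy targets

# RBF kernel; alpha is the ridge regulariser that keeps the fit stable.
krr = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-2)
krr.fit(x, y)
pred = krr.predict(np.linspace(-3, 3, 100).reshape(-1, 1))
```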
From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
How does language inform our downstream thinking? In particular, how do humans make meaning from language, and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT), a general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for flexible commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively motivated symbolic modules to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves.
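A toy end-to-end sketch of the pipeline this abstract outlines: a sentence's meaning is expressed as a generative probabilistic program (written by hand below, standing in for an LLM translation), and a query is answered by Bayesian conditioning via rejection sampling. The rain/sprinkler model and its probabilities are invented for illustration.

```python
# Toy rational-meaning-construction pipeline: a hand-written program
# stands in for an LLM's translation of language into a probabilistic
# language of thought; inference is done by rejection sampling.
import random

def world_model():
    """Generative world model, in the spirit of a PLoT program."""
    raining = random.random() < 0.3
    sprinkler = random.random() < 0.4
    grass_wet = raining or sprinkler or random.random() < 0.05
    return {"raining": raining, "sprinkler": sprinkler, "grass_wet": grass_wet}

def infer(condition, query, n=100_000):
    """Estimate P(query | condition) by rejection sampling."""
    hits = kept = 0
    for _ in range(n):
        w = world_model()
        if condition(w):
            kept += 1
            hits += query(w)
    return hits / kept

# "The grass is wet" -> condition; "Is it raining?" -> query.
print(infer(lambda w: w["grass_wet"], lambda w: w["raining"]))
```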