Audio-Visual Sentiment Analysis for Learning Emotional Arcs in Movies
Stories can have tremendous power -- beyond entertainment, they can activate our interests and mobilize our actions. The degree to which a story resonates with its audience may be reflected in part in the emotional journey it takes the audience on. In this paper, we use machine learning methods to construct emotional arcs in movies, calculate families of arcs, and demonstrate that certain arcs can predict audience engagement. The system is applied to Hollywood films and high-quality shorts found on the web. We begin by using deep convolutional neural networks for audio and visual sentiment analysis. These models are trained on both new and existing large-scale datasets, after which they can be used to compute separate audio and visual emotional arcs. We then crowdsource annotations for 30-second video clips extracted from highs and lows in the arcs in order to assess the micro-level precision of the system, with precision measured as agreement in polarity between the system's predictions and annotators' ratings. These annotations are also used to combine the audio and visual predictions. Next, we look at macro-level characterizations of movies by investigating whether there exist `universal shapes' of emotional arcs. In particular, we develop a clustering approach to discover distinct classes of emotional arcs. Finally, we show on a sample corpus of short web videos that certain emotional arcs are statistically significant predictors of the number of comments a video receives. These results suggest that the emotional arcs learned by our approach successfully represent macroscopic aspects of a video story that drive audience engagement. Such machine understanding could be used to predict audience reactions to video stories, ultimately improving our ability as storytellers to communicate with each other.
Comment: Data Mining (ICDM), 2017 IEEE 17th International Conference o
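The core idea of discovering "classes" of arcs can be sketched with a minimal example: resample each per-movie sentiment time series to a common length, normalise it, and cluster. The abstract does not specify the clustering method, so plain k-means below is an assumption, not the paper's algorithm.

```python
import numpy as np

def resample_arc(arc, n_points=50):
    """Resample a variable-length sentiment time series to a fixed length
    so arcs from movies of different durations can be compared."""
    x_old = np.linspace(0.0, 1.0, len(arc))
    x_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(x_new, x_old, arc)

def cluster_arcs(arcs, k=3, n_iter=50):
    """Naive k-means over resampled, z-normalised arcs (a stand-in for
    whatever clustering the paper actually uses)."""
    X = np.stack([resample_arc(a) for a in arcs])
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
    centers = X[:k].copy()  # naive init: first k arcs as seeds
    for _ in range(n_iter):
        # Assign each arc to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Z-normalising each arc before clustering makes the grouping depend on the shape of the emotional journey rather than its absolute sentiment level, which is the property a "universal shapes" analysis needs.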
When Spandrels Become Arches: Neural crosstalk and the evolution of consciousness
Once cognition is recognized as having a 'dual' information source, the information theory chain rule implies that isolating coresident information sources from crosstalk requires more metabolic free energy than permitting correlation. This provides conditions for an evolutionary exaptation leading to the rapid, shifting global neural broadcasts of consciousness. The argument is quite analogous to the well-studied exaptation of noise to trigger stochastic resonance amplification in neurons and neuronal subsystems. Astrobiological implications are obvious.
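The chain-rule inequality behind this argument can be checked numerically: for any joint distribution, H(X) + H(Y) ≥ H(X, Y), with equality only when the sources are independent, so treating correlated coresident sources as if they were isolated costs capacity equal to the mutual information I(X; Y). A toy sketch (the joint distributions below are invented for illustration):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (zeros ignored)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy joint distribution over two binary "coresident" sources.
# Correlated case: X and Y usually agree.
joint_corr = np.array([[0.4, 0.1],
                       [0.1, 0.4]])
# Independent case with the same uniform marginals.
joint_ind = np.array([[0.25, 0.25],
                      [0.25, 0.25]])

for name, joint in [("correlated", joint_corr), ("independent", joint_ind)]:
    hx = entropy(joint.sum(axis=1))   # H(X)
    hy = entropy(joint.sum(axis=0))   # H(Y)
    hxy = entropy(joint.ravel())      # H(X, Y)
    # Chain rule: H(X, Y) = H(X) + H(Y | X) <= H(X) + H(Y).
    # The gap I(X; Y) = H(X) + H(Y) - H(X, Y) is the extra capacity
    # needed to force the sources apart.
    print(name, hx + hy - hxy)
```

For the correlated case the gap is positive (about 0.28 bits here); for the independent case it is zero, matching the claim that permitting correlation is cheaper than enforcing isolation.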
Neural NILM: Deep Neural Networks Applied to Energy Disaggregation
Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called `long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. We use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances. Tests are performed against a house not seen during training and against houses seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models and that our neural net algorithms generalise well to an unseen house.
Comment: To appear in ACM BuildSys'15, November 4--5, 2015, Seou
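To give a sense of one baseline the neural nets are compared against, combinatorial optimisation treats each aggregate reading as a subset-sum problem over known appliance power draws: pick the on/off combination whose summed rated power best matches the meter. A simplified sketch (the appliance wattages are invented for illustration):

```python
from itertools import product

def co_disaggregate(aggregate, appliance_watts):
    """Combinatorial-optimisation NILM baseline: for each aggregate
    reading, choose the on/off combination of appliances whose summed
    rated power is closest to the reading."""
    names = list(appliance_watts)
    states = []
    for reading in aggregate:
        best = min(
            product([0, 1], repeat=len(names)),
            key=lambda s: abs(reading - sum(w * on for w, on in
                                            zip(appliance_watts.values(), s))),
        )
        states.append(dict(zip(names, best)))
    return states

# Invented rated powers for illustration.
watts = {"kettle": 2000, "fridge": 100, "tv": 60}
print(co_disaggregate([2100, 160, 0], watts))
# -> [{'kettle': 1, 'fridge': 1, 'tv': 0},
#     {'kettle': 0, 'fridge': 1, 'tv': 1},
#     {'kettle': 0, 'fridge': 0, 'tv': 0}]
```

This brute-force search is exponential in the number of appliances and ignores temporal structure, which is part of why sequence models such as LSTMs can do better.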
Consciousness: A Simple Information Theory Global Workspace Model
The asymptotic limit theorems of information theory permit a concise formulation of Bernard Baars' global workspace/global broadcast picture of consciousness, focusing on how networks of unconscious cognitive modules are driven by the classic 'no free lunch' argument into shifting, tunable alliances having variable thresholds for signal detection. The model directly accounts for the punctuated characteristics of many conscious phenomena, and derives the inherent necessity of inattentional blindness and related effects.
RoboCup 2D Soccer Simulation League: Evaluation Challenges
We summarise the results of the RoboCup 2D Soccer Simulation League in 2016 (Leipzig), including the main competition and the evaluation round. The evaluation round held in Leipzig confirmed the strength of the RoboCup-2015 champion (WrightEagle, i.e. WE2015) in the League, with only the eventual finalists of the 2016 competition capable of defeating WE2015. An extended, post-Leipzig, round-robin tournament which included the top 8 teams of 2016, as well as WE2015, with over 1000 games played for each pair, placed WE2015 third behind the champion team (Gliders2016) and the runner-up (HELIOS2016). This establishes WE2015 as a stable benchmark for the 2D Simulation League. We then contrast two ranking methods and suggest two options for future evaluation challenges. The first, "The Champions Simulation League", is proposed to include 6 previous champions competing directly against each other in a round-robin tournament, with a view to systematically tracing the advancements in the League. The second proposal, "The Global Challenge", aims to increase the realism of the environmental conditions during the simulated games by simulating specific features of different participating countries.
Comment: 12 pages, RoboCup-2017, Nagoya, Japan, July 201
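A round-robin ranking of the kind used above can be sketched as a points table. The abstract does not name its two ranking methods, so the standard 3/1/0 football scoring with goal difference as tiebreaker below is an assumption, and the match scores are invented:

```python
from collections import defaultdict

def round_robin_table(results, win=3, draw=1):
    """Rank teams from (team_a, team_b, goals_a, goals_b) results using
    3/1/0 points (an assumed scheme, not necessarily the paper's),
    breaking ties on goal difference."""
    points = defaultdict(int)
    goal_diff = defaultdict(int)
    for a, b, ga, gb in results:
        goal_diff[a] += ga - gb
        goal_diff[b] += gb - ga
        if ga > gb:
            points[a] += win
        elif gb > ga:
            points[b] += win
        else:
            points[a] += draw
            points[b] += draw
    return sorted(points, key=lambda t: (points[t], goal_diff[t]), reverse=True)

# Invented scores for three of the teams named above.
games = [("Gliders2016", "HELIOS2016", 1, 0),
         ("Gliders2016", "WE2015", 2, 2),
         ("HELIOS2016", "WE2015", 1, 0)]
print(round_robin_table(games))  # -> ['Gliders2016', 'HELIOS2016', 'WE2015']
```

With over 1000 games per pair, as in the extended tournament, average score or win rate per pairing can be fed into the same table, which is what makes rankings stable enough to serve as a benchmark.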
On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing
In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size to validate the implemented circuits and system-level techniques. The convolution processing is based on the address-event representation (AER) technique, a spike-based, biologically inspired image and video representation technique that favors communication bandwidth for pixels with more information. As a first test prototype, a pixel array of 16x16 has been implemented with a programmable kernel size of up to 16x16. The chip has been fabricated in a standard 0.35-μm complementary metal-oxide-semiconductor (CMOS) process. The technique also allows larger images to be processed by assembling 2-D arrays of such chips. Pixel operation exploits low-power mixed analog-digital circuit techniques. Because of the low currents involved (down to nanoamperes or even picoamperes), a significant amount of pixel area is devoted to mismatch calibration. The rest of the chip uses digital circuit techniques, both synchronous and asynchronous. The fabricated chip has been thoroughly tested, both at the pixel level and at the system level. Specific computer interfaces have been developed for generating AER streams from conventional computers and feeding them as inputs to the convolution chip, and for grabbing AER streams coming out of the convolution chip and storing and analyzing them on computers. Extensive experimental results are provided. At the end of this paper, we provide discussions and results on scaling up the approach for larger pixel arrays and multilayer cortical AER systems.
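The event-driven convolution idea can be modelled in software: each incoming spike at (x, y) adds the kernel, centred on that address, into an accumulator array, and any cell crossing a firing threshold emits an output spike and resets. This is only a behavioural sketch of the AER scheme; the threshold, sizes, and reset rule below are illustrative, not the chip's actual parameters.

```python
import numpy as np

def aer_convolve(events, kernel, shape=(16, 16), threshold=4.0):
    """Behavioural model of AER event-driven convolution on a 16x16
    array: input spikes splat the kernel into accumulators; cells that
    cross the threshold fire an output event and reset."""
    acc = np.zeros(shape)
    out_events = []
    kh, kw = kernel.shape
    for x, y in events:
        for i in range(kh):
            for j in range(kw):
                px, py = x + i - kh // 2, y + j - kw // 2
                if 0 <= px < shape[0] and 0 <= py < shape[1]:
                    acc[px, py] += kernel[i, j]
                    if acc[px, py] >= threshold:
                        out_events.append((px, py))
                        acc[px, py] = 0.0
    return out_events

# Four spikes at the same address with a 3x3 box kernel: every cell in
# the 3x3 neighbourhood reaches the threshold on the fourth spike.
spikes = aer_convolve([(8, 8)] * 4, np.ones((3, 3)))
print(len(spikes))  # -> 9
```

Because computation happens per event rather than per frame, busy pixels get processed more often than quiet ones, which is the bandwidth-for-information trade the abstract describes.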
A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration has been supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to increase the convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for \textit{unsupervised} affine and deformable image registration. In the DLIR framework, ConvNets are trained for image registration by exploiting image similarity, analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show for registration of cardiac cine MRI and registration of chest CT that performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.
Comment: Accepted: Medical Image Analysis - Elsevie
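The "image similarity" signal that replaces supervised example registrations can be illustrated with a toy: normalised cross-correlation (one common intensity-based similarity) scored over candidate transforms. DLIR itself trains a ConvNet to predict the transform; the brute-force translation search below only demonstrates the similarity-driven objective, not the framework.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation, the kind of intensity-based
    similarity an unsupervised registration loss can maximise."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def register_translation(fixed, moving, max_shift=5):
    """Toy stand-in for a similarity-driven registration objective:
    search integer translations and keep the one with the highest NCC."""
    best, best_score = (0, 0), -np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dx, axis=0), dy, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best

# Recover a known shift of a random "image".
rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)
print(register_translation(fixed, moving))  # -> (3, -2)
```

The speed gap the abstract reports comes from replacing this kind of per-pair iterative optimisation with a single forward pass of a trained ConvNet.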