Analysis of the divide-and-conquer method for electronic structure calculations
We study the accuracy of the divide-and-conquer method for electronic
structure calculations. The analysis is conducted for a prototypical subdomain
problem in the method. We prove that the pointwise difference between electron
densities of the global system and the subsystem decays exponentially as a
function of the distance away from the boundary of the subsystem, under the gap
assumption on both the global system and the subsystem. We show by numerical
examples that the gap assumption is crucial for the accuracy of the
divide-and-conquer method. In particular, we present examples in which
accuracy is lost when the gap assumption for the subsystem fails.
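The claimed exponential decay is easy to observe in a toy setting. The sketch below, which is an illustration under assumed parameters and not the paper's actual model, compares the site densities of a gapped (dimerized) 1D tight-binding chain with those of a truncated subdomain at the same filling; the mismatch is largest at the artificial cut and decays toward the interior.

```python
import numpy as np

# Toy illustration (not the paper's setting): a gapped 1D tight-binding chain.
# The subdomain density should agree with the global density away from the cut.
N, Nsub = 80, 40
t = np.where(np.arange(N - 1) % 2 == 0, -1.0, -0.6)  # dimerized hoppings -> gap
v = 0.3 * np.sin(2 * np.pi * np.arange(N) / N)       # slowly varying potential

def density(ham, n_occ):
    # diagonal of the density matrix built from the n_occ lowest eigenstates
    _, vecs = np.linalg.eigh(ham)
    return (vecs[:, :n_occ] ** 2).sum(axis=1)

H = np.diag(v) + np.diag(t, 1) + np.diag(t, -1)
n_global = density(H, N // 2)                 # global system at half filling
n_sub = density(H[:Nsub, :Nsub], Nsub // 2)   # truncated subdomain, same filling

err = np.abs(n_global[:Nsub] - n_sub)
# err is large near the artificial boundary (site 39) and tiny in the interior
```

Since both Hamiltonians are gapped here, the pointwise density difference is exponentially small away from site 39, consistent with the abstract's statement.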
Facial Motion Prior Networks for Facial Expression Recognition
Deep learning based facial expression recognition (FER) has received a lot of
attention in the past few years. Most of the existing deep learning based FER
methods do not consider domain knowledge well, which thereby fail to extract
representative features. In this work, we propose a novel FER framework, named
Facial Motion Prior Networks (FMPN). In particular, we introduce an additional
branch to generate a facial mask so as to focus on facial muscle moving
regions. To guide the facial mask learning, we propose to incorporate prior
domain knowledge by using the average differences between neutral faces and the
corresponding expressive faces as the training guidance. Extensive experiments
on three facial expression benchmark datasets demonstrate the effectiveness of
the proposed method, compared with the state-of-the-art approaches.
Comment: VCIP 2019, Oral. Code is available at
https://github.com/donydchen/FMPN-FE
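The prior guidance described above can be sketched in a few lines. This is a hypothetical reconstruction of the idea, not the authors' released code: the training target for the mask branch is the normalized average absolute difference between neutral faces and their expressive counterparts, which highlights facial-muscle-moving regions.

```python
import numpy as np

# Illustrative data: 10 paired neutral/expressive 48x48 grayscale faces.
# Real inputs would be aligned face crops; these arrays are stand-ins.
rng = np.random.default_rng(1)
neutral = rng.random((10, 48, 48))
expressive = neutral + 0.5 * (rng.random((10, 48, 48)) > 0.9)  # sparse changes

# Prior mask: average magnitude of the neutral-to-expressive change,
# normalized to [0, 1]; the mask branch is trained to reproduce this.
prior_mask = np.abs(expressive - neutral).mean(axis=0)
prior_mask /= prior_mask.max()
```

In the full framework this mask would then gate the feature extractor so it attends to the regions where expressive motion actually occurs.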
Language-Based Image Editing with Recurrent Attentive Models
We investigate the problem of Language-Based Image Editing (LBIE). Given a
source image and a natural language description, we want to generate a target
image by editing the source image based on the description. We propose a
generic modeling framework for two sub-tasks of LBIE: language-based image
segmentation and image colorization. The framework uses recurrent attentive
models to fuse image and language features. Instead of using a fixed step size,
we introduce for each region of the image a termination gate to dynamically
determine after each inference step whether to continue extrapolating
additional information from the textual description. The effectiveness of the
framework is validated on three datasets. First, we introduce a synthetic
dataset, called CoSaL, to evaluate the end-to-end performance of our LBIE
system. Second, we show that the framework leads to state-of-the-art
performance on image segmentation on the ReferIt dataset. Third, we present the
first language-based colorization result on the Oxford-102 Flowers dataset.
Comment: Accepted to CVPR 2018 as a Spotlight
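The per-region termination gate can be sketched as follows. This is a minimal illustration with made-up weights and names, not the paper's architecture: a recurrent update absorbs textual information into a region's feature vector, and a learned sigmoid gate decides after each step whether to stop.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 16
W_h = rng.normal(scale=0.1, size=(d, d))  # recurrent weights (illustrative)
w_gate = rng.normal(scale=0.1, size=d)    # termination-gate weights

h = rng.normal(size=d)     # fused image+language feature for one region
text = rng.normal(size=d)  # language feature for the description

steps = 0
for _ in range(10):              # hard cap on inference steps
    h = np.tanh(W_h @ h + text)  # absorb more textual information
    steps += 1
    if sigmoid(w_gate @ h) < 0.5:  # region-specific termination gate fires
        break
```

The point of the gate is that different image regions need different numbers of reasoning steps over the description, rather than a single fixed step count for the whole image.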
Maximum likelihood decoding of neuronal inputs from an interspike interval distribution
An expression for the probability distribution of the interspike interval
of a leaky integrate-and-fire (LIF) model neuron is rigorously derived,
based on recent theoretical developments in the theory of stochastic processes.
This enables us, for the first time, to develop maximum likelihood
estimates (MLEs) of the input information (e.g., afferent rate and
variance) for an LIF neuron from a set of recorded spike
trains. Dynamic inputs to pools of LIF neurons both with and without
interactions are efficiently and reliably decoded by applying the MLE,
even within time windows as short as 25 msec.
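The decoding idea can be illustrated on a simpler model where the likelihood is available in closed form. For a *non-leaky* integrate-and-fire (pure drift-diffusion) neuron, the interspike interval is inverse-Gaussian distributed, so the MLE of the input drift and noise is explicit; this is a sketch under that simplification, not the paper's LIF derivation, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 1.0        # firing threshold
mu_true = 2.0      # input drift (stands in for the afferent rate)
sigma_true = 0.5   # input noise amplitude
dt = 1e-3

def first_passage_time(mu, sigma, rng, max_steps=5000):
    # Euler-Maruyama path of the membrane potential until it crosses theta
    steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(max_steps)
    path = np.cumsum(steps)
    idx = int(np.argmax(path >= theta))  # index of the first crossing
    return (idx + 1) * dt

isis = np.array([first_passage_time(mu_true, sigma_true, rng)
                 for _ in range(500)])

# Closed-form inverse-Gaussian MLE: mean = theta/mu, shape = theta^2/sigma^2
m_hat = isis.mean()
lam_hat = len(isis) / np.sum(1.0 / isis - 1.0 / m_hat)
mu_hat = theta / m_hat                 # decoded input drift
sigma_hat = theta / np.sqrt(lam_hat)   # decoded input noise
```

With enough recorded intervals, `mu_hat` and `sigma_hat` recover the true input parameters up to sampling and discretization error, which is the same decode-from-ISIs logic the abstract applies to the LIF case.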