Empirical Bounds on Linear Regions of Deep Rectifier Networks
We can compare the expressiveness of neural networks that use rectified
linear units (ReLUs) by the number of linear regions, which reflect the number
of pieces of the piecewise linear functions modeled by such networks. However,
enumerating these regions is prohibitive and the known analytical bounds are
identical for networks with the same dimensions. In this work, we approximate the
number of linear regions through empirical bounds based on features of the
trained network and probabilistic inference. Our first contribution is a method
to sample the activation patterns defined by ReLUs using universal hash
functions. This method is based on a Mixed-Integer Linear Programming (MILP)
formulation of the network and an algorithm for probabilistic lower bounds of
MILP solution sets that we call MIPBound, which is considerably faster than
exact counting and reaches values in similar orders of magnitude. Our second
contribution is a tighter activation-based bound for the maximum number of
linear regions, which is particularly stronger in networks with narrow layers.
Combined, these bounds yield a fast proxy for the number of linear regions of a
deep neural network.

Comment: AAAI 202
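The empirical lower bound described above can be illustrated with a much simpler baseline than the paper's hash-based MIPBound: each distinct ReLU activation pattern observed on sampled inputs corresponds to a distinct linear region, so counting distinct patterns under plain Monte Carlo sampling already yields a valid (if looser) lower bound. The sketch below is this naive sampling baseline, not the paper's MILP/universal-hashing method; all function names and the input domain `[-1, 1]^d` are illustrative assumptions.

```python
import random

def relu_forward_pattern(x, weights, biases):
    """Forward pass through a fully connected ReLU network,
    returning the activation pattern (one bit per hidden unit)."""
    pattern = []
    h = x
    for W, b in zip(weights, biases):
        pre = [sum(wi * hi for wi, hi in zip(row, h)) + bi
               for row, bi in zip(W, b)]
        h = [max(0.0, p) for p in pre]
        pattern.extend(1 if p > 0 else 0 for p in pre)
    return tuple(pattern)

def sampled_region_lower_bound(weights, biases, dim,
                               n_samples=10000, seed=0):
    """Naive empirical lower bound on the number of linear regions:
    count the distinct activation patterns hit by uniform samples
    from [-1, 1]^dim (each pattern defines one linear region)."""
    rng = random.Random(seed)
    patterns = set()
    for _ in range(n_samples):
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        patterns.add(relu_forward_pattern(x, weights, biases))
    return len(patterns)

# Example: one hidden layer with 2 units acting as coordinate
# sign detectors on 2D inputs; the four quadrants give four
# activation patterns, so the bound converges to 4.
bound = sampled_region_lower_bound(
    weights=[[[1.0, 0.0], [0.0, 1.0]]],
    biases=[[0.0, 0.0]],
    dim=2,
)
```

Unlike this sampler, the paper's MIPBound works on the MILP encoding of the network and gives probabilistic guarantees, but the distinct-pattern counting principle is the same.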
Personalized Automatic Estimation of Self-reported Pain Intensity from Facial Expressions
Pain is a personal, subjective experience that is commonly evaluated through
visual analog scales (VAS). While this is often convenient and useful,
automatic pain detection systems can reduce pain score acquisition efforts in
large-scale studies by estimating it directly from the participants' facial
expressions. In this paper, we propose a novel two-stage learning approach for
VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs)
to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels
from face images. The estimated scores are then fed into personalized
Hidden Conditional Random Fields (HCRFs), which estimate the VAS provided by
each person. Personalization of the model is performed using a newly introduced
facial expressiveness score, unique for each person. To the best of our
knowledge, this is the first approach to automatically estimate VAS from face
images. We show the benefits of the proposed personalized approach over a
traditional non-personalized one on a benchmark dataset for pain analysis from
face images.

Comment: Computer Vision and Pattern Recognition Conference, The 1st
International Workshop on Deep Affective Learning and Context Modeling
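The two-stage pipeline (frame-level PSPI estimation by an RNN, then a personalized sequence-level VAS estimate) can be sketched in miniature. This is not the paper's model: the RNN below is a single-unit Elman cell, and the second stage replaces the personalized HCRF with a hypothetical linear calibration by the per-person expressiveness score, purely to show how personalization enters the pipeline. All parameter names and the peak-pooling choice are assumptions.

```python
import math

def rnn_pspi_sequence(frames, w_in, w_rec, w_out, b):
    """Stage 1 (toy stand-in): a single-unit Elman RNN mapping a
    sequence of per-frame feature vectors to per-frame PSPI
    estimates, clamped to be non-negative like PSPI itself."""
    h = 0.0
    scores = []
    for x in frames:
        h = math.tanh(sum(wi * xi for wi, xi in zip(w_in, x))
                      + w_rec * h + b)
        scores.append(max(0.0, w_out * h))
    return scores

def personalized_vas(pspi_scores, expressiveness):
    """Stage 2 (drastically simplified): the paper uses
    personalized HCRFs; here the sequence's peak PSPI is rescaled
    by a per-person expressiveness score, so the same facial
    signal maps to a higher VAS for less expressive subjects.
    The output is clipped to the VAS range [0, 10]."""
    peak = max(pspi_scores)
    return min(10.0, peak / max(expressiveness, 1e-6))

# Toy usage: two frames with 1-D features, one subject.
scores = rnn_pspi_sequence([[1.0], [0.5]],
                           w_in=[1.0], w_rec=0.5, w_out=1.0, b=0.0)
vas = personalized_vas(scores, expressiveness=0.5)
```

The key structural point carried over from the abstract is that stage 1 is person-independent while stage 2 is conditioned on a per-person expressiveness score.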