Simulating Human Gaze with Neural Visual Attention
Existing models of human visual attention are generally unable to incorporate
direct task guidance and therefore cannot model an intent or goal when
exploring a scene. To integrate guidance of any downstream visual task into
attention modeling, we propose the Neural Visual Attention (NeVA) algorithm. To
this end, we impose on neural networks the biological constraint of foveated
vision and train an attention mechanism to generate visual explorations that
maximize the performance with respect to the downstream task. We observe that
biologically constrained neural networks generate human-like scanpaths without
being trained for this objective. Extensive experiments on three common
benchmark datasets show that our method outperforms state-of-the-art
unsupervised human attention models in generating human-like scanpaths.
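The foveated-vision constraint mentioned above can be illustrated with a toy transform: keep full resolution near the current fixation point and replace the periphery with a low-detail version. The Gaussian falloff and pooling factor below are illustrative assumptions, not the exact transform used by NeVA.

```python
import numpy as np

def foveate(image, fixation, sigma=10.0, pool=8):
    """Toy foveation for a 2D grayscale image: full resolution near the
    fixation point, average-pooled (low-detail) content in the periphery.
    Blur model and parameters are illustrative, not the paper's."""
    h, w = image.shape
    # Low-detail periphery: average-pool by `pool`, then upsample back.
    hc, wc = h - h % pool, w - w % pool
    small = image[:hc, :wc].reshape(hc // pool, pool, wc // pool, pool).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(small, pool, axis=0), pool, axis=1)
    coarse = np.pad(coarse, ((0, h - hc), (0, w - wc)), mode="edge")
    # Blend mask: 1 at the fixation, decaying with squared distance.
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - fixation[0]) ** 2 + (xs - fixation[1]) ** 2
    mask = np.exp(-d2 / (2.0 * sigma ** 2))
    return mask * image + (1.0 - mask) * coarse
```

An attention mechanism trained under such a constraint must choose fixations that reveal task-relevant detail, which is what drives the human-like scanpaths reported above.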
Enhancing Trip Distribution Using Twitter Data: Comparison of Gravity and Neural Networks
Predicting human mobility within cities is an important task in urban and transportation planning. With the vast amount of digital traces available through social media platforms, we investigate the potential application of such data in predicting commuter trip distribution at a small spatial scale. We develop back-propagation (BP) neural network and gravity models using both traditional and Twitter data in New York City to explore their performance and compare the results. Our results suggest the potential of using social media data in transportation modeling to improve prediction accuracy. Adding Twitter data to both models improved performance, with a slight decrease in root mean square error (RMSE) and an increase in R-squared (R2) value. The findings indicate that the traditional gravity models outperform neural networks in terms of lower RMSE. However, the R2 results show higher values for neural networks, suggesting a better fit between the real and predicted outputs. Given the complex nature of transportation networks and the various possible reasons for the neural networks' limited performance on these data, we conclude that more research is needed to explore the performance of such models with additional inputs.
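A gravity model of the kind compared above predicts trips between zones in proportion to their sizes and inversely with distance, and is scored with RMSE and R². The unconstrained power-law form and the parameter values below are one common variant, chosen for illustration; the paper's calibration details may differ.

```python
import numpy as np

def gravity_trips(origins, destinations, dist, k=1.0, beta=2.0):
    """Unconstrained gravity model: T_ij = k * O_i * D_j / d_ij**beta.
    k and beta would normally be calibrated against observed trips;
    the defaults here are illustrative."""
    return k * np.outer(origins, destinations) / np.power(dist, beta)

def rmse(pred, obs):
    """Root mean square error between predicted and observed trip matrices."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r_squared(pred, obs):
    """Coefficient of determination (R2) of predictions against observations."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

As the abstract notes, the two metrics can disagree: RMSE penalizes large absolute errors, while R² measures how much of the variance in observed flows the predictions explain, so a model can win on one and lose on the other.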
Relightable Neural Human Assets from Multi-view Gradient Illuminations
Human modeling and relighting are two fundamental problems in computer vision
and graphics, where high-quality datasets can largely facilitate related
research. However, most existing human datasets only provide multi-view human
images captured under the same illumination. Although valuable for modeling
tasks, they are not readily used in relighting problems. To promote research in
both fields, in this paper, we present UltraStage, a new 3D human dataset that
contains more than 2,000 high-quality human assets captured under both
multi-view and multi-illumination settings. Specifically, for each example, we
provide 32 surrounding views illuminated with one white light and two gradient
illuminations. In addition to regular multi-view images, gradient illuminations
help recover detailed surface normal and spatially-varying material maps,
enabling various relighting applications. Inspired by recent advances in neural
representation, we further interpret each example into a neural human asset
which allows novel view synthesis under arbitrary lighting conditions. We show
our neural human assets can achieve extremely high capture performance and are
capable of representing fine details such as facial wrinkles and cloth folds.
We also validate UltraStage in single image relighting tasks, training neural
networks with virtual relighted data from neural assets and demonstrating
realistic rendering improvements over prior art. UltraStage will be publicly
available to the community to stimulate significant future developments in
various human modeling and rendering tasks. The dataset is available at
https://miaoing.github.io/RNHA
Comment: Project page: https://miaoing.github.io/RNH
Weakly-Supervised Action Segmentation with Iterative Soft Boundary Assignment
In this work, we address the task of weakly-supervised human action
segmentation in long, untrimmed videos. Recent methods have relied on expensive
learning models, such as Recurrent Neural Networks (RNN) and Hidden Markov
Models (HMM). However, these methods incur a high computational cost and thus
cannot be deployed at large scale. To overcome these limitations, the
keys to our design are efficiency and scalability. We propose a novel action
modeling framework, which consists of a new temporal convolutional network,
named Temporal Convolutional Feature Pyramid Network (TCFPN), for predicting
frame-wise action labels, and a novel training strategy for weakly-supervised
sequence modeling, named Iterative Soft Boundary Assignment (ISBA), to align
action sequences and update the network in an iterative fashion. The proposed
framework is evaluated on two benchmark datasets, Breakfast and Hollywood
Extended, with four different evaluation metrics. Extensive experimental
results show that our methods achieve competitive or superior performance to
state-of-the-art methods.
Comment: CVPR 201
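The soft-boundary idea in ISBA can be sketched as a label-assignment step: frames deep inside a segment get one-hot action labels, while frames near an estimated boundary get a blend of the two adjacent action labels, which the network is then retrained on. The linear blend and the `width` parameter below are illustrative assumptions; the paper's exact assignment rule differs in detail.

```python
import numpy as np

def soft_boundary_targets(boundaries, labels, n_frames, n_classes, width=5):
    """Sketch of soft boundary assignment: one-hot labels inside segments,
    linearly blended labels within `width` frames of each boundary.
    Illustrative only; not the exact ISBA update rule."""
    targets = np.zeros((n_frames, n_classes))
    bounds = [0] + list(boundaries) + [n_frames]
    # One-hot labels per segment, following the ordered action transcript.
    for s, (lo, hi) in enumerate(zip(bounds[:-1], bounds[1:])):
        targets[lo:hi, labels[s]] = 1.0
    # Soften labels around each boundary between adjacent actions.
    for b, (left, right) in zip(boundaries, zip(labels[:-1], labels[1:])):
        for t in range(max(0, b - width), min(n_frames, b + width)):
            w = (t - (b - width)) / (2.0 * width)  # 0 at left edge, ~1 at right
            targets[t] = 0.0
            targets[t, left] = 1.0 - w
            targets[t, right] = w
    return targets
```

Iterating this assignment with retraining lets boundary estimates drift toward frames where the network itself is uncertain, without frame-level supervision.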
Understanding and Comparing Deep Neural Networks for Age and Gender Classification
Recently, deep neural networks have demonstrated excellent performance in
recognizing the age and gender on human face images. However, these models were
applied in a black-box manner with no information provided about which facial
features are actually used for prediction and how these features depend on
image preprocessing, model initialization and architecture choice. We present a
study investigating these different effects.
In detail, our work compares four popular neural network architectures,
studies the effect of pretraining, evaluates the robustness of the considered
alignment preprocessing schemes via cross-method test-set swapping, and intuitively
visualizes the model's prediction strategies in given preprocessing conditions
using the recent Layer-wise Relevance Propagation (LRP) algorithm. Our
evaluations on the challenging Adience benchmark show that suitable parameter
initialization leads to a holistic perception of the input, compensating
artefactual data representations. With a combination of simple preprocessing
steps, we reach state-of-the-art performance in gender recognition.
Comment: 8 pages, 5 figures, 5 tables. Presented at ICCV 2017 Workshop: 7th
IEEE International Workshop on Analysis and Modeling of Faces and Gesture
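The Layer-wise Relevance Propagation (LRP) algorithm used above redistributes a network's output score backwards, layer by layer, in proportion to each input's contribution. A single-dense-layer version of the common epsilon rule is sketched below; a full implementation applies such rules through every layer of the network, and the epsilon stabilizer value is an illustrative choice.

```python
import numpy as np

def lrp_epsilon_dense(x, W, b, relevance_out, eps=1e-6):
    """Epsilon rule of LRP for one dense layer z = W @ x + b: each output
    neuron's relevance is redistributed to inputs in proportion to their
    contributions W_ij * x_j. Single-layer sketch only."""
    z = W @ x + b                      # pre-activations, shape (out,)
    denom = z + eps * np.sign(z)       # stabilized denominator
    s = relevance_out / denom          # per-output scaling, shape (out,)
    return x * (W.T @ s)               # relevance of each input, shape (in,)
```

With zero bias, the rule approximately conserves relevance (the input relevances sum to the output relevance), which is what makes the resulting heatmaps interpretable as a decomposition of the prediction.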