Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting
This paper proposes a weakly- and self-supervised deep convolutional neural
network (WSSDCNN) for content-aware image retargeting. Our network takes a
source image and a target aspect ratio, and then directly outputs a retargeted
image. Retargeting is performed through a shift map, which is a pixel-wise
mapping from the source to the target grid. Our method implicitly learns an
attention map, which leads to a content-aware shift map for image retargeting.
As a result, discriminative parts in an image are preserved, while background
regions are adjusted seamlessly. In the training phase, pairs of an image and
its image-level annotation are used to compute content and structure losses. We
demonstrate the effectiveness of our proposed method for a retargeting
application with insightful analyses.
Comment: 10 pages, 11 figures. To appear in ICCV 2017, Spotlight Presentation
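The shift-map formulation above can be illustrated with a small NumPy sketch. This is a hypothetical toy, not the WSSDCNN code: `apply_shift_map` (an assumed helper name) resamples a source image through a per-pixel column mapping; in the paper, a learned, attention-driven network would produce a content-aware version of such a map.

```python
import numpy as np

def apply_shift_map(src: np.ndarray, shift: np.ndarray) -> np.ndarray:
    """Resample a source image through a per-pixel horizontal shift map.

    src   : (H, W_src) grayscale image.
    shift : (H, W_tgt) map giving, for each target pixel, the source
            column it is pulled from -- a simplified stand-in for the
            pixel-wise source-to-target grid mapping described above.
    """
    h, w_tgt = shift.shape
    rows = np.arange(h)[:, None]                # broadcast row indices
    cols = np.clip(shift, 0, src.shape[1] - 1)  # stay inside the source grid
    return src[rows, cols.astype(int)]

# A uniform shift map is just a nearest-neighbour rescale to the target
# width; a content-aware map would instead compress background columns
# more than discriminative ones.
src = np.arange(12, dtype=float).reshape(3, 4)       # 3x4 source image
uniform = np.tile(np.linspace(0, 3, num=2), (3, 1))  # retarget to width 2
out = apply_shift_map(src, uniform)
print(out.shape)  # (3, 2)
```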
Critical Line in Random Threshold Networks with Inhomogeneous Thresholds
We calculate analytically the critical connectivity $K_c$ of Random Threshold
Networks (RTN) for homogeneous and inhomogeneous thresholds, and confirm the
results by numerical simulations. We find a super-linear increase of $K_c$ with
the (average) absolute threshold $|h|$, which approaches $K_c(|h|) \sim
h^2/(2\ln|h|)$ for large $|h|$, and show that this asymptotic scaling is
universal for RTN with Poissonian distributed connectivity and threshold
distributions with a variance that grows slower than $h^2$. Interestingly, we
find that an inhomogeneous distribution of thresholds leads to increased
propagation of perturbations for sparsely connected networks, while for densely
connected networks damage is reduced; the cross-over point yields a novel,
characteristic connectivity $K_d$ that has no counterpart in Boolean networks.
Last, local correlations between node thresholds and in-degree are introduced.
Here, numerical simulations show that even weak (anti-)correlations can lead to
a transition from ordered to chaotic dynamics, and vice versa. It is shown that
the naive mean-field assumption typical for the annealed approximation leads to
false predictions in this case, since correlations between thresholds and
out-degree that emerge as a side-effect strongly modify damage propagation
behavior.
Comment: 18 figures, 17 pages, RevTeX
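The super-linear growth of the critical connectivity with the absolute threshold can be checked numerically. This is an illustrative sketch assuming the asymptotic form $K_c(|h|) \approx h^2/(2\ln|h|)$ for large $|h|$; the function name and the chosen threshold values are not from the paper.

```python
import math

def kc_asymptotic(h: float) -> float:
    """Asymptotic critical connectivity of a random threshold network,
    assuming the large-|h| scaling K_c(|h|) ~ h^2 / (2 ln|h|)."""
    return h * h / (2.0 * math.log(h))

# Super-linear growth: doubling |h| more than doubles K_c.
k4, k8 = kc_asymptotic(4.0), kc_asymptotic(8.0)
print(k8 / k4)  # greater than 2, i.e. faster than linear in |h|
```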
Calibrated Prediction Intervals for Neural Network Regressors
Ongoing developments in neural network models are continually advancing the
state of the art in terms of system accuracy. However, the predicted labels
should not be regarded as the only core output; also important is a
well-calibrated estimate of the prediction uncertainty. Such estimates and
their calibration are critical in many practical applications. Despite their
aforementioned accuracy advantage, contemporary neural networks are generally
poorly calibrated and as such do not produce reliable output probability
estimates. Further, while post-processing
calibration solutions can be found in the relevant literature, these tend to be
for systems performing classification. In this regard, we herein present two
novel methods for acquiring calibrated prediction intervals for neural network
regressors: empirical calibration and temperature scaling. In experiments using
different regression tasks from the audio and computer vision domains, we find
that both our proposed methods are indeed capable of producing calibrated
prediction intervals for neural network regressors with any desired confidence
level, a finding that is consistent across all datasets and neural network
architectures we experimented with. We also derive a practical recommendation
for producing more accurate calibrated prediction intervals. The source code
implementing our proposed methods for computing calibrated prediction
intervals is publicly available.
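A minimal sketch of the empirical-calibration idea for regression intervals, assuming a single global scale applied to predicted standard deviations on a held-out set. The helper name and the synthetic data are illustrative, not the paper's released code.

```python
import numpy as np

def calibrate_scale(mu, sigma, y, confidence=0.9):
    """Empirical calibration sketch: choose one scale s for the predicted
    std-devs so the interval mu +/- s*sigma covers `confidence` of the
    held-out targets y."""
    z = np.abs(y - mu) / sigma          # normalised absolute residuals
    return np.quantile(z, confidence)   # s with P(|z| <= s) ~= confidence

rng = np.random.default_rng(0)
mu = np.zeros(10_000)                   # predicted means
sigma = np.full(10_000, 0.5)            # deliberately over-confident std-devs
y = rng.normal(0.0, 1.0, size=10_000)   # true noise std is 1.0
s = calibrate_scale(mu, sigma, y, confidence=0.9)
covered = np.mean(np.abs(y - mu) <= s * sigma)
print(round(covered, 2))  # ~0.90 after calibration
```

In practice the scale would be fitted on a validation split and applied to test-time intervals; the same recipe works for any desired confidence level, matching the abstract's claim.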
Damage Spreading and Criticality in Finite Random Dynamical Networks
We systematically study and compare damage spreading at the sparse
percolation (SP) limit for random Boolean and threshold networks with
perturbations that are independent of the network size $N$. This limit is
relevant to information and damage propagation in many technological and
natural networks. Using finite size scaling, we identify a new characteristic
connectivity $K_s$, at which the average number of damaged nodes $\bar{d}$,
after a large number of dynamical updates, is independent of $N$. Based on
marginal damage spreading, we determine the critical connectivity
$K_c^{sparse}(N)$ for finite $N$ at the SP limit and show that it
systematically deviates from $K_c$, established by the annealed approximation,
even for large system sizes.
even for large system sizes. Our findings can potentially explain the results
recently obtained for gene regulatory networks and have important implications
for the evolution of dynamical networks that solve specific computational or
functional tasks.
Comment: 4 pages, 4 eps figures
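The damage-spreading setup can be illustrated with a toy simulation (not the authors' code, and with hypothetical parameter choices): two copies of a random Boolean network that differ in a single node are updated synchronously, and the Hamming distance between them gives the number of damaged nodes.

```python
import numpy as np

def damage_after_updates(n=200, k=2, steps=50, seed=0):
    """Toy damage-spreading run for a random Boolean network with n nodes
    and k random inputs per node: perturb one node and return the Hamming
    distance between the two trajectories after `steps` synchronous updates."""
    rng = np.random.default_rng(seed)
    inputs = rng.integers(0, n, size=(n, k))       # k random inputs per node
    tables = rng.integers(0, 2, size=(n, 2 ** k))  # random Boolean rules
    x = rng.integers(0, 2, size=n)                 # unperturbed copy
    y = x.copy()
    y[0] ^= 1                                      # single-node perturbation
    powers = 2 ** np.arange(k)
    for _ in range(steps):
        # Each node looks up its rule table at the index encoded by its inputs.
        x = tables[np.arange(n), x[inputs] @ powers]
        y = tables[np.arange(n), y[inputs] @ powers]
    return int(np.sum(x != y))                     # damaged nodes

d = damage_after_updates()
print(d)
```

Sweeping the connectivity k while holding the single-node perturbation fixed (independent of n) is the SP-limit experiment sketched in the abstract; averaging the damage over many realizations and system sizes would locate the characteristic connectivity.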