Modeling Crowd Turbulence by Many-Particle Simulations
A recent study [D. Helbing, A. Johansson and H. Z. Al-Abideen, Phys. Rev. E
75, 046109 (2007)] has revealed a "turbulent" state of pedestrian
flows, which is characterized by sudden displacements and causes the falling
and trampling of people. However, turbulent crowd motion is not reproduced well
by current many-particle models due to their insufficient representation of the
local interactions in areas of extreme densities. In this contribution, we
extend the repulsive force term of the social force model to reproduce crowd
turbulence. We perform numerical simulations of pedestrians moving through a
bottleneck area with this new model. The transitions from laminar to
stop-and-go and turbulent flows are observed. The empirical features
characterizing crowd turbulence, such as the structure function and the
probability density function of velocity increments, are reproduced well, i.e.,
they are compatible with an analysis of video data recorded during the annual
Muslim pilgrimage.
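To make the modeling idea concrete, the following is a minimal sketch of a social-force-style update in the spirit of the Helbing-Molnár model: a relaxation term toward the desired velocity plus pairwise exponential repulsion. The additional contact term that activates at extreme densities is a hypothetical placeholder, not the paper's actual extension of the repulsive force, and all parameter values are illustrative.

```python
import numpy as np

def social_force_step(pos, vel, goal_dir, dt=0.01, mass=80.0,
                      v0=1.34, tau=0.5, radius=0.25,
                      A=2000.0, B=0.08, k_contact=1.2e5):
    """One explicit Euler step for N pedestrians (illustrative parameters).

    pos, vel : (N, 2) positions [m] and velocities [m/s]
    goal_dir : (N, 2) unit vectors toward each pedestrian's target
    """
    n = len(pos)
    acc = (v0 * goal_dir - vel) / tau            # driving (relaxation) term
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d_vec = pos[i] - pos[j]
            d = np.linalg.norm(d_vec) + 1e-12
            n_ij = d_vec / d                     # unit vector pointing j -> i
            overlap = 2 * radius - d             # > 0 when bodies touch
            force = A * np.exp(overlap / B) * n_ij   # exponential social repulsion
            if overlap > 0:                      # hypothetical contact term for
                force += k_contact * overlap * n_ij  # extreme-density situations
            acc[i] += force / mass
    vel_new = vel + acc * dt
    pos_new = pos + vel_new * dt
    return pos_new, vel_new
```

In a bottleneck simulation this step would be iterated for every pedestrian, with walls contributing additional repulsive terms of the same form.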
Efficient Randomized Algorithms for the Fixed-Precision Low-Rank Matrix Approximation
Randomized algorithms for low-rank matrix approximation are investigated,
with the emphasis on the fixed-precision problem and computational efficiency
for handling large matrices. The algorithms are based on the so-called QB
factorization, where Q has orthonormal columns. Firstly, a mechanism for
calculating the approximation error in the Frobenius norm is proposed, which
enables efficient adaptive rank determination for large and/or sparse matrices.
It can be combined with any QB-form factorization algorithm in which B's rows
are incrementally generated. Based on the blocked randQB algorithm by P.-G.
Martinsson and S. Voronin, this results in an algorithm called randQB_EI. Then,
we further revise the algorithm to obtain a pass-efficient algorithm, randQB_FP,
which is mathematically equivalent to the existing randQB algorithms and also
suitable for the fixed-precision problem. In particular, randQB_FP can serve as
a single-pass algorithm for calculating the leading singular values under
certain conditions. With large and/or sparse test matrices, we have empirically
validated the merits of the proposed techniques, which exhibit remarkable
speedup and memory savings over the blocked randQB algorithm. We have also
demonstrated that the single-pass algorithm derived from randQB_FP is much more
accurate than an existing single-pass algorithm. Moreover, with data from a
scenic image and an information retrieval application, we have shown the
advantages of the proposed algorithms over the adaptive range finder algorithm
for solving the fixed-precision problem.
Comment: 21 pages, 10 figures
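To illustrate the error-indicator mechanism, here is a sketch of a blocked randomized QB factorization that tracks the approximation error via ||A - QB||_F^2 = ||A||_F^2 - ||B||_F^2, which holds because Q has orthonormal columns and B = Q^T A, so the residual can be updated from B's rows as they are generated. The block size, the single reorthogonalization pass, and the absence of power iterations are simplifications; this is not the exact randQB_EI or randQB_FP procedure.

```python
import numpy as np

def randqb_adaptive(A, tol, block=16, max_rank=None):
    """Blocked randomized QB with Frobenius-norm error tracking (sketch)."""
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    Qs, Bs = [], []
    err2 = np.linalg.norm(A, 'fro') ** 2          # running ||A - QB||_F^2
    rng = np.random.default_rng(0)
    rank = 0
    while True:
        Y = A @ rng.standard_normal((n, block))   # sample the range of A
        for Qj in Qs:                             # project out the basis
            Y -= Qj @ (Qj.T @ Y)                  # built so far
        Qi, _ = np.linalg.qr(Y)
        for Qj in Qs:                             # one reorthogonalization pass
            Qi -= Qj @ (Qj.T @ Qi)
        Qi, _ = np.linalg.qr(Qi)
        Bi = Qi.T @ A                             # new rows of B
        Qs.append(Qi)
        Bs.append(Bi)
        err2 -= np.linalg.norm(Bi, 'fro') ** 2    # error indicator update
        rank += block
        if err2 < tol ** 2 or rank >= max_rank:
            break
    return np.hstack(Qs), np.vstack(Bs)
```

Because the stopping test only needs the running indicator err2, the target rank never has to be specified in advance, which is precisely what the fixed-precision setting requires.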
A Unified Approximation Framework for Compressing and Accelerating Deep Neural Networks
Deep neural networks (DNNs) have achieved significant success in a variety of
real-world applications, e.g., image classification. However, the huge number of
parameters in these networks limits their efficiency, due to the large model
size and the intensive computation involved. To address this issue, various
approximation techniques have been investigated, which seek a lightweight
network with little performance degradation in exchange for a smaller
model size or faster inference. Both low-rankness and sparsity are appealing
properties for network approximation. In this paper, we propose a unified
framework to compress convolutional neural networks (CNNs) by combining
these two properties, while taking the nonlinear activation into consideration.
Each layer in the network is approximated by the sum of a structured sparse
component and a low-rank component, which is formulated as an optimization
problem. Then, an extended version of the alternating direction method of
multipliers (ADMM) with guaranteed convergence is presented to solve the
relaxed optimization problem. Experiments are carried out on VGG-16, AlexNet
and GoogLeNet with large image classification datasets. The results show
improvements over previous work in terms of accuracy degradation, compression
rate and speedup ratio. The proposed method is able to compress the model
substantially (with up to a 4.9x reduction in parameters) at the cost of little
or no loss in accuracy.
Comment: 8 pages, 5 figures, 6 tables
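As a rough illustration of the layer-wise decomposition, the sketch below splits a reshaped weight matrix W into a low-rank component L and a sparse component S by simple alternating projections: a truncated SVD for L and keep-largest-k hard thresholding for S. The paper instead solves a relaxed formulation with an extended ADMM, enforces structured sparsity, and takes the nonlinear activation into account; the rank and sparsity parameters below are hypothetical.

```python
import numpy as np

def lowrank_plus_sparse(W, rank=8, keep_ratio=0.05, n_iter=20):
    """Alternate projections: L = best rank-r fit of W - S,
    S = the largest-magnitude entries of the residual W - L."""
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    k = int(keep_ratio * W.size)                 # number of nonzeros kept in S
    for _ in range(n_iter):
        # low-rank step: truncated SVD of the residual W - S
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # sparse step: keep only the k largest-magnitude residual entries
        R = W - L
        thresh = np.partition(np.abs(R).ravel(), -k)[-k] if k > 0 else np.inf
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

# Example: a convolutional kernel reshaped to 2-D before decomposition
# (shape and values are made up for illustration).
W = np.random.randn(256, 3 * 3 * 128)            # (out_ch, k*k*in_ch)
L, S = lowrank_plus_sparse(W)
print(np.linalg.norm(W - L - S) / np.linalg.norm(W))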