Global Well-Posedness of the Landau-Lifshitz-Gilbert equation for initial data in Morrey space
We establish the global well-posedness of the Landau-Lifshitz-Gilbert
equation for any initial data whose gradient belongs to a Morrey space with
sufficiently small norm. The method is based on a priori estimates for a
dissipative Schr\"odinger equation of Ginzburg-Landau type, obtained from the
Landau-Lifshitz-Gilbert equation by the moving frame technique.
Comment: 21 pages
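For reference, the equation in question is typically written in the following standard form (an assumed normalization on our part; the paper's exact conventions may differ):

```latex
% A standard form of the Landau-Lifshitz-Gilbert equation for a unit
% director field u(x,t), with dissipative (Gilbert damping) term beta > 0:
\[
  \partial_t u \;=\; \alpha\, u \times \Delta u
              \;-\; \beta\, u \times (u \times \Delta u),
  \qquad |u| = 1,\quad \beta > 0.
\]
% The damping term makes the equation dissipative; rewriting it in a
% moving frame yields a complex Ginzburg-Landau type equation.
```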
Forward self-similar solutions to the viscoelastic Navier-Stokes equation with damping
Motivated by \cite{JS}, we prove that there exists a global, forward
self-similar solution to the viscoelastic Navier-Stokes equation with damping
that is smooth for $t>0$, for any initial data that is homogeneous of degree
$-1$.
Comment: 30 pages
Global finite energy weak solutions to the compressible nematic liquid crystal flow in dimension three
In this paper, we consider the initial and boundary value problem for a
simplified compressible nematic liquid crystal flow in dimension three. We
establish the existence of global weak solutions, provided the initial
orientational director field lies in the upper hemisphere $\mathbb{S}^2_+$.
Comment: 27 pages
Adaptive Stochastic Variance Reduction for Subsampled Newton Method with Cubic Regularization
The cubic regularized Newton method of Nesterov and Polyak has become
increasingly popular for non-convex optimization because of its ability to
find an approximate local solution with a second-order guarantee. Several
recent works have extended this method to minimizing the average of $N$ smooth
functions by replacing the exact gradients and Hessians with subsampled
approximations. It has been shown that the total Hessian sample complexity can
be reduced to be sublinear in $N$ per iteration by leveraging stochastic
variance reduction techniques. We present an adaptive variance reduction
scheme for the subsampled Newton method with cubic regularization, and show
that the expected
Hessian sample complexity is $O(N + N^{2/3}\epsilon^{-3/2})$ for finding an
$(\epsilon,\epsilon^{1/2})$-approximate local solution (in terms of first- and
second-order guarantees, respectively). Moreover, we show that the same
Hessian sample complexity is retained with fixed sample sizes if exact
gradients are used.
Our analysis differs from previous works in that we do not rely on
high-probability bounds based on matrix concentration inequalities. Instead,
we derive and utilize bounds on the third- and fourth-order moments of the
average of random matrices, which may be of independent interest.
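As a rough illustration of the kind of update such methods build on, here is a minimal sketch (our simplification, not the paper's algorithm) of an SVRG-style variance-reduced Hessian estimate plugged into one cubic-regularized Newton step; the snapshot scheme, batch sizes, and subproblem solver are all assumptions:

```python
import numpy as np

def vr_hessian(hess_i, x, x_snap, H_snap, batch):
    # SVRG-style estimate: subsampled Hessian difference between the current
    # point and a snapshot, corrected by the full Hessian at the snapshot.
    diff = sum(hess_i(i, x) - hess_i(i, x_snap) for i in batch) / len(batch)
    return H_snap + diff

def cubic_newton_step(x, g, H, M, inner_iters=200, lr=1e-2):
    # Approximately minimize the cubic model
    #   m(s) = g^T s + (1/2) s^T H s + (M/6) ||s||^3
    # by gradient descent on s (practical solvers use Krylov/Lanczos methods).
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        s -= lr * (g + H @ s + 0.5 * M * np.linalg.norm(s) * s)
    return x + s
```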
Learning from Synthetic Data for Crowd Counting in the Wild
Recently, counting the number of people in crowd scenes has become a hot topic
because of its widespread applications (e.g., video surveillance, public
security). It is a difficult task in the wild: changeable environments and
widely varying crowd sizes prevent current methods from working well. In
addition, due to scarce data, many methods suffer from over-fitting to varying
degrees. To remedy these two problems, we first develop a data collector and
labeler, which can generate synthetic crowd scenes and simultaneously annotate
them without any manual labor. Based on it, we build a large-scale, diverse
synthetic dataset. Second, we propose two schemes that exploit the synthetic
data to boost the performance of crowd counting in the wild: 1) pretrain a
crowd counter on the synthetic data, then finetune it using the real data,
which significantly improves the model's performance on real data; 2) propose
a crowd counting method via domain adaptation, which can free humans from
heavy data annotation. Extensive experiments show that the first method
achieves state-of-the-art performance on four real datasets, and the second
outperforms our baselines. The dataset and source code are available at
https://gjy3035.github.io/GCC-CL/.
Comment: Accepted by CVPR 2019
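A minimal PyTorch-style sketch of the first scheme (pretrain on synthetic, then finetune on real); the tiny model, the MSE density-map loss, and the loaders are placeholders of ours, not the paper's actual architecture or training setup:

```python
import torch

def train(model, loader, epochs, lr):
    # Density-map regression: MSE between predicted and ground-truth maps,
    # a common loss choice for CNN crowd counters.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for images, density in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), density)
            loss.backward()
            opt.step()

# A toy stand-in for a real crowd counter (hypothetical architecture).
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                            torch.nn.ReLU(),
                            torch.nn.Conv2d(8, 1, 3, padding=1))

# Scheme 1: pretrain on synthetic data, then finetune on real data with a
# smaller learning rate (synthetic_loader / real_loader are assumed to yield
# (image, density-map) batches).
train(model, synthetic_loader, epochs=50, lr=1e-4)
train(model, real_loader, epochs=20, lr=1e-5)
```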
NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization
In the last decade, crowd counting and localization have attracted much
attention from researchers due to their widespread applications, including
crowd monitoring, public safety, space design, etc. Many Convolutional Neural
Networks (CNNs) have been designed to tackle this task. However, currently
released datasets are so small that they cannot meet the needs of supervised
CNN-based algorithms. To remedy this problem, we construct a large-scale
congested crowd counting and localization dataset, NWPU-Crowd, consisting of
5,109 images with a total of 2,133,375 heads annotated with points and boxes.
Compared with other real-world datasets, it contains various illumination
scenes and has the largest density range (0 to 20,033). Besides, a benchmark
website is developed for impartially evaluating different methods, allowing
researchers to submit test-set results. Based on the proposed dataset, we
further describe the data characteristics, evaluate the performance of some
mainstream state-of-the-art (SOTA) methods, and analyze the new problems that
arise on the new data. The benchmark is deployed at
\url{https://www.crowdbenchmark.com/}, and the dataset/code/models/results are
available at \url{https://gjy3035.github.io/NWPU-Crowd-Sample-Code/}.
Comment: Accepted by T-PAMI
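Benchmarks of this kind are conventionally scored with MAE and RMSE over per-image counts; a small sketch of those standard metrics (our assumption about the protocol, for illustration; the site's exact metrics may differ):

```python
import numpy as np

def count_metrics(pred_counts, gt_counts):
    # Standard crowd-counting metrics over per-image counts:
    # mean absolute error and root mean squared error.
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.mean(np.abs(pred - gt))
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return mae, rmse

print(count_metrics([120, 8, 510], [115, 10, 498]))  # toy example
```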
Anisotropic Polarizability of Ultracold Ground-state NaRb Molecules
We report measurements of the ac polarizabilities of ultracold ground-state
NaRb molecules. While the polarizability of the ground rotational state is
isotropic, that of the first excited rotational state is anisotropic and
depends strongly on the light polarization angle. We obtain both
polarizabilities precisely by combining trap oscillation frequency
measurements with microwave-driven high-resolution rotational spectroscopy.
With an optimized combination of light polarization angle and intensity, the
nonuniformity of the differential ac Stark shift between the two rotational
states is minimized and the longest rotational coherence time is observed.
Comment: 6 pages, 5 figures
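For context, the angle dependence has the standard rigid-rotor form below (textbook relations we assume, not the paper's specific values), with theta the angle between the light polarization and the quantization axis:

```latex
% Isotropic and anisotropic combinations of the molecular polarizability:
\[
  \bar{\alpha} = \tfrac{1}{3}\left(\alpha_{\parallel} + 2\alpha_{\perp}\right),
  \qquad
  \Delta\alpha = \alpha_{\parallel} - \alpha_{\perp}.
\]
% The ground rotational state sees only \bar{\alpha} (isotropic), while,
% neglecting rotational-state mixing, the |N=1, m_N=0> state acquires an
% angle-dependent effective polarizability:
\[
  \alpha_{|0,0\rangle} = \bar{\alpha},
  \qquad
  \alpha_{|1,0\rangle}(\theta) \approx \bar{\alpha}
    + \tfrac{2}{5}\,\Delta\alpha\, P_2(\cos\theta).
\]
```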
Stochastic Variance-Reduced Prox-Linear Algorithms for Nonconvex Composite Optimization
We consider minimization of composite functions of the form $f(g(x)) + h(x)$,
where $f$ and $h$ are convex functions (which can be nonsmooth) and $g$ is a
smooth vector mapping. In addition, we assume that $g$ is the average of a
finite number of component mappings or the expectation over a family of random
component mappings. We propose a class of stochastic variance-reduced
prox-linear algorithms for solving such problems and bound their sample
complexities for finding an $\epsilon$-stationary point, in terms of the total
number of evaluations of the component mappings and their Jacobians. When $g$
is a finite average of $N$ components, we obtain a single sample complexity
bound covering both mapping and Jacobian evaluations. When $g$ is a general
expectation, we obtain separate sample complexity bounds for the component
mappings and their Jacobians. If in addition $f$ is smooth, improved sample
complexities are derived for $g$ being a finite average and a general
expectation respectively, again for both component mapping and Jacobian
evaluations.
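To make the prox-linear idea concrete, here is a minimal sketch of one step for $\min_x f(g(x)) + h(x)$: linearize $g$ at the current point and solve the resulting convex subproblem with a proximal term. This is our illustration, not the paper's algorithm: it assumes a smooth $f$ with gradient `f_grad` and a proximal operator `prox_h` for $h$; a variance-reduced variant would replace the exact `g_val` and `J` with SVRG/SARAH-style estimates.

```python
import numpy as np

def prox_linear_step(x, g_val, J, f_grad, prox_h, lam, iters=300, lr=1e-2):
    # One prox-linear step: approximately minimize the convex local model
    #   m(u) = f(g(x) + J (u - x)) + h(u) + ||u - x||^2 / (2 * lam)
    # by proximal gradient descent. Here g_val = g(x), J = g'(x),
    # f_grad is the gradient of f, and prox_h(v, t) is the prox of t*h.
    u = x.copy()
    for _ in range(iters):
        grad_smooth = J.T @ f_grad(g_val + J @ (u - x)) + (u - x) / lam
        u = prox_h(u - lr * grad_smooth, lr)
    return u
```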
Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
We consider multi-level composite optimization problems where each mapping in
the composition is the expectation over a family of random smooth mappings or
the sum of some finite number of smooth mappings. We present a normalized
proximal approximate gradient (NPAG) method where the approximate gradients are
obtained via nested stochastic variance reduction. In order to find an
approximate stationary point, where the expected norm of the gradient mapping
is less than $\epsilon$, the total sample complexity of our method is
$\mathcal{O}(\epsilon^{-3})$ in the expectation case and
$\mathcal{O}(N + \sqrt{N}\,\epsilon^{-2})$ in the finite-sum case, where $N$ is
the total number of functions across all composition levels. In addition, the
dependence of our total sample complexity on the number of composition levels
is polynomial, rather than exponential as in previous work.
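For intuition in the two-level case, the gradient of $f_2(f_1(x))$ is $f_1'(x)^{\top}\nabla f_2(f_1(x))$, and nested variance reduction maintains recursively corrected estimates of the inner value and Jacobian along the iterates. A toy sketch of such SARAH-style corrections (our two-level simplification with plain gradient steps, not the paper's NPAG method, which also normalizes steps and handles proximal terms):

```python
import numpy as np

def nested_vr_sgd(x0, f1_i, J1_i, grad_f2, sample, steps, lr):
    # f1_i(i, x): one random component of the inner mapping f1 at x;
    # J1_i(i, x): its Jacobian; sample(): draws a component index.
    x_old = x0.copy()
    i = sample()
    y = f1_i(i, x_old)   # running estimate of f1(x)
    J = J1_i(i, x_old)   # running estimate of f1'(x)
    x = x_old - lr * (J.T @ grad_f2(y))
    for _ in range(steps - 1):
        i = sample()
        y = y + f1_i(i, x) - f1_i(i, x_old)  # SARAH-style value correction
        J = J + J1_i(i, x) - J1_i(i, x_old)  # SARAH-style Jacobian correction
        x_old, x = x, x - lr * (J.T @ grad_f2(y))
    return x
```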
A Stochastic Composite Gradient Method with Incremental Variance Reduction
We consider the problem of minimizing the composition of a smooth (nonconvex)
function and a smooth vector mapping, where the inner mapping is in the form of
an expectation over some random variable or a finite sum. We propose a
stochastic composite gradient method that employs an incremental
variance-reduced estimator for both the inner vector mapping and its Jacobian.
We show that this method achieves the same orders of complexity as the best
known first-order methods for minimizing expected-value and finite-sum
nonconvex functions, despite the additional outer composition, which renders
the composite gradient estimator biased. This finding enables a much broader
range of applications in machine learning to benefit from the low complexity
of incremental variance-reduction methods.
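The bias mentioned here is easy to see: even with unbiased estimates of the inner mapping and its Jacobian, plugging them into the chain rule gives a biased gradient estimate, because the outer gradient is nonlinear (a standard observation, sketched below under an independence assumption):

```latex
% For F(x) = f(g(x)), the chain rule gives
%   \nabla F(x) = g'(x)^\top \nabla f(g(x)).
% With unbiased estimates \tilde{y} of g(x) and \tilde{J} of g'(x)
% (independent of each other),
%   E[\tilde{J}^\top \nabla f(\tilde{y})] = g'(x)^\top E[\nabla f(\tilde{y})],
% yet E[\nabla f(\tilde{y})] \neq \nabla f(E[\tilde{y}]) = \nabla f(g(x))
% in general, since \nabla f is nonlinear; reducing the variance of
% \tilde{y} shrinks this bias as well.
\[
  \mathbb{E}\!\left[\tilde{J}^{\top}\nabla f(\tilde{y})\right]
  \;\neq\; g'(x)^{\top}\nabla f\!\left(g(x)\right)
  \quad \text{in general.}
\]
```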