
    Global Well-Posedness of the Landau-Lifshitz-Gilbert equation for initial data in Morrey space

    We establish the global well-posedness of the Landau-Lifshitz-Gilbert equation in $\mathbb R^n$ for any initial data $\mathbf{m}_0\in H^1_*(\mathbb R^n,\mathbb S^2)$ whose gradient belongs to the Morrey space $M^{2,2}(\mathbb R^n)$ with small norm $\|\nabla \mathbf{m}_0\|_{M^{2,2}(\mathbb R^n)}$. The method is based on a priori estimates for a dissipative Schrödinger equation of Ginzburg-Landau type, obtained from the Landau-Lifshitz-Gilbert equation by the moving frame technique. Comment: 21 pages
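
    For context, a commonly used Landau-Lifshitz form of the equation and the shape of the moving-frame reduction are sketched below; the constants $a, b$ and the displayed lower-order structure are schematic, not taken from the paper:

```latex
% Landau-Lifshitz-Gilbert flow for m : R^n x (0,T) -> S^2, with damping a > 0:
\partial_t \mathbf{m} \;=\; a\,\big(\Delta \mathbf{m} + |\nabla \mathbf{m}|^2\,\mathbf{m}\big)
  \;-\; b\,\mathbf{m}\times\Delta \mathbf{m}.
% Expressing \nabla m in an orthonormal frame of the pull-back tangent bundle
% produces a complex-valued map u which, schematically, solves a dissipative
% Schrodinger (Ginzburg-Landau type) equation:
\partial_t u \;=\; (a + i\,b)\,\Delta u \;+\; \text{(lower-order terms in $u$ and $\nabla u$)}.
```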

    Forward self-similar solutions to the viscoelastic Navier-Stokes equation with damping

    Motivated by \cite{JS}, we prove that there exists a global, forward self-similar solution to the viscoelastic Navier-Stokes equation with damping that is smooth for $t>0$, for any initial data that is homogeneous of degree $-1$. Comment: 30 pages
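
    For orientation, "forward self-similar" refers to invariance under the parabolic scaling of the system; the sketch below shows the velocity component only and uses generic notation, not the paper's:

```latex
% Navier-Stokes scaling: if u solves the system, so does
u_\lambda(x,t) \;=\; \lambda\, u(\lambda x, \lambda^2 t), \qquad \lambda>0.
% A forward self-similar solution is one fixed by this scaling,
u(x,t) \;=\; \frac{1}{\sqrt{t}}\; U\!\left(\frac{x}{\sqrt{t}}\right), \qquad t>0,
% which matches exactly the initial data homogeneous of degree -1:
u_0(\lambda x) \;=\; \lambda^{-1} u_0(x).
```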

    Global finite energy weak solutions to the compressible nematic liquid crystal flow in dimension three

    In this paper, we consider the initial and boundary value problem for a simplified compressible nematic liquid crystal flow in $\Omega\subset\mathbb R^3$. We establish the existence of global weak solutions, provided the initial orientational director field $d_0$ lies in the hemisphere $\mathbb S^2_+$. Comment: 27 pages

    Adaptive Stochastic Variance Reduction for Subsampled Newton Method with Cubic Regularization

    The cubic regularized Newton method of Nesterov and Polyak has become increasingly popular for non-convex optimization because of its capability of finding an approximate local solution with a second-order guarantee. Several recent works extended this method to the setting of minimizing the average of $N$ smooth functions by replacing the exact gradients and Hessians with subsampled approximations. It has been shown that the total Hessian sample complexity can be reduced to be sublinear in $N$ per iteration by leveraging stochastic variance reduction techniques. We present an adaptive variance reduction scheme for the subsampled Newton method with cubic regularization, and show that the expected Hessian sample complexity is $O(N + N^{2/3}\epsilon^{-3/2})$ for finding an $(\epsilon,\epsilon^{1/2})$-approximate local solution (in terms of first and second-order guarantees, respectively). Moreover, we show that the same Hessian sample complexity is retained with fixed sample sizes if exact gradients are used. The techniques of our analysis differ from those of previous works in that we do not rely on high-probability bounds based on matrix concentration inequalities. Instead, we derive and utilize bounds on the 3rd and 4th order moments of the average of random matrices, which are of independent interest.
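
    As a rough illustration of the ingredients (exact gradients plus subsampled Hessians feeding a cubic-regularized model), here is a minimal sketch; the naive gradient-descent subproblem solver, the function names, and all parameters are placeholders, not the paper's adaptive scheme:

```python
import numpy as np

def cubic_subproblem(g, H, M, iters=200, lr=0.01):
    """Approximately minimize the cubic model
    m(s) = g^T s + 0.5 s^T H s + (M/6)||s||^3
    by plain gradient descent (a simple placeholder solver)."""
    s = np.zeros_like(g)
    for _ in range(iters):
        grad = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * grad
    return s

def subsampled_cubic_newton(x, full_grad, hess_i, n, sample_size, M, steps):
    """Sketch of a subsampled Newton method with cubic regularization:
    exact gradients, Hessians averaged over a random subsample of the
    n component functions (hess_i(x, i) returns the i-th Hessian)."""
    rng = np.random.default_rng(0)
    for _ in range(steps):
        g = full_grad(x)
        idx = rng.choice(n, size=sample_size, replace=False)
        H = np.mean([hess_i(x, i) for i in idx], axis=0)
        x = x + cubic_subproblem(g, H, M)
    return x
```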

    Learning from Synthetic Data for Crowd Counting in the Wild

    Recently, counting the number of people in crowd scenes has become a hot topic because of its widespread applications (e.g. video surveillance, public security). It is a difficult task in the wild: changeable environments and large variations in crowd size prevent current methods from working well. In addition, due to scarce data, many methods suffer from over-fitting to varying extents. To remedy these two problems, we first develop a data collector and labeler that can generate synthetic crowd scenes and annotate them automatically without any manpower. Based on it, we build a large-scale, diverse synthetic dataset. Second, we propose two schemes that exploit the synthetic data to boost the performance of crowd counting in the wild: 1) pretrain a crowd counter on the synthetic data, then finetune it on the real data, which significantly improves the model's performance on real data; 2) propose a crowd counting method via domain adaptation, which frees humans from heavy data annotation. Extensive experiments show that the first method achieves state-of-the-art performance on four real datasets, and the second outperforms our baselines. The dataset and source code are available at https://gjy3035.github.io/GCC-CL/. Comment: Accepted by CVPR2019
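
    A minimal sketch of the first scheme (pretrain on synthetic data, then finetune on real data); the model constructor, data loaders, and hyperparameters below are illustrative placeholders, not the paper's pipeline:

```python
import torch.nn as nn
import torch.optim as optim

def train(model, loader, epochs, lr):
    """One training phase: regress predicted density maps to targets."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for images, density_maps in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), density_maps)
            loss.backward()
            opt.step()
    return model

# Scheme 1: pretrain on synthetic crowd scenes, then finetune on real data,
# typically with a smaller learning rate in the finetuning phase.
model = make_counter()  # hypothetical density-map regression network
model = train(model, synthetic_loader, epochs=50, lr=1e-4)  # pretrain
model = train(model, real_loader, epochs=20, lr=1e-5)       # finetune
```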

    NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization

    In the last decade, crowd counting and localization have attracted much attention from researchers due to their widespread applications, including crowd monitoring, public safety, and space design. Many Convolutional Neural Networks (CNNs) have been designed to tackle this task. However, currently released datasets are so small-scale that they cannot meet the needs of supervised CNN-based algorithms. To remedy this problem, we construct a large-scale congested crowd counting and localization dataset, NWPU-Crowd, consisting of 5,109 images with a total of 2,133,375 heads annotated with points and boxes. Compared with other real-world datasets, it contains various illumination scenes and has the largest density range (0~20,033). Besides, a benchmark website is developed for impartially evaluating different methods, allowing researchers to submit results on the test set. Based on the proposed dataset, we further describe the data characteristics, evaluate the performance of some mainstream state-of-the-art (SOTA) methods, and analyze the new problems that arise on the new data. Moreover, the benchmark is deployed at \url{https://www.crowdbenchmark.com/}, and the dataset/code/models/results are available at \url{https://gjy3035.github.io/NWPU-Crowd-Sample-Code/}. Comment: Accepted by T-PAMI
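
    Counting benchmarks of this kind are usually scored with MAE and RMSE over per-image counts; the sketch below shows these standard metrics in generic form (the exact protocol is defined by the benchmark itself):

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """Standard crowd-counting metrics over per-image total counts.
    Note: the literature often labels the second metric 'MSE' even
    though it is a root mean squared error."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.mean(np.abs(pred - gt))
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return mae, rmse
```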

    Anisotropic Polarizability of Ultracold Ground-state $^{23}$Na$^{87}$Rb Molecules

    We report measurements of the ac polarizabilities of ultracold ground-state $^{23}$Na$^{87}$Rb molecules. While the polarizability of the ground rotational state $J = 0$ is isotropic, that of the first excited rotational state $J = 1$ is anisotropic and depends strongly on the light polarization angle. We obtain both polarizabilities precisely by combining trap oscillation frequency measurements with high-resolution microwave rotational spectroscopy. With an optimized combination of light polarization angle and intensity, the nonuniformity of the differential ac Stark shift between the two rotational states is minimized and the longest rotational coherence time is observed. Comment: 6 pages, 5 figures
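
    For background, the anisotropy can be traced to the rigid-rotor polarizability operator; the following is a textbook-level sketch, not the paper's analysis:

```latex
% Polarizability of a diatomic molecule for light polarized at angle
% \theta to the internuclear axis:
\alpha(\theta) \;=\; \alpha_{\perp} + (\alpha_{\parallel}-\alpha_{\perp})\cos^2\theta .
% Averaging over the isotropic J=0 state (\langle\cos^2\theta\rangle = 1/3) gives
\alpha_{J=0} \;=\; \tfrac{1}{3}\,\alpha_{\parallel} + \tfrac{2}{3}\,\alpha_{\perp},
% while in J=1 the expectation of \cos^2\theta depends on m_J and on the
% light polarization direction, so the ac Stark shift
U \;=\; -\,\frac{\alpha_{\mathrm{eff}}\, I}{2\,\varepsilon_0 c}
% becomes anisotropic and polarization-angle dependent.
```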

    Stochastic Variance-Reduced Prox-Linear Algorithms for Nonconvex Composite Optimization

    We consider minimization of composite functions of the form $f(g(x))+h(x)$, where $f$ and $h$ are convex functions (which can be nonsmooth) and $g$ is a smooth vector mapping. In addition, we assume that $g$ is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose a class of stochastic variance-reduced prox-linear algorithms for solving such problems and bound their sample complexities for finding an $\epsilon$-stationary point in terms of the total number of evaluations of the component mappings and their Jacobians. When $g$ is a finite average of $N$ components, we obtain sample complexity $\mathcal{O}(N+N^{4/5}\epsilon^{-1})$ for both mapping and Jacobian evaluations. When $g$ is a general expectation, we obtain sample complexities of $\mathcal{O}(\epsilon^{-5/2})$ and $\mathcal{O}(\epsilon^{-3/2})$ for component mappings and their Jacobians, respectively. If in addition $f$ is smooth, then improved sample complexities of $\mathcal{O}(N+N^{1/2}\epsilon^{-1})$ and $\mathcal{O}(\epsilon^{-3/2})$ are derived for $g$ being a finite average and a general expectation, respectively, for both component mapping and Jacobian evaluations.
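
    At the core of prox-linear methods is a step that linearizes only the inner mapping $g$ while keeping $f$ and $h$ intact; a standard form of the update is sketched below (in the stochastic variants, $g(x_k)$ and the Jacobian $g'(x_k)$ are replaced by variance-reduced estimates):

```latex
x_{k+1} \;=\; \operatorname*{arg\,min}_{y}\;
  \Big\{\, f\big(g(x_k) + g'(x_k)(y - x_k)\big) \;+\; h(y)
  \;+\; \tfrac{1}{2\eta}\,\|y - x_k\|^2 \Big\},
% where \eta > 0 is a proximal stepsize; convexity of f and h makes this
% subproblem convex even though f(g(x)) is nonconvex in x.
```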

    Multi-Level Composite Stochastic Optimization via Nested Variance Reduction

    We consider multi-level composite optimization problems where each mapping in the composition is the expectation over a family of random smooth mappings or the sum of some finite number of smooth mappings. We present a normalized proximal approximate gradient (NPAG) method where the approximate gradients are obtained via nested stochastic variance reduction. In order to find an approximate stationary point where the expected norm of its gradient mapping is less than $\epsilon$, the total sample complexity of our method is $O(\epsilon^{-3})$ in the expectation case, and $O(N+\sqrt{N}\epsilon^{-2})$ in the finite-sum case, where $N$ is the total number of functions across all composition levels. In addition, the dependence of our total sample complexity on the number of composition levels is polynomial, rather than exponential as in previous work.
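
    The nested structure being estimated is the chain rule for a $T$-level composition; the following is a schematic in generic notation, not the paper's:

```latex
% Objective with T composition levels plus a possibly nonsmooth term r:
F(x) \;=\; f_T\big(f_{T-1}(\cdots f_1(x)\cdots)\big) + r(x),
% with intermediate values y_0 = x,\; y_t = f_t(y_{t-1}).
% The smooth part has gradient
\nabla\big(f_T\circ\cdots\circ f_1\big)(x)
  \;=\; f_1'(y_0)^{\!\top} f_2'(y_1)^{\!\top} \cdots f_T'(y_{T-1})^{\!\top},
% and the nested estimator maintains variance-reduced estimates of every
% value y_t and Jacobian f_t'(y_{t-1}) appearing in this product.
```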

    A Stochastic Composite Gradient Method with Incremental Variance Reduction

    We consider the problem of minimizing the composition of a smooth (nonconvex) function and a smooth vector mapping, where the inner mapping is in the form of an expectation over some random variable or a finite sum. We propose a stochastic composite gradient method that employs an incremental variance-reduced estimator for both the inner vector mapping and its Jacobian. We show that this method achieves the same orders of complexity as the best known first-order methods for minimizing expected-value and finite-sum nonconvex functions, despite the additional outer composition, which renders the composite gradient estimator biased. This finding enables a much broader range of applications in machine learning to benefit from the low complexity of incremental variance-reduction methods.
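
    A minimal sketch of the kind of incremental estimator involved, written in SARAH-style recursive form; $g_i$ and $J_i$ denote a freshly sampled component mapping and its Jacobian, and this is generic notation rather than the paper's exact update:

```python
def composite_gradient_step(x, x_prev, y, Z, g_i, J_i, grad_f, grad_h, lr):
    """One step of a variance-reduced composite gradient sketch for
    minimizing f(g(x)) + h(x), with g an average of components g_i.

    y tracks g(x) and Z tracks the Jacobian g'(x) via incremental
    (SARAH-style) corrections from the sampled component."""
    y = y + g_i(x) - g_i(x_prev)    # update running estimate of g(x)
    Z = Z + J_i(x) - J_i(x_prev)    # update running estimate of g'(x)
    # Composite gradient estimate: chain rule with the tracked quantities.
    # It is biased because grad_f is evaluated at the estimate y, not g(x).
    grad_est = Z.T @ grad_f(y) + grad_h(x)
    x_new = x - lr * grad_est       # (a proximal step would replace this
                                    #  when h is nonsmooth)
    return x_new, x, y, Z
```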