    Kinetic theory of discontinuous shear thickening

    A simple kinetic theory that exhibits discontinuous shear thickening (DST) is proposed. The model includes the collision integral and the friction from the environment, as well as a thermostat term characterized by $T_{\rm ex}$. The viscosity of this model is proportional to $\dot\gamma^2$ for large shear rate $\dot\gamma$, while it is Newtonian for low $\dot\gamma$. The emergence of the DST is enhanced for lower density and lower nonzero $T_{\rm ex}$.
    Comment: 4 pages, 2 figures, Powders and Grains 2017 (in press)
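The two limiting behaviors stated in the abstract can be sketched numerically. This is an illustrative crossover only, not the paper's kinetic model; the constants `eta0` and `A` are assumed for the example:

```python
import numpy as np

# Illustrative flow curve (assumed constants, not the paper's model):
# Newtonian plateau eta0 at low shear rate, A * gamma_dot^2 growth at
# high shear rate -- the two limits reported in the abstract.
eta0, A = 1.0, 1e-2

def eta(gamma_dot):
    """Shear viscosity with a Newtonian plateau and a gamma_dot^2 branch."""
    return eta0 + A * gamma_dot ** 2

for g in np.logspace(-2, 3, 6):
    print(g, eta(g))
```

Note that a smooth crossover like this does not itself produce a discontinuous transition; the DST of the paper arises from the full kinetic model.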

    Pattern dynamics of cohesive granular particles under a plane shear

    We perform three-dimensional molecular dynamics simulations of cohesive granular particles under a plane shear. From the simulations, we find that the granular temperature of the system abruptly decreases to zero after reaching the critical temperature, where the characteristic time $t_{\rm cl}$ is approximately represented by $t_{\rm cl} \propto (\zeta-\zeta_{\rm cr})^{-\beta}$ with the dissipation rate $\zeta$, the critical dissipation rate $\zeta_{\rm cr}$, and the exponent $\beta \simeq 0.8$. We also find that a variety of cluster types exists depending on the initial density and the dissipation rate.
    Comment: 4 pages, 5 figures
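The reported scaling $t_{\rm cl} \propto (\zeta-\zeta_{\rm cr})^{-\beta}$ can be recovered from data by a straight-line fit in log-log coordinates. A minimal sketch on synthetic, noiseless data (the values of $\zeta_{\rm cr}$ and the sample points are assumed for illustration):

```python
import numpy as np

# Synthetic illustration of the power-law scaling t_cl ~ (zeta - zeta_cr)^(-beta):
# beta is the negative slope of log(t_cl) versus log(zeta - zeta_cr).
zeta_cr = 0.10                                  # assumed critical dissipation rate
beta_true = 0.8
zeta = np.array([0.12, 0.15, 0.20, 0.30, 0.50])
t_cl = (zeta - zeta_cr) ** (-beta_true)         # noiseless synthetic data

slope, intercept = np.polyfit(np.log(zeta - zeta_cr), np.log(t_cl), 1)
beta_est = -slope
print(beta_est)  # recovers beta = 0.8
```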

    On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces

    Deep learning has been applied to various tasks in the field of machine learning and has shown superiority to other common procedures such as kernel methods. To provide a better theoretical understanding of the reasons for its success, we discuss the performance of deep learning and other methods on a nonparametric regression problem with Gaussian noise. Whereas existing theoretical studies of deep learning have been based mainly on mathematical theories of well-known function classes such as H\"{o}lder and Besov classes, we focus on function classes with discontinuity and sparsity, which are those naturally assumed in practice. To highlight the effectiveness of deep learning, we compare deep learning with a class of linear estimators representative of a class of shallow estimators. It is shown that the minimax risk of a linear estimator on the convex hull of a target function class does not differ from that of the original target function class. This results in the suboptimality of linear methods over a simple but non-convex function class, on which deep learning can attain a nearly minimax-optimal rate. In addition to this extreme case, we consider function classes with sparse wavelet coefficients. On these function classes, deep learning also attains the minimax rate up to log factors of the sample size, and linear methods are still suboptimal if the assumed sparsity is strong. We also point out that the parameter sharing of deep neural networks can remarkably reduce the complexity of the model in our setting.
    Comment: 33 pages

    Kinetic theory for dilute cohesive granular gases with a square well potential

    We develop the kinetic theory of dilute cohesive granular gases in which the attractive part of the interaction is described by a square-well potential. We derive the hydrodynamic equations from the kinetic theory, with microscopic expressions for the dissipation rate and the transport coefficients. We check the validity of our theory by performing direct simulation Monte Carlo (DSMC).
    Comment: 22 pages, 11 figures

    Monte Carlo Cubature Construction

    In numerical integration, cubature methods are effective, especially when the integrands are well approximated by known test functions such as polynomials. However, no general construction of cubature formulas is known, and existing examples cover only particular integration domains, such as hypercubes and spheres. In this study, we show that cubature formulas can be constructed for probability measures provided that we have an i.i.d. sampler from the measure and the mean values of given test functions. Moreover, the proposed method also works as a means of data compression, even when sufficient prior information about the measure is not available.
    Comment: 10 pages
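The idea of building a cubature rule from an i.i.d. sampler plus known test-function means can be sketched as a linear program. This is a hypothetical illustration of the general Tchakaloff-type approach, not the paper's exact algorithm; the measure (standard normal) and the monomial test functions are assumed:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of a Tchakaloff-type construction (illustrative, not the paper's
# algorithm): draw i.i.d. samples, then find nonnegative weights that
# reproduce the known means of the test functions exactly. A vertex
# solution of the LP is supported on at most as many points as there are
# test functions, yielding a small cubature formula.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)                 # i.i.d. sample from N(0, 1)

# Test functions 1, x, x^2, x^3, x^4 evaluated at each sample point.
A_eq = np.vander(x, 5, increasing=True).T
# Known means E[x^k] of the standard normal: 1, 0, 1, 0, 3.
b_eq = np.array([1.0, 0.0, 1.0, 0.0, 3.0])

res = linprog(c=np.ones_like(x), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
w = res.x
nodes = x[w > 1e-12]                         # the compressed node set
print(res.status, len(nodes))
print(w @ x ** 2)                            # reproduces E[x^2] = 1
```

Feasibility here is essentially guaranteed because the target vector is the mean of the random feature vector $(x, x^2, x^3, x^4)$, which lies in the convex hull of sufficiently many i.i.d. copies with high probability.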

    Random convex hulls and kernel quadrature

    Discretization of probability measures is ubiquitous in the field of applied mathematics, from classical numerical integration to data compression and algorithmic acceleration in machine learning. In this thesis, starting from generalized Tchakaloff-type cubature, we investigate random convex hulls and kernel quadrature. In the first two chapters after the introduction, we investigate the probability that a given vector θ is contained in the convex hull of independent copies of a random vector X. After deriving a sharp inequality that describes the relationship between this probability and Tukey's halfspace depth, we explore the case θ = E[X] by using moments of X, and further the case where X enjoys some additional structure; both cases are of primary interest in the context of cubature. In the subsequent two chapters, we study kernel quadrature, that is, numerical integration where the integrands live in a reproducing kernel Hilbert space. By explicitly exploiting the spectral properties of the associated integral operator, we derive convex kernel quadrature with theoretical guarantees described by its eigenvalue decay. We further derive practical variants of the proposed algorithm and discuss their theoretical and computational aspects. Finally, we briefly discuss the applications and future work of the thesis, including Bayesian numerical methods, in the concluding chapter.
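A minimal kernel-quadrature sketch may clarify the setting. This is not the thesis's spectral algorithm; the kernel (Gaussian, unit bandwidth), the target measure (standard normal), and the grid of nodes are all assumed choices. For a fixed node set X, the worst-case-optimal weights in the RKHS solve $K(X,X)\,w = z$, where $z_i = \mathbb{E}[k(x_i, Y)]$ is the kernel mean embedding, estimated here from a large i.i.d. sample:

```python
import numpy as np

# Basic kernel quadrature with fixed nodes (illustrative assumptions, not
# the thesis's algorithm): solve K(X, X) w = z for the worst-case-optimal
# weights, where z estimates the kernel mean embedding at the nodes.
rng = np.random.default_rng(1)

def k(a, b):
    """Gaussian kernel with unit bandwidth, evaluated pairwise."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

Y = rng.standard_normal(20000)          # i.i.d. sample from the target N(0, 1)
X = np.linspace(-3.0, 3.0, 9)           # quadrature nodes (assumed grid)
z = k(X, Y).mean(axis=1)                # estimated embedding at the nodes
w = np.linalg.solve(k(X, X) + 1e-9 * np.eye(9), z)  # tiny ridge for stability

# The rule sum_i w_i f(x_i) approximates E[f(Y)] for f in the RKHS;
# for f = k(0, .) the exact value is 1/sqrt(2) ~ 0.7071.
est = (w @ k(X, np.array([0.0]))).item()
print(est)
```

The thesis goes beyond this baseline by choosing the nodes themselves through convex methods with guarantees tied to the eigenvalue decay of the integral operator.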