2,763 research outputs found

    Equivalent Properties of CD Inequality on Graph

    We study some equivalent properties of the curvature-dimension condition CD(n,K) inequality on infinite, locally finite graphs. These equivalences are a gradient estimate, Poincar\'e type inequalities, and reverse Poincar\'e inequalities. We also obtain an equivalent gradient-estimate property for a new notion of curvature-dimension condition, CDE'(∞,K), under the same assumptions on graphs. Comment: 13 pages
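    For context, the CD(n,K) condition referenced above is usually formulated via the Bakry-Émery Γ-calculus; a standard statement (using the carré du champ operators associated with the graph Laplacian Δ) is:

    ```latex
    % Carré du champ operators for the Laplacian \Delta:
    % 2\Gamma(f,g)   = \Delta(fg) - f\,\Delta g - g\,\Delta f,
    % 2\Gamma_2(f,g) = \Delta\Gamma(f,g) - \Gamma(f,\Delta g) - \Gamma(g,\Delta f).
    % The curvature-dimension condition CD(n,K) requires, for all functions f,
    \Gamma_2(f) \;\ge\; \frac{1}{n}\,(\Delta f)^2 \;+\; K\,\Gamma(f).
    ```

    Here n plays the role of a dimension bound and K of a lower curvature bound; the CDE' variants mentioned in these abstracts are exponential modifications of this inequality adapted to graphs.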

    The first laws of thermodynamics of the (2+1)-dimensional BTZ black holes and Kerr-de Sitter spacetimes

    We investigate the first law of thermodynamics for the (2+1)-dimensional BTZ black holes and Kerr-de Sitter spacetimes; in particular, we focus on the integral mass formulas. It is found that, by treating the cosmological constant as a variable state parameter, both the differential and integral mass formulas of the first law of black hole thermodynamics in asymptotically flat spacetimes can be directly extended to rotating black holes in anti-de Sitter and de Sitter backgrounds. It should be pointed out that these formulas hold in any dimension as well. Comment: 3 pages, no figure, revtex4, references added, to appear in CP

    Ultracontractivity and functional inequalities on infinite graphs

    In this paper, we prove the equivalence of the ultracontractive bound of the heat semigroup, or equivalently the uniform upper bound of the heat kernel, with the Nash inequality and log-Sobolev inequalities on graphs. We also show that, under the assumptions of volume growth and nonnegative curvature CDE'(n,0), the Sobolev inequality, Nash inequality, Faber-Krahn inequality, log-Sobolev inequalities, and the discrete- and continuous-time uniform upper estimates of the heat kernel all hold on graphs. Comment: 13 pages
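    For reference, the Nash inequality mentioned here takes the following standard form (in one common normalization, with n the dimension parameter, C a constant depending on the graph, and E(f,f) the Dirichlet form of the Laplacian):

    ```latex
    % Nash inequality:
    \|f\|_{2}^{\,2+4/n} \;\le\; C\,\mathcal{E}(f,f)\,\|f\|_{1}^{4/n},
    % which is classically equivalent to the ultracontractive bound
    \|e^{t\Delta}\|_{1\to\infty} \;\le\; C'\, t^{-n/2}, \qquad t>0.
    ```

    This equivalence (due in the continuous setting to Nash and to Carlen-Kusuoka-Stroock) is the pattern the abstract describes on graphs.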

    A gradient estimate for positive functions on graphs

    We derive a gradient estimate for positive functions, in particular for positive solutions to the heat equation, on finite or locally finite graphs. Unlike the well-known Li-Yau estimate, which is based on the maximum principle, our estimate follows from the graph structure of the gradient form and the Laplacian operator. Though our assumption on graphs is slightly stronger than that of Bauer, Horn, Lin, Lippner, Mangoubi, and Yau (J. Differential Geom. 99 (2015) 359-405), our estimate can be easily applied to nonlinear differential equations, as well as differential inequalities. As applications, we estimate the greatest lower bound of Cheng's eigenvalue and an upper bound of the minimal heat kernel, which was recently studied by Bauer, Hua and Yau (Preprint, 2015) via the Li-Yau estimate. Moreover, generalizing an earlier result of Lin and Yau (Math. Res. Lett. 17 (2010) 343-356), we derive a lower bound on nonzero eigenvalues from our gradient estimate. Comment: 11 pages
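    For comparison, the classical Li-Yau estimate that these graph results adapt states that, on an n-dimensional manifold with nonnegative Ricci curvature, every positive solution u of the heat equation ∂ₜu = Δu satisfies:

    ```latex
    % Li-Yau gradient estimate (Li & Yau, 1986), for u > 0 with \partial_t u = \Delta u:
    \frac{|\nabla u|^2}{u^2} \;-\; \frac{\partial_t u}{u}
    \;=\; |\nabla \log u|^2 - \partial_t \log u
    \;\le\; \frac{n}{2t}, \qquad t > 0.
    ```

    Integrating this inequality along paths is the standard route to Harnack inequalities and heat kernel bounds, which is why the graph analogues yield the applications listed above.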

    Global gradient estimate on graph and its applications

    Continuing our previous work (arXiv:1509.07981v1), we derive another global gradient estimate for positive functions, particularly for positive solutions to the heat equation on finite or locally finite graphs. In general, the gradient estimate in the present paper is independent of our previous one. As applications, it can be used to obtain upper and lower bounds on the heat kernel on locally finite graphs. These global gradient estimates can be compared with the Li-Yau inequality on graphs due to Bauer, Horn, Lin, Lippner, Mangoubi and Yau (J. Differential Geom. 99 (2015) 359-405). For many topics, such as eigenvalue estimates and heat kernel estimates (though not the Liouville-type theorems), replacing the Li-Yau inequality with the global gradient estimate yields similar results. Comment: 7 pages

    The Second Order Linear Model

    We study a fundamental class of regression models called the second order linear model (SLM). The SLM extends the linear model to a higher-order function space and has attracted considerable research interest recently. Yet how to efficiently learn the SLM in full generality with a nonconvex solver remains an open question, due to several fundamental limitations of the conventional gradient descent learning framework. In this study, we attack this problem with a gradient-free approach that we call the moment-estimation-sequence (MES) method. We show that the conventional gradient descent heuristic is biased by the skewness of the distribution and is therefore no longer the best practice for learning the SLM. Based on the MES framework, we design a nonconvex alternating iteration process that trains a d-dimensional rank-k SLM within O(kd) memory and one pass over the dataset. The proposed method converges globally and linearly, achieving ε recovery error after retrieving O(k^2 d · polylog(kd/ε)) samples. Furthermore, our theoretical analysis reveals that not all SLMs can be learned on every sub-Gaussian distribution. When the instances are sampled from a so-called τ-MIP distribution, the SLM can be learned from O(p/τ^2) samples, where p and τ are positive constants depending on the skewness and kurtosis of the distribution. For non-MIP distributions, an additional diagonal-free oracle is necessary and sufficient to guarantee the learnability of the SLM. Numerical simulations verify the sharpness of our bounds on the sampling complexity and the linear convergence rate of our algorithm.
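    To make the moment-estimation idea concrete, here is a minimal, illustrative sketch (not the paper's MES algorithm) for the simplest rank-one quadratic case y = (xᵀw)² with Gaussian inputs: by Isserlis' theorem, E[y·xxᵀ] − E[y]·I = 2wwᵀ, so the top eigenvector of an empirical moment matrix recovers w up to sign without any gradient steps.

    ```python
    import numpy as np

    # Illustrative sketch: recover w (up to sign) in y = (x^T w)^2 + noise
    # from a second-moment matrix instead of gradient descent.
    # For x ~ N(0, I_d): E[y * x x^T] - E[y] * I = 2 w w^T.
    rng = np.random.default_rng(0)
    d, n = 8, 50_000
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)                      # ground-truth unit direction

    X = rng.standard_normal((n, d))             # Gaussian design
    y = (X @ w) ** 2 + 0.05 * rng.standard_normal(n)

    # Empirical moment matrix M ≈ 2 w w^T
    M = (X.T @ (y[:, None] * X)) / n - y.mean() * np.eye(d)

    # Top eigenvector of the symmetric matrix M estimates w up to sign
    eigvals, eigvecs = np.linalg.eigh(M)        # ascending eigenvalues
    w_hat = eigvecs[:, -1]
    cos_sim = abs(w_hat @ w)                    # alignment, close to 1
    print(f"alignment |<w_hat, w>| = {cos_sim:.3f}")
    ```

    The single pass over (X, y) and the O(d²) moment matrix mirror, in miniature, the memory- and pass-efficiency the abstract claims; the paper's actual method handles rank-k models and non-Gaussian (τ-MIP) designs, which this toy sketch does not.
    
    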

    Nonconvex One-bit Single-label Multi-label Learning

    We study an extreme scenario in multi-label learning where each training instance is endowed with a single one-bit label out of multiple labels. We formulate this problem as a non-trivial special case of one-bit rank-one matrix sensing and develop an efficient non-convex algorithm based on alternating power iteration. The proposed algorithm is able to recover the underlying low-rank matrix model with linear convergence. For a rank-k model with d_1 features and d_2 classes, the proposed algorithm achieves O(ε) recovery error after retrieving O(k^{1.5} d_1 d_2 / ε) one-bit labels within O(kd) memory. Our bound is nearly optimal in the order of O(1/ε). This significantly improves the state-of-the-art sampling complexity of one-bit multi-label learning. We perform experiments to verify our theory and evaluate the performance of the proposed algorithm.

    Li-Yau inequality for unbounded Laplacian on graphs

    In this paper, we derive a Li-Yau inequality for the unbounded Laplacian on complete weighted graphs, under the assumption of the curvature-dimension inequality CDE'(n,K), which can be regarded as a notion of curvature on graphs. Furthermore, we obtain some applications of the Li-Yau inequality, including a Harnack inequality, heat kernel bounds, and Cheng's eigenvalue estimate. These are the first results of this kind for the unbounded Laplacian on graphs. Comment: 19 pages

    Volume doubling, Poincar\'e inequality and Gaussian heat kernel estimate for nonnegative curvature graphs

    By studying the heat semigroup, we prove Li-Yau type estimates for bounded and positive solutions of the heat equation on graphs, under the assumption of the curvature-dimension inequality CDE'(n,0), which can be considered as a notion of curvature for graphs. Furthermore, we show that if a graph has non-negative curvature then it has the volume doubling property; from this we prove the Gaussian estimate for the heat kernel, and then the Poincar\'e inequality and the Harnack inequality. As a consequence, we obtain that the dimension of the space of harmonic functions of polynomial growth on graphs is finite, which was originally a conjecture of Yau on Riemannian manifolds, proved by Colding and Minicozzi. Under the assumption of positive curvature on graphs, we derive a Bonnet-Myers type theorem, namely that the diameter of the graph is finite and bounded above in terms of the positive curvature, by proving some log-Sobolev inequalities. Comment: 45 pages. arXiv admin note: text overlap with arXiv:0801.0812, arXiv:0911.1819 by other authors
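    The two-sided Gaussian heat kernel estimate referred to here has, in the continuous setting, the following familiar form (the constants c_1,…,c_4 depend only on the geometry; B(x,r) denotes the ball of radius r and d(x,y) the distance):

    ```latex
    \frac{c_1}{|B(x,\sqrt{t})|}\,
      \exp\!\Big(-c_2\,\frac{d(x,y)^2}{t}\Big)
    \;\le\; p_t(x,y) \;\le\;
    \frac{c_3}{|B(x,\sqrt{t})|}\,
      \exp\!\Big(-c_4\,\frac{d(x,y)^2}{t}\Big).
    ```

    On graphs the discrete structure forces corrections to the Gaussian form when d(x,y)^2/t is large, which is one of the technical points such papers must handle; by the classical Grigor'yan/Saloff-Coste characterization, estimates of this type are equivalent to volume doubling plus the Poincar\'e inequality, matching the chain of implications described in the abstract.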

    Transkernel: Bridging Monolithic Kernels to Peripheral Cores

    Smart devices see a large number of ephemeral tasks driven by background activities. In order to execute such a task, the OS kernel wakes up the platform beforehand and puts it back to sleep afterwards. In doing so, the kernel operates various IO devices and orchestrates their power state transitions. Such kernel executions are inefficient as they mismatch typical CPU hardware. They are better off running on a low-power, microcontroller-like core, i.e., a peripheral core, relieving the CPU of this inefficiency. We therefore present a new OS structure, in which a lightweight virtual executor called transkernel offloads specific phases from a monolithic kernel. The transkernel translates stateful kernel execution through cross-ISA dynamic binary translation (DBT); it emulates a small set of stateless kernel services behind a narrow, stable binary interface; it specializes for hot paths; and it exploits ISA similarities to lower DBT cost. Through an ARM-based prototype, we demonstrate transkernel's feasibility and benefit. We show that while cross-ISA DBT is typically used under the assumption of efficiency loss, it can enable efficiency gain, even on off-the-shelf hardware. Comment: The camera-ready version of this paper will appear at USENIX ATC'1