
    Moduli Space of Paired Punctures, Cyclohedra and Particle Pairs on a Circle

    In this paper, we study a new moduli space $\mathcal{M}_{n+1}^{\mathrm{c}}$, which is obtained from $\mathcal{M}_{0,2n+2}$ by identifying pairs of punctures. We find that this space is tiled by $2^{n-1}\,n!$ cyclohedra, and we construct the canonical form for each chamber. We also find that the corresponding Koba-Nielsen factor can be viewed as the potential of a system of $n+1$ pairs of particles on a circle, analogous to the original case of $\mathcal{M}_{0,n}$, where the system is $n-3$ particles on a line. We investigate the intersection numbers of chambers equipped with Koba-Nielsen factors. We then construct cyclohedra in kinematic space and show that the scattering equations serve as a map between the interior of the worldsheet cyclohedron and the kinematic cyclohedron. Finally, we briefly discuss string-like integrals over this moduli space.
    Comment: 23 pages, 7 figures
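    For orientation, a brief sketch of the standard Koba-Nielsen factor on $\mathcal{M}_{0,n}$ and the potential/scattering-equation reading the abstract alludes to; the precise analogue on $\mathcal{M}_{n+1}^{\mathrm{c}}$ and its conventions are those defined in the paper and are not reproduced here.

    \[
      \mathrm{KN}(z) \;=\; \prod_{1 \le i < j \le n} |z_i - z_j|^{\alpha' s_{ij}},
      \qquad
      \log \mathrm{KN}(z) \;=\; \alpha' \sum_{i<j} s_{ij}\, \log |z_i - z_j| .
    \]

    Treating $\log \mathrm{KN}$ as a potential for the puncture positions, its critical points are the scattering equations, $\sum_{j \neq i} s_{ij}/(z_i - z_j) = 0$ for each $i$, which underlie the map between the worldsheet and kinematic polytopes mentioned above.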

    Convex Optimization for Linear Query Processing under Approximate Differential Privacy

    Differential privacy enables organizations to collect accurate aggregates over sensitive data with strong, rigorous guarantees on individuals' privacy. Previous work has found that under differential privacy, computing multiple correlated aggregates as a batch, using an appropriate \emph{strategy}, may yield higher accuracy than computing each of them independently. However, finding the best strategy that maximizes result accuracy is non-trivial, as it involves solving a complex constrained optimization program that appears to be non-linear and non-convex. Hence, much past effort has been devoted to solving this non-convex optimization program. Existing approaches include various sophisticated heuristics and expensive numerical solutions; none of them, however, is guaranteed to find the optimal solution of this optimization problem. This paper points out that under $(\epsilon, \delta)$-differential privacy, the optimal solution of the above constrained optimization problem for a suitable strategy can be found, rather surprisingly, by solving a simple and elegant convex optimization program. We then propose an efficient algorithm based on Newton's method, which we prove always converges to the optimal solution with a linear global convergence rate and a quadratic local convergence rate. Empirical evaluations demonstrate the accuracy and efficiency of the proposed solution.
    Comment: to appear in ACM SIGKDD 201
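    As a rough illustration of the setting (not the paper's algorithm), the sketch below scores the expected error that a candidate strategy matrix A incurs for a workload W under the Gaussian mechanism. The names (W, A, expected_error) and the prefix-query workload are hypothetical; the sensitivity convention (largest column L2 norm of A) follows the usual matrix-mechanism setup.

    # Illustrative-only sketch: evaluate one strategy's error under (epsilon, delta)-DP Gaussian noise.
    import numpy as np

    def gaussian_sigma(epsilon, delta):
        """Classic (non-tight) Gaussian-mechanism noise scale for L2 sensitivity 1."""
        return np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

    def expected_error(W, A, epsilon, delta):
        """Expected total squared error of answering W by recombining noisy answers to A."""
        sensitivity = np.linalg.norm(A, axis=0).max()   # largest column L2 norm of the strategy
        sigma = sensitivity * gaussian_sigma(epsilon, delta)
        reconstruction = W @ np.linalg.pinv(A)          # W x is recovered as reconstruction @ (A x + noise)
        return sigma**2 * np.linalg.norm(reconstruction, "fro") ** 2

    # Example: prefix-range queries on a length-8 domain, answered via the identity strategy.
    W = np.tril(np.ones((8, 8)))    # hypothetical workload: i-th row asks for x_1 + ... + x_i
    A = np.eye(8)                   # baseline strategy: answer each cell count directly
    print(expected_error(W, A, epsilon=1.0, delta=1e-6))

    The paper's contribution is to show that the search over strategies A can itself be cast as a convex program and solved with a Newton-type method; the sketch above only scores a fixed candidate.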

    Large Deviations for Non-Markovian Diffusions and a Path-Dependent Eikonal Equation

    This paper provides a large deviation principle for non-Markovian, Brownian-motion-driven stochastic differential equations with random coefficients. Similar to Gao and Liu \cite{GL}, this extends the corresponding results collected in Freidlin and Wentzell \cite{FreidlinWentzell}. However, we use a different line of argument, adapting the PDE method of Fleming \cite{Fleming} and Evans and Ishii \cite{EvansIshii} to the path-dependent case by using backward stochastic differential techniques. As in the Markovian case, we obtain a characterization of the action function as the unique bounded solution of a path-dependent version of the Eikonal equation. Finally, we provide an application to the short-maturity asymptotics of the implied volatility surface in financial mathematics.
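    For context, the classical Markovian Freidlin-Wentzell setting that the paper generalizes can be summarized informally as follows; the path-dependent rate function and its Eikonal characterization are as defined in the paper.

    \[
      dX^{\varepsilon}_t = b(X^{\varepsilon}_t)\,dt + \sqrt{\varepsilon}\,\sigma(X^{\varepsilon}_t)\,dW_t,
      \qquad
      I(\varphi) = \frac{1}{2}\int_0^T \bigl|\sigma(\varphi_t)^{-1}\bigl(\dot{\varphi}_t - b(\varphi_t)\bigr)\bigr|^2\,dt
    \]

    for absolutely continuous paths $\varphi$ (and $I(\varphi) = +\infty$ otherwise). In the paper, $b$ and $\sigma$ are random and path-dependent, and the role of $I$ is played by the action function characterized via the path-dependent Eikonal equation.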

    HARL: Hierarchical Adaptive Reinforcement Learning Based Auto Scheduler for Neural Networks

    To efficiently perform inference with neural networks, the underlying tensor programs require sufficient tuning before being deployed into production environments. Usually, an enormous number of candidate tensor programs must be explored to find the best-performing one. This is necessary for neural network products to meet the high demands of real-world applications such as natural language processing, autonomous driving, etc. Auto-schedulers are being developed to avoid the need for human intervention. However, due to the gigantic search space and the lack of intelligent search guidance, current auto-schedulers require hours to days of tuning time to find the best-performing tensor program for an entire neural network. In this paper, we propose HARL, a reinforcement learning (RL) based auto-scheduler specifically designed for efficient tensor program exploration. HARL uses a hierarchical RL architecture in which learning-based decisions are made at all levels of search granularity. It also automatically adjusts exploration configurations in real time for faster performance convergence. As a result, HARL improves tensor operator performance by 22% and search speed by 4.3x compared to the state-of-the-art auto-scheduler. Inference performance and search speed are also significantly improved on end-to-end neural networks.
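    To make the hierarchical idea concrete, here is a heavily simplified, hypothetical sketch of a two-level search loop: a high-level policy picks a program sketch, a low-level policy fills in tuning knobs, and measured latency updates both levels while the exploration rate adapts. None of the names (SKETCHES, KNOBS, measure_latency) are HARL's API; real auto-schedulers compile and benchmark candidates on hardware and use learned cost models and RL policies rather than the epsilon-greedy tables used here.

    # Hypothetical two-level exploration loop in the spirit of a hierarchical auto-scheduler.
    import random
    from collections import defaultdict

    SKETCHES = ["tile_4x4", "tile_8x8", "tile_16x4"]      # hypothetical high-level choices
    KNOBS = {"unroll": [1, 2, 4], "vectorize": [0, 1]}    # hypothetical low-level tuning knobs

    def measure_latency(sketch, knobs):
        """Stand-in for compiling the tensor program and benchmarking it on hardware."""
        return random.uniform(1.0, 10.0)                  # placeholder measurement

    def epsilon_greedy(values, candidates, eps):
        """Pick a random candidate with probability eps, otherwise the best one seen so far."""
        if random.random() < eps or not values:
            return random.choice(candidates)
        return min(candidates, key=lambda c: values.get(c, float("inf")))

    high_values = {}                                      # sketch -> best latency seen
    low_values = defaultdict(dict)                        # sketch -> knob -> choice -> best latency
    eps = 0.5                                             # exploration rate, decayed as search converges
    best_latency, best_program = float("inf"), None

    for step in range(200):
        sketch = epsilon_greedy(high_values, SKETCHES, eps)
        knobs = {k: epsilon_greedy(low_values[sketch].get(k, {}), opts, eps)
                 for k, opts in KNOBS.items()}
        latency = measure_latency(sketch, knobs)

        # Feed the measurement back to both levels (running minimum as a crude value estimate).
        high_values[sketch] = min(high_values.get(sketch, float("inf")), latency)
        for k, choice in knobs.items():
            table = low_values[sketch].setdefault(k, {})
            table[choice] = min(table.get(choice, float("inf")), latency)

        if latency < best_latency:
            best_latency, best_program = latency, (sketch, knobs)
        eps = max(0.05, eps * 0.99)                       # adapt exploration over time

    print("best latency found:", best_latency, "with", best_program)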