
    On the zeros of solutions of any order of derivative of second order linear differential equations taking small functions

    In this paper, we investigate the hyper-exponent of convergence of zeros of $f^{(j)}(z)-\varphi(z)$ $(j\in\mathbb{N})$, where $f$ is a solution of a second or $k\,(\geq 2)$ order linear differential equation and $\varphi(z)\not\equiv 0$ is an entire function satisfying $\sigma(\varphi)<\sigma(f)$ or $\sigma_{2}(\varphi)<\sigma_{2}(f)$. We obtain precise results which improve the previous results in [3, 5] and revise those in [11, 13]. More importantly, these results also provide a method for investigating the hyper-exponent of convergence of zeros of $f^{(j)}(z)-\varphi(z)$ $(j\in\mathbb{N})$.
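    For orientation, the order $\sigma(f)$, the hyper-order $\sigma_{2}(f)$, and the (hyper-)exponent of convergence of zeros mentioned in the abstract have standard Nevanlinna-theoretic definitions; the following is a reference sketch, assuming $T(r,f)$ denotes the Nevanlinna characteristic and $N(r,\tfrac1f)$ the integrated counting function of the zeros of $f$ (the abstract's quantity of interest is then $\lambda_{2}$ applied to $f^{(j)}-\varphi$).

```latex
% Standard definitions from Nevanlinna theory (reference sketch; notation assumed):
% T(r,f): characteristic function of f, N(r,1/f): integrated counting function of zeros of f.
\[
  \sigma(f)=\limsup_{r\to\infty}\frac{\log^{+}T(r,f)}{\log r},
  \qquad
  \sigma_{2}(f)=\limsup_{r\to\infty}\frac{\log^{+}\log^{+}T(r,f)}{\log r},
\]
% exponent and hyper-exponent of convergence of the zeros of f:
\[
  \lambda(f)=\limsup_{r\to\infty}\frac{\log^{+}N\!\bigl(r,\tfrac1f\bigr)}{\log r},
  \qquad
  \lambda_{2}(f)=\limsup_{r\to\infty}\frac{\log^{+}\log^{+}N\!\bigl(r,\tfrac1f\bigr)}{\log r}.
\]
```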

    Distilling Word Embeddings: An Encoding Approach

    Distilling knowledge from a well-trained, cumbersome network into a small one has recently become a new research topic, as lightweight neural networks with high performance are increasingly needed in resource-restricted systems. This paper addresses the problem of distilling word embeddings for NLP tasks. We propose an encoding approach to distill task-specific knowledge from a set of high-dimensional embeddings, which reduces model complexity by a large margin while retaining high accuracy, offering a good compromise between efficiency and performance. Experiments on two tasks show that distilling knowledge from cumbersome embeddings outperforms directly training neural networks with small embeddings. Comment: Accepted by CIKM-16 as a short paper, and by the Representation Learning for Natural Language Processing (RL4NLP) Workshop @ACL-16 for presentation
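    As a rough illustration of the encoding idea, the sketch below projects frozen high-dimensional pretrained embeddings through a learned encoding layer into a low-dimensional space while training a small task model on top. It is a minimal sketch under assumed settings: the dimensions, the bag-of-words classifier, and the toy data are hypothetical and not the paper's exact architecture or training recipe.

```python
# Minimal, illustrative sketch of distilling word embeddings via an encoding layer.
# All dimensions, model choices, and data below are hypothetical placeholders.
import torch
import torch.nn as nn

VOCAB, BIG_DIM, SMALL_DIM, NUM_CLASSES = 10_000, 300, 50, 2

# "Cumbersome" pretrained embedding table, kept frozen during distillation.
big_embedding = nn.Embedding(VOCAB, BIG_DIM)
big_embedding.weight.requires_grad_(False)

# Encoding layer: maps each high-dimensional vector to a small one.
encoder = nn.Linear(BIG_DIM, SMALL_DIM)

# Small task model trained on top of the distilled embeddings (toy bag-of-words classifier).
classifier = nn.Sequential(nn.ReLU(), nn.Linear(SMALL_DIM, NUM_CLASSES))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

def training_step(token_ids: torch.Tensor, labels: torch.Tensor) -> float:
    """token_ids: LongTensor [batch, seq_len]; labels: LongTensor [batch]."""
    with torch.no_grad():
        big_vecs = big_embedding(token_ids)        # [batch, seq_len, BIG_DIM]
    small_vecs = encoder(big_vecs)                 # [batch, seq_len, SMALL_DIM]
    sentence_vec = small_vecs.mean(dim=1)          # average pooling over tokens
    logits = classifier(sentence_vec)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data (purely illustrative).
tokens = torch.randint(0, VOCAB, (8, 20))
labels = torch.randint(0, NUM_CLASSES, (8,))
print(training_step(tokens, labels))

# After training, a compact embedding table can be precomputed once and shipped alone:
# small_table = encoder(big_embedding.weight).detach()   # [VOCAB, SMALL_DIM]
```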