
    Comment on “A fuzzy soft set theoretic approach to decision making problems”

    Get PDF
    Abstract: The algorithm for identification of an object in a previous paper of A.R. Roy et al. [A.R. Roy, P.K. Maji, A fuzzy soft set theoretic approach to decision making problems, J. Comput. Appl. Math. 203 (2007) 412–418] is incorrect. Using the algorithm, the right choice cannot be obtained in general. The problem is illustrated by a counter-example.
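    For context, the decision-making procedure under discussion is usually described as building a comparison table from the fuzzy soft set's membership values and ranking objects by row-sum minus column-sum scores. The Python sketch below illustrates only that scoring step; the objects, parameters, and membership values are invented for illustration, and the comment cited above argues that precisely this kind of score-based selection can fail to return the right choice.

        # Minimal sketch of the comparison-table scoring commonly attributed to the
        # Roy-Maji fuzzy-soft-set approach. Membership values and names are invented
        # for illustration; this is not code from either paper.
        membership = {
            "o1": [0.9, 0.7, 0.6],   # membership of object o1 under parameters e1..e3
            "o2": [0.7, 0.8, 0.1],
            "o3": [0.4, 0.5, 0.9],
        }
        objects = list(membership)
        n_params = len(next(iter(membership.values())))

        # Comparison table: c[i][j] counts the parameters for which object i's
        # membership value is greater than or equal to object j's.
        c = {
            oi: {oj: sum(membership[oi][k] >= membership[oj][k] for k in range(n_params))
                 for oj in objects}
            for oi in objects
        }

        # Score = row sum minus column sum; the original algorithm selects the
        # object with the maximum score, which the comment above argues can fail.
        scores = {
            oi: sum(c[oi].values()) - sum(c[oj][oi] for oj in objects)
            for oi in objects
        }
        print(scores, "->", max(scores, key=scores.get))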

    A Survey of Document-Level Information Extraction

    Full text link
    Document-level information extraction (IE) is a crucial task in natural language processing (NLP). This paper conducts a systematic review of the recent document-level IE literature. In addition, we carry out a thorough error analysis of current state-of-the-art algorithms and identify their limitations as well as the remaining challenges for the task of document-level IE. According to our findings, labeling noise, entity coreference resolution, and a lack of reasoning severely affect the performance of document-level IE. The objective of this survey paper is to provide more insights and help NLP researchers further enhance document-level IE performance.

    PaperRobot: Incremental Draft Generation of Scientific Ideas

    Full text link
    We present PaperRobot, which performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; (3) incrementally writing some key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate the conclusion and future work, and finally from the future work to generate a title for a follow-on paper. Turing Tests, where a biomedical domain expert is asked to compare a system output and a human-authored string, show that PaperRobot-generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24%, and 12% of the time, respectively.
    Comment: 12 pages. Accepted by ACL 2019. Code and resources are available at https://github.com/EagleW/PaperRobo
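    As a reading aid, the sketch below mirrors the incremental writing chain described in this abstract (title plus predicted entities, then abstract, then conclusion and future work, then follow-on title). All function names are hypothetical placeholders and are not taken from the released code.

        # Hypothetical sketch of the incremental writing chain described above.
        # predict_related_entities() stands in for link prediction over the background
        # KG, and generate() for the memory-attention decoder; both are placeholders.

        def predict_related_entities(title, background_kg):
            """Placeholder: return candidate KG entities related to the input title."""
            return background_kg.get(title, [])

        def generate(source_text, entities):
            """Placeholder for the memory-attention generation step."""
            return "<text generated from: %s | %s>" % (source_text, ", ".join(entities))

        def write_incrementally(title, background_kg):
            entities = predict_related_entities(title, background_kg)
            abstract = generate(title, entities)         # title + entities -> abstract
            conclusion = generate(abstract, entities)    # abstract -> conclusion & future work
            next_title = generate(conclusion, entities)  # future work -> follow-on title
            return {"abstract": abstract, "conclusion": conclusion, "next_title": next_title}

        toy_kg = {"An example input title": ["entity A", "entity B"]}
        print(write_incrementally("An example input title", toy_kg))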

    Can Gradient Descent Provably Learn Linear Dynamic Systems?

    Full text link
    We study the learning ability of linear recurrent neural networks trained with gradient descent. We prove the first theoretical guarantee that linear RNNs trained with gradient descent can learn any stable linear dynamic system. We show that, despite the non-convexity of the optimization loss, if the width of the RNN is large enough (and the required width in hidden layers does not depend on the length of the input sequence), a linear RNN can provably learn any stable linear dynamic system with sample and time complexity polynomial in $\frac{1}{1-\rho_C}$, where $\rho_C$ is roughly the spectral radius of the stable system. Our results provide the first theoretical guarantee for learning a linear RNN and demonstrate how the recurrent structure can help to learn a dynamic system.
    Comment: 29 pages
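    The setting can be pictured with a toy experiment: generate input/output data from a known stable scalar linear system and fit a scalar linear RNN to it with plain gradient descent. The sketch below only illustrates that setup under ad hoc hyperparameters; it is not the construction or analysis from the paper.

        import numpy as np

        # Toy illustration: fit a scalar linear RNN (with a fixed unit readout)
        # to data from a stable scalar linear system using plain gradient descent.
        # All hyperparameters are ad hoc; this is not the paper's construction.
        rng = np.random.default_rng(0)
        a, b, c = 0.8, 1.0, 0.5                  # target system x' = a*x + b*u, y = c*x, |a| < 1

        def simulate_target(u):
            x, ys = 0.0, []
            for ut in u:
                x = a * x + b * ut
                ys.append(c * x)
            return np.array(ys)

        w, v, lr = 0.1, 0.1, 0.01                # RNN: h' = w*h + v*u, prediction = h

        for step in range(5000):
            u = rng.standard_normal(50)
            y = simulate_target(u)

            # Forward pass through the linear RNN.
            hs, h = [], 0.0
            for ut in u:
                h = w * h + v * ut
                hs.append(h)
            hs = np.array(hs)
            err = hs - y

            # Backpropagation through time for the scalar linear recurrence.
            gw = gv = gh = 0.0
            for t in reversed(range(len(u))):
                gh = err[t] + gh * w
                gw += gh * (hs[t - 1] if t > 0 else 0.0)
                gv += gh * u[t]

            w -= lr * gw / len(u)
            v -= lr * gv / len(u)
            w = float(np.clip(w, -0.99, 0.99))   # keep the learned recurrence stable

        # With enough steps the learned pair should roughly match (a, b*c) = (0.8, 0.5).
        print("learned (w, v):", round(w, 3), round(v, 3))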

    Decelerating Airy pulse propagation in highly non-instantaneous cubic media

    Get PDF
    The propagation of decelerating Airy pulses in a non-instantaneous cubic medium is investigated both theoretically and numerically. In a Debye model, at variance with the case of accelerating Airy and Gaussian pulses, a decelerating Airy pulse evolves into a single soliton for weak and general non-instantaneous response. Airy pulses can hence be used to control soliton generation by temporal shaping. The effect is critically dependent on the response time and could be used as a way to measure the Debye-type response function. For a highly non-instantaneous response, we theoretically find that a decelerating Airy pulse is still transformed into an Airy wave packet with deceleration. The theoretical predictions are confirmed by numerical simulations.
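    For readers less familiar with the model, a commonly used normalized propagation equation for pulses in a cubic medium with a non-instantaneous Debye-type response is sketched below; the normalization and sign conventions here are our assumptions and are not taken from the paper.

        i\,\frac{\partial u}{\partial \xi}
          + \frac{1}{2}\,\frac{\partial^{2} u}{\partial \tau^{2}}
          + u(\xi,\tau)\int_{-\infty}^{\tau} R(\tau-\tau')\,\bigl|u(\xi,\tau')\bigr|^{2}\,d\tau' = 0,
        \qquad
        R(t) = \frac{1}{\tau_{D}}\,e^{-t/\tau_{D}} \quad (t \ge 0),

    where u is the normalized field envelope, \xi and \tau are the normalized propagation distance and retarded time, and \tau_{D} sets how non-instantaneous the Debye response is; the highly non-instantaneous regime discussed in the abstract corresponds to large \tau_{D}.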