312 research outputs found

    Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning

    A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations. Comment: Accepted at ICLR 2018
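The core idea of the abstract, one shared encoder whose output feeds several small task-specific heads, can be illustrated with a minimal sketch. Everything here is a toy stand-in: the random projection replaces the paper's recurrent encoder, and the task names, dimensions, and heads are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared encoder maps any sentence to a fixed-length vector.
# A random projection stands in for the paper's recurrent encoder.
EMB_DIM, HID_DIM = 8, 4
W_shared = rng.normal(size=(EMB_DIM, HID_DIM))

def encode(token_vectors):
    """Mean-pool token embeddings, then project with the shared weights."""
    pooled = np.mean(token_vectors, axis=0)
    return np.tanh(pooled @ W_shared)

# Each task gets only a small private output head on top of the shared code,
# so gradients from all tasks shape the same encoder.
task_heads = {
    "nli":         rng.normal(size=(HID_DIM, 3)),  # 3-way entailment labels
    "translation": rng.normal(size=(HID_DIM, 5)),  # toy target vocabulary
}

def task_logits(task, token_vectors):
    return encode(token_vectors) @ task_heads[task]

sentence = rng.normal(size=(6, EMB_DIM))  # 6 tokens, already embedded
assert task_logits("nli", sentence).shape == (3,)
assert task_logits("translation", sentence).shape == (5,)
```

The point of the sketch is the shape of the parameter sharing, not the model itself: only `task_heads` differ per task, so any sentence can be encoded once and reused as a general-purpose feature vector.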

    Adversarial Generation of Natural Language

    Generative Adversarial Networks (GANs) have gathered a lot of attention from the computer vision community, yielding impressive results for image generation. Advances in the adversarial generation of natural language from noise however are not commensurate with the progress made in generating images, and still lag far behind likelihood based methods. In this paper, we take a step towards generating natural language with a GAN objective alone. We introduce a simple baseline that addresses the discrete output space problem without relying on gradient estimators and show that it is able to achieve state-of-the-art results on a Chinese poem generation dataset. We present quantitative results on generating sentences from context-free and probabilistic context-free grammars, and qualitative language modeling results. A conditional version is also described that can generate sequences conditioned on sentence characteristics. Comment: 11 pages, 3 figures, 5 tables
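The "discrete output space problem" the abstract mentions is that sampling tokens blocks gradient flow from discriminator to generator. One way around it, consistent with the baseline described here, is to hand the discriminator the generator's softmax distributions directly instead of sampled tokens. The sketch below shows only that interface; both networks are replaced by toy linear maps, and all sizes are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN, NOISE_DIM = 4, 3, 2

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy generator: noise -> a sequence of *distributions* over the vocabulary.
W_gen = rng.normal(size=(NOISE_DIM, SEQ_LEN * VOCAB))

def generate(noise):
    logits = (noise @ W_gen).reshape(SEQ_LEN, VOCAB)
    return softmax(logits)  # continuous, so gradients could flow through

# Toy discriminator scores a (SEQ_LEN, VOCAB) matrix; real sentences arrive
# as one-hot rows, fake ones as soft rows -- no sampling, hence no need for
# a gradient estimator such as REINFORCE.
w_disc = rng.normal(size=(SEQ_LEN * VOCAB,))

def discriminate(seq_probs):
    return float(seq_probs.reshape(-1) @ w_disc)

fake = generate(rng.normal(size=(NOISE_DIM,)))
assert fake.shape == (SEQ_LEN, VOCAB)
assert np.allclose(fake.sum(axis=1), 1.0)  # each position is a distribution
```

Because every step from noise to discriminator score is differentiable, the whole pipeline can in principle be trained end-to-end with a standard GAN objective; the mismatch between soft fake rows and one-hot real rows is exactly what such baselines must manage.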

    Training Adaptations and Seasonal Health: Their Cumulative Effect on the Physical Fitness Profile of All India Inter-University Athletes

    Purpose: To examine the effect of seasonal diseases on women netball players during the All India Inter-University Netball (Women) tournament held at the Central University of Haryana. Material and method: In this tournament, 40 players from different universities, aged 18 to 25, suffered illness attributable to the environment. Many players attended the health centre with different seasonal diseases, which affected their teams' performance. Headache, fever, cough, cold, sore throat, vomiting, loose motion, stomach pain and weakness were seen most often, so these were selected as the variables. Results: By observation we found that two players suffered from stomach pain and vomiting, one from stomach pain and cough, five from weakness and nausea, three from cold and cough, three from fever and cough, five from sore throat and cough, two from fever and headache, two from fever and common cold, and two from common cold and headache. Conclusion: Teams come to this tournament from different states of India, each with its own culture, environment and weather. Because the competition was organized in Haryana in winter, many teams had not trained in such conditions; owing to incomplete environmental adaptation, many players suffered from different diseases, and this affected their teams' performance.

    Algorithms for subgraph complementation to some classes of graphs

    For a class $\mathcal{G}$ of graphs, the objective of Subgraph Complementation to $\mathcal{G}$ is to decide whether there exists a subset $S$ of vertices of the input graph $G$ such that modifying $G$ by complementing the subgraph induced by $S$ results in a graph in $\mathcal{G}$. We obtain a polynomial-time algorithm for the problem when $\mathcal{G}$ is the class of graphs with minimum degree at least $k$, for a constant $k$, answering an open problem of Fomin et al. (Algorithmica, 2020). When $\mathcal{G}$ is the class of graphs without any induced copies of the star graph on $t+1$ vertices (for any constant $t \geq 3$) and the diamond, we obtain a polynomial-time algorithm for the problem. This is in contrast with a result of Antony et al. (Algorithmica, 2022) that the problem is NP-complete and cannot be solved in subexponential time (assuming the Exponential Time Hypothesis) when $\mathcal{G}$ is the class of graphs without any induced copies of the star graph on $t+1$ vertices, for every constant $t \geq 5$.
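The operation itself, toggling exactly the edges inside an induced subgraph, is easy to make concrete. The sketch below implements the complementation step and a brute-force search over all vertex subsets for the minimum-degree target class; the brute force is exponential and is only a checkable illustration of the problem statement, not the paper's polynomial-time algorithm.

```python
from itertools import combinations

def complement_subgraph(edges, s):
    """Complement the subgraph induced by s: pairs inside s are toggled,
    edges with an endpoint outside s are left unchanged."""
    s = set(s)
    edges = {frozenset(e) for e in edges}
    inside_pairs = {frozenset(p) for p in combinations(s, 2)}
    inside_edges = {e for e in edges if e <= s}
    return (edges - inside_edges) | (inside_pairs - inside_edges)

def has_min_degree(nodes, edges, k):
    deg = {v: 0 for v in nodes}
    for e in edges:
        a, b = tuple(e)
        deg[a] += 1
        deg[b] += 1
    return all(d >= k for d in deg.values())

def brute_force_sc_to_min_degree(nodes, edges, k):
    """Try every vertex subset (exponential; the paper gives poly time)."""
    for r in range(len(nodes) + 1):
        for s in combinations(nodes, r):
            if has_min_degree(nodes, complement_subgraph(edges, s), k):
                return set(s)
    return None

# Path 1-2-3 has minimum degree 1; complementing S = {1, 3} adds the edge
# {1, 3}, giving a triangle with minimum degree 2.
assert brute_force_sc_to_min_degree([1, 2, 3], {(1, 2), (2, 3)}, 2) == {1, 3}
```

The path-to-triangle example shows why the problem is non-trivial: the useful subset is not an edge of the graph, so the search genuinely ranges over all vertex subsets.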

    Formulation and In-Vitro evaluation of Nanocrystal formulation of poorly soluble drugs

    Introduction: Poor solubility, which affects about 40% of the new drug molecules investigated at present, is an issue of great concern in the pharmaceutical industry, and reducing the particle size of a drug candidate (i.e., to below 1000 nm) is one of the simplest and most efficient ways to overcome this challenge. Drug nanocrystals, solid nanosized drug particles, are defined as formulations consisting of 100% drug covered by a stabilizer layer. In this study an attempt was made to formulate and evaluate nanocrystals of poorly soluble drugs with low oral bioavailability. Material and method: Nanocrystals were prepared by the anti-solvent precipitation method, varying the concentrations of different stabilizers. The formulated nanocrystals were evaluated by determining physicochemical characteristics such as physical appearance, differential scanning calorimetry (DSC), scanning electron microscopy (SEM), X-ray powder diffractometry, solubility, particle size distribution, zeta potential, and in vitro drug release profile. Results: An in-vitro study was performed on the successful formulation in comparison to the drug powder using a dissolution apparatus. The particle sizes of RVT and PSNC-3 were found to be 1975.3 nm and 790.1 nm, respectively. Conclusion: Nanocrystals precipitated with different stabilizers resulted in the formation of small and uniform RVT nanocrystals with improved saturation solubility and dissolution rate. Keywords: Nanocrystal, poorly soluble drug

    On Extractive and Abstractive Neural Document Summarization with Transformer Language Models

    We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. Note: The abstract above was not written by the authors; it was generated by one of the models presented in this paper.
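The extract-then-condition pipeline can be sketched in a few lines. The paper trains a neural extractor and a transformer language model; the sketch below substitutes a deliberately crude frequency-based sentence scorer, so the function names and the scoring rule are assumptions for illustration only. What it preserves is the shape of the method: select salient sentences first, then hand only those to the generator as conditioning context.

```python
from collections import Counter
import re

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract_top_k(text, k=2):
    """Toy extractive step: rank sentences by how many of the document's
    frequent words they contain, keep the top k in document order."""
    sents = split_sentences(text)
    doc_freq = Counter(w for s in sents for w in s.lower().split())
    def score(s):
        return sum(doc_freq[w] for w in set(s.lower().split()))
    top = set(sorted(sents, key=score, reverse=True)[:k])
    return [s for s in sents if s in top]  # restore document order

doc = ("Transformers model long documents poorly. "
       "An extractive step selects salient sentences first. "
       "The language model then conditions on those sentences. "
       "Cats are unrelated to this topic.")
# The off-topic sentence is dropped; the rest would be fed to the
# abstractive model as its (much shorter) conditioning context.
context = " ".join(extract_top_k(doc, k=2))
```

The benefit claimed in the abstract follows from this structure: the abstractive model never has to attend over the full document, only over a short pre-filtered context.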

    Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study

    Neural generative models have become increasingly popular for building conversational agents. They offer flexibility, can be easily adapted to new domains, and require minimal domain engineering. A common criticism of these systems is that they seldom understand or use the available dialog history effectively. In this paper, we take an empirical approach to understanding how these models use the available dialog history by studying the sensitivity of the models to artificially introduced unnatural changes or perturbations to their context at test time. We experiment with 10 different types of perturbations on 4 multi-turn dialog datasets and find that commonly used neural dialog architectures like recurrent and transformer-based seq2seq models are rarely sensitive to most perturbations such as missing or reordered utterances, shuffled words, etc. By open-sourcing our code, we believe it will serve as a useful diagnostic tool for evaluating dialog systems in the future. Comment: To appear at ACL 2019 (oral; nominated for best paper)
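Perturbations of the kind the abstract describes are simple to implement. The sketch below shows three of them (dropping an utterance, reordering the history, and shuffling words within each utterance); the function names and the specific set of perturbations are illustrative assumptions, not the paper's exact ten.

```python
import random

def shuffle_words(utterance, rng):
    words = utterance.split()
    rng.shuffle(words)
    return " ".join(words)

def perturb_history(history, kind, seed=0):
    """Apply one test-time perturbation to a list of dialog utterances."""
    rng = random.Random(seed)
    h = list(history)
    if kind == "drop_first":
        return h[1:]                 # missing utterance
    if kind == "reorder":
        rng.shuffle(h)               # utterances out of order
        return h
    if kind == "shuffle_words":
        return [shuffle_words(u, rng) for u in h]  # within-utterance shuffle
    raise ValueError(f"unknown perturbation: {kind}")

history = ["hi there", "how are you", "fine thanks"]
assert perturb_history(history, "drop_first") == ["how are you", "fine thanks"]
assert sorted(perturb_history(history, "reorder")) == sorted(history)
```

The diagnostic then compares a model's output (or perplexity) on the clean history against each perturbed version: if the outputs barely change, the model is not really using that part of the history.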