
    Fatigue in patients with Sjögren’s syndrome and intervention of traditional herbal medicine

    Background: Fatigue is the main complaint of patients with primary Sjögren’s Syndrome (pSS), yet it is rarely addressed. Patients describe it as an uncontrollable lack of energy that negatively impacts health-related quality of life. To date, many studies have demonstrated that cytokines, depression, and sleep and endocrine disturbances are interrelated with pSS-related fatigue; however, the pathogenesis remains unclear. With its long history, Traditional Chinese Medicine (TCM) has become an increasingly popular alternative therapy among patients with various diseases, pSS in particular. Guided by its distinctive therapeutic principles, practitioners have achieved satisfactory results in relieving disease-related symptoms with Chinese Herbal Medicine (CHM). Materials and Methods: In this article, we succinctly review the factors most strongly correlated with pSS-related fatigue from the standpoint of Western medicine. Then, from the TCM perspective, we describe the theoretical mechanisms that lead to fatigue in patients with pSS. Results: According to TCM theory, we conclude that CHMs, as complementary and alternative medicines, are attractive options for alleviating pSS-related fatigue. Conclusion: In the clinic, physicians should remember to ask whether their patients tire easily. A combination of Yin-tonifying and Qi-tonifying CHM may be the best option for pSS-related fatigue. Keywords: Sjögren’s Syndrome, fatigue, Traditional Chinese Medicine, Chinese Herbal Medicine, rheumatic disease

    Deconvolutional Latent-Variable Model for Text Sequence Matching

    A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting. Comment: Accepted by AAAI-201
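
    The core architectural move — replacing an autoregressive LSTM decoder with stacked transposed convolutions that emit every position in parallel — can be sketched briefly. Below is a minimal PyTorch illustration of that idea only; the layer sizes, vocabulary, and output length are placeholders, not the paper's configuration.

```python
# Hedged sketch: a deconvolutional (transposed-convolution) sequence decoder.
# All dimensions below are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class DeconvDecoder(nn.Module):
    """Expands a fixed-size latent code into token logits using transposed
    convolutions, producing all positions at once (no autoregression)."""
    def __init__(self, latent_dim=100, hidden=300, vocab_size=5000):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, hidden, kernel_size=5, stride=2),   # length 1 -> 5
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, hidden, kernel_size=5, stride=2),       # length 5 -> 13
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, vocab_size, kernel_size=5, stride=2),   # length 13 -> 29
        )

    def forward(self, z):              # z: (batch, latent_dim)
        x = z.unsqueeze(-1)            # treat the code as a length-1 sequence
        logits = self.net(x)           # (batch, vocab_size, 29)
        return logits.transpose(1, 2)  # (batch, 29, vocab_size)

z = torch.randn(4, 100)
print(DeconvDecoder()(z).shape)        # torch.Size([4, 29, 5000])
```

    Because every output position is produced in one parallel pass, there is no token-by-token dependency at decoding time, which is where the speed advantage over an LSTM decoder comes from.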

    Towards More Efficient Insertion Transformer with Fractional Positional Encoding

    Auto-regressive neural sequence models have been shown to be effective across text generation tasks. However, their left-to-right decoding order prevents generation from being parallelized. The Insertion Transformer (Stern et al., 2019) is an attractive alternative that allows outputting multiple tokens in a single generation step. Nevertheless, because absolute positional encoding is incompatible with insertion-based generation schemes, it must refresh the encoding of every token in the generated partial hypothesis at each step, which can be costly. We design a novel reusable positional encoding scheme for insertion transformers called Fractional Positional Encoding (FPE), which allows reusing representations calculated in previous steps. Empirical studies on various text generation tasks demonstrate the effectiveness of FPE, which yields floating-point operation reductions and latency improvements in batched decoding.
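
    The reusable-encoding idea can be illustrated with a toy scheme: give each inserted token a fixed real-valued position halfway between its neighbours, so existing tokens never change position and their encodings can be cached across steps. This is a minimal sketch of that general principle, not the paper's exact FPE construction.

```python
# Hedged sketch: fractional positions for insertion-based decoding.
# The midpoint rule below is an illustrative stand-in for the paper's scheme.

def insert(hypothesis, slot, token):
    """Insert `token` into the partial hypothesis after index `slot`
    (slot = -1 inserts at the front). `hypothesis` is a list of
    (token, position) pairs in left-to-right order. The new token's
    position is the midpoint of its neighbours', so every existing
    token keeps its position and its cached encoding stays valid."""
    left = hypothesis[slot][1] if slot >= 0 else 0.0
    right = hypothesis[slot + 1][1] if slot + 1 < len(hypothesis) else 1.0
    hypothesis.insert(slot + 1, (token, (left + right) / 2.0))
    return hypothesis

hyp = []
insert(hyp, -1, "brown")   # [("brown", 0.5)]
insert(hyp, -1, "the")     # [("the", 0.25), ("brown", 0.5)]
insert(hyp, 1, "fox")      # existing positions are untouched
print(hyp)                 # [('the', 0.25), ('brown', 0.5), ('fox', 0.75)]
```

    With absolute integer positions, the same three insertions would renumber, and force re-encoding of, every token to the right of each insertion point at every step.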

    Learning a Hybrid Architecture for Sequence Regression and Annotation

    When learning a hidden Markov model (HMM), sequential observations can often be complemented by real-valued summary response variables generated from the path of hidden states. Such settings arise in numerous domains, including many applications in biology, like motif discovery and genome annotation. In this paper, we present a flexible framework for jointly modeling both latent sequence features and the functional mapping that relates the summary response variables to the hidden state sequence. The algorithm is compatible with a rich set of mapping functions. Results show that the availability of additional continuous response variables can simultaneously improve the annotation of the sequential observations and yield good prediction performance on both synthetic and real-world datasets. Comment: AAAI 201
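
    A stripped-down version of the joint objective can be written in a few lines: the HMM log-likelihood of the observations plus a Gaussian log-likelihood for the summary response. Here the response mean is taken to be a linear function of expected state occupancy, which is just one illustrative choice from the rich set of mapping functions the framework admits; the sketch assumes numpy and a discrete-emission HMM.

```python
# Hedged sketch: jointly scoring an HMM and a real-valued summary response.
# The linear occupancy-to-response mapping is an illustrative assumption.
import numpy as np

def joint_log_likelihood(pi, A, B, w, obs, y, noise_var=1.0):
    """log p(obs) under the HMM plus log p(y | expected state occupancy),
    with y's mean a linear function (weights w) of average occupancy."""
    T, K = len(obs), len(pi)
    # Forward pass (unscaled; fine for short toy sequences).
    alpha = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    seq_ll = np.log(alpha[-1].sum())
    # Backward pass, then posterior state occupancies gamma.
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    occupancy = gamma.mean(axis=0)          # average time spent in each state
    resp_ll = -0.5 * ((y - w @ occupancy) ** 2 / noise_var
                      + np.log(2 * np.pi * noise_var))
    return seq_ll + resp_ll

# Toy 2-state, 2-symbol model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
w = np.array([1.0, 3.0])
print(joint_log_likelihood(pi, A, B, w, obs=[0, 0, 1, 1], y=2.0))
```

    Optimizing both terms together is what lets the response variable pull the posterior over hidden paths toward states that also explain y, which is how the annotation itself can improve.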

    The Entity-Deduction Arena: A playground for probing the conversational reasoning and planning capabilities of LLMs

    Large language models (LLMs) are effective at answering questions that are clearly asked. However, when faced with ambiguous queries they can act unpredictably and produce incorrect outputs. This underscores the need for intelligent agents capable of asking clarification questions to resolve ambiguities effectively, a capability that requires complex understanding, state tracking, reasoning, and planning over multiple conversational turns. However, measuring this directly can be challenging. In this paper, we offer a surrogate problem that assesses an LLM's capability to deduce an entity unknown to itself, but revealed to a judge, by asking the judge a series of queries. This entity-deducing game can serve as an evaluation framework to probe the conversational reasoning and planning capabilities of language models. We systematically evaluate various LLMs and discover significant differences in their performance on this task. We find that strong LLMs like GPT-4 outperform human players by a large margin. We further employ Behavior Cloning (BC) to examine whether a weaker model can imitate a stronger model and generalize to new data or domains using only the stronger model's demonstrations. Finally, we propose using Reinforcement Learning to enhance the reasoning and planning capacity of Vicuna models through episodes of game playing, which leads to significant performance improvements. We hope that this problem offers insights into how autonomous agents could be trained to behave more intelligently in ambiguous circumstances. Comment: 22 pages
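
    The game protocol itself is easy to state in code. The harness below is hypothetical: `guesser` and `judge` stand in for LLM calls, and the answer vocabulary and turn limit are assumptions rather than the paper's exact setup.

```python
# Hedged sketch: one episode of an entity-deduction (20-questions-style) game.
# `guesser` and `judge` are hypothetical stand-ins for LLM calls.

def play(guesser, judge, entity, max_turns=20):
    """The guesser asks questions; the judge, who knows `entity`, answers.
    The episode ends when the judge confirms a correct guess."""
    history = []
    for turn in range(max_turns):
        question = guesser(history)        # e.g. "Is it an animal?"
        answer = judge(entity, question)   # e.g. "Yes" / "No" / "Bingo!"
        history.append((question, answer))
        if answer == "Bingo!":
            return {"win": True, "turns": turn + 1, "history": history}
    return {"win": False, "turns": max_turns, "history": history}

# Scripted players, just to make the loop runnable without an LLM.
def scripted_judge(entity, question):
    guess = question.removeprefix("Is it ").rstrip("?")
    return "Bingo!" if guess == entity else "No"

def scripted_guesser(history):
    candidates = ["a city", "an animal", "Paris"]
    return f"Is it {candidates[len(history) % len(candidates)]}?"

print(play(scripted_guesser, scripted_judge, entity="Paris"))
```

    Swapping the scripted players for model calls turns the same loop into an evaluation harness (scored by win rate and turns used) or, as proposed here, into a generator of episodes for imitation or reinforcement learning.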