    C. elegans VANG-1 Modulates Life Span via Insulin/IGF-1-Like Signaling

    The planar cell polarity (PCP) pathway is highly conserved from Drosophila to humans, and a PCP-like pathway has recently been described in the nematode Caenorhabditis elegans. The developmental function of this pathway is to coordinate the orientation of cells or structures within the plane of an epithelium, or to organize the cell-cell intercalation required for correct morphogenesis. Here, we describe a novel role of VANG-1, the only C. elegans ortholog of the conserved PCP component Strabismus/Van Gogh. We show that two alleles of vang-1 and depletion of the protein by RNAi increase mean life span by up to 40%. Consistent with the longevity phenotype, vang-1 animals also show enhanced resistance to thermal and oxidative stress and decreased lipofuscin accumulation. In addition, vang-1 mutants show defects such as reduced brood size, decreased ovulation rate and prolonged reproductive span, which are also associated with gerontogenes. The germline, but not the intestine or neurons, seems to be the primary site of vang-1 function. Life span extension in vang-1 mutants depends on the insulin/IGF-1-like receptor DAF-2 and the DAF-16/FoxO transcription factor. RNAi against the phase II detoxification transcription factor SKN-1/Nrf2 also reduced vang-1 life span, which might be explained by gradual inhibition of insulin/IGF-1-like signaling in vang-1. This is the first time that a key player of the PCP pathway has been shown to be involved in the insulin/IGF-1-like signaling-dependent modulation of life span in C. elegans.

    Hierarchical Bayesian Domain Adaptation

    Multi-task learning is the problem of maximizing the performance of a system across a number of related tasks. When applied to multiple domains for the same task, it is similar to domain adaptation, but symmetric, rather than limited to improving performance on a target domain. We present a more principled, better-performing model for this problem, based on the use of a hierarchical Bayesian prior. Each domain has its own domain-specific parameter for each feature, but rather than a constant prior over these parameters, the model links them via a hierarchical Bayesian global prior. This prior encourages the features to have similar weights across domains, unless there is good evidence to the contrary. We show that the method of (Daumé III, 2007), which was presented as a simple “preprocessing step,” is actually equivalent, except that our representation explicitly separates hyperparameters that were tied in his work. We demonstrate that allowing different values for these hyperparameters significantly improves performance over both a strong baseline and (Daumé III, 2007), within both a conditional random field sequence model for named entity recognition and a discriminatively trained dependency parser.
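
    The tying mechanism can be made concrete with a small sketch. The Python fragment below is not the authors' code; the names w_star, sigma_d, and sigma_star are illustrative. It writes down the log-density of the hierarchical Gaussian prior described above: each domain-specific weight vector is drawn around a shared global vector, which itself has a zero-mean prior.

```python
# Minimal sketch of a hierarchical Gaussian prior for domain adaptation
# (an illustration, not the paper's implementation).
import numpy as np

def log_prior(w_domains, w_star, sigma_d=1.0, sigma_star=1.0):
    """Log-density of the hierarchical prior, up to an additive constant.

    w_domains: dict mapping domain name -> per-domain weight vector
    w_star:    shared global weight vector that ties the domains together
    """
    # Global prior: w_star ~ N(0, sigma_star^2 I)
    lp = -np.sum(w_star ** 2) / (2.0 * sigma_star ** 2)
    # Domain priors: w_d ~ N(w_star, sigma_d^2 I), so domain weights stay
    # close to the global ones unless the data pulls them away.
    for w_d in w_domains.values():
        lp -= np.sum((w_d - w_star) ** 2) / (2.0 * sigma_d ** 2)
    return lp
```

    Training would then maximize the data log-likelihood plus this log-prior jointly over the global and domain-specific weights; allowing the variance hyperparameters to take different values, rather than being tied, is the flexibility the abstract credits for the performance gain.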

    Nested Named Entity Recognition

    Many named entities contain other named entities inside them. Despite this fact, the field of named entity recognition has almost entirely ignored nested named entity recognition, for technological rather than ideological reasons. In this paper, we present a new technique for recognizing nested named entities, by using a discriminative constituency parser. To train the model, we transform each sentence into a tree, with constituents for each named entity (and no other syntactic structure). We present results on both newspaper and biomedical corpora which contain nested named entities. In three out of four sets of experiments, our model outperforms a standard semi-CRF on the more traditional top-level entities. At the same time, we improve the overall F-score by up to 30% over the flat model, which is unable to recover any nested entities.
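
    To make the sentence-to-tree transformation concrete, here is a hedged Python sketch (the span format and function name are assumptions, not the paper's code) that converts a token list and a set of properly nested entity spans into an S-expression tree with one constituent per entity and a single ROOT:

```python
# Illustrative sketch of the sentence-to-tree transformation: each named
# entity becomes a constituent, with no other syntactic structure.
def spans_to_tree(tokens, entities):
    """tokens: list of words; entities: list of (start, end, label) with
    end exclusive and spans properly nested. Returns an S-expression."""
    # Sort so that outer (longer) spans precede the spans they contain.
    entities = sorted(entities, key=lambda s: (s[0], -(s[1] - s[0])))

    def build(lo, hi, ents):
        parts, i = [], lo
        while i < hi:
            # Look for an entity starting at i that fits inside (lo, hi).
            inner = [e for e in ents if e[0] == i and e[1] <= hi]
            if inner:
                s, e, label = inner[0]          # outermost span at i
                rest = [x for x in ents if x != inner[0]]
                parts.append("(%s %s)" % (label, build(s, e, rest)))
                i = e
            else:
                parts.append(tokens[i])
                i += 1
        return " ".join(parts)

    return "(ROOT %s)" % build(0, len(tokens), entities)
```

    For example, spans_to_tree(["Bank", "of", "China", "opened"], [(0, 3, "ORG"), (2, 3, "GPE")]) yields "(ROOT (ORG Bank of (GPE China)) opened)", a tree a discriminative constituency parser can be trained to recover.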

    Incorporating non-local information into information extraction systems by gibbs sampling

    Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks.
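
    The substitution of simulated annealing for Viterbi decoding can be sketched as follows. This Python fragment is an illustrative sketch, not the system's code: cond_probs stands in for the model's conditional distribution P(y_i | y_-i, x) (returning one probability per label, in the order of labels), and the exponential cooling schedule is an assumption. Because cond_probs may consult the entire current labeling, non-local factors fit in naturally, and as the temperature drops the sampler approaches a maximizing assignment.

```python
# Hedged sketch of annealed Gibbs sampling over a label sequence.
import random

def annealed_gibbs(x, labels, cond_probs, n_iters=100, t0=2.0, t_min=0.01):
    y = [random.choice(labels) for _ in x]      # random initial labeling
    for it in range(n_iters):
        temp = max(t_min, t0 * (0.95 ** it))    # exponential cooling
        for i in range(len(y)):
            probs = cond_probs(x, y, i)         # P(y_i = l | y_-i, x)
            # Sharpen the distribution as temperature drops; at T -> 0
            # this approaches a greedy (argmax) update.
            weights = [p ** (1.0 / temp) for p in probs]
            z = sum(weights)
            r, acc = random.random() * z, 0.0
            for label, w in zip(labels, weights):
                acc += w
                if r <= acc:
                    y[i] = label
                    break
    return y
```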

    The infinite tree

    Historically, unsupervised learning techniques have lacked a principled technique for selecting the number of unseen components. Research into non-parametric priors, such as the Dirichlet process, has instead enabled the use of infinite models, in which the number of hidden categories is not fixed but can grow with the amount of training data. Here we develop the infinite tree, a new infinite model capable of representing recursive branching structure over an arbitrarily large set of hidden categories. Specifically, we develop three infinite tree models, each of which enforces different independence assumptions, and for each model we define a simple direct assignment sampling inference procedure. We demonstrate the utility of our models by doing unsupervised learning of part-of-speech tags from treebank dependency skeleton structure, achieving an accuracy of 75.34%, and by doing unsupervised splitting of part-of-speech tags, which increases the accuracy of a generative dependency parser from 85.11% to 87.35%.
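
    The way such infinite models grow new categories rests on Chinese-restaurant-process style sampling. The minimal Python sketch below is an assumption-laden illustration, not the paper's sampler: the actual tree models additionally weight each choice by transition and emission likelihoods, which are elided here. It shows only the core step, choosing an existing hidden category in proportion to its current count, or a brand-new one with probability proportional to the concentration parameter alpha.

```python
# Minimal Chinese-restaurant-process sketch of direct assignment sampling.
import random

def crp_assign(counts, alpha):
    """counts: list of existing category sizes. Returns a category index,
    possibly len(counts), which signals a newly created category."""
    total = sum(counts) + alpha
    r = random.random() * total
    acc = 0.0
    for k, c in enumerate(counts):
        acc += c
        if r <= acc:
            return k                # reuse an existing category
    return len(counts)              # open up a new category
```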