103 research outputs found

    THE REACTION OF α-BROMODIPHENYLACETONITRILE AND POTASSIUM p-CHLORO-N-CYANOANILIDE


    Probabilités, propensions, utilités (Probabilities, Propensities, Utilities)

    We show how a consumer theory, similar to the standard theory, can be derived by replacing the notion of utility with a notion of propensity, defined as the objective probability that an agent makes a given choice when free of constraints. We show that this propensity can be combined with a budget constraint, and we then generalize the results to the theory of the producer.
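
    A minimal formal sketch of the propensity idea described above, assuming the propensity is modelled as a choice probability p(x) over bundles x; the prices q and wealth w are illustrative notation, not taken from the paper:

        % Illustrative only: a propensity-based analogue of utility maximization.
        % p(x) is the objective probability that the unconstrained agent picks bundle x;
        % the constrained choice is the most probable bundle the budget allows:
        \[
          x^{*} \in \arg\max_{x \ge 0} \; p(x)
          \quad \text{subject to} \quad q \cdot x \le w ,
        \]
        % which plays the same role as \(\max_{x} u(x)\) subject to \(q \cdot x \le w\)
        % in standard consumer theory; the producer case replaces the budget set
        % by a technology constraint.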

    Study on the Lithiation Reaction of 3-Diisopropylcarbamoyl-N-pivaloylphenylethylamine

    As a continuation of our earlier studies on the lithiation-based synthesis of 8-methoxy-, 8-fluoro- and 8-chloro-3,4-dihydroisoquinoline, a similar approach was investigated for the preparation of the 8-diisopropylcarbamoyl congener. The corresponding N-pivaloyl phenylethylamine key intermediate was prepared via four new bifunctional intermediates in high overall yield. Lithiation of this intermediate followed by quenching with dimethylformamide led to a mixture: besides the desired compound, containing the formyl moiety in the ortho position common to the two aromatic substituents, the isomer formylated in the other ortho position of the carbamoyl moiety was surprisingly obtained as the major product. The crude mixture could finally be transformed under acidic conditions into the target compound, the 8-diisopropylcarbamoyl-substituted 3,4-dihydroisoquinoline, albeit in low yield.

    Understanding In-Context Learning via Supportive Pretraining Data

    In-context learning (ICL) improves language models' performance on a variety of NLP tasks by simply demonstrating a handful of examples at inference time. It is not well understood why ICL ability emerges, as the model has never been specifically trained on such demonstrations. Unlike prior work that explores implicit mechanisms behind ICL, we study ICL by investigating the pretraining data. Specifically, we first adapt an iterative, gradient-based approach to find a small subset of pretraining data that supports ICL. We observe that continued pretraining on this small subset significantly improves the model's ICL ability, by up to 18%. We then compare the supportive subset contrastively with random subsets of pretraining data and discover: (1) The supportive pretraining data for ICL do not have a higher domain relevance to downstream tasks. (2) The supportive pretraining data have a higher mass of rarely occurring, long-tail tokens. (3) The supportive pretraining data are challenging examples where the information gain from long-range context is below average, indicating that learning to incorporate difficult long-range context encourages ICL. Our work takes a first step towards understanding ICL by analyzing instance-level pretraining data. Our insights have the potential to enhance the ICL ability of language models by actively guiding the construction of pretraining data in the future.
    Comment: ACL 202
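
    The abstract only states that the selection procedure is iterative and gradient-based, so the sketch below shows one plausible shape of a single selection step rather than the paper's actual method; the gradient-alignment score, the loss_fn/model interfaces, and all names are assumptions.

        # Hypothetical sketch: score each pretraining example by how well its
        # gradient aligns with the gradient of an in-context-learning (ICL) loss,
        # then keep the top-scoring subset for continued pretraining.
        import torch


        def flat_grad(loss, model):
            """Flatten the gradient of `loss` with respect to all trainable parameters."""
            params = [p for p in model.parameters() if p.requires_grad]
            grads = torch.autograd.grad(loss, params)
            return torch.cat([g.reshape(-1) for g in grads])


        def select_supportive_subset(model, loss_fn, pretrain_examples, icl_batch, k):
            """Keep the k pretraining examples whose gradients align best with the ICL gradient."""
            icl_grad = flat_grad(loss_fn(model, icl_batch), model)
            scores = []
            for example in pretrain_examples:
                g = flat_grad(loss_fn(model, example), model)
                scores.append(torch.nn.functional.cosine_similarity(g, icl_grad, dim=0).item())
            ranked = sorted(range(len(pretrain_examples)), key=lambda i: scores[i], reverse=True)
            return [pretrain_examples[i] for i in ranked[:k]]

    Continued pretraining on the returned subset, followed by re-scoring, would give the kind of iterative loop the abstract alludes to.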

    Open Vocabulary Extreme Classification Using Generative Models

    The extreme multi-label classification (XMC) task aims at tagging content with a subset of labels from an extremely large label set. The label vocabulary is typically defined in advance by domain experts and assumed to capture all necessary tags. However, in real-world scenarios this label set, although large, is often incomplete, and experts frequently need to refine it. To develop systems that simplify this process, we introduce the task of open vocabulary XMC (OXMC): given a piece of content, predict a set of labels, some of which may be outside of the known tag set. Hence, in addition to not having training data for some labels, as is the case in zero-shot classification, models need to invent some labels on the fly. We propose GROOV, a fine-tuned seq2seq model for OXMC that generates the set of labels as a flat sequence and is trained using a novel loss independent of predicted label order. We show the efficacy of the approach by experimenting with popular XMC datasets, for which GROOV is able to predict meaningful labels outside the given vocabulary while performing on par with state-of-the-art solutions for known labels.
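
    The abstract describes GROOV only at a high level (a fine-tuned seq2seq model that emits the label set as a flat sequence, trained with a loss independent of label order), so the following inference-time sketch uses a generic HuggingFace seq2seq checkpoint; the checkpoint name, prompt prefix, and label separator are assumptions, and the order-independent training loss is not shown.

        # Hypothetical inference sketch for open-vocabulary extreme classification:
        # a seq2seq model emits labels as one flat, separator-delimited sequence,
        # so it can also propose labels outside the known tag set.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        MODEL_NAME = "t5-small"  # stand-in checkpoint, not the GROOV model itself
        SEPARATOR = ";"          # assumed label separator in the generated sequence

        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


        def predict_labels(text: str, max_new_tokens: int = 64) -> list[str]:
            """Generate labels as one flat sequence, then treat them as an unordered set."""
            inputs = tokenizer("tag: " + text, return_tensors="pt", truncation=True)
            output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
            decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
            labels = [label.strip() for label in decoded.split(SEPARATOR) if label.strip()]
            return sorted(set(labels))  # duplicates and ordering are discarded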

    Text Characterization Toolkit

    In NLP, models are usually evaluated by reporting single-number performance scores on a number of readily available benchmarks, without much deeper analysis. Here, we argue that, especially given the well-known fact that benchmarks often contain biases, artefacts, and spurious correlations, deeper analysis of results should become the de facto standard when presenting new models or benchmarks. We present a tool that researchers can use to study properties of a dataset and the influence of those properties on their models' behaviour. Our Text Characterization Toolkit includes both an easy-to-use annotation tool and off-the-shelf scripts that can be used for specific analyses. We also present use cases from three different domains: we use the tool to predict which examples are difficult for well-known trained models and to identify (potentially harmful) biases and heuristics that are present in a dataset.
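
    As an illustration of the kind of dataset-property analysis such a toolkit enables (this is not the Text Characterization Toolkit's actual API), the sketch below computes a few per-example text properties and correlates them with whether a model answered each example correctly; the property set and all names are assumptions.

        # Toy dataset-property analysis: compute simple per-example characteristics
        # and check how each one relates to per-example model correctness.
        from statistics import correlation  # Python 3.10+


        def characterize(text: str) -> dict[str, float]:
            """A few toy per-example properties; a real toolkit would offer richer ones."""
            tokens = text.split()
            return {
                "length": float(len(tokens)),
                "avg_word_len": sum(len(t) for t in tokens) / max(len(tokens), 1),
                "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
            }


        def property_vs_accuracy(texts: list[str], correct: list[bool]) -> dict[str, float]:
            """Pearson correlation between each text property and per-example correctness."""
            props = [characterize(t) for t in texts]
            outcomes = [1.0 if c else 0.0 for c in correct]
            return {name: correlation([p[name] for p in props], outcomes) for name in props[0]}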

    Assignment of Absolute Configuration to Enantiomers of Anti-Alzheimer Drug Candidate Blarcamesine

    Blarcamesine is a promising investigational drug for the treatment of Alzheimer's disease. The international nonproprietary name blarcamesine refers to a racemic compound, although it seems likely that it will be marketed in an enantiopure form. A resolution process has been described in the literature, but the absolute configurations of the enantiomers have not yet been disclosed. In the present study, crystals of the (R)-(-)- and (S)-(+)-mandelate salts of (+)- and (-)-blarcamesine, and of (R)-(+)-blarcamesine itself, suitable for single-crystal X-ray diffraction measurements were prepared, and the absolute configurations of (+)- and (-)-blarcamesine have been determined.