221 research outputs found

    Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation

    Acquiring noun phrases from running text is useful for many applications, such as word grouping and terminology indexing. The reported literature adopts either purely probabilistic approaches or purely rule-based noun-phrase grammars to tackle this problem. In this paper, we apply a probabilistic chunker to decide the implicit boundaries of constituents and use linguistic knowledge to extract noun phrases with a finite-state mechanism. The test texts come from the SUSANNE Corpus, and the results are evaluated automatically against the corpus's parse field. The results of this preliminary experiment are encouraging.
    Comment: 8 pages, PostScript file, Unix compressed, uuencoded
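    The two-stage idea above (a chunker supplies part-of-speech tags, then a finite-state pattern picks out noun phrases) can be sketched as follows. The toy tag set and the pattern are illustrative assumptions, not the authors' actual grammar:

```python
import re

# Assumed finite-state NP pattern over space-delimited POS tags:
# optional determiner, any adjectives, one or more nouns.
# (A real tag set would also handle NNS, NNP, etc.)
NP_PATTERN = re.compile(r"(DT )?(JJ )*(NN )+")

def extract_noun_phrases(tagged_tokens):
    """tagged_tokens: list of (word, pos) pairs; returns list of NP strings."""
    tag_string = "".join(tag + " " for _, tag in tagged_tokens)
    phrases = []
    for match in NP_PATTERN.finditer(tag_string):
        # Map character offsets in the tag string back to token indices
        # by counting the spaces that terminate each tag.
        start = tag_string[: match.start()].count(" ")
        end = tag_string[: match.end()].count(" ")
        phrases.append(" ".join(w for w, _ in tagged_tokens[start:end]))
    return phrases

tokens = [("the", "DT"), ("hybrid", "JJ"), ("approach", "NN"),
          ("works", "VBZ"), ("on", "IN"), ("large", "JJ"), ("texts", "NN")]
print(extract_noun_phrases(tokens))  # ['the hybrid approach', 'large texts']
```

    In the paper's setting, the tags would come from the probabilistic chunker rather than being given by hand.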

    Equity Building Actions of New Ventures in a High-Velocity Market: Research on Taiwan's Internet Entrepreneurial Organizations

    Drawing on theories such as the resource-based view, new product development, and strategic alliances, we propose the equity-building actions of new ventures in the Internet industry. We note that new ventures' purpose in raising capital before going public is not simply to obtain funds, but to acquire rare resources and build core competence through equity investment or joint ownership. Through interviews, we identify factors that affect the equity-building process and propose two propositions. First, the original core resources of a new venture affect the equity-building process, especially target selection, alliance timing, and alliance preference. Second, equity-building actions before an IPO are part of a growth strategy for emerging firms. The findings of this research help clarify the linkage between resource endowment and equity-building actions, and help new ventures build competitive advantages during their founding period.

    Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations

    Large language models (LMs) have exhibited a superior in-context learning (ICL) ability to adapt to target tasks when prompted with a few input-output demonstrations. Towards better ICL, various methods have been proposed to select representative demonstrations from existing training corpora. However, such a setting is not aligned with real-world practice, as end-users usually query LMs without access to demonstration pools. Inspired by evidence suggesting that LMs' zero-shot capabilities are underrated, and that the role of demonstrations is primarily to expose a model's intrinsic functionality, we introduce Self-ICL, a simple framework for zero-shot ICL. Given a test input, Self-ICL first prompts the model to generate pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we construct pseudo-demonstrations from the pseudo-input-label pairs and perform ICL for the test input. Evaluation on BIG-Bench Hard shows that Self-ICL steadily surpasses zero-shot and zero-shot chain-of-thought baselines in both head-to-head and all-task-average performance. Our findings suggest the possibility of bootstrapping LMs' intrinsic capabilities towards better zero-shot performance.
    Comment: Work in progress
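    The three Self-ICL steps can be sketched as a small loop. Here `query_lm` stands in for any LM completion call, and the prompt wording is an assumption, not the paper's actual templates:

```python
# Minimal sketch of the Self-ICL loop, assuming a generic `query_lm`
# function that maps a prompt string to a completion string.
def self_icl(test_input, query_lm, num_shots=3):
    # Step 1: have the model invent pseudo-inputs similar to the test input.
    pseudo_inputs = [
        query_lm(f"Here is an example task input:\n{test_input}\n"
                 f"Write one new, similar task input ({i + 1} of {num_shots}):")
        for i in range(num_shots)
    ]
    # Step 2: zero-shot predict a pseudo-label for each pseudo-input.
    pseudo_labels = [query_lm(f"Input: {p}\nAnswer:") for p in pseudo_inputs]
    # Step 3: prepend the pseudo-demonstrations and answer the real input.
    demos = "".join(f"Input: {p}\nAnswer: {a}\n\n"
                    for p, a in zip(pseudo_inputs, pseudo_labels))
    return query_lm(f"{demos}Input: {test_input}\nAnswer:")
```

    For `num_shots` demonstrations, this costs 2 × `num_shots` + 1 model calls: one per pseudo-input, one per pseudo-label, and one final prediction.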

    Large Language Models Perform Diagnostic Reasoning

    We explore the extension of chain-of-thought (CoT) prompting to medical reasoning for the task of automatic diagnosis. Motivated by doctors' underlying reasoning process, we present Diagnostic-Reasoning CoT (DR-CoT). Empirical results demonstrate that by simply prompting large language models trained only on a general text corpus with two DR-CoT exemplars, diagnostic accuracy improves by 15% compared to standard prompting. Moreover, the gap reaches a pronounced 18% in out-of-domain settings. Our findings suggest that expert-knowledge reasoning in large language models can be elicited through proper prompting.
    Comment: Accepted as a Tiny Paper at ICLR 2023 (10 pages, 5 figures)

    Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation

    In this paper, we address the hallucination problem commonly found in natural language generation tasks. Language models often generate fluent and convincing content that nevertheless lacks consistency with the provided source, resulting in potential inaccuracies. We propose a new decoding method called Fidelity-Enriched Contrastive Search (FECS), which augments the contrastive search framework with context-aware regularization terms. FECS promotes tokens that are semantically similar to the provided source while penalizing repetitiveness in the generated text. We demonstrate its effectiveness on two tasks prone to hallucination: abstractive summarization and dialogue generation. Results show that FECS consistently enhances faithfulness across various language model sizes while maintaining output diversity comparable to well-performing decoding algorithms.
    Comment: Accepted as a short paper at EMNLP 2023
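    A decoding rule in the spirit of FECS might score each candidate token by combining its model probability, a degeneration penalty (as in plain contrastive search), and a faithfulness reward toward the source representations. The weights and the similarity function below are placeholder assumptions, not the paper's exact formulation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fecs_score(model_prob, cand_vec, context_vecs, source_vecs,
               alpha=0.6, beta=0.2):
    """Score a candidate token: reward model confidence and similarity to
    the source, penalize similarity to already-generated context."""
    degeneration = max(cosine(cand_vec, h) for h in context_vecs)
    faithfulness = max(cosine(cand_vec, s) for s in source_vecs)
    return ((1 - alpha - beta) * model_prob
            - alpha * degeneration
            + beta * faithfulness)
```

    At each step the decoder would pick the top-k candidates by probability and emit the one with the highest `fecs_score`; setting `beta=0` recovers a plain contrastive-search-style rule under these assumptions.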

    ZARA: Improving Few-Shot Self-Rationalization for Small Language Models

    Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gains for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations emerges only in large-scale LMs, which have poor accessibility. In this work, we explore the less-studied setting of leveraging explanations in small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process by which humans assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), which automatically constructs pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show that ZARA achieves SOTA performance on the FEB benchmark for both task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluations validating ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.
    Comment: Accepted as a long paper at EMNLP Findings 2023
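    ZARA's reduction of plausibility judgement to NLI can be sketched as a filter: treat the input plus rationale as the premise and the answer as the hypothesis, and keep only the pairs an NLI model judges entailed. The premise/hypothesis wording and the 0.9 threshold below are assumptions, not the paper's settings:

```python
# Sketch of the ZARA-style filtering step. `nli` stands in for any
# premise/hypothesis classifier returning an entailment probability.
def select_pseudo_parallel(pairs, nli, threshold=0.9):
    """pairs: list of (task_input, rationale, answer) triples.
    Returns the subset whose rationale entails the answer."""
    kept = []
    for task_input, rationale, answer in pairs:
        premise = f"{task_input} {rationale}"
        hypothesis = f"The answer is {answer}."
        if nli(premise, hypothesis) >= threshold:
            kept.append((task_input, rationale, answer))
    return kept
```

    The surviving pairs would then serve as pseudo-parallel data for self-training the small LM.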

    Sorafenib for hepatocellular carcinoma patients beyond Milan criteria after orthotopic liver transplantation: a case control study

    Background: Orthotopic liver transplantation (OLT) is one of the most effective treatments for patients with hepatocellular carcinoma (HCC) within the Milan criteria. However, for patients beyond these criteria, the recurrence rate is higher and the prognosis is worse. Sorafenib is the only drug showing survival benefits in advanced HCC patients; however, its role in patients beyond the Milan criteria after OLT remains unclear and requires further investigation.
    Methods: In this case-control study, we retrospectively analyzed 17 Chinese patients beyond the Milan criteria who underwent OLT for HCC. These patients were stratified into adjuvant (n = 5), palliative (n = 6), and control (n = 6) groups.
    Results: Nine of 11 patients who received sorafenib after OLT needed dose reduction due to side effects of grade 2 or higher. Disease-free survival rates for patients with or without adjuvant sorafenib were 100% versus 37.5% (p = 0.034) at 6 months, 66.7% versus 9.4% (p = 0.026) at 12 months, and 66.7% versus 0.0% (p = 0.011) at 18 months, respectively. Overall survival rates for the palliative and control groups were 66.7% versus 40.0% (p = 0.248) at 6 months, 66.7% versus 40.0% (p = 0.248) at 12 months, and 50.0% versus 20.0% (p = 0.17) at 18 months, respectively. Patients in the adjuvant group had better overall survival than those in the palliative and control groups (p = 0.031) at 24-month follow-up.
    Conclusions: Adjuvant sorafenib may extend both disease-free and overall survival for HCC patients beyond the Milan criteria after OLT.

    Further evidence on bear market predictability: The role of the external finance premium

    In this paper, we revisit bear market predictability by employing a number of variables widely used in forecasting stock returns. In particular, we focus on variables related to the presence of imperfect credit markets. We evaluate prediction performance using in-sample and out-of-sample tests. Empirical evidence from the US stock market suggests that among the variables we investigate, the default yield spread, inflation, and the term spread are useful in predicting bear markets. Further, we find that the default yield spread provides superior out-of-sample predictability for bear markets one to three months ahead, which suggests that the external finance premium carries informative content about the financial market.
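    The out-of-sample protocol described here can be sketched as an expanding-window evaluation: refit on all data up to month t, predict the bear-market state at t, and tally accuracy. `fit_predict` stands in for any classifier (e.g. a probit on the default yield spread); the toy setup is an assumption, not the paper's design:

```python
# Expanding-window out-of-sample evaluation sketch.
def expanding_window_accuracy(features, states, fit_predict, min_train=3):
    """features[t], states[t]: predictor value and 0/1 bear-market flag
    for month t. `fit_predict(X, y, x_new)` trains on history (X, y)
    and returns a 0/1 prediction for x_new."""
    correct = 0
    total = 0
    for t in range(min_train, len(states)):
        pred = fit_predict(features[:t], states[:t], features[t])
        correct += int(pred == states[t])
        total += 1
    return correct / total
```

    A naive persistence rule (predict last month's state) gives a baseline against which a spread-based classifier's out-of-sample gain could be measured.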