101 research outputs found

    Research on Business Model of Self-operated Import E-commerce Based on Value Network: A Case Study of KAOLA.COM

    From the perspective of the value network, this paper takes KAOLA.COM as a case study to analyze the business model of self-operated cross-border e-commerce. Building on a summary of the relevant theory, it proposes a research framework comprising value proposition, value creation, value realization, and value support. The paper analyzes KAOLA.COM's development status and value-network business model, and puts forward four suggestions for optimizing the value network of China's self-operated cross-border e-commerce: (1) segment customers and combine online and offline channels; (2) strictly control product authenticity and strengthen supply chain management; (3) expand product categories and develop third-party revenue; (4) improve customer relationships and accelerate the layout of overseas warehouses. The purpose of the research is to provide a reference for the optimization and development of self-operated cross-border e-commerce. Keywords: value network; self-operated cross-border e-commerce; business model; KAOLA.COM

    Assessing and predicting small industrial enterprises' credit ratings: A fuzzy decision-making approach

    Corporate credit-rating assessment plays a crucial role in helping financial institutions make lending decisions and in reducing the financial constraints of small enterprises. This paper presents a new approach for assessing small industrial enterprises' credit ratings using fuzzy decision-making methods, and tests it on real bank loan data from 1,820 small industrial enterprises in China. The proposed rating procedure includes (1) using triangular fuzzy numbers to quantify the qualitative evaluation indicators; (2) adopting correlation analysis, univariate analysis, and a backward stepwise feature-selection method to select the input features; (3) employing the best-worst method (BWM) combined with the entropy weight method (EWM), the fuzzy c-means algorithm, and the technique for order of preference by similarity to ideal solution (TOPSIS) to classify small enterprises into rating classes; and (4) applying the lattice degree of nearness to predict a new loan applicant's rating. We also conduct a 10-fold cross-validation to evaluate the predictive performance of the proposed approach. The results demonstrate that our data-processing and feature-selection approaches predict default more accurately than the alternatives, offering bankers a valuable new rating system to assist their decision making.
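    As a rough illustration of two of the steps above, the sketch below (Python with NumPy) shows centroid defuzzification of a triangular fuzzy number and a plain TOPSIS ranking. The weights, data, and benefit/cost mask are hypothetical placeholders rather than values from the paper, and the BWM/EWM weighting and fuzzy c-means clustering steps are omitted.

    ```python
    # Minimal sketch of triangular-fuzzy-number defuzzification plus TOPSIS.
    # All inputs below are illustrative, not data from the paper.
    import numpy as np

    def defuzzify_triangular(low: float, mode: float, high: float) -> float:
        """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
        return (low + mode + high) / 3.0

    def topsis(X: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
        """Return TOPSIS closeness coefficients (higher = closer to ideal).

        X:       (n_alternatives, n_criteria) decision matrix
        weights: criterion weights summing to 1 (in the paper these would come from BWM/EWM)
        benefit: boolean mask, True for benefit criteria, False for cost criteria
        """
        # Vector-normalize each column, then apply the criterion weights.
        R = X / np.linalg.norm(X, axis=0)
        V = R * weights
        # Ideal solution: max of benefit columns, min of cost columns (and vice versa).
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        return d_minus / (d_plus + d_minus)

    # Hypothetical example: three enterprises scored on one qualitative indicator
    # (defuzzified) and two quantitative ones.
    qual = np.array([defuzzify_triangular(*t)
                     for t in [(0.2, 0.4, 0.6), (0.5, 0.7, 0.9), (0.1, 0.2, 0.4)]])
    X = np.column_stack([qual, [1.2, 0.8, 1.5], [0.30, 0.55, 0.20]])
    scores = topsis(X, weights=np.array([0.3, 0.4, 0.3]),
                    benefit=np.array([True, True, False]))
    print(scores)  # closeness coefficients; clustering/thresholds would map these to rating classes
    ```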

    TSST: A Benchmark and Evaluation Models for Text Speech-Style Transfer

    Text style is highly abstract: it encompasses a speaker's characteristics, habits, logical thinking, and the content they express. However, previous text-style transfer tasks have primarily relied on data-driven approaches, lacking in-depth analysis from the perspectives of linguistics and cognitive science. In this paper, we introduce a novel task called Text Speech-Style Transfer (TSST). The main objective is to further explore topics related to human cognition, such as personality and emotion, based on the capabilities of existing LLMs. Considering the objective of our task and the distinctive characteristics of oral speech in real-life scenarios, we trained multi-dimensional evaluation models for TSST (covering filler words, vividness, interactivity, and emotionality) and validated their correlation with human assessments. We thoroughly analyze the performance of several large language models (LLMs) and identify areas where further improvement is needed. Moreover, driven by our evaluation models, we release a new corpus that improves the ability of LLMs to generate text with speech-style characteristics. In summary, we present TSST, a new style-transfer benchmark that emphasizes human-oriented evaluation, and we explore and advance the performance of current LLMs. Comment: Work in progress
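    The human-correlation check mentioned above is typically computed as a rank or linear correlation between model scores and human ratings, per evaluation dimension. A minimal sketch follows; the scores, the 1-5 rating scale, and the choice of dimension are hypothetical illustrations, not data from the paper.

    ```python
    # Minimal sketch: correlating an automatic evaluation model's scores with
    # human ratings on a single dimension (e.g. vividness). Values are hypothetical.
    from scipy.stats import pearsonr, spearmanr

    model_scores = [0.72, 0.41, 0.88, 0.15, 0.63]  # evaluation-model outputs in [0, 1]
    human_scores = [4, 2, 5, 1, 3]                 # human ratings on a 1-5 scale

    rho, p_rho = spearmanr(model_scores, human_scores)
    r, p_r = pearsonr(model_scores, human_scores)
    print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f}); Pearson r = {r:.3f} (p = {p_r:.3f})")
    ```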

    MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications

    Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While progress towards general artificial intelligence is often pursued by developing ever larger models, another branch is to develop lightweight custom models that better serve certain domains, given the high cost of training and deploying LLMs and the scarcity of resources. In this paper, we present MindLLM, a novel series of bilingual lightweight large language models trained from scratch, alleviating such burdens by offering models with 1.3 billion and 3 billion parameters. A thorough account of the experience accrued during large-model development is given, covering every step of the process, including data construction, model architecture, evaluation, and applications. We hope these insights are valuable for fellow academics and developers. MindLLM consistently matches or surpasses the performance of other open-source larger models on some public benchmarks. We also introduce an innovative instruction-tuning framework tailored for smaller models to enhance their capabilities efficiently. Moreover, we explore the application of MindLLM in specific vertical domains such as law and finance, underscoring the agility and adaptability of our lightweight models. Comment: Work in progress

    RetGen: A Joint Framework for Retrieval and Grounded Text Generation Modeling

    Recent advances in large-scale pre-training, such as GPT-3, allow seemingly high-quality text to be generated from a given prompt. However, such generation systems often suffer from hallucinated facts and are not inherently designed to incorporate useful external information. Grounded generation models appear to offer remedies, but their training typically relies on rarely available parallel data in which information-relevant documents are provided as context. We propose a framework that alleviates this data constraint by jointly training a grounded generator and a document retriever on the language-model signal. The model learns to reward retrieval of the documents with the highest utility in generation and attentively combines them using a Mixture-of-Experts (MoE) ensemble to generate follow-on text. We demonstrate that both the generator and the retriever benefit from this joint training and work synergistically to produce more informative and relevant text in both prose and dialogue generation. Comment: accepted by AAAI-22, camera-ready version
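    One common way to realize such a joint retriever-generator signal is to marginalize the generator's likelihood over the retrieved documents, weighted by the retriever's document probabilities, so that the language-model loss back-propagates into the retriever. The sketch below is a generic RAG-style illustration under assumed interfaces (the score and log-probability tensors stand in for model outputs); it is not the authors' code, and RetGen's MoE-based document combination is more involved than this.

    ```python
    # Hedged sketch of a doc-marginalized joint loss: the LM signal rewards
    # retrieval of documents that are useful for generation. Inputs are
    # placeholders for real retriever/generator outputs.
    import torch
    import torch.nn.functional as F

    def joint_loss(retriever_scores: torch.Tensor,
                   generator_logprobs: torch.Tensor) -> torch.Tensor:
        """retriever_scores:   (k,) relevance scores for the top-k documents
        generator_logprobs: (k,) log p(target | prompt, doc_i) from the generator
        Returns -log sum_i p(doc_i) * p(target | prompt, doc_i).
        """
        log_p_doc = F.log_softmax(retriever_scores, dim=0)
        return -torch.logsumexp(log_p_doc + generator_logprobs, dim=0)

    # Hypothetical values for k = 3 retrieved documents:
    scores = torch.tensor([2.1, 0.3, -1.0], requires_grad=True)
    gen_lp = torch.tensor([-12.4, -15.0, -11.8], requires_grad=True)
    loss = joint_loss(scores, gen_lp)
    loss.backward()  # gradients flow to both retriever and generator parameters
    ```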