
    Information-Coupled Turbo Codes for LTE Systems

    We propose a new class of information-coupled (IC) Turbo codes to improve the transport block (TB) error rate performance for long-term evolution (LTE) systems, while keeping the hybrid automatic repeat request protocol and the Turbo decoder for each code block (CB) unchanged. In the proposed codes, every two consecutive CBs in a TB are coupled together by sharing a few common information bits. We propose a feed-forward and feed-back decoding scheme and a windowed (WD) decoding scheme for decoding the whole TB by exploiting the coupled information between CBs. Both decoding schemes achieve a considerable signal-to-noise-ratio (SNR) gain compared to the LTE Turbo codes. We construct the extrinsic information transfer (EXIT) functions for the LTE Turbo codes and our proposed IC Turbo codes from the EXIT functions of the underlying convolutional codes. An SNR gain upper bound of our proposed codes over the LTE Turbo codes is derived and calculated from the constructed EXIT charts. Numerical results show that the proposed codes achieve an SNR gain of 0.25 dB to 0.72 dB for various code parameters at a TB error rate of $10^{-2}$, which complies with the derived SNR gain upper bound.
    Comment: 13 pages, 12 figures
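    A minimal sketch of the coupling idea described above, assuming a simple overlapping partition of the transport block; the function name and parameters are illustrative, not the paper's actual encoder or decoder.

```python
# Rough illustration (not the paper's exact construction): split a transport
# block into code blocks so that every two consecutive CBs share a few
# common ("coupled") information bits, as the abstract describes qualitatively.
import numpy as np

def couple_code_blocks(tb_bits, cb_info_len, n_coupled):
    """Partition a transport block into CBs that overlap in n_coupled bits."""
    step = cb_info_len - n_coupled          # fresh bits contributed by each new CB
    blocks, start = [], 0
    while start + cb_info_len <= len(tb_bits):
        blocks.append(tb_bits[start:start + cb_info_len])
        start += step
    return blocks

# Toy usage: 30 information bits, 12-bit CBs, 2 coupled bits per CB pair.
n_coupled = 2
tb = np.random.randint(0, 2, 30)
cbs = couple_code_blocks(tb, cb_info_len=12, n_coupled=n_coupled)
for a, b in zip(cbs, cbs[1:]):
    # the last n_coupled bits of one CB reappear as the first bits of the next
    assert np.array_equal(a[-n_coupled:], b[:n_coupled])
```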

    Optimization of Coastal Cruise Lines in China

    The paper analyzes the current state of the Chinese cruise market and presents the idea of building a business model for coastal cruising. The cruise demand of middle-income families, including their desired travel days and ports of call, is surveyed. Data on these families' previous non-cruise travel and intended future cruises are used to develop a model that identifies the maximum passenger volume at minimum operating cost while taking cruise itineraries and schedules into account. A matrix coding genetic algorithm was designed to solve the model. The case study found that a voyage of 4.79 days results in equilibrium, that the annual demand is 200,840 passengers, and that the daily voyage cost is 0.843 million Yuan.
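    A hedged sketch of what a matrix-coded genetic algorithm for itinerary selection could look like; the chromosome layout, demand vector, and cost constant below are invented placeholders, not the study's model or data.

```python
# Minimal GA sketch with a matrix chromosome, loosely echoing the abstract's
# "matrix coding genetic algorithm"; fitness, demand, and cost are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_PORTS, N_DAYS, POP, GENS = 6, 5, 40, 200

def random_itinerary():
    # chromosome: days x ports 0/1 matrix, exactly one port visited per day
    m = np.zeros((N_DAYS, N_PORTS), dtype=int)
    m[np.arange(N_DAYS), rng.integers(0, N_PORTS, N_DAYS)] = 1
    return m

def fitness(m, demand=np.array([3., 5., 2., 4., 6., 1.]), day_cost=0.8):
    visited = m.any(axis=0)
    revenue = demand @ visited            # passengers attracted by visited ports
    cost = day_cost * N_DAYS              # operating cost grows with voyage length
    return revenue - cost

pop = [random_itinerary() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = []
    for _ in range(POP - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, N_DAYS)                   # one-point crossover on rows
        child = np.vstack([parents[a][:cut], parents[b][cut:]])
        if rng.random() < 0.1:                          # mutate one day's port choice
            d = rng.integers(N_DAYS)
            child[d] = 0
            child[d, rng.integers(N_PORTS)] = 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best itinerary (days x ports):\n", best, "\nfitness:", fitness(best))
```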

    InvestLM: A Large Language Model for Investment using Financial Domain Instruction Tuning

    We present a new financial domain large language model, InvestLM, tuned on LLaMA-65B (Touvron et al., 2023) using a carefully curated instruction dataset related to financial investment. Inspired by less-is-more-for-alignment (Zhou et al., 2023), we manually curate a small yet diverse instruction dataset covering a wide range of finance-related topics, from Chartered Financial Analyst (CFA) exam questions to SEC filings to Stackexchange quantitative finance discussions. InvestLM shows strong capabilities in understanding financial text and provides helpful responses to investment-related questions. Financial experts, including hedge fund managers and research analysts, rate InvestLM's responses as comparable to those of state-of-the-art commercial models (GPT-3.5, GPT-4 and Claude-2). Zero-shot evaluation on a set of financial NLP benchmarks demonstrates strong generalizability. From a research perspective, this work suggests that a high-quality domain-specific LLM can be tuned on a well-trained foundation model using a small set of carefully curated instructions, which is consistent with the Superficial Alignment Hypothesis (Zhou et al., 2023). From a practical perspective, this work develops a state-of-the-art financial domain LLM with superior capability in understanding financial texts and providing helpful investment advice, potentially enhancing the work efficiency of financial professionals. We release the model parameters to the research community.
    Comment: Link: https://github.com/AbaciNLP/InvestL
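    As a hedged sketch of the instruction-tuning recipe described above, the loop below fine-tunes a causal language model on (instruction, response) pairs with Hugging Face transformers; InvestLM is tuned on LLaMA-65B, but a small GPT-2 stands in here and the example records are invented.

```python
# Hedged sketch of supervised instruction tuning on a few curated
# (instruction, response) pairs; model name and data are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                         # stand-in for the LLaMA-65B base model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

pairs = [  # invented examples; the real dataset spans CFA questions, SEC filings, etc.
    ("Explain what a 10-K filing is.", "A 10-K is an annual report filed with the SEC..."),
    ("What does duration measure for a bond?", "Duration measures price sensitivity to rates..."),
]

model.train()
for instruction, response in pairs:
    text = f"### Instruction:\n{instruction}\n\n### Response:\n{response}{tok.eos_token}"
    enc = tok(text, return_tensors="pt", truncation=True, max_length=512)
    out = model(**enc, labels=enc["input_ids"])   # causal LM loss over the full sequence
    out.loss.backward()
    optim.step()
    optim.zero_grad()
```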

    Specialist or Generalist? Instruction Tuning for Specific NLP Tasks

    The potential of large language models (LLMs) to simultaneously perform a wide range of natural language processing (NLP) tasks has been the subject of extensive research. Although instruction tuning has proven to be a data-efficient method for transforming LLMs into such generalist models, their performance still lags behind specialist models trained exclusively for specific tasks. In this paper, we investigate whether incorporating broad-coverage generalist instruction tuning can contribute to building a specialist model. We hypothesize that its efficacy depends on task specificity and skill requirements. Our experiments assess four target tasks with distinct coverage levels, revealing that integrating generalist instruction tuning consistently enhances model performance when the task coverage is broad. The effect is particularly pronounced when the amount of task-specific training data is limited. Further investigation into three target tasks focusing on different capabilities demonstrates that generalist instruction tuning improves understanding and reasoning abilities. However, for tasks requiring factual knowledge, generalist data containing hallucinatory information may negatively affect the model's performance. Overall, our work provides a systematic guide for developing specialist models with generalist instruction tuning. Our code and other related resources can be found at https://github.com/DavidFanzz/Generalist_or_Specialist.
    Comment: Accepted to EMNLP 202
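    A minimal sketch, under assumed proportions and invented records, of the data-mixing step the paper studies: blending broad-coverage generalist instruction data with task-specific examples before tuning.

```python
# Illustrative data-mixing sketch; the ratio, records, and helper name below
# are assumptions for demonstration, not the paper's actual recipe.
import random

def mix_training_data(specialist, generalist, generalist_ratio=0.5, seed=0):
    """Return the specialist set plus a sampled slice of the generalist pool.

    generalist_ratio: fraction of the specialist set's size to draw from the
    generalist pool; per the abstract, such mixing helps most when the task
    coverage is broad and task-specific data is scarce.
    """
    rng = random.Random(seed)
    k = min(len(generalist), int(len(specialist) * generalist_ratio))
    mixed = specialist + rng.sample(generalist, k)
    rng.shuffle(mixed)
    return mixed

specialist = [{"instruction": "Summarize this earnings call ...", "output": "..."}] * 100
generalist = [{"instruction": "Write a haiku about rain.", "output": "..."}] * 1000
train_set = mix_training_data(specialist, generalist, generalist_ratio=0.5)
print(len(train_set))   # 150 examples: 100 task-specific + 50 sampled generalist
```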