48 research outputs found

    A Novel Task of Loading and Computing Resource Scheduling Strategy in Internet of Vehicles Based on Dynamic Greedy Algorithm

    This paper focuses on the scheduling of distributed computing tasks in the Internet of Vehicles (IoV). First, based on computing-aware network theory, a distributed computing resource model of the IoV is established, and the seven-dimensional QoS attributes of IoV computing resources (reliability and communication cost between computing resources, plus the resources' own computing speed, computing cost, computing energy consumption, computing stability, and computing success rate) are grouped and transformed into two comprehensive attribute priorities: a computing performance priority and a communication performance priority. Second, a weighted directed acyclic graph model of distributed IoV computing tasks and a seven-dimensional QoS-attribute weighted undirected topology graph model of distributed IoV computing resources are established. On this basis, a task loading and computing resource scheduling algorithm based on a dynamic greedy algorithm is proposed. Finally, example analysis shows that the overall performance of this algorithm is better than that of the classic HEFT scheduling algorithm and the round-robin scheduling algorithm.
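
    As a rough illustration of the scheduling idea, here is a minimal sketch of a dynamic greedy assignment over a task DAG: a task becomes ready once its predecessors finish and is greedily mapped to the resource with the best weighted combination of the two comprehensive priorities. All structures, names, and the weighting scheme are hypothetical illustrations, not the paper's actual model.

```python
def greedy_schedule(tasks, deps, comp_priority, comm_priority, alpha=0.5):
    """tasks: list of task ids; deps: dict task -> set of predecessor tasks;
    comp_priority / comm_priority: dict resource -> score in [0, 1]."""
    done, schedule = set(), []
    while len(done) < len(tasks):
        # A task is ready once all of its DAG predecessors have finished.
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        for t in ready:
            # Greedy choice: the resource with the best weighted combination of
            # computing-performance and communication-performance priority.
            best = max(comp_priority, key=lambda r: alpha * comp_priority[r]
                                                    + (1 - alpha) * comm_priority[r])
            schedule.append((t, best))
            done.add(t)
    return schedule

tasks = ["t1", "t2", "t3"]
deps = {"t3": {"t1", "t2"}}  # t3 depends on t1 and t2
print(greedy_schedule(tasks, deps,
                      comp_priority={"r1": 0.9, "r2": 0.6},
                      comm_priority={"r1": 0.4, "r2": 0.8}))
```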

    GPT-NER: Named Entity Recognition via Large Language Models

    Although large language models (LLMs) have achieved SOTA performance on a variety of NLP tasks, their performance on NER is still significantly below supervised baselines. This is due to the gap between the two tasks: NER is a sequence labeling task in nature, while an LLM is a text-generation model. In this paper, we propose GPT-NER to resolve this issue. GPT-NER bridges the gap by transforming the sequence labeling task into a generation task that can be easily adapted by LLMs; e.g., the task of finding location entities in the input text "Columbus is a city" is transformed into generating the text sequence "@@Columbus## is a city", where the special tokens @@ and ## mark the entity to extract. To efficiently address the "hallucination" issue of LLMs, whereby LLMs have a strong inclination to over-confidently label NULL inputs as entities, we propose a self-verification strategy that prompts the LLM to ask itself whether an extracted entity belongs to a labeled entity tag. We conduct experiments on five widely adopted NER datasets, and GPT-NER achieves performance comparable to fully supervised baselines, which to our knowledge is the first time this has been achieved. More importantly, we find that GPT-NER exhibits greater ability in low-resource and few-shot setups: when training data is extremely scarce, GPT-NER performs significantly better than supervised models. This demonstrates the capability of GPT-NER in real-world NER applications where the number of labeled examples is limited.
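
    The marker transformation is easy to picture in code. Below is a minimal sketch of the @@...## reformulation described above; the helper name and span format are illustrative assumptions, not from the paper's released code.

```python
def mark_entities(tokens, spans):
    """tokens: list of words; spans: list of (start, end) entity spans,
    end exclusive. Returns the @@...##-marked text the LLM is asked to generate."""
    starts = {s for s, _ in spans}
    ends = {e - 1 for _, e in spans}
    out = []
    for i, tok in enumerate(tokens):
        if i in starts:
            tok = "@@" + tok   # open the entity span
        if i in ends:
            tok = tok + "##"   # close the entity span
        out.append(tok)
    return " ".join(out)

print(mark_entities(["Columbus", "is", "a", "city"], [(0, 1)]))
# -> "@@Columbus## is a city"
```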

    Sim-GPT: Text Similarity via GPT Annotated Data

    Due to the lack of a large collection of high-quality labeled sentence pairs with textual similarity scores, existing approaches for Semantic Textual Similarity (STS) mostly rely on unsupervised techniques or training signals that are only partially correlated with textual similarity, e.g., NLI-based datasets. To tackle this issue, in this paper we propose measuring text similarity via GPT-annotated data (Sim-GPT for short). The core idea of Sim-GPT is to generate data with STS labels using GPT-4, based on which an STS model is trained. The Sim-GPT framework utilizes LLMs to provide a substantial amount of reliable annotated data, filling the gap left by the lack of training signals for STS. Sim-GPT is trained on a one-time generated dataset using BERT or RoBERTa as the backbone, which offers long-term savings in cost and speed compared to repeatedly invoking LLMs for each sentence pair. Trained on 371K examples from GPT-4, Sim-GPT yields SOTA performance on the seven widely used STS benchmarks: +0.99 over supervised SimCSE, and +0.42 over the current SOTA PromCSE model. To encourage further advances in the field, we release both the models and the 371K GPT-4-annotated examples. Code, models, and annotated data are available at: https://github.com/ShuheWang1998/Sim-GPT.
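
    As a rough illustration of the recipe above, here is a minimal fine-tuning sketch: a RoBERTa regression head trained on (sentence1, sentence2, score) triples labeled once by GPT-4. The tiny inline dataset and the 0-5 score scaling are assumptions for illustration; the actual 371K examples and training setup are in the linked repository.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
# num_labels=1 turns the classification head into a regression head (MSE loss).
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)

# Stand-in for the GPT-4-annotated (sentence1, sentence2, score) triples.
pairs = [("A man is playing guitar.", "A person plays an instrument.", 4.2)]
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

for s1, s2, gpt4_score in pairs:
    batch = tok(s1, s2, return_tensors="pt", truncation=True)
    # Assumed scaling: STS-style 0-5 scores mapped to [0, 1].
    loss = model(**batch, labels=torch.tensor([gpt4_score / 5.0])).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```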

    Pupil-driven quantitative differential phase contrast imaging

    In this research, we reveal an inborn but hitherto ignored property of quantitative differential phase contrast (qDPC) imaging: the phase transfer function is an edge detection filter. Inspired by this, we highlight the duality of qDPC between optics and pattern recognition, and propose a simple and effective qDPC reconstruction algorithm, termed Pupil-Driven qDPC (pd-qDPC), to improve the phase reconstruction quality for the family of qDPC-based phase reconstruction algorithms. We form a new cost function in which a modified L0 norm represents the pupil-driven edge sparsity, and the qDPC convolution operator is duplicated in the data fidelity term to achieve automatic background removal. Further, we develop iterative reweighted soft-thresholding algorithms based on the split Bregman method to solve this modified L0-norm problem. We test pd-qDPC on both simulated and experimental data and compare it against state-of-the-art (SOTA) methods including the L2-norm, total variation regularization (TV-qDPC), isotropic-qDPC, and Retinex qDPC algorithms. Results show that the proposed model is superior in terms of phase reconstruction quality and implementation efficiency, significantly increasing experimental robustness while maintaining data fidelity. In general, pd-qDPC enables high-quality qDPC reconstruction without any modification of the optical system. It reduces system complexity and benefits the qDPC community and beyond, including but not limited to cell segmentation and PTF learning based on the edge-filtering property.
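
    The iterative reweighted soft-thresholding machinery above rests on two small ingredients: the scalar soft-threshold (shrinkage) operator used inside split Bregman iterations, and a reweighting step that approximates the modified L0 norm with weighted L1 steps. A minimal NumPy sketch of both follows; the function names and the reweighting form are assumptions, and the full pupil-driven cost function is not reproduced here.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink each entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def reweight(edge_response, eps=1e-3):
    """Reweighting step: penalize weakly where the pupil-driven edge response
    is strong, so repeated weighted-L1 solves approximate a modified L0 norm."""
    return 1.0 / (np.abs(edge_response) + eps)
```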

    Quantum-enhanced Electrometer based on Microwave-dressed Rydberg Atoms

    Rydberg atoms have shown remarkable performance in sensing microwave fields. The sensitivity of such an electrometer, based on optical readout of an atomic ensemble, has been demonstrated to approach the photon-shot-noise limit. However, the sensitivity cannot be improved indefinitely by increasing the power of the probe light, owing to increased collision rates and power broadening. Compared with classical light, the use of quantum light may yield better sensitivity with a lower number of photons. In this paper, we exploit entanglement in a microwave-dressed Rydberg electrometer to suppress noise fluctuations. The results show a sensitivity enhancement beating the shot-noise limit in both cold- and hot-atom schemes. By optimizing the transmission of the optical readout, the quantum advantage can be maintained for different absorption indices of the atomic vapor, which makes it possible to apply quantum light sources in absorptive electrometers.

    Pushing the Limits of ChatGPT on NLP Tasks

    Despite the success of ChatGPT, its performance on most NLP tasks is still well below supervised baselines. In this work, we look into the causes and find that its subpar performance stems from the following factors: (1) the token limit in the prompt does not allow the full utilization of supervised datasets; (2) there is a mismatch between the generation nature of ChatGPT and NLP tasks; (3) LLMs have intrinsic pitfalls, e.g., hallucination and an excessive focus on certain keywords. We propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks. The proposed modules include: (1) a one-input-multiple-prompts strategy that employs multiple prompts for one input to accommodate more demonstrations; (2) using fine-tuned models for better demonstration retrieval; (3) transforming tasks into formats more tailored to the generation nature; (4) employing reasoning strategies tailored to task-specific complexity; (5) a self-verification strategy to address the hallucination issue of LLMs; (6) a paraphrase strategy to improve the robustness of model predictions. We conduct experiments on 21 datasets covering 10 representative NLP tasks, including question answering, commonsense reasoning, natural language inference, sentiment analysis, named entity recognition, entity-relation extraction, event extraction, dependency parsing, semantic role labeling, and part-of-speech tagging. Using the proposed ensemble of techniques, we are able to significantly boost the performance of ChatGPT on the selected NLP tasks, achieving performance comparable to or better than supervised baselines, and in some cases better than existing SOTA performance.
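
    As an illustration of module (1), here is a hedged sketch of a one-input-multiple-prompts loop with majority-vote aggregation. The `ask_llm` callable, the demonstration chunking scheme, and the voting rule are all stand-ins, not the paper's exact procedure.

```python
from collections import Counter

def multi_prompt_predict(ask_llm, question, demonstrations, n_prompts=3):
    """Split a demonstration pool across several prompts for the same input,
    so more demonstrations fit within the per-prompt token limit, then
    aggregate the per-prompt answers by majority vote."""
    chunk = max(1, len(demonstrations) // n_prompts)
    answers = []
    for i in range(n_prompts):
        demos = demonstrations[i * chunk:(i + 1) * chunk]
        prompt = "\n".join(demos) + "\n" + question
        answers.append(ask_llm(prompt))
    return Counter(answers).most_common(1)[0][0]

# Usage with a trivial stub in place of a real chat-completion call:
fake_llm = lambda prompt: "positive"
print(multi_prompt_predict(fake_llm, "Sentiment: 'great movie'",
                           ["'awful' -> negative", "'superb' -> positive"]))
```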

    Instruction Tuning for Large Language Models: A Survey

    This paper surveys research in the quickly advancing field of instruction tuning (IT), a crucial technique for enhancing the capabilities and controllability of large language models (LLMs). Instruction tuning refers to further training LLMs on a dataset of (instruction, output) pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. In this work, we make a systematic review of the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities and domains, along with an analysis of aspects that influence the outcome of IT (e.g., generation of instruction outputs, size of the instruction dataset). We also review the potential pitfalls of IT and criticism against it, point out current deficiencies of existing strategies, and suggest avenues for fruitful research.
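
    To make the (instruction, output) supervised objective concrete, here is a minimal sketch of how such a pair is commonly rendered into a single training string for next-token prediction. The template is a widely used convention assumed for illustration, not a format prescribed by the survey.

```python
def format_example(instruction, output):
    """Render one (instruction, output) pair into a single training string;
    the model is then trained with the ordinary next-token objective on it."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{output}"

print(format_example(
    "Summarize the following sentence.",
    "Instruction tuning aligns LLMs with human instructions.",
))
```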

    Digital Financial Inclusion to Corporation Value: The Mediating Effect of Ambidextrous Innovation

    Corporate innovation can be subdivided, according to its approach and novelty, into exploitative innovation and exploratory innovation, i.e., ambidextrous innovation. Defined as actions to promote financial inclusion through digital financial services, digital financial inclusion brings new opportunities for the implementation of corporate innovation projects and the improvement of corporation value. Based on the annual reports (2012–2020) of 1604 listed SMEs in China and the digital financial inclusion index from Peking University, this paper explores how digital financial inclusion affects the corporation value of SMEs, with moderating factors such as financial flexibility, corporate social responsibility, and product market competition in ambidextrous innovation. The study shows that, in SMEs: (1) digital financial inclusion has a significant positive impact on exploitative innovation, but a weaker, time-lagged effect on exploratory innovation; (2) ambidextrous innovation plays a partial mediating role in the effect of digital financial inclusion on corporation value; (3) the financial flexibility of the enterprise positively moderates the relationship between digital financial inclusion and corporate value, and while corporate social responsibility negatively moderates this relationship in the short term, it contributes to the growth of corporate value in the long term; (4) product market competition positively moderates the relationship between digital financial inclusion and exploitative innovation, but not the relationship between digital financial inclusion and exploratory innovation.