175 research outputs found

    Complete Convergence for Moving Average Process of Martingale Differences

    Under some simple conditions, by using techniques such as the truncation method for random variables (see, e.g., Gut (2005)) and properties of martingale differences, we study the moving average process based on martingale differences and obtain complete convergence and complete moment convergence for this process. Our results extend some related ones.
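    For concreteness, a moving average process generated by a martingale difference sequence is typically defined as follows; this is a standard formulation, and the exact conditions on the coefficients are stated in the paper itself:

    ```latex
    % Moving average process generated by a martingale difference
    % sequence \{Y_i, \mathcal{F}_i\}: for coefficients \{a_i\} with
    % \sum_{i=-\infty}^{\infty} |a_i| < \infty, set
    X_n = \sum_{i=-\infty}^{\infty} a_{i+n} Y_i, \qquad n \ge 1.
    ```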

    Convergence Properties for Asymptotically almost Negatively Associated Sequence

    We obtain the strong law of large numbers, the strong growth rate, and the integrability of the supremum for the partial sums of asymptotically almost negatively associated (AANA) sequences. In addition, complete convergence for weighted sums of AANA sequences is also studied.

    A Novel Task Offloading and Computing Resource Scheduling Strategy in the Internet of Vehicles Based on a Dynamic Greedy Algorithm

    We focus on the scheduling problem of distributed computing tasks in the Internet of Vehicles. Firstly, based on computing-aware network theory, a distributed computing resource model of the Internet of Vehicles is established, and the seven-dimensional QoS attributes of the computing resources (reliability between computing resources, communication cost, and the computing speed, computing cost, computing energy consumption, computing stability, and computing success rate of the computing resources themselves) are grouped and transformed into two-dimensional composite attribute priorities: computing performance priority and communication performance priority. Secondly, a weighted directed acyclic graph model of the distributed computing tasks and a seven-dimensional QoS-attribute weighted undirected topology graph model of the distributed computing resources in the Internet of Vehicles are established. Moreover, a task offloading and computing resource scheduling algorithm based on a dynamic greedy algorithm is proposed. Finally, an example analysis shows that the overall performance of the proposed algorithm is better than that of the classic HEFT scheduling algorithm and the round-robin scheduling algorithm.
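    The grouping of the seven QoS attributes into two composite priorities could be sketched as below. The attribute names, the normalization to [0, 1], and the equal weighting are illustrative assumptions, not the paper's actual formulas:

    ```python
    def composite_priorities(qos):
        """Collapse seven per-resource QoS attributes (each assumed
        normalized to [0, 1]) into a computing-performance priority and
        a communication-performance priority. Cost-like attributes are
        inverted so that higher is always better."""
        computing_priority = (
            qos["computing_speed"]
            + (1 - qos["computing_cost"])
            + (1 - qos["energy_consumption"])
            + qos["stability"]
            + qos["success_rate"]
        ) / 5
        communication_priority = (
            qos["reliability"] + (1 - qos["communication_cost"])
        ) / 2
        return computing_priority, communication_priority
    ```

    A greedy scheduler could then rank candidate resources by a weighted combination of the two priorities at each assignment step.
    
    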

    GNN-SL: Sequence Labeling Based on Nearest Examples via GNN

    To better handle long-tail cases in the sequence labeling (SL) task, in this work we introduce graph neural network sequence labeling (GNN-SL), which augments the vanilla SL model output with similar tagging examples retrieved from the whole training set. Since not all retrieved tagging examples benefit the model prediction, we construct a heterogeneous graph and leverage graph neural networks (GNNs) to transfer information between the retrieved tagging examples and the input word sequence. The augmented node, which aggregates information from its neighbors, is used for prediction. This strategy enables the model to directly acquire similar tagging examples and improves the overall quality of predictions. We conduct a variety of experiments on three typical sequence labeling tasks: Named Entity Recognition (NER), Part-of-Speech (POS) tagging, and Chinese Word Segmentation (CWS) to show the strong performance of GNN-SL. Notably, GNN-SL achieves SOTA results of 96.9 (+0.2) on PKU, 98.3 (+0.4) on CITYU, 98.5 (+0.2) on MSR, and 96.9 (+0.2) on AS for the CWS task, and results comparable to SOTA performance on the NER and POS datasets.
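    The aggregation over retrieved neighbors could be sketched as a single mean-pooling message-passing step; this is a minimal illustration, not the paper's actual GNN architecture, and the blending weight `alpha` is an assumption:

    ```python
    def aggregate_with_neighbors(word_vec, neighbor_vecs, alpha=0.5):
        """Blend an input word representation with the mean of retrieved
        neighbor (tagging example) representations -- one simplified
        message-passing step over the heterogeneous graph."""
        if not neighbor_vecs:
            return list(word_vec)
        dim = len(word_vec)
        mean = [sum(v[d] for v in neighbor_vecs) / len(neighbor_vecs)
                for d in range(dim)]
        return [alpha * word_vec[d] + (1 - alpha) * mean[d]
                for d in range(dim)]
    ```

    The blended (augmented) representation would then be fed to the tagging classifier in place of the raw word representation.
    
    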

    Convergence Rates in the Strong Law of Large Numbers for Martingale Difference Sequences

    We study complete convergence and complete moment convergence for martingale difference sequences. In particular, we obtain Baum-Katz-type and Hsu-Robbins-type theorems for martingale difference sequences. As a consequence, the Marcinkiewicz-Zygmund strong law of large numbers for martingale difference sequences is obtained. Our results generalize the corresponding ones of Stoica (2007, 2011).
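    As background, the classical Baum-Katz theorem for i.i.d. sequences, which results of this type extend to martingale differences, can be stated as:

    ```latex
    % Baum–Katz (i.i.d. case): for r \ge 1, 1 \le p < 2,
    % if E X_1 = 0 and E|X_1|^{pr} < \infty, then
    \sum_{n=1}^{\infty} n^{r-2}\,
      P\!\left( \max_{1 \le k \le n} |S_k| > \varepsilon n^{1/p} \right)
      < \infty \quad \text{for all } \varepsilon > 0,
    % where S_k = \sum_{i=1}^{k} X_i. The case r = 2, p = 1
    % recovers the Hsu–Robbins theorem.
    ```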

    The Strong Consistency of the Estimator of Fixed-Design Regression Model under Negatively Dependent Sequences

    We study the strong consistency of the estimator in the fixed-design regression model under negatively dependent sequences by using the classical Rosenthal-type inequality and the truncation method. As an application, the strong consistency of the nearest neighbor estimator is obtained.
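    In a standard formulation of this setting (the precise weight conditions are in the paper itself), the model and estimator read:

    ```latex
    % Fixed-design nonparametric regression model with errors \{e_i\}:
    y_i = g(x_i) + e_i, \qquad i = 1, \dots, n,
    % with the general weighted (linear smoother) estimator
    \hat{g}_n(x) = \sum_{i=1}^{n} W_{ni}(x)\, y_i,
    % where the weights W_{ni}(x) depend on the fixed design points
    % x_1, \dots, x_n; the nearest neighbor estimator is a special case.
    ```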

    GPT-NER: Named Entity Recognition via Large Language Models

    Although large language models (LLMs) have achieved SOTA performance on a variety of NLP tasks, their performance on NER is still significantly below supervised baselines. This is due to the gap between the two settings: NER is a sequence labeling task in nature, while LLMs are text-generation models. In this paper, we propose GPT-NER to resolve this issue. GPT-NER bridges the gap by transforming the sequence labeling task into a generation task that can be easily adapted to by LLMs: e.g., the task of finding location entities in the input text "Columbus is a city" is transformed into generating the text sequence "@@Columbus## is a city", where the special tokens @@ and ## mark the entity to extract. To efficiently address the "hallucination" issue of LLMs, whereby LLMs have a strong inclination to over-confidently label NULL inputs as entities, we propose a self-verification strategy that prompts the LLM to ask itself whether an extracted entity belongs to a labeled entity tag. We conduct experiments on five widely adopted NER datasets, and GPT-NER achieves performance comparable to fully supervised baselines, to the best of our knowledge for the first time. More importantly, we find that GPT-NER exhibits a greater ability in low-resource and few-shot setups: when the amount of training data is extremely scarce, GPT-NER performs significantly better than supervised models. This demonstrates the capability of GPT-NER in real-world NER applications where the number of labeled examples is limited.
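    The entity-marking transformation described above can be sketched as follows; the function name and the flat (non-BIO) label scheme are illustrative assumptions:

    ```python
    def mark_entities(tokens, labels, target_tag):
        """Turn a sequence labeling instance into a generation target by
        wrapping each maximal run of tokens labeled target_tag in @@...##."""
        out, i = [], 0
        while i < len(tokens):
            if labels[i] == target_tag:
                j = i
                while j < len(tokens) and labels[j] == target_tag:
                    j += 1  # extend over the whole entity span
                out.append("@@" + " ".join(tokens[i:j]) + "##")
                i = j
            else:
                out.append(tokens[i])
                i += 1
        return " ".join(out)

    print(mark_entities(["Columbus", "is", "a", "city"],
                        ["LOC", "O", "O", "O"], "LOC"))
    # prints: @@Columbus## is a city
    ```

    The LLM is then asked to generate the marked sequence, and entities are read back out by locating the @@...## spans.
    
    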