
    A Unified Optimization Approach for Sparse Tensor Operations on GPUs

    Sparse tensors appear in many large-scale applications with multidimensional and sparse data. While multidimensional sparse data often need to be processed on manycore processors, attempts to develop highly optimized GPU-based implementations of sparse tensor operations are rare. The irregular computation patterns and sparsity structures, as well as the large memory footprints, of sparse tensor operations make such implementations challenging. We leverage the fact that sparse tensor operations share similar computation patterns to propose a unified tensor representation called F-COO. Combined with GPU-specific optimizations, F-COO provides highly optimized implementations of sparse tensor computations on GPUs. The performance of the proposed unified approach is demonstrated for tensor-based kernels such as the Sparse Matricized Tensor-Times-Khatri-Rao Product (SpMTTKRP) and the Sparse Tensor-Times-Matrix Multiply (SpTTM), and is used in tensor decomposition algorithms. Compared to state-of-the-art work, we improve the performance of SpTTM and SpMTTKRP by up to 3.7x and 30.6x, respectively, on NVIDIA Titan-X GPUs. We also implement a CANDECOMP/PARAFAC (CP) decomposition and achieve up to 14.9x speedup using the unified method over state-of-the-art libraries on NVIDIA Titan-X GPUs.
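    To make the shared computation pattern concrete, the following is a minimal CPU-side sketch (Python/NumPy) of MTTKRP over a third-order tensor stored in plain COO form. It is an illustration under assumed names (mttkrp_coo, indices, values, factors are ours, not the paper's) and omits the flag arrays and GPU-specific optimizations that F-COO adds.

        import numpy as np

        def mttkrp_coo(indices, values, factors, mode):
            """MTTKRP for a third-order sparse tensor in COO form.

            indices : (nnz, 3) integer array of nonzero coordinates
            values  : (nnz,) array of nonzero values
            factors : three dense factor matrices; factors[n] has shape (dim_n, rank)
            mode    : the mode whose factor matrix is being computed
            """
            rank = factors[0].shape[1]
            out = np.zeros((factors[mode].shape[0], rank))
            others = [n for n in range(3) if n != mode]
            for idx, val in zip(indices, values):
                # scale the elementwise product of the two non-mode factor rows
                out[idx[mode]] += val * factors[others[0]][idx[others[0]]] \
                                      * factors[others[1]][idx[others[1]]]
            return out

    Each nonzero contributes one fused multiply into an output row, which is why a single COO-like layout can serve SpMTTKRP, SpTTM, and the CP inner loop; a parallel implementation would replace the serial accumulation above with atomic or segmented reductions.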

    TGSum: Build Tweet Guided Multi-Document Summarization Dataset

    The development of summarization research has been significantly hampered by the costly acquisition of reference summaries. This paper proposes an effective way to automatically collect large-scale news-related multi-document summaries by drawing on social media's reactions. We utilize two types of social labels in tweets: hashtags and hyperlinks. Hashtags are used to cluster documents into different topic sets, and a tweet with a hyperlink often highlights certain key points of the corresponding document. We synthesize a linked document cluster to form a reference summary that covers most key points. To this end, we adopt the ROUGE metrics to measure the coverage ratio and develop an Integer Linear Programming (ILP) solution to discover the sentence set that reaches the ROUGE upper bound. Since we allow summary sentences to be selected from both documents and high-quality tweets, the generated reference summaries can be abstractive. Both the informativeness and readability of the collected summaries are verified by manual judgment. In addition, we train a Support Vector Regression summarizer on DUC generic multi-document summarization benchmarks. With the collected data as an extra training resource, the performance of the summarizer improves substantially on all the test sets. We release this dataset for further research.
    Comment: 7 pages, 1 figure, in AAAI 201
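    The abstract does not spell out the ILP, so the following is a simplified sketch (Python with the PuLP library) of the underlying idea: choose a subset of sentences that maximizes the covered mass of reference n-grams, here reduced to unigrams and a fixed sentence budget for brevity. All names (select_sentences, ref_ngrams, max_sentences) are our own illustration, not the paper's formulation.

        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        def select_sentences(sentences, ref_ngrams, max_sentences=5):
            """sentences: list of token lists; ref_ngrams: dict token -> weight."""
            grams = list(ref_ngrams)
            prob = LpProblem("coverage", LpMaximize)
            x = [LpVariable(f"s{i}", cat=LpBinary) for i in range(len(sentences))]
            y = {g: LpVariable(f"g{j}", cat=LpBinary) for j, g in enumerate(grams)}

            # objective: total weight of reference n-grams covered by the selection
            prob += lpSum(ref_ngrams[g] * y[g] for g in grams)
            # an n-gram counts as covered only if a selected sentence contains it
            for g in grams:
                prob += y[g] <= lpSum(x[i] for i, s in enumerate(sentences) if g in s)
            prob += lpSum(x) <= max_sentences   # budget on summary length
            prob.solve()
            return [s for i, s in enumerate(sentences) if x[i].value() == 1]

    Swapping the unigram weights for higher-order n-gram counts and the sentence budget for a length constraint recovers a ROUGE-style coverage objective of the kind the paper maximizes.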

    LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models

    In this work, we present LLaMA-VID, a novel method for tackling the token-generation challenge in Vision Language Models (VLMs) for video and image understanding. Current VLMs, while proficient in tasks like image captioning and visual question answering, face computational burdens when processing long videos due to the excessive number of visual tokens. LLaMA-VID addresses this issue by representing each frame with two distinct tokens: a context token and a content token. The context token encodes the overall image context based on user input, whereas the content token encapsulates the visual cues in each frame. This dual-token strategy significantly reduces the overhead of long videos while preserving critical information. As a result, LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token. It is shown to surpass previous methods on most video- and image-based benchmarks. Code is available at https://github.com/dvlab-research/LLaMA-VID.
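    As a rough illustration of the dual-token idea (not the actual LLaMA-VID implementation, whose projectors are learned), the NumPy sketch below condenses one frame's patch embeddings into a query-conditioned context token and a pooled content token; all shapes and names are assumptions made for the example.

        import numpy as np

        def frame_to_two_tokens(patch_feats, query_embed):
            """patch_feats: (num_patches, d) visual embeddings for one frame;
            query_embed: (d,) embedding of the user's text query."""
            # context token: attention over patches, conditioned on the query
            scores = patch_feats @ query_embed / np.sqrt(patch_feats.shape[1])
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            context_token = weights @ patch_feats        # (d,)
            # content token: query-independent summary of the frame's visuals
            content_token = patch_feats.mean(axis=0)     # (d,)
            return context_token, content_token

    At two tokens per frame, an hour of video sampled at 1 fps costs 7,200 visual tokens, rather than the hundreds of thousands a per-patch representation would need, which is what lets existing frameworks stretch to hour-long inputs.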

    Sentiment Lexicon Induction and Interpretable Multiple-instance Learning in Financial Markets

    Sentiment analysis has been widely used in the domain of finance. The two most common textual sentiment analysis methods in finance are the dictionary-based approach and the machine learning approach. The dictionary-based method is the most convenient and efficient way to extract sentiment from text, but the words in the dictionary are limited and cannot capture the full scope of a particular domain. Additionally, it is expensive and unsustainable to manually create and maintain a domain-specific dictionary using expert opinions. Deep learning models have become the mainstream method in sentiment analysis because of their better performance, achieved by utilizing extra information from a larger corpus and more complex model structures. However, deep learning models often suffer from a lack of interpretability. This thesis attempts to address the issues of both methods. It proposes a machine learning method for corpus-based sentiment lexicon induction, which extends a sentiment dictionary customized to analyze corporate conference calls. The extended dictionary is shown to outperform the original dictionary in terms of the three-day returns of the companies in the MSCI universe. The thesis also proposes a highly interpretable attention-based multiple-instance learning model for sentiment classification and shows that it achieves accuracy comparable to state-of-the-art sequential models with better interpretability. A keyword ranking is generated by the model as a by-product. Finally, a new sentiment dictionary generated by the deep learning method shows even better performance than both the extended dictionary and the original dictionary.
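    The thesis's exact architecture is not given in the abstract, so the following is a minimal PyTorch sketch of attention-based multiple-instance learning for sentiment, in the spirit it describes: a document is a bag of word embeddings, attention pools the bag, and the attention weights double as the keyword ranking. Names and dimensions (AttentionMIL, d_in, d_attn) are illustrative.

        import torch
        import torch.nn as nn

        class AttentionMIL(nn.Module):
            """Bag = document, instances = word embeddings."""

            def __init__(self, d_in=128, d_attn=64, n_classes=2):
                super().__init__()
                self.attn = nn.Sequential(
                    nn.Linear(d_in, d_attn), nn.Tanh(), nn.Linear(d_attn, 1))
                self.clf = nn.Linear(d_in, n_classes)

            def forward(self, instances):                      # (n, d_in)
                a = torch.softmax(self.attn(instances), dim=0)  # (n, 1)
                bag = (a * instances).sum(dim=0)               # pooled bag vector
                return self.clf(bag), a.squeeze(-1)            # logits, weights

        # toy usage: rank one document's words by attention weight
        model = AttentionMIL()
        logits, weights = model(torch.randn(20, 128))
        keyword_ranking = weights.argsort(descending=True)

    Because the prediction is an explicit weighted sum over instances, the weights themselves explain the classification, which is the interpretability property the thesis emphasizes and the source of its keyword-ranking by-product.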

    Exploring Evaluation Factors and Framework for the Object of Automated Trading System

    An automated trading system (ATS) is a computer program that combines different trading rules to find optimal trading opportunities. The objects of an ATS, which are financial assets, need evaluation because such evaluation is of great significance for stakeholders and orderly markets. From the perspectives of dealers, agents, the external environment, and the objects themselves, this study explored the factors involved in evaluating and choosing the object of an ATS. Based on design science research (DSR), we presented a preliminary evaluation framework and conducted semi-structured interviews with twelve trading participants from different occupations. By analyzing the collected data, we validated eight factors from the literature and identified four new factors and fifty-four sub-factors. Additionally, this paper developed a relationship model of the factors. The results could be used in future work to explore and validate more evaluation factors using data mining.