3 research outputs found

    Co-occurrences using Fasttext embeddings for word similarity tasks in Urdu

    Urdu is a widely spoken language in South Asia. Although a fair amount of literature exists in Urdu, the available data is still insufficient to process the language with NLP techniques. Very efficient language models exist for English, a high-resource language, but Urdu and other under-resourced languages have long been neglected. To build efficient language models for these languages, we first need good word embedding models. For Urdu, the only available word embeddings were trained with the skip-gram model. In this paper, we build a corpus for Urdu by scraping and integrating data from various sources and compile a vocabulary for the language. We also modify fasttext embeddings and N-gram models so that they can be trained on our corpus. We use these trained embeddings for a word similarity task and compare the results with existing techniques.
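    Below is a minimal sketch of this kind of training and evaluation, assuming the gensim library; the tiny tokenized corpus, the hyperparameters, and the example word pair are illustrative placeholders rather than the paper's actual setup.

```python
from gensim.models import FastText

# Hypothetical corpus: a few tokenized Urdu sentences; in practice this
# would be the scraped corpus described in the abstract.
corpus = [
    ["یہ", "ایک", "مثال", "ہے"],
    ["اردو", "ایک", "خوبصورت", "زبان", "ہے"],
]

# Train a fastText model; character n-grams (min_n/max_n) let the model
# compose vectors for rare and out-of-vocabulary Urdu words.
model = FastText(
    sentences=corpus,
    vector_size=100,  # embedding dimensionality (illustrative)
    window=5,         # context window size
    min_count=1,      # keep every token in this toy corpus
    min_n=3,          # shortest character n-gram
    max_n=6,          # longest character n-gram
    epochs=10,
)

# Word similarity task: cosine similarity between two word vectors.
print(model.wv.similarity("اردو", "زبان"))
```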

    Cache Bypassing for Machine Learning Algorithms

    Graphics Processing Units (GPUs) were once used solely for graphics workloads, but with the growth of machine learning applications, their use for general-purpose computing has increased in recent years. GPUs employ a massive number of threads that, in turn, achieve a high degree of parallelism. Despite their computational power, GPUs suffer from cache contention due to the SIMT execution model they use. One solution to this problem is "cache bypassing". This paper presents a predictive model that analyzes the access patterns of various machine learning algorithms and determines whether given data should be stored in the cache or bypass it. It reports how well each model performs on different datasets and how shrinking each model affects its performance. The accuracy of most of the models was around 90%, with KNN performing best, though not with the smallest model size. We further enrich the feature set by splitting the addresses into chunks of 4 bytes. We observe that this substantially improves the neural network, raising its accuracy to 99.9% with three neurons.
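    A minimal sketch of such a predictor, assuming scikit-learn: the addresses and cache/bypass labels below are fabricated stand-ins for a profiled access trace, and the labeling rule, chunking helper, and hyperparameters are assumptions introduced here, not the paper's.

```python
import random
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

random.seed(0)

# Synthetic 64-bit memory addresses standing in for a real access trace;
# in the paper, traces come from profiling machine learning workloads.
addresses = [random.getrandbits(64) for _ in range(1000)]

# Fabricated stand-in labels (1 = keep in cache, 0 = bypass); real labels
# would come from observed cache behaviour.
labels = [1 if (a & 0xFFFFFFFF) < 2**31 else 0 for a in addresses]

def address_chunks(addr):
    """Split a 64-bit address into two 4-byte chunks, mirroring the
    feature scheme described in the abstract."""
    return [addr & 0xFFFFFFFF, (addr >> 32) & 0xFFFFFFFF]

X = [address_chunks(a) for a in addresses]
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, random_state=0)

# Two of the model families the abstract mentions: KNN, and a tiny neural
# network with three hidden neurons (scaled inputs help the MLP converge).
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0),
).fit(X_train, y_train)

print("KNN accuracy:", knn.score(X_test, y_test))
print("MLP accuracy:", mlp.score(X_test, y_test))
```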

    Few Shot Learning for Information Verification

    Information verification is a challenging task because verifying a claim can require assembling information from multiple pieces of evidence that stand in a hierarchy of complex semantic relations. Most previous research has simply concatenated multiple evidence sentences to accept or reject a claim; such approaches are limited because evidence can contain hierarchical information and dependencies. In this research, we verify facts against evidence selected from a list of Wikipedia articles. Pretrained language models such as XLNet generate meaningful representations, and graph-based attention and convolutions are applied so that the system requires little additional training to learn to verify facts.
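    A rough sketch of the encoding-plus-attention idea, assuming PyTorch and the Hugging Face transformers library: the mean pooling, the single self-attention step over claim and evidence nodes, and the example sentences are simplifications introduced here, not the paper's architecture.

```python
import torch
import torch.nn.functional as F
from transformers import XLNetModel, XLNetTokenizer

# Pretrained XLNet supplies contextual representations for the claim and
# each evidence sentence (standard Hugging Face checkpoint).
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
encoder = XLNetModel.from_pretrained("xlnet-base-cased")

claim = "The Eiffel Tower is in Paris."
evidence = [
    "The Eiffel Tower is a landmark on the Champ de Mars in Paris.",
    "It was completed in 1889.",
]

def embed(texts):
    """Mean-pool XLNet's last hidden state into one vector per text."""
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

# Claim and evidence sentences become nodes of a small graph.
nodes = embed([claim] + evidence)

# One attention step over the graph (a simplified stand-in for the
# paper's graph-based attention): each node attends over all nodes.
d = nodes.size(-1)
scores = nodes @ nodes.T / d**0.5    # pairwise attention scores
weights = F.softmax(scores, dim=-1)  # normalize over neighbours
updated = weights @ nodes            # aggregate neighbour features

# The claim node's updated vector would feed a small verdict classifier
# (supports / refutes / not enough info) requiring little extra training.
print(updated[0].shape)
```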