3 research outputs found

    Towards textual data augmentation for neural networks: synonyms and maximum loss

    Data augmentation is one of the ways of dealing with labeled data scarcity and overfitting. Both of these problems are crucial for modern deep learning algorithms, which require massive amounts of data. The problem is better explored in the context of image analysis than for text. This work is a step towards closing this gap. We propose a method for augmenting textual data when training convolutional neural networks for sentence classification. The augmentation is based on the substitution of words using a thesaurus as well as the Princeton WordNet. Our method improves upon the baseline in almost all cases. In terms of accuracy, the best of the variants is 1.2 percentage points better than the baseline.
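    To make the substitution idea concrete, here is a minimal sketch of thesaurus-based augmentation using NLTK's WordNet interface. The function name, the replacement probability, and the random choice of synonym are illustrative assumptions, not details taken from the paper (which additionally involves a maximum-loss criterion, per its title).

```python
# Minimal sketch (assumption, not the paper's exact procedure): randomly
# replace words in a tokenized sentence with WordNet synonyms.
import random
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def synonym_augment(tokens, p_replace=0.2, seed=None):
    """Return a copy of `tokens` with some words swapped for WordNet synonyms."""
    rng = random.Random(seed)
    augmented = []
    for token in tokens:
        # Collect candidate synonyms for this word across all its synsets.
        synonyms = {
            lemma.name().replace("_", " ")
            for synset in wordnet.synsets(token)
            for lemma in synset.lemmas()
            if lemma.name().lower() != token.lower()
        }
        if synonyms and rng.random() < p_replace:
            augmented.append(rng.choice(sorted(synonyms)))
        else:
            augmented.append(token)
    return augmented

# Example: produces a paraphrased variant of the input sentence.
print(synonym_augment("the movie was surprisingly good".split(), seed=0))
```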

    Answering Polish Trivia Questions with the help of dense passage retriever

    This paper discusses the problem of question answering using a Dense Passage Retriever in Task 4 of the 2021 edition of PolEval. Our goal was to show the process of automatically answering trivia questions using language models and the Wikipedia database. The best solution created by the authors utilized the Dense Passage Retrieval approach for extractive question answering, combined with Natural Language Inference for boolean questions. The training data for document retrieval and extractive question answering were obtained by employing distant supervision. The obtained solution reached 50.96% accuracy, giving it second place in the competition.
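    For illustration, the following is a minimal retrieve-then-read sketch using the Hugging Face transformers DPR encoders and a question-answering pipeline. The English checkpoints, the helper function, and the dot-product passage ranking are assumptions standing in for the Polish models and Wikipedia index used in the competition entry.

```python
# Sketch of a DPR-style retrieve-then-read pipeline: DPR encoders rank
# candidate passages, then an extractive QA model pulls the answer span
# from the top passage. Model names below are illustrative English stand-ins.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
    pipeline,
)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

def answer(question, passages):
    # Embed the question and every passage, rank passages by dot product.
    with torch.no_grad():
        q_vec = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
        c_vecs = c_enc(**c_tok(passages, return_tensors="pt",
                               padding=True, truncation=True)).pooler_output
    best = int(torch.argmax(c_vecs @ q_vec.squeeze(0)))
    # Extract the answer span from the best-scoring passage.
    return reader(question=question, context=passages[best])["answer"]

passages = ["Warsaw is the capital and largest city of Poland.",
            "Krakow was the capital of Poland until 1596."]
print(answer("What is the capital of Poland?", passages))
```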

    Towards Automatic Points of Interest Matching

    No full text
    Complementing information about particular points, places, or institutions, i.e., so-called Points of Interest (POIs), can be achieved by matching data from the growing number of geospatial databases; these include Foursquare, OpenStreetMap, Yelp, and Facebook Places. Doing this potentially allows for the acquisition of more accurate and more complete information about POIs than would be possible by merely extracting the information from each of the systems alone.

    Problem: The task of Points of Interest matching, and the development of an algorithm to perform this automatically, are quite challenging problems due to the prevalence of different data structures, data incompleteness, conflicting information, naming differences, data inaccuracy, and cultural and language differences. In short, the difficulties experienced in obtaining (complementary) information about a POI from different sources are due, in part, to the lack of standardization among Points of Interest descriptions; a further difficulty stems from the vast and rapidly growing amount of data to be assessed on each occasion.

    Research design and contributions: To propose an efficient algorithm for automatic Points of Interest matching, we: (1) analyzed available data sources (their structures, models, attributes, number of objects, and data quality in terms of missing attributes) and defined a unified POI model; (2) prepared a fairly large experimental dataset consisting of 50,000 matching and 50,000 non-matching point pairs, taken from different geographical, cultural, and language areas; (3) comprehensively reviewed metrics that can be used for assessing the similarity between Points of Interest; (4) proposed and verified different strategies for dealing with missing or incomplete attributes; (5) reviewed and analyzed six different classifiers for Points of Interest matching, conducting experiments and follow-up comparisons to determine the most effective combination of similarity metric, strategy for dealing with missing data, and POI matching classifier; and (6) presented an algorithm for automatic Points of Interest matching, detailing its accuracy and carrying out a complexity analysis.

    Results and conclusions: The main results of the research are: (1) comprehensive experimental verification and numerical comparison of the crucial Points of Interest matching components (similarity metrics, approaches for dealing with missing data, and classifiers), indicating that the best Points of Interest matching classifier is a random forest combined with explicit marking of missing data and a mix of different similarity metrics for different POI attributes; and (2) an efficient greedy algorithm for automatic POI matching. At a cost of just 3.5% in terms of accuracy, it reduces POI matching time complexity by two orders of magnitude in comparison to the exact algorithm.
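    As an illustration of the best-performing combination reported above (per-attribute similarity metrics, explicit marking of missing attributes, and a random forest classifier), here is a minimal sketch. The attribute names, the choice of similarity metrics, and the toy training pairs are placeholder assumptions, not the paper's actual feature set or dataset.

```python
# Sketch (assumptions throughout): per-attribute similarity features, an
# explicit "missing" indicator for absent attributes, and a random forest
# deciding whether two POI records describe the same place.
from difflib import SequenceMatcher
from math import radians, sin, cos, asin, sqrt
from sklearn.ensemble import RandomForestClassifier

def text_similarity(a, b):
    # String similarity in [0, 1]; stands in for the paper's metrics.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def geo_distance_km(p, q):
    # Haversine distance between (lat, lon) pairs.
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def features(poi_a, poi_b):
    # Each textual attribute contributes a similarity value plus a missing flag.
    feats = []
    for attr in ("name", "address"):
        a, b = poi_a.get(attr), poi_b.get(attr)
        missing = a is None or b is None
        feats += [0.0 if missing else text_similarity(a, b), float(missing)]
    feats.append(geo_distance_km(poi_a["coords"], poi_b["coords"]))
    return feats

# Toy labelled pairs (hypothetical data): 1 = same POI, 0 = different POIs.
labelled_pairs = [
    ({"name": "Cafe Rio", "address": "Main St 5", "coords": (52.2300, 21.0100)},
     {"name": "Café Rio", "address": None, "coords": (52.2301, 21.0102)}, 1),
    ({"name": "Cafe Rio", "address": "Main St 5", "coords": (52.2300, 21.0100)},
     {"name": "City Museum", "address": "Old Town 1", "coords": (52.2500, 21.0000)}, 0),
]

X = [features(a, b) for a, b, _ in labelled_pairs]
y = [label for _, _, label in labelled_pairs]
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print(clf.predict([features(labelled_pairs[0][0], labelled_pairs[0][1])]))
```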