350 research outputs found
Text Classification: A Review, Empirical, and Experimental Evaluation
The explosive and widespread growth of data necessitates the use of text
classification to extract crucial information from vast amounts of data.
Consequently, there has been a surge of research in both classical and deep
learning text classification methods. Despite the numerous methods proposed in
the literature, there is still a pressing need for a comprehensive and
up-to-date survey. Existing survey papers categorize algorithms for text
classification into broad classes, which can lead to the misclassification of
unrelated algorithms and incorrect assessments of their qualities and behaviors
using the same metrics. To address these limitations, our paper introduces a
novel methodological taxonomy that classifies algorithms hierarchically into
fine-grained classes and specific techniques. The taxonomy includes methodology
categories, methodology techniques, and methodology sub-techniques. Our study
is the first survey to utilize this methodological taxonomy for classifying
algorithms for text classification. Furthermore, our study also conducts
empirical evaluation and experimental comparisons and rankings of different
algorithms that employ the same specific sub-technique, different
sub-techniques within the same technique, different techniques within the same
category, and categories.
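The like-for-like comparison described above can be sketched in a few lines. The miniature corpus, labels, and the two chosen classical algorithms below are illustrative assumptions, not taken from the survey; both models are scored with the same metric on the same features, echoing the idea of comparing methods only within a shared methodological class.

```python
# Toy illustration only: tiny invented corpus, two classical algorithms,
# one shared metric (training accuracy), to mirror like-for-like comparison.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

texts = ["great movie", "awful film", "loved it", "terrible plot",
         "wonderful acting", "boring and bad", "fantastic story", "worst ever"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Same feature representation for both algorithms.
X = TfidfVectorizer().fit_transform(texts)

results = {}
for name, clf in [("logistic_regression", LogisticRegression()),
                  ("multinomial_nb", MultinomialNB())]:
    # Same data, same metric: a like-for-like comparison.
    results[name] = clf.fit(X, labels).score(X, labels)
    print(name, round(results[name], 2))
```

A real evaluation would of course use held-out test data and multiple corpora; the point here is only that ranking is meaningful when the algorithms share representation, data, and metric.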
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. The comprehensive problem-oriented review
of the advances in transfer learning with respect to the problem has not only
revealed the challenges in transfer learning for visual recognition, but also
the problems (e.g. eight of the seventeen problems) that have been scarcely
studied. This survey not only presents an up-to-date technical review for
researchers, but also a systematic approach and a reference for a machine
learning practitioner to categorise a real problem and look up a
possible solution accordingly.
An Ensemble Classifier Based on Boosting
This work is published in accordance with Order No. 311/od of the Rector of NAU of 27.05.2021, "On placing qualification works of higher education students in the university repository". Supervisor: Viktor Mykhailovych Sineglazov, Dr. Sc. (Eng.), Professor, Head of the Department of Aviation Computer-Integrated Complexes.

This paper considers the construction of a classifier based on neural networks. AI is now a major global trend, and an artificial neural network is typically used as an element of AI. One of the main tasks a neural network solves is classification. For a neural network to become a tool, it must be trained, and training requires a training sample. Since a labelled training sample is expensive, this work uses semi-supervised learning, and the problem is solved with an ensemble approach based on boosting. The presence of unlabelled data leads to the topic of semi-supervised learning, which is motivated by the need to process hard-to-access, limited data. Despite many problems, the first algorithms with similar structures have proven successful on a number of basic application tasks, including functional testing experiments in AI testing. There are enough labelling variations to choose from, in which training takes place on a different set of information, and the possible validation reduces the need for a robust comparison of methods. Typical areas where this occurs are speech processing (due to slow transcription) and text categorization. Combining labelled and unlabelled data to improve computational power leads to the conclusion that semi-supervised learning can be better than supervised learning, or at least match its efficiency. Neural networks represent global trends in language search and machine vision, with great cost and efficiency. The use of "hyperautomation" allows the necessary tasks to be processed for speedy and simplified task execution.
Big data involves the introduction of multi-threading, something that large companies in the artificial intelligence industry are doing.
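A minimal sketch of the semi-supervised setting the abstract describes, with scikit-learn's generic self-training wrapper around a boosted ensemble standing in for the thesis's own algorithm; the synthetic data, the choice of labelled points, and all parameters are invented for illustration:

```python
# Hedged sketch: self-training with a boosted ensemble on mostly
# unlabelled data (label -1 marks an unlabelled instance).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs: 100 points, two classes.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_true = np.array([0] * 50 + [1] * 50)

# Labelling is expensive: only six points get labels, the rest are -1.
y = np.full(100, -1)
for i in (0, 1, 2, 50, 51, 52):
    y[i] = y_true[i]

# Boosted base learner; the wrapper iteratively pseudo-labels
# confident unlabelled points and retrains.
model = SelfTrainingClassifier(AdaBoostClassifier(n_estimators=20))
model.fit(X, y)
print("accuracy on all points:", round(model.score(X, y_true), 2))
```

With only six labelled points, the boosted classifier can still separate the blobs, which is the practical argument the abstract makes for semi-supervised learning when labelled samples are costly.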
Learning with Low-Quality Data: Multi-View Semi-Supervised Learning with Missing Views
The focus of this thesis is on learning approaches for what we call "low-quality data", and in particular data in which only small amounts of labeled target data are available. The first part provides background discussion on low-quality data issues, followed by a preliminary study in this area. The remainder of the thesis focuses on a particular scenario: multi-view semi-supervised learning. Multi-view learning generally refers to the case of learning with data that has multiple natural views, or sets of features, associated with it. Multi-view semi-supervised learning methods try to exploit the combination of multiple views along with large amounts of unlabeled data in order to learn better predictive functions when limited labeled data is available. However, lack of complete view data limits the applicability of multi-view semi-supervised learning to real-world data. Commonly, one data view is readily and cheaply available, but additional views may be costly or only available in some cases. This thesis aims to make multi-view semi-supervised learning approaches more applicable to real-world data, specifically by addressing the issue of missing views through both feature generation and active learning, and by addressing the issue of model selection for semi-supervised learning with limited labeled data. It introduces a unified approach for handling missing view data in multi-view semi-supervised learning tasks, which applies both to data with completely missing additional views and to data missing views only in some instances. The idea is to learn a feature generation function mapping one view to another, with the mapping biased to encourage the generated features to be useful for multi-view semi-supervised learning algorithms. The mapping is then used to fill in views as pre-processing.
Unlike previously proposed single-view approaches to multi-view learning, the proposed approach is able to take advantage of additional view data when available, and for the case of partial view presence it is the first feature-generation approach specifically designed to take the multi-view semi-supervised learning aspect into account. The next component of this thesis is the analysis of an active view completion scenario. In some tasks, it is possible to obtain missing view data for a particular instance, but at some associated cost. Recent work has shown that an active selection strategy can be more effective than a random one. In this thesis, a better understanding of active approaches is sought, and it is demonstrated that the effectiveness of an active selection strategy over a random one can depend on the relationship between the views. Finally, an important component of making multi-view semi-supervised learning applicable to real-world data is the task of model selection, an open problem which is often avoided entirely in previous work. For cases of very limited labeled training data, the commonly used cross-validation approach can become ineffective. This thesis introduces a re-training alternative to the method-dependent approaches, similar in motivation to cross-validation, that involves generating new training and test data by sampling from the large amount of unlabeled data and estimated conditional probabilities for the labels. The proposed approaches are evaluated on a variety of multi-view semi-supervised learning data sets, and the experimental results demonstrate their efficacy.
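The feature-generation idea (learn a mapping from the cheap view to the expensive one, then fill in missing views as pre-processing) can be sketched as follows. The synthetic views, the linear mapping, and the missing-view pattern are illustrative assumptions of mine, not the thesis's actual biased-mapping method:

```python
# Hedged sketch: fill in a missing second view by regressing it from
# the always-available first view on the instances that have both.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
view1 = rng.normal(size=(n, 5))                      # cheap, always present
true_map = rng.normal(size=(5, 3))
view2 = view1 @ true_map + 0.1 * rng.normal(size=(n, 3))  # costly view

has_view2 = np.arange(n) < 120    # second view missing for the last 80 rows

# Fit the feature-generation function on instances with both views.
gen = LinearRegression().fit(view1[has_view2], view2[has_view2])

# Pre-processing step: generate the missing views from view 1.
view2_filled = view2.copy()
view2_filled[~has_view2] = gen.predict(view1[~has_view2])

# Downstream multi-view learners now see complete two-view data.
X_full = np.hstack([view1, view2_filled])
print(X_full.shape)
```

The thesis additionally biases the mapping toward features that help the downstream multi-view semi-supervised learner; a plain least-squares fit, as here, is the simplest unbiased baseline.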
An imperative for soil spectroscopic modelling is to think global but fit local with transfer learning
Soil spectroscopy with machine learning (ML) can estimate soil properties. Extensive soil spectral libraries (SSLs) have been developed for this purpose. However, general models built with those SSLs do not generalize well on new ‘unseen’ local data. The main reason is the different characteristics of the observations in the SSL and the local data, which cause their conditional and marginal distributions to differ. This makes the modelling of soil properties with spectra challenging. General models developed using large ‘global’ SSLs offer broad, systematic information on the soil-spectra relationships. However, to accurately generalize in a local situation, they must be adjusted to capture the site-specific characteristics of the local observations. Most current methods for ‘localizing’ spectroscopic modelling report inconsistent results. An understanding of spectroscopic ‘localization’ is lacking, and there is no framework to guide further developments. Here, we review current localization methods and propose their reformulation as a transfer learning (TL) undertaking. We then demonstrate the implementation of instance-based TL with RS-LOCAL 2.0 for modelling the soil organic carbon (SOC) content of 12 sites representing fields, farms and regions from 10 countries on the seven continents. The method uses a small number of instances or observations (measured soil property values and corresponding spectra) from the local site to transfer relevant information from a large and diverse global SSL (GSSL 2.0) with more than 50,000 records. We found that with ≤ 30 local observations, RS-LOCAL 2.0 produces more accurate and stable estimates of SOC than modelling with only the local data. Using the information in the GSSL 2.0 and reducing the number of samples for laboratory analysis, the method improves the cost-efficiency and practicality of soil spectroscopy. 
We interpreted the transfer by analysing the data, models, and soil and environmental relationships of the local and the ‘transferred’ data to gain insight into the approach. Transferring instances from the GSSL 2.0 to the local sites helped to align their conditional and marginal distributions, making the spectra-SOC relationships in the models more robust. Finally, we propose directions for future research. The guiding principle for developing practical and cost-effective spectroscopy should be to think globally but fit locally. By reformulating the localization problem within a TL framework, we hope to have acquainted the soil science community with a set of methodologies that can inspire the development of new, innovative algorithms for soil spectroscopic modelling.
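A rough sketch of instance-based transfer in this spirit, with a simple nearest-neighbour selection standing in for RS-LOCAL 2.0 and entirely synthetic "spectra" and SOC values; none of the numbers or modelling choices come from the paper:

```python
# Hedged sketch of instance-based transfer learning: keep only global
# library instances that resemble the local spectra, then fit jointly.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
global_spectra = rng.normal(size=(5000, 50))          # stand-in for a global SSL
global_soc = global_spectra[:, 0] + 0.1 * rng.normal(size=5000)
local_spectra = rng.normal(1.0, 1.0, size=(30, 50))   # <= 30 local observations
local_soc = local_spectra[:, 0] + 0.1 * rng.normal(size=30)

# Transfer step: select global instances closest to the local site.
nn = NearestNeighbors(n_neighbors=20).fit(global_spectra)
idx = np.unique(nn.kneighbors(local_spectra, return_distance=False))

# Fit one model on the transferred instances plus the local data.
X = np.vstack([global_spectra[idx], local_spectra])
y = np.concatenate([global_soc[idx], local_soc])
model = Ridge().fit(X, y)
print("local R^2:", round(model.score(local_spectra, local_soc), 2))
```

Selecting similar global instances is one way to pull the global library's conditional and marginal distributions toward the local site, which is the alignment effect the abstract reports.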
Deep Learning with Partially Labeled Data for Radio Map Reconstruction
In this paper, we address the problem of Received Signal Strength map
reconstruction based on location-dependent radio measurements and utilizing
side knowledge about the local region; for example, city plan, terrain height,
gateway position. Depending on the quantity of such prior side information, we
employ Neural Architecture Search to find an optimized Neural Network model
with the best architecture for each of the supposed settings. We demonstrate
that using additional side information enhances the final accuracy of the
Received Signal Strength map reconstruction on three datasets that correspond
to three major cities, particularly in sub-areas near the gateways where larger
variations of the average received signal power are typically observed.
Comment: 42 pages, 39 figures
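The reconstruction task above can be framed as regression from location plus side information. The sketch below is much simplified: a fixed multilayer perceptron replaces the paper's Neural Architecture Search, and a toy log-distance path-loss formula with terrain height replaces real measurements; all names and numbers are my assumptions:

```python
# Hedged sketch: RSS map reconstruction as regression from (x, y) location
# plus one piece of side information (terrain height).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
xy = rng.uniform(0, 1, size=(n, 2))          # measurement locations
height = rng.uniform(0, 30, size=(n, 1))     # side info: terrain height (m)
gateway = np.array([0.5, 0.5])               # known gateway position

# Toy log-distance path-loss ground truth (dBm), attenuated by height.
dist = np.linalg.norm(xy - gateway, axis=1)
rss = -30 - 20 * np.log10(dist + 0.05) - 0.1 * height[:, 0]

X = np.hstack([xy, height])                  # location + side information
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X, rss)
print("fit R^2:", round(model.score(X, rss), 2))
```

Adding the height column is the crude analogue of the paper's side knowledge (city plan, terrain, gateway position); the paper's contribution is searching the network architecture per side-information setting rather than fixing it as done here.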