
    Algorithm selection on data streams

    We explore the possibilities of meta-learning on data streams, in particular algorithm selection. In a first experiment, we compute the characteristics of a small sample of a data stream and try to predict which classifier performs best on the entire stream. This yields promising results and interesting patterns. In a second experiment, we build a meta-classifier that predicts, based on measurable data characteristics in a window of the data stream, the best classifier for the next window. The results show that this meta-algorithm is very competitive with state-of-the-art ensembles such as OzaBag, OzaBoost and Leveraged Bagging. The results of all experiments are made publicly available in an online experiment database for the purposes of verifiability, reproducibility and generalizability.
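
    The sketch below illustrates the second experiment's idea in Python with scikit-learn stand-ins: compute simple meta-features over each window, label the window with whichever base learner scores best on the following window, and train a meta-classifier on those pairs. The meta-features, base learners and window size are illustrative assumptions, not the paper's exact setup.

        # A minimal sketch of window-based algorithm selection, assuming the
        # stream is available as a NumPy feature matrix X and label vector y.
        import numpy as np
        from scipy.stats import entropy
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier

        BASE_LEARNERS = [GaussianNB, KNeighborsClassifier, DecisionTreeClassifier]

        def meta_features(Xw, yw):
            """Simple data characteristics of one window."""
            _, counts = np.unique(yw, return_counts=True)
            return [Xw.shape[1],                     # dimensionality
                    entropy(counts / counts.sum()),  # class entropy
                    Xw.std(axis=0).mean()]           # mean feature dispersion

        def best_learner_index(Xtr, ytr, Xte, yte):
            """Index of the base learner that scores best on the next window."""
            scores = [cls().fit(Xtr, ytr).score(Xte, yte) for cls in BASE_LEARNERS]
            return int(np.argmax(scores))

        def build_meta_dataset(X, y, window=500):
            """One meta-example per window: characteristics -> best next learner."""
            M, t = [], []
            for s in range(0, len(X) - 2 * window, window):
                tr, te = slice(s, s + window), slice(s + window, s + 2 * window)
                M.append(meta_features(X[tr], y[tr]))
                t.append(best_learner_index(X[tr], y[tr], X[te], y[te]))
            return np.array(M), np.array(t)

        # M, t = build_meta_dataset(X_stream, y_stream)
        # meta_clf = RandomForestClassifier().fit(M, t)  # predicts per-window winner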

    A Review of Meta-level Learning in the Context of Multi-component, Multi-level Evolving Prediction Systems

    The exponential growth of the volume, variety and velocity of data is raising the need for automated or semi-automated ways to extract useful patterns from it. Finding the most appropriate mapping of learning methods to a given problem requires deep expert knowledge and extensive computational resources, and it becomes a challenge in the presence of numerous configurations of learning algorithms applied to massive amounts of data. There is therefore a need for an intelligent recommendation engine that can advise on the best learning algorithm for a dataset. The techniques commonly used by experts are based on trial and error, evaluating and comparing a number of possible solutions against each other and drawing on prior experience in a specific domain. This approach, combined with the expert's prior knowledge, though computationally expensive and time-consuming, has often been shown to work for stationary problems where the processing is usually performed off-line. However, it is not normally feasible for non-stationary problems where streams of data are continuously arriving. Furthermore, in a non-stationary environment, manually analysing the data and testing various methods every time the underlying data distribution changes would be very difficult or simply infeasible. In that scenario, and within an on-line predictive system, there are several tasks where Meta-learning can be used to effectively facilitate the best recommendations, including: 1) pre-processing steps, 2) learning algorithms or their combination, 3) adaptivity mechanisms and their parameters, 4) recurring concept extraction, and 5) concept drift detection. However, while conceptually very attractive and promising, Meta-learning leads to several challenges, with the appropriate representation of the problem at the meta-level being one of the key ones. The goal of this review and our research is therefore to investigate Meta-learning in general, and its associated challenges, in the context of automating the building, deployment and adaptation of multi-level, multi-component predictive systems that evolve over time.

    An Ensemble Model for Multiclass Classification and Outlier Detection Method in Data Mining

    Real-world datasets exhibit a multiclass classification structure characterized by imbalanced classes, with minority classes treated as outlier classes. The study followed the cross-industry standard process for data mining (CRISP-DM) methodology. A heterogeneous multiclass ensemble was developed by combining several strategies and ensemble techniques, using datasets drawn from the UCI machine learning repository. Experiments validating the model were conducted and their results are presented in tables and figures. An ensemble filter selection method was developed and used for preprocessing the datasets. Point outliers were filtered using an interquartile-range filter, and the datasets were resampled using the Synthetic Minority Oversampling Technique (SMOTE). Multiclass datasets were transformed into binary classes using a One-vs-One decomposition technique. The ensemble model was built using AdaBoost and random subspace algorithms with random forest as the base classifier, and the resulting classifiers were combined by voting. The model was validated with classification and outlier-detection performance measures such as recall, precision, F-measure and AUC-ROC, and the classifiers were evaluated using 10-fold stratified cross-validation. The model showed better performance in terms of outlier detection and classification prediction for the multiclass problem, outperforming well-known classification and outlier detection algorithms such as Naïve Bayes, KNN, Bagging, JRipper, decision trees, RandomTree and random forest. The findings established that combining ensemble techniques, resampling, and multiclass decomposition leads to improved detection of minority outlier (rare) classes. Keywords: Multiclass, Outlier, Ensemble, Model, Classification. DOI: 10.7176/JIEA/9-2-04. Publication date: April 30th, 2019.
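
    A rough Python sketch of the pipeline described above, using scikit-learn and imbalanced-learn stand-ins; the specific estimators, parameters, and the use of BaggingClassifier with feature subsampling to emulate random subspaces are assumptions for illustration, not the study's exact configuration.

        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                                      RandomForestClassifier, VotingClassifier)
        from sklearn.multiclass import OneVsOneClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        def iqr_filter(X, y, k=1.5):
            """Drop rows with any feature outside [Q1 - k*IQR, Q3 + k*IQR]."""
            q1, q3 = np.percentile(X, [25, 75], axis=0)
            iqr = q3 - q1
            keep = np.all((X >= q1 - k * iqr) & (X <= q3 + k * iqr), axis=1)
            return X[keep], y[keep]

        def build_model():
            rf = RandomForestClassifier(n_estimators=50)
            boosted = AdaBoostClassifier(estimator=rf, n_estimators=10)
            # max_features < 1.0 makes bagging behave like random subspaces
            subspace = BaggingClassifier(estimator=rf, n_estimators=10,
                                         max_features=0.5)
            vote = VotingClassifier([('ada', boosted), ('rss', subspace)],
                                    voting='soft')
            return OneVsOneClassifier(vote)  # One-vs-One binary decomposition

        # X, y = iqr_filter(X_raw, y_raw)    # remove point outliers
        # X, y = SMOTE().fit_resample(X, y)  # oversample minority (rare) classes
        # scores = cross_val_score(build_model(), X, y,
        #                          cv=StratifiedKFold(n_splits=10),
        #                          scoring='f1_macro')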

    Meta-level learning for the effective reduction of model search space

    The exponential growth of the volume, variety and velocity of data is raising the need for intelligent ways to extract useful patterns from it. Finding the mapping of learning methods that yields optimal performance on a given task requires deep expert knowledge and extensive computational resources, and the numerous configurations of these learning algorithms add another level of complexity. This triggers the need for an intelligent recommendation engine that can advise on the best learning algorithm and its configuration for a given task. The techniques commonly used by experts include trial and error and reliance on prior experience in the specific domain. These techniques sometimes work for less complex tasks that require thousands of parameters to learn. However, state-of-the-art models, e.g. deep learning models, require well-tuned hyper-parameters to learn millions of parameters, which demands specialized skills and numerous computationally expensive, time-consuming trials. In that scenario, Meta-level learning can be a potential solution that recommends the most appropriate options efficiently and effectively regardless of the complexity of the data. On the other hand, Meta-learning raises several challenges, the most critical ones being model selection and hyper-parameter optimization. The goal of this research is to investigate the model selection and hyper-parameter optimization approaches of automatic machine learning in general and the challenges associated with them. In a machine learning pipeline there are several phases where Meta-learning can be used to effectively facilitate the best recommendations, including 1) pre-processing steps, 2) learning algorithms or their combination, 3) adaptivity mechanism parameters, 4) recurring concept extraction, and 5) concept drift detection. The scope of this research is limited to feature engineering for problem representation, and to a learning strategy for recommending an algorithm and its hyper-parameters at the meta-level. Three studies were conducted around the two approaches of automatic machine learning: model selection using Meta-learning, and hyper-parameter optimization. The first study evaluates the situation in which additional data from a different domain can improve the performance of a meta-learning system for time-series forecasting, with a focus on cross-domain meta-knowledge transfer. Although the experiments revealed limited room for improvement over the overall best base learner, the meta-learning approach turned out to be a safe choice, minimizing the risk of selecting the least appropriate base learner: in only 2% of cases did meta-learning recommend the worst-performing base-learning method. The second study proposes another efficient and accurate domain adaptation method, using a different meta-learning approach. It empirically confirms the intuition that there is a relationship between the similarity of two tasks and the depth of the network that needs to be fine-tuned in order to achieve accuracy comparable with that of a model trained from scratch. However, the approach is limited to a single hyper-parameter: the fine-tuning depth of the network, chosen according to task similarity. The final study of this research expanded the set of hyper-parameters while implicitly capturing task similarity through the intrinsic dynamics of the training process. It presents a framework that automatically finds a good set of hyper-parameters, resulting in reasonably good accuracy, by framing hyper-parameter selection and tuning within the reinforcement learning regime. The effectiveness of a recommended tuple can be tested very quickly, rather than waiting for the network to converge. This approach produces accuracy close to the state of the art and is about 20% less computationally expensive than previous approaches. The proposed methods, belonging to different areas of automatic machine learning, have been thoroughly evaluated on a number of benchmark datasets, which confirmed the great potential of these methods.
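
    As a toy illustration of framing hyper-parameter selection as a reinforcement-learning problem with cheap rewards, the Python sketch below treats each hyper-parameter tuple as a bandit arm and uses an epsilon-greedy policy; the reward stands in for validation accuracy after a very short training budget. The grid, policy and simulated reward are illustrative assumptions, not the thesis's actual framework.

        import random
        from itertools import product

        GRID = list(product([1e-3, 1e-2, 1e-1],   # learning rate
                            [16, 64, 256],        # batch size
                            [0.0, 0.5]))          # dropout

        def quick_reward(hparams):
            """Stand-in for: build the network with hparams, train for a tiny
            budget (e.g. one epoch), return validation accuracy. Simulated
            here so the sketch runs standalone."""
            lr, batch, dropout = hparams
            return (0.9 - 5 * abs(lr - 1e-2) - 1e-6 * (batch - 64) ** 2
                    - 0.05 * dropout + random.gauss(0, 0.01))

        def epsilon_greedy_search(n_trials=100, eps=0.2):
            value = {h: 0.0 for h in GRID}  # running mean reward per arm
            count = {h: 0 for h in GRID}
            for _ in range(n_trials):
                if random.random() < eps:
                    arm = random.choice(GRID)       # explore a random tuple
                else:
                    arm = max(GRID, key=value.get)  # exploit the current best
                r = quick_reward(arm)
                count[arm] += 1
                value[arm] += (r - value[arm]) / count[arm]  # incremental mean
            return max(GRID, key=value.get)

        print(epsilon_greedy_search())  # best (lr, batch, dropout) found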

    Classifier of astrophysics data

    The aim of this thesis is to implement a data mining algorithm for use in astrophysics. The thesis introduces the basic concepts and principles of data mining, in particular its general definition, the distinction between classification and regression, and the evaluation of model accuracy. The text deals mainly with supervised learning. Algorithms based on decision trees are presented in more detail: the decision tree is defined as a model, and a general algorithm for building decision trees from data is given. Various node-splitting criteria (especially entropy-based ones), stopping criteria and tree pruning are discussed, with selected algorithms - ID3, CART, RainForest and BOAT - presented as illustrations. A chapter on ensembles of decision trees builds on this material and covers the basic ways of combining them (bagging and arcing). The general random forest algorithm is described in more detail, with RandomForest™ as an example of its practical realization. Based on a comparison of the algorithms and on experiments reported in the literature, random forests are selected for implementation. The implemented algorithm is described in detail: it uses Gini impurity and mean squared error for node splitting, ignores missing values, and combines the outputs of the individual trees by majority voting / averaging. A subset of the ARFF format is chosen for input and output data. The architecture of the implementation is illustrated by UML diagrams with accompanying commentary. The individual aspects of the implementation are briefly described: the implementation language is C++11, and the Boost library is used (notably smart pointers, serialization, parameter handling and configuration files, ...) together with other freely available libraries (google-glog for logging, googletest for unit testing, ...). Graphical output is achieved by printing the random forest model to an XML file and transforming it with a script into the DOT language. To verify the validity and properties of the implementation and to compare it with other random forest implementations (Waffles, RF-ACE and the R package randomForest), experiments are designed, described and carried out: classification of astronomical objects based on color indices, regression of redshift based on color indices, and eight classification and five regression experiments on data from the UCI repository. The experiments are fully automated by scripts (Bash, Python and R), and model training time is measured. The results show that the author's implementation performed excellently in classification and about average in regression; in terms of runtime it struggled with datasets containing many instances. The result of the thesis is a documented, easily extensible implementation of random forests in C++ with a graphical representation of the model, many configuration options and experimentally verified functionality. The discussion of possible future work deals mainly with removing the runtime problems and adding new functionality. This bachelor thesis describes the selection, design and implementation of a data mining algorithm for astrophysical use. The implementation of the random decision forests algorithm in C++ is evaluated on two astrophysical experiments and several general ones. The experiments include both classification and regression, with time measurements. For comparison, another three implementations are evaluated. The resulting implementation shows good results, mainly in classification.
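
    To make the split criteria and combination rules concrete, here is a small Python illustration (the thesis implementation itself is in C++11): Gini impurity for classification splits, mean squared error for regression splits, and majority voting / averaging to combine the trees' outputs.

        from collections import Counter

        def gini(labels):
            """Gini impurity: 1 - sum_c p_c^2."""
            n = len(labels)
            return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

        def mse(values):
            """Mean squared error around the node mean (node variance)."""
            m = sum(values) / len(values)
            return sum((v - m) ** 2 for v in values) / len(values)

        def split_gain(parent, left, right, impurity):
            """Impurity decrease of a candidate split; pick the split maximizing it."""
            n = len(parent)
            return impurity(parent) - (len(left) / n) * impurity(left) \
                                    - (len(right) / n) * impurity(right)

        def combine_classification(tree_votes):
            return Counter(tree_votes).most_common(1)[0][0]  # majority vote

        def combine_regression(tree_preds):
            return sum(tree_preds) / len(tree_preds)         # average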

    Data mining with neural networks and support vector machines using the R/rminer tool

    We present rminer, our open source library for the R tool that facilitates the use of data mining (DM) algorithms, such as neural networks (NNs) and support vector machines (SVMs), in classification and regression tasks. Tutorial examples with real-world problems (i.e. satellite image analysis and prediction of car prices) were used to demonstrate the rminer capabilities and the advantages of NNs/SVMs. Additional experiments were also conducted to test the rminer predictive capabilities, revealing competitive performances. Funding: Fundação para a Ciência e a Tecnologia (FCT) - PTDC/EIA/64541/200
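
    rminer itself is an R library; purely as a cross-language illustration of the kind of unified fit/predict workflow it provides for NNs and SVMs, the Python sketch below does the equivalent with scikit-learn. None of this is rminer's actual API.

        from sklearn.datasets import load_iris
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        X, y = load_iris(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # same workflow for both model families: fit, predict, evaluate
        for model in (MLPClassifier(max_iter=2000), SVC()):
            pred = model.fit(X_tr, y_tr).predict(X_te)
            print(type(model).__name__, accuracy_score(y_te, pred))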

    Visualization as a guidance to classification for large datasets

    Data visualization has gained a lot of attention given the pressing need to make sense of the huge amounts of data we collect every day. Lower-dimensional embedding techniques such as IsoMap, Locally Linear Embedding and t-SNE help us visualize high-dimensional data by projecting it onto a two- or three-dimensional space. t-SNE, or t-Distributed Stochastic Neighbor Embedding, has proved successful in providing low-dimensional mappings of data that make the underlying structure easier for our human brains to interpret. We wanted to test the hypothesis that a simple visualization that human beings can easily understand will also simplify the job of classification models and boost their performance. To test this hypothesis, we reduce the dimensionality of a student performance dataset into 2D and 3D using t-SNE and feed the resulting 2D and 3D feature vectors into a classifier that classifies students according to their predicted performance. We compare the classifier's performance before and after the dimensionality reduction. Our experiments showed that t-SNE helps improve the classification accuracy of NN and KNN on a benchmark dataset as well as on a user-curated dataset of student performance at our home institution. We also visually compared the 2D and 3D mappings of t-SNE and PCA. Our comparison favored t-SNE's visualization over PCA's, and this was also reflected in the classification accuracy of all classifiers used, which scored higher on t-SNE's mapping than on PCA's.
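
    A minimal sketch of this comparison in Python with scikit-learn, using the bundled digits data as a stand-in for the student performance dataset: embed the data into 2D with t-SNE and with PCA, then cross-validate a KNN classifier on each embedding. Note that scikit-learn's TSNE has no transform() for unseen points, so the whole dataset is embedded before cross-validating the classifier, as is common in such comparisons.

        from sklearn.datasets import load_digits
        from sklearn.decomposition import PCA
        from sklearn.manifold import TSNE
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_digits(return_X_y=True)

        embeddings = {
            't-SNE': TSNE(n_components=2, random_state=0).fit_transform(X),
            'PCA':   PCA(n_components=2).fit_transform(X),
        }
        for name, Z in embeddings.items():
            # accuracy of a KNN classifier trained on the 2D embedding
            acc = cross_val_score(KNeighborsClassifier(), Z, y, cv=10).mean()
            print(f'{name}: {acc:.3f}')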

    Optimal constraint-based decision tree induction from itemset lattices

    In this article we show that there is a strong connection between decision tree learning and local pattern mining. This connection allows us to solve the computationally hard problem of finding optimal decision trees in a wide range of applications by post-processing a set of patterns: we use local patterns to construct a global model. We exploit the connection between constraints in pattern mining and constraints in decision tree induction to develop a framework for categorizing decision tree mining constraints. This framework allows us to determine which model constraints can be pushed deeply into the pattern mining process, and allows us to improve the state of the art in optimal decision tree induction.
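
    A toy Python sketch of the lattice idea: identify each tree node with the itemset of tests on its path and memoize the best subtree per itemset, so that identical itemsets reached via different test orders share one computation. The depth bound, binary features and tiny dataset are simplifying assumptions for illustration, not the paper's algorithm in full.

        from functools import lru_cache

        # toy binary dataset: (feature tuple, class label)
        DATA = [((1, 0, 1), 1), ((1, 1, 0), 0), ((0, 0, 1), 1), ((0, 1, 1), 0)]
        N_FEATURES = 3

        def select(itemset):
            """Rows satisfying every (feature, value) test in the itemset."""
            return [(x, c) for x, c in DATA if all(x[f] == v for f, v in itemset)]

        @lru_cache(maxsize=None)
        def best_error(itemset, depth):
            """Minimum training error of any depth-bounded subtree at this node."""
            rows = select(itemset)
            counts = [sum(1 for _, c in rows if c == k) for k in (0, 1)]
            leaf = len(rows) - max(counts)  # errors when predicting the majority
            if depth == 0 or leaf == 0:
                return leaf
            tested = {f for f, _ in itemset}
            best = leaf
            for f in range(N_FEATURES):
                if f in tested:
                    continue
                split = (best_error(itemset | {(f, 0)}, depth - 1) +
                         best_error(itemset | {(f, 1)}, depth - 1))
                best = min(best, split)
            return best

        print(best_error(frozenset(), depth=2))  # optimal training error, depth <= 2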