
    k-Nearest Neighbour Classifiers: 2nd Edition (with Python examples)

    Perhaps the most straightforward classifier in the arsenal of machine learning techniques is the Nearest Neighbour Classifier -- classification is achieved by identifying the nearest neighbours to a query example and using those neighbours to determine the class of the query. This approach to classification is of particular importance because issues of poor run-time performance are not such a problem these days with the computational power that is available. This paper presents an overview of techniques for Nearest Neighbour classification, focusing on: mechanisms for assessing similarity (distance), computational issues in identifying nearest neighbours, and mechanisms for reducing the dimension of the data. This paper is the second edition of a paper previously published as a technical report. Sections on similarity measures for time series, retrieval speed-up and intrinsic dimensionality have been added. An Appendix is included providing access to Python code for the key methods. Comment: 22 pages, 15 figures; an updated edition of an older tutorial on kNN.
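
    The following is a minimal, self-contained sketch of the nearest-neighbour rule described above (Euclidean distance and a simple majority vote); it is illustrative only and is not the Python code distributed in the paper's Appendix.

```python
# Minimal k-NN classifier sketch (illustrative; not the paper's accompanying code).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Classify `query` by majority vote among its k nearest training examples."""
    # Euclidean distance from the query to every training example.
    dists = np.linalg.norm(X_train - query, axis=1)
    # Indices of the k smallest distances.
    nearest = np.argsort(dists)[:k]
    # Majority vote over the neighbours' labels.
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy usage: two 2-D classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.95, 1.0]), k=3))  # -> 1
```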

    Reputation-based maintenance in case-based reasoning

    Case Base Maintenance algorithms update the contents of a case base in order to improve case-based reasoner performance. In this paper, we introduce a new case base maintenance method called Reputation-Based Maintenance (RBM) with the aim of increasing the classification accuracy of a Case-Based Reasoning system while reducing the size of its case base. The proposed RBM algorithm calculates a case property called Reputation for each member of the case base, the value of which reflects the competence of the related case. Based on this case property, several removal policies and maintenance methods have been designed, each focusing on different aspects of case base maintenance. The performance of the RBM method was compared with well-known state-of-the-art algorithms. The tests were performed on 30 datasets selected from the UCI repository. The results show that the RBM method in all its variations achieves greater accuracy than a baseline CBR, while some variations significantly outperform the state-of-the-art methods. We particularly highlight the RBM_ACBR algorithm, which achieves the highest accuracy among the methods in the comparison to a statistically significant degree, and the RBMcr algorithm, which increases the baseline accuracy while removing, on average, over half of the case base. This work has been partially supported by the Spanish Ministry of Science and Innovation with project MISMIS-LANGUAGE (grant number PGC2018-096212-B-C33), by the Catalan Agency of University and Research Grants Management (AGAUR) (grants number 2017 SGR 341 and 2017 SGR 574), by the Spanish Network "Learning Machines for Singular Problems and Applications (MAPAS)" (TIN2017-90567-REDT, MINECO/FEDER EU) and by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 860843.
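
    The abstract does not define how Reputation is computed, so the sketch below is only a hedged illustration of the general idea: each case is scored by how often it helps (same class) versus harms (different class) the cases it neighbours, and low-reputation cases are removed. The scoring rule and threshold are assumptions, not the paper's RBM formula.

```python
# Hedged sketch of reputation-style case-base maintenance (assumed scoring rule,
# not the RBM formula from the paper).
import numpy as np

def reputation_scores(X, y, k=5):
    """Assumed reputation: net credit a case earns as a neighbour of other cases."""
    n = len(X)
    rep = np.zeros(n)
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]      # skip the case itself
        for j in neighbours:
            rep[j] += 1.0 if y[j] == y[i] else -1.0  # credit/blame the neighbour
    return rep

def maintain_case_base(X, y, k=5, threshold=0.0):
    """Removal policy sketch: drop cases whose reputation falls below a threshold."""
    rep = reputation_scores(X, y, k)
    keep = rep >= threshold
    return X[keep], y[keep]
```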

    Profiling Instances in Noise Reduction

    The dependency on the quality of the training data has led to significant work in noise reduction for instance-based learning algorithms. This paper presents an empirical evaluation of current noise reduction techniques, not just from the perspective of their comparative performance, but from the perspective of investigating the types of instances that they focus on for removal. A novel instance profiling technique known as RDCL profiling allows the structure of a training set to be analysed at the instance level, categorising each instance by modelling its local competence properties. This profiling approach offers the opportunity of investigating the types of instances removed by the noise reduction techniques that are currently in use in instance-based learning. The paper also considers the effect of removing instances with specific profiles from a dataset and shows that a very simple approach of removing instances that are misclassified by the training set and cause other instances in the dataset to be misclassified is an effective noise reduction technique.
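
    A minimal sketch of the simple removal rule mentioned at the end of the abstract, using plain leave-one-out k-NN as a stand-in for the paper's competence model: an instance is dropped if it is itself misclassified by its neighbours and it appears as a different-class neighbour of some other misclassified instance. The choice of k and the neighbourhood model are assumptions.

```python
# Illustrative noise-reduction sketch: remove instances that are misclassified
# AND cause other instances to be misclassified (k-NN stand-in for RDCL profiling).
import numpy as np
from collections import Counter

def loo_neighbours(X, i, k):
    dists = np.linalg.norm(X - X[i], axis=1)
    return np.argsort(dists)[1:k + 1]                # leave the instance itself out

def noise_reduce(X, y, k=3):
    n = len(X)
    misclassified = np.zeros(n, dtype=bool)
    causes = np.zeros(n, dtype=bool)
    neighbour_lists = [loo_neighbours(X, i, k) for i in range(n)]
    for i in range(n):
        votes = Counter(y[neighbour_lists[i]])
        if votes.most_common(1)[0][0] != y[i]:
            misclassified[i] = True
            # blame the different-class neighbours that contributed to the error
            for j in neighbour_lists[i]:
                if y[j] != y[i]:
                    causes[j] = True
    keep = ~(misclassified & causes)
    return X[keep], y[keep]
```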

    Dynamic Ensemble Selection with Regional Expertise

    Many recent works have shown that ensemble methods yield better generalizability than single-classifier approaches by aggregating the decisions of all base learners in machine learning tasks. To address the redundancy and inaccuracy issues with the base learners in ensemble methods, classifier/ensemble selection methods have been proposed to select one single classifier or an ensemble (a subset of all base learners) to classify a query pattern. This final classifier or ensemble is determined either statically before prediction or dynamically for every query pattern during prediction. Static selection approaches choose the classifier or ensemble by evaluating classifiers in terms of accuracy and diversity, while dynamic classifier/ensemble selection (DCS, DES) methods incorporate local information to dedicate a classifier/ensemble to each query pattern. Our work focuses on DES by proposing a new DES framework, DES with Regional Expertise (DES-RE). The success of a DES system lies in two factors: the quality of the base learners and the optimality of ensemble selection. DES-RE addresses these two challenges respectively. 1) Local expertise enhancement: a novel data sampling and weighting strategy that combines the advantages of bagging and boosting is employed to increase the local expertise of the base learners in order to facilitate the later ensemble selection. 2) Competence region optimization: DES-RE tries to learn a distance metric to form better competence regions (aka neighborhoods) that promote strong base learners with respect to a specific query pattern. In addition to performing local expertise enhancement and competence region optimization independently, we propose an expectation-maximization (EM) framework that combines the two procedures. For all the proposed algorithms, extensive simulations are conducted to validate their performance.
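
    A hedged sketch of the core dynamic-ensemble-selection step described above: for each query, base learners are ranked by their accuracy inside the query's competence region (its nearest validation points) and the best local experts vote. The metric learning and EM components of DES-RE are not reproduced; the base learners are assumed to expose a scikit-learn-style predict interface.

```python
# Dynamic ensemble selection by local accuracy (competence regions only;
# DES-RE's metric learning and EM framework are omitted).
import numpy as np
from collections import Counter

def des_predict(classifiers, X_val, y_val, query, k=7, n_select=3):
    """Pick the base learners most accurate in the query's competence region."""
    region = np.argsort(np.linalg.norm(X_val - query, axis=1))[:k]
    local_acc = [np.mean(clf.predict(X_val[region]) == y_val[region])
                 for clf in classifiers]
    chosen = np.argsort(local_acc)[-n_select:]            # best local experts
    votes = [classifiers[c].predict(query.reshape(1, -1))[0] for c in chosen]
    return Counter(votes).most_common(1)[0][0]
```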

    A Principled Methodology: A Dozen Principles of Software Effort Estimation

    Software effort estimation (SEE) is the activity of estimating the total effort required to complete a software project. Correctly estimating the effort required for a software project is of vital importance for the competitiveness of organizations. Both under- and over-estimation lead to undesirable consequences. Under-estimation may result in overruns in budget and schedule, which in turn may cause the cancellation of projects, thereby wasting the entire effort spent until that point. Over-estimation may cause promising projects not to be funded, hence harming organizational competitiveness. Due to the significant role of SEE for software organizations, there is a considerable research effort invested in SEE. Thanks to the accumulation of decades of prior research, today we are able to identify the core issues and search for the right principles to tackle pressing questions. For example, despite decades of work, we still lack concrete answers to important questions such as: What is the best SEE method? The introduced estimation methods make use of local data; however, not all companies have their own data, so: How can we handle the lack of local data? Common SEE methods take size attributes for granted, yet size attributes are costly and practitioners place very little trust in them. Hence, we ask: How can we avoid the use of size attributes? Collection of data, particularly dependent variable information (i.e. effort values), is costly: How can we find an essential subset of the SEE data sets? Finally, studies make use of sampling methods to justify a new method's performance on SEE data sets, yet the trade-off among different variants is ignored: How should we choose sampling methods for SEE experiments? This thesis is a rigorous investigation towards identifying and tackling the pressing issues in SEE. Our findings rely on extensive experimentation performed with a large corpus of estimation techniques on a large set of public and proprietary data sets. We summarize our findings and industrial experience in the form of 12 principles: 1) Know your domain; 2) Let the Experts Talk; 3) Suspect your data; 4) Data Collection is Cyclic; 5) Use a Ranking Stability Indicator; 6) Assemble Superior Methods; 7) Weighting Analogies is Over-elaboration; 8) Use Easy-path Design; 9) Use Relevancy Filtering; 10) Use Outlier Pruning; 11) Combine Outlier and Synonym Pruning; 12) Be Aware of Sampling Method Trade-offs.
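
    As a point of reference for the analogy-related principles above (e.g. "Weighting Analogies is Over-elaboration"), the following is a generic analogy-based estimation baseline, not a specific method from the thesis: effort is taken as the unweighted mean of the k most similar historical projects.

```python
# Generic analogy-based effort estimation baseline (illustrative only).
import numpy as np

def estimate_effort(X_hist, effort_hist, project, k=3):
    """Estimate effort as the unweighted mean effort of the k nearest analogies."""
    dists = np.linalg.norm(X_hist - project, axis=1)   # similarity over project features
    analogies = np.argsort(dists)[:k]
    return float(np.mean(effort_hist[analogies]))
```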

    Simple but Not Simplistic: Reducing the Complexity of Machine Learning Methods

    Programa Oficial de Doutoramento en Computación. 5009V01. [Abstract] The advent of Big Data and the explosion of the Internet of Things have brought unprecedented challenges to Machine Learning researchers, making the learning task more complex. Real-world machine learning problems usually have inherent complexities, such as the intrinsic characteristics of the data, the large number of instances, the high input dimensionality, dataset shift, etc. All these aspects matter and call for new models that can confront these situations. Thus, in this thesis, we have addressed all these issues, simplifying the machine learning process in the current scenario. First, we carry out a complexity analysis to see how it influences the classification task, and whether a prior feature selection step can reduce that complexity. Then, we address the process of simplifying the learning phase with the divide-and-conquer philosophy of the distributed approach. Later, we apply the same philosophy to the feature selection preprocessing step. Finally, we opt for a different approach following the current philosophy of Edge Computing, which allows the data produced by Internet of Things devices to be processed closer to where they were created. The proposed approaches have demonstrated their capability to reduce the complexity of traditional machine learning methods, and thus it is expected that the contribution of this thesis will open the doors to the development of new machine learning methods that are simpler, more robust, and more computationally efficient.
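
    A hedged sketch of the divide-and-conquer idea applied to feature selection: the data is partitioned, a simple univariate filter ranks features on each partition, and the features selected most often are kept. The filter, partitioning scheme and merge rule are assumptions for illustration, not the thesis's actual methods.

```python
# Divide-and-conquer feature selection sketch (assumed filter and merge rule).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

def distributed_feature_selection(X, y, n_partitions=4, k=10):
    """Run a filter per partition and keep the most frequently selected features."""
    votes = np.zeros(X.shape[1])
    for part in np.array_split(np.random.permutation(len(X)), n_partitions):
        selector = SelectKBest(f_classif, k=k).fit(X[part], y[part])
        votes += selector.get_support()           # one vote per selected feature
    return np.argsort(votes)[-k:]                 # indices of the top-voted features
```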

    A real-time data mining technique applied for critical ECG rhythm on handheld device

    Sudden cardiac arrest is often caused by ventricular arrhythmias, and these episodes can lead to death for patients with chronic heart disease. Hence, detection of such arrhythmia is crucial in mobile ECG monitoring. In this research, a systematic study is carried out to investigate the possible limitations that are preventing the realisation of a real-time ECG arrhythmia data-mining algorithm suitable for application on mobile devices. Based on the findings, a computationally lightweight algorithm is devised and tested. Ventricular tachycardia (VT) is the most common type of ventricular arrhythmia and is also the deadliest. A VT episode is due to a disorder of the regular contractions of the heart: it occurs when the human heart ventricles generate a rapid heartbeat which disrupts the regular physiological cycle. The normal sinus rhythm (NSR) of a regular human heartbeat signal has its signature PQRST waveform in a regular pattern, whereas the characteristics of a VT signal waveform are short R-R intervals, a widened QRS duration and the absence of P-waves. Each type of ECG arrhythmia previously mentioned has a unique waveform signature that can be exploited as features for the realisation of an automated ECG analysis application. In order to extract these known ECG waveform features, a time-domain analysis is proposed for feature extraction. Cross-correlation allows the computation of a coefficient that quantifies the similarity between two time series. Hence, by cross-correlating known ECG waveform templates with an unknown ECG signal, the coefficient can indicate the similarities. In previously published work, a preliminary study was carried out in which the cross-correlation coefficient wave (CCW) technique was introduced for feature extraction. The outcome of this work presents CCW as a promising feature to differentiate between NSR, VT and Vfib signals. Moreover, cross-correlation computation does not require high computational overhead. Next, an automated detection algorithm requires a classification mechanism to make sense of the extracted feature. A further study was conducted and published in which a fuzzy set k-NN classifier was introduced for the classification of the CCW feature extracted from ECG signal segments. A training set of size 180 is used. The outcome of the study indicates that the computationally lightweight fuzzy k-NN classifier can reliably classify between NSR and VT signals; however, the class detection rate is low when classifying Vfib signals with the fuzzy k-NN classifier. Hence, a modified algorithm known as the fuzzy hybrid classifier is proposed. By implementing an expert-knowledge-based fuzzy inference system for the classification of the ECG signal, the Vfib signal detection rate was improved. The comparison outcome was that the hybrid fuzzy classifier is able to achieve a 91.1% correct rate, 100% sensitivity and 100% specificity, outperforming the compared classifiers. The proposed detection and classification algorithm is able to achieve high accuracy in analysing ECG signal features of NSR, VT and Vfib nature. Moreover, the proposed classifier is successfully implemented on a smart mobile device and is able to perform data-mining of the ECG signal with satisfactory results.
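
    A hedged sketch of the cross-correlation feature idea described above: the peak normalised cross-correlation between an ECG segment and each rhythm template forms a small feature vector, which a plain k-NN then classifies. The fuzzy memberships and the expert rules of the hybrid classifier are omitted, and the normalisation details are assumptions.

```python
# Cross-correlation feature + plain k-NN rhythm classification (illustrative sketch).
import numpy as np
from collections import Counter

def ccw_feature(segment, templates):
    """Peak normalised cross-correlation against each rhythm template."""
    feats = []
    for t in templates:
        s = (segment - segment.mean()) / (segment.std() + 1e-12)
        u = (t - t.mean()) / (t.std() + 1e-12)
        cc = np.correlate(s, u, mode="full") / len(u)   # normalised cross-correlation
        feats.append(cc.max())                          # peak similarity coefficient
    return np.array(feats)

def knn_rhythm(train_feats, train_labels, query_feats, k=3):
    """Label an unknown segment (e.g. NSR / VT / Vfib) from its correlation features."""
    dists = np.linalg.norm(train_feats - query_feats, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_labels[nearest]).most_common(1)[0][0]
```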

    Intelligent instance selection techniques for support vector machine speed optimization with application to e-fraud detection.

    Doctor of Philosophy in Computer Science. University of KwaZulu-Natal, Durban, 2017. Decision-making is a very important aspect of many businesses. There are grievous penalties involved in wrong decisions, including financial loss, damage to company reputation and reduction in company productivity. Hence, it is of dire importance that managers make the right decisions. Machine Learning (ML) simplifies the process of decision-making: it helps to discover useful patterns from historical data, which can be used for meaningful decision-making. The ability to make strategic and meaningful decisions is dependent on the reliability of data. Currently, many organizations are overwhelmed with vast amounts of data, and unfortunately, ML algorithms cannot effectively handle large datasets. This thesis therefore proposes seven filter-based and five wrapper-based intelligent instance selection techniques for optimizing the speed and predictive accuracy of ML algorithms, with a particular focus on the Support Vector Machine (SVM). This thesis also proposes a novel fitness function for instance selection. The primary difference between the filter-based and wrapper-based techniques is in their method of selection: the filter-based techniques utilize the proposed fitness function for selection, while the wrapper-based techniques utilize the SVM algorithm for selection. The proposed techniques are obtained by fusing the SVM algorithm with the following nature-inspired algorithms: flower pollination algorithm, social spider algorithm, firefly algorithm, cuckoo search algorithm and bat algorithm. Also, two of the filter-based techniques are boundary detection algorithms, inspired by edge detection in image processing and edge selection in ant colony optimization. Two different sets of experiments were performed in order to evaluate the performance of the proposed techniques (wrapper-based and filter-based). All experiments were performed on four datasets containing three popular e-fraud types: credit card fraud, email spam and phishing email. In addition, experiments were performed on 20 datasets provided by the well-known UCI data repository. The results show that the proposed filter-based techniques improved SVM training speed in 100% (24 out of 24) of the datasets used for evaluation, without significantly affecting SVM classification quality. Moreover, experimental results also show that the wrapper-based techniques consistently improved SVM predictive accuracy in 78% (18 out of 23) of the datasets used for evaluation and simultaneously improved SVM training speed in all cases. Furthermore, two different statistical tests were conducted to further validate the credibility of the results: Friedman's test and Holm's post-hoc test. The statistical test results reveal that the proposed filter-based and wrapper-based techniques are significantly faster, compared to standard SVM and some existing instance selection techniques, in all cases. Moreover, the statistical test results also reveal that the Cuckoo Search Instance Selection Algorithm outperforms all the other proposed techniques in terms of speed. Overall, the proposed techniques have proven to be fast and accurate ML-based e-fraud detection techniques, with improved training speed, predictive accuracy and storage reduction. In real-life applications that require a classifier to be trained very quickly for speedy classification of new target concepts, such as video surveillance and intrusion detection systems, the filter-based techniques provide the best solutions, while the wrapper-based techniques are better suited for applications, such as email filters, that are very sensitive to slight changes in predictive accuracy.
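
    A hedged sketch of the wrapper-based instance selection idea: a binary mask over the training set is searched so that an SVM trained on the selected instances stays accurate while the selection shrinks. The nature-inspired search operators used in the thesis (cuckoo search, firefly, bat, and so on) are replaced here by plain random mutation, and the fitness function is an assumed accuracy-minus-size trade-off, not the thesis's novel fitness function.

```python
# Wrapper-based instance selection sketch: random-mutation search over a binary
# mask, with SVM validation accuracy as the wrapper signal (illustrative only).
import numpy as np
from sklearn.svm import SVC

def fitness(mask, X_tr, y_tr, X_val, y_val, alpha=0.05):
    """Assumed fitness: validation accuracy minus a penalty for keeping many instances."""
    if mask.sum() < 2 or len(np.unique(y_tr[mask])) < 2:
        return 0.0
    acc = SVC().fit(X_tr[mask], y_tr[mask]).score(X_val, y_val)
    return acc - alpha * mask.mean()

def select_instances(X_tr, y_tr, X_val, y_val, iters=50, flip=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(len(X_tr)) < 0.5                 # random initial selection
    best, best_fit = mask, fitness(mask, X_tr, y_tr, X_val, y_val)
    for _ in range(iters):
        cand = best ^ (rng.random(len(X_tr)) < flip)   # flip a few bits at random
        f = fitness(cand, X_tr, y_tr, X_val, y_val)
        if f > best_fit:
            best, best_fit = cand, f
    return best                                        # boolean mask of kept instances
```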

    Knowledge Discovery and Management within Service Centers

    These days, most enterprise service centers deploy Knowledge Discovery and Management (KDM) systems to address the challenge of timely delivery of a resourceful service request resolution while efficiently utilizing huge amounts of data. These KDM systems facilitate prompt responses to critical service requests and, where possible, try to prevent the service requests from being triggered in the first place. Nevertheless, in most cases, the information required for a request resolution is dispersed and buried under a mountain of irrelevant information over the Internet in unstructured and heterogeneous formats. These heterogeneous data sources and formats complicate access to reusable knowledge and increase the response time required to reach a resolution. Moreover, state-of-the-art methods neither support effective integration of domain knowledge with KDM systems nor promote the assimilation of reusable knowledge or Intellectual Capital (IC). With the goal of providing an improved service request resolution within the shortest possible time, this research proposes an IC Management System. The proposed tool efficiently utilizes domain knowledge in the form of semantic web technology to extract the most valuable information from raw unstructured data and uses that knowledge to formulate a service resolution model as a combination of efficient data search, classification, clustering, and recommendation methods. Our proposed solution also handles the technology categorization of a service request, which is crucial in the request resolution process. The system has been extensively evaluated with several experiments and has been used in a real enterprise customer service center.