
    Discretisation of Data in a Binary Neural k-Nearest Neighbour Algorithm

    This paper evaluates several methods of discretisation (binning) within a k-Nearest Neighbour (k-NN) predictor. Our k-NN is constructed using binary neural networks, which require continuous-valued data to be discretised so that it can be mapped onto the binary neural framework. Our approach couples discretisation with robust encoding to map data sets onto the binary neural network. We compare seven unsupervised discretisation methods for retrieval (prediction) accuracy across a range of well-known prediction data sets comprising time-series data, and analyse whether there is an optimal discretisation configuration for our k-NN. The analyses demonstrate that the configuration is data specific. Hence, we recommend evaluating a number of configurations on a test data set, varying both the discretisation method and the number of discretisation bins, in order to pinpoint the optimum configuration for each new data set.
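    As an illustrative sketch of the kind of evaluation recommended above, the snippet below compares two common unsupervised discretisation methods (equal-width and equal-frequency binning) across several bin counts, using a standard k-NN as a stand-in for the paper's binary neural k-NN; the synthetic data, bin counts and classifier are placeholder assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                  # stand-in continuous data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in binary target

def equal_width(col, bins):
    """Equal-width binning: split the value range into `bins` equal intervals."""
    edges = np.linspace(col.min(), col.max(), bins + 1)
    return np.digitize(col, edges[1:-1])

def equal_frequency(col, bins):
    """Equal-frequency binning: edges at quantiles so bins hold ~equal counts."""
    edges = np.quantile(col, np.linspace(0, 1, bins + 1))
    return np.digitize(col, edges[1:-1])

# Evaluate each (method, bin count) configuration with cross-validation,
# mirroring the paper's advice to test several configurations per data set.
for name, method in [("equal-width", equal_width), ("equal-frequency", equal_frequency)]:
    for bins in (4, 8, 16):
        Xd = np.column_stack([method(X[:, j], bins) for j in range(X.shape[1])])
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), Xd, y, cv=5).mean()
        print(f"{name:15s} bins={bins:2d} accuracy={acc:.3f}")
```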

    Computed tomography image analysis for the detection of obstructive lung diseases

    Damage to the small airways resulting from direct lung injury or associated with many systemic disorders is not easy to identify. Non-invasive techniques such as chest radiography or conventional tests of lung function often cannot reveal the pathology. On Computed Tomography (CT) images, the signs suggesting the presence of obstructive airways disease are subtle, and inter- and intra-observer variability can be considerable. The goal of this research was to implement a system for the automated analysis of CT data of the lungs, to help clinicians establish a confident assessment of specific obstructive airways diseases and to increase the precision of investigation of structure/function relationships. To help resolve the ambiguities of the CT scans, the main objectives of our system were to provide a functional description of the raster images, extract semi-quantitative measurements of the extent of obstructive airways disease, and propose a clinical diagnosis aid using a priori knowledge of CT image features of diseased lungs. The diagnostic process presented in this thesis involves the extraction and analysis of multiple findings. Several novel low-level computer vision feature extractors and image processing algorithms were developed for quantifying the extent of the hypo-attenuated areas, textural characterisation of the lung parenchyma, and morphological description of the bronchi. The fusion of the results of these extractors was achieved with a probabilistic network combining a priori knowledge of lung pathology. Creating a CT lung phantom allowed for the initial validation of the proposed methods. Performance of the techniques was then assessed in clinical trials involving other diagnostic tests and expert chest radiologists. The results of the proposed system for diagnostic decision support demonstrated the feasibility and importance of information fusion in medical image interpretation.
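    The fusion step can be pictured with a minimal naive-Bayes-style sketch: independent image findings are combined into a disease posterior via their likelihood ratios. The findings, likelihood values and prior below are illustrative placeholders, not the probabilistic network or the a priori knowledge used in the thesis.

```python
# Hypothetical per-finding likelihoods P(finding | disease) and P(finding | healthy);
# the numbers are illustrative placeholders, not values from the thesis.
likelihoods = {
    "hypo_attenuation": (0.80, 0.10),
    "texture_abnormal": (0.65, 0.20),
    "bronchial_thickening": (0.55, 0.15),
}
prior_disease = 0.3  # assumed prevalence in the referred population

def fuse(findings):
    """Naive-Bayes-style fusion: multiply likelihood ratios of observed findings."""
    odds = prior_disease / (1 - prior_disease)
    for name, present in findings.items():
        p_d, p_h = likelihoods[name]
        if present:
            odds *= p_d / p_h
        else:
            odds *= (1 - p_d) / (1 - p_h)
    return odds / (1 + odds)  # posterior P(disease | findings)

print(fuse({"hypo_attenuation": True, "texture_abnormal": True, "bronchial_thickening": False}))
```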

    Time series data mining: preprocessing, analysis, segmentation and prediction. Applications

    Currently, the amount of data produced by information systems is increasing exponentially. This motivates the development of automatic techniques to process and mine these data correctly. Specifically, in this Thesis, we tackle these problems for time series data, that is, temporal data collected chronologically. This kind of data can be found in many fields of science, such as palaeoclimatology, hydrology and finance. Time series data mining (TSDM) comprises several tasks with different objectives, such as classification, segmentation, clustering, prediction and analysis. In this Thesis, however, we focus on time series preprocessing, segmentation and prediction. Time series preprocessing is a prerequisite for other, posterior tasks: for example, the reconstruction of missing values in incomplete parts of time series can be essential for clustering them. In this Thesis, we tackled the problem of massive missing data reconstruction in significant wave height (SWH) time series from the Gulf of Alaska. It is very common for buoys to stop working for certain periods, usually because of malfunction or bad weather conditions. The relationships between the time series of the different buoys are analysed and exploited to reconstruct the missing time series. In this context, evolutionary artificial neural networks (EANNs) with product units (PUs) are trained, showing that the resulting models are simple and able to recover these values with high precision. In time series segmentation, the procedure consists in dividing the time series into different subsequences to achieve different purposes. This segmentation can be done with the aim of finding useful patterns in the time series. In this Thesis, we have developed novel bioinspired algorithms in this context. For instance, for palaeoclimate data, an initial genetic algorithm (GA) was proposed to discover early warning signals of tipping points (TPs), whose detection was supported by expert opinion. However, given that the expert had to evaluate every solution given by the algorithm individually, the evaluation of the results was very tedious. This led to an improvement of the GA so that the evaluation could be performed automatically. For significant wave height time series, the objective was the detection of groups containing extreme waves, i.e. waves which are relatively large with respect to other waves close in time, the main motivation being the design of alert systems. This was done using a hybrid algorithm (HA) in which a local search (LS) process was included by means of a likelihood-based segmentation, assuming that the points follow a beta distribution. Finally, the analysis of similarities between different periods of European stock markets was also tackled, with the aim of evaluating the influence of the different markets in Europe. When segmenting time series with the aim of reducing the number of points, different techniques have been proposed. However, this remains an open challenge, given the difficulty of operating with large amounts of data in different applications. In this work, we propose a novel statistically-driven coral reefs optimisation algorithm (SCRO), which automatically adapts its parameters during the evolution, taking into account the statistical distribution of the population fitness. This algorithm improves on the state of the art with respect to accuracy and robustness.
    This problem has also been tackled using an improved bare bones particle swarm optimisation (BBPSO) algorithm, which includes a dynamic update of the cognitive and social components during the evolution, combined with mathematical simplifications for computing the fitness of the solutions, which significantly reduce the computational cost with respect to the previously proposed coral reef methods. Moreover, the optimisation of the two conflicting objectives (clustering quality and approximation quality) is an interesting open challenge, which is also tackled in this Thesis: a multi-objective evolutionary algorithm (MOEA) for time series segmentation is developed, improving both the clustering quality of the solutions and their approximation quality. Prediction in time series is the estimation of future values by observing and studying the previous ones. In this context, we solve this task by applying prediction over high-order representations of the elements of the time series, i.e. the segments obtained by time series segmentation. This is applied to two challenging problems: the prediction of extreme wave height and fog prediction. On the one hand, the number of extreme values in SWH time series is much smaller than the number of standard values, so these values cannot be predicted using standard algorithms without taking the imbalanced ratio of the dataset into account. To this end, an algorithm that automatically finds the set of segments and then applies EANNs is developed, showing a high ability to detect and predict these special events. On the other hand, fog prediction is affected by the same problem, in that the number of fog events is much lower than that of non-fog events, so it also requires special treatment. Data from sensors situated in different parts of Valladolid airport are preprocessed and used to build a simple artificial neural network (ANN) model, which is physically corroborated and discussed. The last challenge, which opens new horizons, is the estimation of the statistical distribution of time series to guide different methodologies. Here, the estimation of a mixture distribution for SWH time series is used to fix the threshold of peaks-over-threshold (POT) approaches. Also, the determination of the best-fitting distribution for the time series is used to discretise it and to make a prediction which treats the problem as ordinal classification. The work developed in this Thesis is supported by twelve papers in international journals, seven papers in international conferences, and four papers in national conferences.
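    For flavour, a classical baseline for the point-reduction segmentation discussed above is greedy bottom-up merging, sketched below; this is a generic textbook approach under assumed settings, not the SCRO, BBPSO or MOEA methods proposed in the Thesis.

```python
import numpy as np

def sse(y):
    """Squared error of approximating a segment by a least-squares line."""
    if len(y) < 3:
        return 0.0
    x = np.arange(len(y))
    coef = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coef, x) - y) ** 2))

def bottom_up_segment(y, n_segments):
    """Greedy bottom-up segmentation: start from tiny segments and repeatedly
    merge the adjacent pair whose merge costs the least approximation error."""
    bounds = list(range(0, len(y), 2)) + [len(y)]   # initial fine segmentation
    while len(bounds) - 1 > n_segments:
        costs = [sse(y[bounds[i]:bounds[i + 2]]) for i in range(len(bounds) - 2)]
        bounds.pop(int(np.argmin(costs)) + 1)        # drop the shared boundary
    return bounds

t = np.linspace(0, 6 * np.pi, 400)
series = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(bottom_up_segment(series, n_segments=12))      # segment boundary indices
```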

    A "non-parametric" version of the naive Bayes classifier

    Many algorithms have been proposed for the machine learning task of classification. One of the simplest methods, the naive Bayes classifier, has often been found to give good performance despite the fact that its underlying assumptions (independence and a Normal distribution of the variables) may be violated. In previous work, we applied naive Bayes and other standard algorithms to a breast cancer database from Nottingham City Hospital in which the variables are highly non-Normal, and found that the algorithm performed well when predicting a class that had been derived from the same data. However, when we then applied naive Bayes to predict an alternative clinical variable, it performed much worse than other techniques. This motivated us to propose an alternative method, based on naive Bayes, which removes the requirement for the variables to be Normally distributed but retains the essential structure and other underlying assumptions of the method. We tested our novel algorithm on our breast cancer data and on three UCI datasets which also exhibited strong violations of Normality. We found that our algorithm outperformed naive Bayes in all four cases and outperformed multinomial logistic regression (MLR) in two cases. We conclude that our method offers a competitive alternative to MLR and naive Bayes when dealing with data sets in which non-Normal distributions are observed.
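    The abstract does not specify the density estimator used; one plausible way to drop the Normality assumption while keeping the naive Bayes structure is to replace each per-class Gaussian likelihood with a kernel density estimate, as in the sketch below (the synthetic data and the choice of scipy's gaussian_kde are assumptions for illustration).

```python
import numpy as np
from scipy.stats import gaussian_kde

class KDENaiveBayes:
    """Naive Bayes with per-class, per-feature kernel density estimates in
    place of the usual Gaussian likelihoods; one plausible 'non-parametric'
    variant, not necessarily the paper's exact method."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X):
        # Log-posterior per class: log prior + sum of per-feature log densities.
        scores = np.column_stack([
            np.log(self.priors_[c]) +
            sum(np.log(self.kdes_[c][j](X[:, j]) + 1e-12) for j in range(X.shape[1]))
            for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

rng = np.random.default_rng(1)
X = np.concatenate([rng.exponential(1.0, (100, 2)),    # strongly non-Normal class 0
                    rng.exponential(2.5, (100, 2))])   # strongly non-Normal class 1
y = np.array([0] * 100 + [1] * 100)
print((KDENaiveBayes().fit(X, y).predict(X) == y).mean())
```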

    Clinical decision support system for early detection and diagnosis of dementia

    Dementia is a syndrome caused by a chronic or progressive disease of the brain, which affects memory, orientation, thinking, calculation, learning ability and language. Until recently, early diagnosis of dementia was not a high priority, since the related diseases were considered untreatable and irreversible. However, more effective treatments are becoming available, which can slow the progress of dementia if they are used in the early stages of the disease. Therefore, early diagnosis is becoming more important. The Clock Drawing Test (CDT) and Mini Mental State Examination (MMSE) are well-known cognitive assessment tests. A known obstacle to the wider usage of CDT assessments is the scoring and interpretation of the results. This thesis introduces a novel diagnostic Clinical Decision Support System (CDSS) based on the CDT which can help in the diagnosis of three stages of dementia. It also introduces the advanced methods developed for the interpretation and analysis of CDTs. The data used in this research consist of 604 clock drawings produced by dementia patients and healthy individuals. A comprehensive catalogue of 47 visual features within CDT drawings is proposed to enhance the sensitivity of the CDT in diagnosing the early stages of dementia. These features are selected following a comprehensive analysis of the available data and of the most common CDT scoring systems reported in the medical literature, and are used to build a new digitised dataset necessary for training and validating the proposed CDSS. In this thesis, a novel feature selection method is proposed for the study of CDT feature significance and to define the most important features in diagnosing dementia. A new framework is also introduced to analyse the temporal changes in the CDT features corresponding to the progress of dementia over time, and to define the first onset symptoms. The proposed CDSS is designed to differentiate between four cognitive function statuses: (i) normal; (ii) mild cognitive impairment or mild dementia; (iii) moderate or severe dementia; and (iv) functional. This represents a new application of the CDT, as it was previously used only to detect positive dementia cases. Diagnosing mild cognitive impairment or early-stage dementia using the CDT as a standalone tool is a very challenging task. To address this, a novel cascade classifier is proposed, which benefits from combining the CDT and MMSE to enhance the overall performance of the system. The proposed CDSS classifies the CDT drawings into one of three cognitive statuses (normal or functional; mild cognitive impairment or mild dementia; and moderate or severe dementia) with an accuracy of 78.34%. Moreover, the proposed CDSS can distinguish between normal and abnormal cases with an accuracy of 89.54%. These results are good and outperform most CDT scoring systems reported in the existing literature at discriminating between normal and abnormal cases. Moreover, the system shows good performance in classifying the CDT drawings into one of the three cognitive statuses, comparing well with the performance of dementia specialists. The research has been granted ethical approval from the South East Wales Research Ethics Committee to employ anonymised copies of clock drawings and copies of Mini Mental State Examinations made by patients during their examination by the memory team in Llandough hospital, Cardiff.
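    A minimal sketch of a cascade of the kind described, under assumed inputs: a first stage screens normal versus abnormal from CDT features alone, and a second stage grades the flagged cases using CDT features plus the MMSE score. The feature layout, classifiers and random data are illustrative placeholders, not the thesis's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: rows of CDT visual features plus an MMSE score column.
rng = np.random.default_rng(2)
cdt = rng.random((300, 47))                       # 47 CDT visual features
mmse = rng.integers(0, 31, 300).reshape(-1, 1)    # MMSE score, 0-30
severity = rng.integers(0, 3, 300)                # 0 normal, 1 mild, 2 moderate/severe

abnormal = severity > 0
stage1 = RandomForestClassifier(random_state=0).fit(cdt, abnormal)  # normal vs abnormal
stage2 = RandomForestClassifier(random_state=0).fit(
    np.hstack([cdt, mmse])[abnormal], severity[abnormal])           # grade abnormal cases

def cascade_predict(cdt_x, mmse_x):
    """Stage 1 screens with CDT alone; stage 2 grades flagged cases with CDT + MMSE."""
    flagged = stage1.predict(cdt_x)
    out = np.zeros(len(cdt_x), dtype=int)
    if flagged.any():
        both = np.hstack([cdt_x, mmse_x])
        out[flagged] = stage2.predict(both[flagged])
    return out

print(cascade_predict(cdt[:5], mmse[:5]))
```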

    Learning reliable representations when proxy objectives fail

    Representation learning involves using an objective to learn a mapping from data space to a representation space. When the downstream task for which a mapping must be learned is unknown, or is too costly to cast as an objective, we must rely on proxy objectives for learning. In this Thesis I focus on representation learning for images, and address three cases where proxy objectives fail to produce a mapping that performs well on the downstream tasks. When learning neural network mappings from image space to a discrete hash space for fast content-based image retrieval, a proxy objective is needed which captures the requirement for relevant responses to be nearer to the hash of any query than irrelevant ones. At the same time, it is important to ensure an even distribution of image hashes across the whole hash space for efficient information use and high discrimination. Proxy objectives fail when they do not meet these requirements. I propose composing hash codes in two parts. First a standard classifier is used to predict class labels that are converted to a binary representation for state-of-the-art performance on the image retrieval task. Second, a binary deep decision tree layer (DDTL) is used to model further intra-class differences and produce approximately evenly distributed hash codes. The DDTL requires no discretisation during learning and produces hash codes that enable better discrimination between data in the same class when compared to previous methods, while remaining robust to real-world augmentations in the data space. In the scenario where we require a neural network to partition the data into clusters that correspond well with ground-truth labels, a proxy objective is needed to define how these clusters are formed. One such proxy objective involves maximising the mutual information between cluster assignments made by a neural network from multiple views. In this context, views are different augmentations of the same image and the cluster assignments are the representations computed by a neural network. I demonstrate that this proxy objective produces parameters for the neural network that are sub-optimal in that a better set of parameters can be found using the same objective and a different training method. I introduce deep hierarchical object grouping (DHOG) as a method to learn a hierarchy (in the sense of easy-to-hard orderings, not structure) of solutions to the proxy objective and show how this improves performance on the downstream task. When there are features in the training data from which it is easier to compute class predictions (e.g., background colour), when compared to features for which it is relatively more difficult to compute class predictions (e.g., digit type), standard classification objectives (e.g., cross-entropy) fail to produce robust classifiers. The problem is that if a model learns to rely on 'easy' features it will also ignore 'complex' features (easy versus complex are purely relative in this case). I introduce latent adversarial debiasing (LAD) to decouple easy features from the class labels by first modelling the underlying structure of the training data as a latent representation using a vector-quantised variational autoencoder, and then I use a gradient-based procedure to adjust the features in this representation to confuse the predictions of a constrained classifier trained to predict class labels from the same representation.
    The adjusted representations of the data are then decoded to produce an augmented training dataset that can be used for training in a standard manner. I show in the aforementioned scenarios that proxy objectives can fail, and demonstrate that alternative approaches can mitigate the associated failures. I suggest an analytic approach to understanding the limits of proxy objectives for each use case, in order to make adjustments to the data or the objectives and so ensure good performance on downstream tasks.
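    The mutual-information proxy objective discussed above can be written compactly; the sketch below follows the standard formulation of maximising MI between soft cluster assignments of two augmented views (an assumption about the exact form used), not the DHOG training method itself.

```python
import torch

def mutual_information(p1, p2, eps=1e-8):
    """MI between soft cluster assignments of two augmented views.
    p1, p2: (batch, clusters) softmax outputs for the two views."""
    joint = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(dim=0)   # estimate P(z1, z2)
    joint = ((joint + joint.t()) / 2).clamp(min=eps)          # symmetrise
    m1 = joint.sum(dim=1, keepdim=True)                       # marginal P(z1)
    m2 = joint.sum(dim=0, keepdim=True)                       # marginal P(z2)
    return (joint * (joint.log() - m1.log() - m2.log())).sum()

# Maximising this over network parameters encourages cluster assignments that
# are consistent across views and evenly used; the toy tensors below stand in
# for the outputs of a network on two augmentations of the same batch.
torch.manual_seed(0)
logits1, logits2 = torch.randn(64, 10), torch.randn(64, 10)
mi = mutual_information(logits1.softmax(dim=1), logits2.softmax(dim=1))
print(mi)  # the loss to minimise would be -mi
```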

    Automated design of genetic programming of classification algorithms.

    Doctoral Degree. University of KwaZulu-Natal, Pietermaritzburg.
    Over the past decades, there has been an increase in the use of evolutionary algorithms (EAs) for data mining and knowledge discovery in a wide range of application domains. Data classification, a real-world application problem, is one of the areas in which EAs have been widely applied. Data classification has been extensively researched, resulting in the development of a number of EA-based classification algorithms. Genetic programming (GP) in particular has been shown to be one of the most effective EAs at inducing classifiers. It is widely accepted that the effectiveness of a parameterised algorithm like GP depends on its configuration. Currently, the design of GP classification algorithms is predominantly performed manually. Manual design follows an iterative trial-and-error approach, which has been shown to be a menial, non-trivial, time-consuming task with a number of vulnerabilities. The research presented in this thesis is part of a large-scale initiative by the machine learning community to automate the design of machine learning techniques. The study investigates the hypothesis that automating the design of GP classification algorithms for data classification can still lead to the induction of effective classifiers. This research proposes using two evolutionary algorithms, namely a genetic algorithm (GA) and grammatical evolution (GE), to automate the design of GP classification algorithms. The proof-by-demonstration research methodology is used in the study to achieve the stated objectives. To that end, two systems, a genetic algorithm system and a grammatical evolution system, were implemented for automating the design of GP classification algorithms. The classification performance of the automatically designed GP classifiers, i.e., GA-designed GP classifiers and GE-designed GP classifiers, was compared to that of manually designed GP classifiers on real-world binary-class and multiclass classification problems. The evaluation was performed on multiple domain problems obtained from the UCI machine learning repository and on two specific domains, cybersecurity and financial forecasting. The automatically designed classifiers were found to outperform the manually designed GP classifiers on all the problems considered in this study. GP classifiers evolved by GE were found to be suitable for classifying binary classification problems, while those evolved by a GA were found to be suitable for multiclass classification problems. Furthermore, the automated design time was found to be less than the manual design time. Fitness landscape analysis of the design spaces searched by the GA and GE was carried out on all the classes of problems considered in this study. Grammatical evolution found the search space to be smoother on binary classification problems, while the GA found multiclass problems to be less rugged than binary-class problems.
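    A minimal sketch of the GA side of this idea, under heavy assumptions: a GP configuration is encoded as a genome of hyperparameter choices and evolved with selection, uniform crossover and mutation. The parameter space is invented for illustration, and the fitness function is a stub where the real system would train and validate a GP classifier.

```python
import random

# Hypothetical GP configuration space; names and ranges are illustrative
# assumptions, not the thesis's actual genome or grammar.
SPACE = {
    "population_size": [100, 200, 500],
    "max_tree_depth": [3, 5, 7, 9],
    "crossover_rate": [0.6, 0.8, 0.9],
    "mutation_rate": [0.05, 0.1, 0.2],
}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def evaluate(config):
    """Stand-in fitness: the real system would train a GP classifier with
    `config` and return its validation accuracy."""
    return random.random()

def ga_design(generations=20, pop=10):
    population = [random_config() for _ in range(pop)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: pop // 2]                              # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}  # uniform crossover
            if random.random() < 0.2:                             # mutate one gene
                k = random.choice(list(SPACE))
                child[k] = random.choice(SPACE[k])
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)

random.seed(0)
print(ga_design())
```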